A theory of transfer learning with applications to active learning

We explore a transfer learning setting, in which a finite sequence of target concepts is sampled independently according to an unknown distribution from a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and the desired accuracy. Our primary interest is formally understanding the fundamental benefits of transfer learning, compared to learning each target independently of the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.


Introduction
Transfer learning reuses knowledge from past related tasks to ease the process of learning to perform a new task. The goal of transfer learning is to leverage previous learning and experience to more efficiently learn novel, but related, concepts, compared to what would be possible without this prior experience. The utility of transfer learning is typically measured by a reduction in the number of training examples required to achieve a target performance on a sequence of related learning problems, compared to the number required for unrelated problems: i.e., reduced sample complexity. In many real-life scenarios, just a few training examples of a new concept or process are often sufficient for a human learner to grasp the new concept, given knowledge of related ones. For example, learning to drive a van becomes a much easier task if we have already learned how to drive a car. Learning French is somewhat easier if we have already learned English (vs. Chinese), and learning Spanish is easier if we know Portuguese (vs. German). We are therefore interested in understanding the conditions that enable a learning machine to leverage abstract knowledge obtained as a by-product of learning past concepts to improve its performance on future learning problems. Furthermore, we are interested in how the magnitude of these improvements grows as the learning system gains more experience from learning multiple related concepts.
The ability to transfer knowledge gained from previous tasks to make it easier to learn a new task can potentially benefit a wide range of real-world applications, including computer vision, natural language processing, cognitive science (e.g., fMRI brain state classification), and speech recognition, to name a few. As an example, consider training a speech recognizer. After training on a number of individuals, a learning system can identify common patterns of speech, such as accents or dialects, each of which requires a slightly different speech recognizer; then, given a new person to train a recognizer for, it can quickly determine the particular dialect from only a few well-chosen examples, and use the previously-learned recognizer for that particular dialect. In this case, we can think of the transferred knowledge as consisting of the common aspects of each recognizer variant and, more generally, the distribution of speech patterns existing in the population these subjects are from. This same type of distribution-related knowledge transfer can be helpful in a host of applications, including all those mentioned above.
Supposing these target concepts (e.g., speech patterns) are sampled independently from a fixed population, having knowledge of the distribution of concepts in the population may often be quite valuable. More generally, we may consider a general scenario in which the target concepts are sampled i.i.d. according to a fixed distribution. As we show below, the number of labeled examples required to learn a target concept sampled according to this distribution may be dramatically reduced if we have direct knowledge of the distribution. However, since in many real-world learning scenarios we do not have direct access to this distribution, it is desirable to be able to somehow learn the distribution, based on observations from a sequence of learning problems with target concepts sampled according to that distribution. The hope is that an estimate of the distribution so obtained might be almost as useful as direct access to the true distribution in reducing the number of labeled examples required to learn subsequent target concepts. The focus of this paper is an approach to transfer learning based on estimating the distribution of the target concepts. While we acknowledge that there are other important challenges in transfer learning, such as exploring improvements obtainable from transfer under various alternative notions of task relatedness [EP04,BDS03], or alternative reuses of knowledge obtained from previous tasks [Thr96], we believe that learning the distribution of target concepts is a central and crucial component in many transfer learning scenarios, and can reduce the total sample complexity across tasks. Note that it is not immediately obvious that the distribution of targets can even be learned in this context, since we do not have direct access to the target concepts sampled according to it, but rather have only indirect access via a finite number of labeled examples for each task; a significant part of the present work focuses on establishing that as long as these
finite labeled samples are larger than a certain size, they hold sufficient information about the distribution over concepts for estimation to be possible. In particular, in contrast to standard results on consistent density estimation, our estimators are not directly based on the target concepts, but rather are only indirectly dependent on these via the labels of a finite number of data points from each task. One desideratum we pay particular attention to is minimizing the number of extra labeled examples needed for each task, beyond what is needed for learning that particular target, so that the benefits of transfer learning are obtained almost as a by-product of learning the targets. Our technique is general, in that it applies to any concept space with finite VC dimension; also, the process of learning the target concepts is (in some sense) decoupled from the mechanism of learning the concept distribution, so that we may apply our technique to a variety of learning protocols, including passive supervised learning, active supervised learning, semi-supervised learning, and learning with certain general data-dependent forms of interaction [Han09]. For simplicity, we choose to formulate our transfer learning algorithms in the language of active learning; as we show, this problem can benefit significantly from transfer. Formulations for other learning protocols would follow along similar lines, with analogous theorems; only the results in Section 5 are specific to active learning.
Transfer learning is related at least in spirit to much earlier work on case-based and analogical learning [Car83,Car86,VC93,Kol93,Thr96], although that body of work predated modern machine learning, and focused on symbolic reuse of past problem-solving solutions rather than on current machine learning problems such as classification, regression, or structured learning. More recently, transfer learning (and the closely related problem of multitask learning) has been studied in specific cases with interesting (though sometimes heuristic) approaches [Car97,Sil00,MP04,Bax97,BDS03]. This paper considers a general theoretical framework for transfer learning, based on an empirical Bayes perspective, and derives rigorous theoretical results on the benefits of transfer. We discuss the relation of this analysis to existing theoretical work on transfer learning below.

Active Learning
Active learning is a powerful form of supervised machine learning characterized by interaction between the learning algorithm and the data source during the learning process [CAL94,MN98,CCS00,TK01,NS04,BEYL04,DCB07,DC08,HC08]. In this work, we consider a variant known as pool-based active learning, in which a learning algorithm is given access to a (typically very large) collection of unlabeled examples, and is able to select any of those examples, request that the supervisor label it (in agreement with the target concept), then, after receiving the label, select another example from the pool, etc. This sequential label-requesting process continues until some halting criterion is reached, at which point the algorithm outputs a classifier, and the objective is for this classifier to closely approximate the (unknown) target concept in the future. The primary motivation behind pool-based active learning is that, often, unlabeled examples are inexpensive and available in abundance, while annotating those examples can be costly or time-consuming; as such, we often wish to select only the informative examples to be labeled, thus reducing information-redundancy to some extent, compared to the baseline of selecting the examples to be labeled uniformly at random from the pool (passive learning).
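As a concrete illustration of this protocol, the following minimal sketch (our own, with hypothetical names, not an algorithm from the cited literature) runs pool-based active learning for one-dimensional threshold classifiers in the realizable case; binary search over the sorted pool localizes the decision boundary with about log2(n) label requests, whereas labeling uniformly at random would spend most of its label budget on uninformative points.

```python
def pool_based_active_learner(pool, query_label):
    """Pool-based active learning for 1-D threshold classifiers
    (h(x) = +1 iff x >= theta), realizable case.  Binary search over
    the sorted pool shrinks the region of uncertainty until it is
    empty, then outputs the resulting classifier."""
    xs = sorted(pool)
    lo, hi = 0, len(xs)          # region of uncertainty, as indices into xs
    queries = 0
    while lo < hi:               # halt when the uncertainty region is empty
        mid = (lo + hi) // 2
        queries += 1
        if query_label(xs[mid]) == +1:
            hi = mid             # boundary is at or left of xs[mid]
        else:
            lo = mid + 1         # boundary is strictly right of xs[mid]
    threshold = xs[lo] if lo < len(xs) else float("inf")
    return (lambda x: +1 if x >= threshold else -1), queries

# Usage: a pool of 1000 unlabeled points, true threshold at 0.6.
pool = [i / 1000 for i in range(1000)]
h, q = pool_based_active_learner(pool, lambda x: +1 if x >= 0.6 else -1)
# q is about log2(1000), i.e., roughly 10 label requests.
```

The same loop structure applies to richer concept spaces, with the region of uncertainty replaced by the current version space.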
There has recently been an explosion of fascinating theoretical results on the advantages of this type of active learning, compared to passive learning, in terms of the number of labels required to obtain a prescribed accuracy (called the sample complexity): e.g., [FSST97, Das04, DKM09, Das05, Han07b, BHV10, BBL09, Wan09, Kää06, Han07a, DHM08, Fri09, CN08, Now08, BBZ07, Han11, Kol10, Han09, BDL09]. In particular, [BHV10] show that in noise-free binary classifier learning, for any passive learning algorithm for a concept space of finite VC dimension, there exists an active learning algorithm with asymptotically much smaller sample complexity for any nontrivial target concept. Thus, it appears there are profound advantages to active learning compared to passive learning. In later work, [Han09] strengthens this result by removing a certain dependence on the underlying distribution of the data in the learning algorithm.
However, the ability to rapidly converge to a good classifier using only a small number of labels is only one desirable quality of a machine learning method, and there are other qualities that may also be important in certain scenarios. In particular, the ability to verify the performance of a learning method is often a crucial part of machine learning applications, as (among other things) it helps us determine whether we have enough data to achieve a desired level of accuracy with the given method. In passive learning, one common practice for this verification is to hold out a random sample of labeled examples as a validation sample to evaluate the trained classifier (e.g., to determine when training is complete). It turns out this technique is not feasible in active learning, since in order to be really useful as an indicator of whether we have seen enough labels to guarantee the desired accuracy, the number of labeled examples in the random validation sample would need to be much larger than the number of labels requested by the active learning algorithm itself, thus (to some extent) canceling the savings obtained by performing active rather than passive learning. Another common practice in passive learning is to examine the training error rate of the returned classifier, which can serve as a reasonable indicator of performance (after adjusting for model complexity). However, again, this measure of performance is not necessarily reasonable for active learning, since the set of examples the algorithm requests the labels of is typically distributed very differently from the test examples the classifier will be applied to after training.
This reasoning seems to indicate that performance verification is (at best) a far more subtle issue in active learning than in passive learning. Indeed, [BHV10] note that although the number of labels required to achieve good accuracy in active learning is significantly smaller than in passive learning, it is sometimes the case that the number of labels required to verify that the accuracy is good is not significantly improved. In particular, this phenomenon can significantly increase the sample complexity of active learning algorithms that adaptively determine how many labels to request before terminating. In short, if we require the algorithm both to learn an accurate concept and to know that its concept is accurate, then the number of labels required by active learning may sometimes not be significantly smaller than the number required by passive learning.
In the present work, we are interested in the question of whether a form of transfer learning can help to bridge this gap, enabling self-verifying active learning algorithms to obtain the same types of dramatic improvements over passive learning as can be achieved by their non-self-verifying counterparts.

Outline of the paper
The remainder of the paper is organized as follows. In Section 2 we introduce basic notation used throughout, and survey some related work from the existing literature. In Section 3, we describe and analyze our proposed method for estimating the distribution of target concepts, the key ingredient in our approach to transfer learning, which we then present in Section 4. Finally, in Section 5, we investigate the benefits of this type of transfer learning for self-verifying active learning.

Definitions and Related Work
First, we state a few basic notational conventions. We denote N = {1, 2, . . .} and N_0 = N ∪ {0}. For any random variable X, we generally denote by P_X the distribution of X (the induced probability measure on the range of X), and by P_{X|Y} the regular conditional distribution of X given Y. For any pair of probability measures µ1, µ2 on a measurable space (Ω, F), we define the total variation distance ‖µ1 − µ2‖ = sup_{A∈F} |µ1(A) − µ2(A)|. Next we define the particular objects of interest to our present discussion. Let Θ be an arbitrary set (called the parameter space), (X, B_X) be a Borel space [Sch95] (where X is called the instance space), and D be a fixed distribution on X (called the data distribution). For instance, Θ could be R^n and X could be R^m, for some n, m ∈ N, though more general scenarios are certainly possible as well, including infinite-dimensional parameter spaces. Let C be a set of measurable classifiers h : X → {−1, +1} (called the concept space), and suppose C has VC dimension d < ∞ [Vap82] (such a space is called a VC class). C is equipped with its Borel σ-algebra B, induced by the pseudo-metric ρ(h, g) = D({x ∈ X : h(x) ≠ g(x)}). Though all of our results can be formulated for general D in slightly more complex terms, for simplicity throughout the discussion below we suppose ρ is actually a metric, in that any h, g ∈ C with h ≠ g have ρ(h, g) > 0; this amounts to a topological assumption on C relative to D.
The general setup for the learning problem is that we have a true parameter value θ⋆ ∈ Θ, and a collection of C-valued random variables {h*_tθ}_{t∈N,θ∈Θ}, where for a fixed θ ∈ Θ the {h*_tθ}_{t∈N} variables are i.i.d. with distribution π_θ. The learning problem is the following. For each θ ∈ Θ, there is a sequence Z_t(θ) = {(X_t1, Y_t1(θ)), (X_t2, Y_t2(θ)), . . .}, where {X_ti}_{t,i∈N} are i.i.d. D, and for each t, i ∈ N, Y_ti(θ) = h*_tθ(X_ti); for k ∈ N, we denote by Z_tk(θ) = {(X_t1, Y_t1(θ)), . . ., (X_tk, Y_tk(θ))} the first k labeled examples of task t. Since the labels are deterministic functions of the X_ti and h*_tθ values, we are studying the non-noisy, or realizable-case, setting.
The algorithm receives values ε and T as input, and for each t ∈ {1, 2, . . ., T} in increasing order, it observes the sequence X_t1, X_t2, . . ., and may then select an index i1, receive the label Y_ti1(θ⋆), select another index i2, receive the label Y_ti2(θ⋆), etc. The algorithm proceeds in this fashion, sequentially requesting labels, until eventually it produces a classifier ĥ_t. It then increments t and repeats this process until it produces a sequence ĥ_1, ĥ_2, . . ., ĥ_T, at which time it halts. To be called correct, the algorithm must have a guarantee that ∀θ⋆ ∈ Θ, ∀t ≤ T, E[ρ(ĥ_t, h*_tθ⋆)] ≤ ε, for any values of T ∈ N and ε > 0 given as input. We will be interested in the expected number of label requests necessary for a correct learning algorithm, averaged over the T tasks, and in particular in how shared information between tasks can help to reduce this quantity when direct access to θ⋆ is not available to the algorithm.

Relation to Existing Theoretical Work on Transfer Learning
Although we know of no existing work on the theoretical advantages of transfer learning for active learning, the existing literature contains several analyses of the advantages of transfer learning for passive learning. In his classic work, Baxter ([Bax97], Section 4) explores a similar setup for a general form of passive learning, except in a full Bayesian setting (in contrast to our setting, often referred to as "empirical Bayes," which includes a constant parameter θ⋆ to be estimated from data). Essentially, [Bax97] sets up a hierarchical Bayesian model, in which (in our notation) θ⋆ is a random variable with known distribution (hyper-prior), but otherwise the specialization of Baxter's setting to the pattern recognition problem is essentially identical to our setup above. This hyper-prior does make the problem slightly easier, but generally the results of [Bax97] are of a different nature than our objectives here. Specifically, Baxter's results on learning from labeled examples can be interpreted as indicating that transfer learning can improve certain constant factors in the asymptotic rate of convergence of the average of expected error rates across the learning problems. That is, certain constant complexity terms (for instance, related to the concept space) can be reduced to (potentially much smaller) values related to π_θ⋆ by transfer learning. Baxter argues that, as the number of tasks grows large, this effectively achieves close to the known results on the sample complexity of passive learning with direct access to θ⋆. A similar claim is discussed by Ando and Zhang [AZ04] (though in less detail) for a setting closer to that studied here, where θ⋆ is an unknown parameter to be estimated.
There are also several results on transfer learning of a slightly different variety, in which, rather than having a prior distribution for the target concept, the learner initially has several potential concept spaces to choose from, and the role of transfer is to help the learner select from among these concept spaces [Bax00,AZ05]. In this case, the idea is that one of these concept spaces has the best average minimum achievable error rate per learning problem, and the objective of transfer learning is to perform nearly as well as if we knew which of the spaces has this property. In particular, if we assume the target functions for each task all reside in one of the concept spaces, then the objective of transfer learning is to perform nearly as well as if we knew which of the spaces contains the targets. Thus, transfer learning results in a sample complexity related to the number of learning problems, a complexity term for this best concept space, and a complexity term related to the diversity of concept spaces we have to choose from. In particular, as with [Bax97], these results can typically be interpreted as giving constant-factor improvements from transfer in a passive learning context, at best reducing the complexity constants from those for the union over the given concept spaces down to the complexity constants of the single best concept space.
In addition to the above works, there are several analyses of transfer learning and multitask learning of an entirely different nature than our present discussion, in that the objectives of the analysis are somewhat different. Specifically, there is a branch of the literature concerned with task relatedness, not in terms of the underlying process that generates the target concepts, but rather directly in terms of relations between the target concepts themselves. In this sense, several tasks with related target concepts should be much easier to learn than tasks with unrelated target concepts. This is studied in the context of kernel methods by [MP04,EP04,EMP05], and in a more general theoretical framework by [BDS03]. As mentioned, our approach to transfer learning is based on the idea of estimating the distribution of target concepts. As such, though interesting and important, these notions of direct relatedness of target concepts are not as relevant to our present discussion.
As with [Bax97], the present work is interested in showing that as the number of tasks grows large, we can effectively achieve a sample complexity close to that achievable with direct access to θ⋆. However, in contrast, we are interested in a general approach to transfer learning and the analysis thereof, leading to concrete results for a variety of learning protocols such as active learning and semi-supervised learning. In particular, our analysis of active learning reveals the interesting phenomenon that transfer learning can sometimes improve the asymptotic dependence on ε, rather than merely the constant factors as in the analysis of [Bax97].
Our work contrasts with [Bax97] in another important respect, which significantly changes the way we approach the problem. Specifically, in Baxter's analysis, the results (e.g., [Bax97] Theorems 4 and 6) regard the average loss over the tasks, and are stated as a function of the number of samples per task. This number of samples plays a dual role in Baxter's analysis, since these samples are used both by the individual learning algorithm for each task, and also for the global transfer learning process that provides the learners with information about θ⋆. Baxter is then naturally interested in the rates at which these losses shrink as the sample sizes grow large, and therefore formulates the results in terms of the asymptotic behavior as the per-task sample sizes grow large. In particular, the results of [Bax97] involve residual terms which become negligible for large sample sizes, but may be more significant for smaller sample sizes.
In our work, we are interested in decoupling these two roles for the sample sizes; in particular, our results regard only the number of tasks as an asymptotic variable, while the number of samples per task remains bounded. First, we note a very practical motivation for this: namely, non-altruistic learners. In many settings where transfer learning may be useful, it is desirable that the number of labeled examples we need to collect from each particular learning problem never be significantly larger than the number of such examples required to solve that particular problem (i.e., to learn that target concept to the desired accuracy). For instance, this is the case when the learning problems are not all solved by the same individual (or company, etc.), but rather by a coalition of cooperating individuals (e.g., hospitals sharing data on clinical trials); each individual may be willing to share the data they used to learn their particular concept, in the interest of making others' learning problems easier; however, they may not be willing to collect significantly more data than they themselves need for their own learning problem. We should therefore be particularly interested in studying transfer as a by-product of the usual learning process; failing this, we are interested in the minimum possible number of extra labeled examples per task needed to gain the benefits of transfer learning.
The issue of non-altruistic learners also presents a further technical problem, in that the individuals solving each task may be unwilling to alter their method of gathering data to be more informative for the transfer learning process. That is, we expect that the learning process for each task is designed with the sole intention of estimating the target concept, without regard for the global transfer learning problem. To account for this, we model the transfer learning problem in a reduction-style framework, in which we suppose there is some black-box learning algorithm to be run for each task, which takes a prior as input and has a theoretical guarantee of good performance provided the prior is correct. We place almost no restrictions whatsoever on this learning algorithm, including the manner in which it accesses the data. This allows remarkable generality, since this procedure could be passive, active, semi-supervised, or some other kind of query-based strategy. However, because of this generality, we have no guarantee on the information about θ⋆ reflected in the data used by this algorithm (especially if it is an active learning algorithm). As such, we choose not to use the label information gathered by the learning algorithm for each task when estimating θ⋆, but instead take a small number of additional random labeled examples from each task with which to estimate θ⋆. Again, we want to minimize this number of additional samples per task; indeed, in this work we are able to make do with a mere constant number of additional samples per task. To our knowledge, no result of this type (estimating θ⋆ using a bounded sample size per learning problem) has previously been established at the level of generality studied here.
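The shape of this reduction can be sketched as follows. This is our own minimal sketch, with all names hypothetical: the black-box learner and the prior-update routine are stubs standing in for the components developed later, and each task object is assumed to expose `draw_unlabeled` and `query_label` methods.

```python
import random

def transfer_reduction(tasks, blackbox_learner, update_prior_estimate, d):
    """Reduction-style transfer: an arbitrary black-box learner is run on
    each task with the current prior estimate as input, while the transfer
    mechanism itself uses only d additional random labeled examples per
    task and never alters how the black box gathers its own data."""
    prior_estimate = None          # no information about theta* before task 1
    classifiers = []
    for task in tasks:
        # d extra uniformly random labeled examples, used only for transfer
        extra = [(x, task.query_label(x)) for x in task.draw_unlabeled(d)]
        prior_estimate = update_prior_estimate(prior_estimate, extra)
        # the black box may be passive, active, semi-supervised, etc.
        classifiers.append(blackbox_learner(task, prior_estimate))
    return classifiers

# Minimal illustration with stub components (hypothetical):
class _ThresholdTask:
    def __init__(self, theta):
        self.theta = theta
    def draw_unlabeled(self, n):
        return [random.random() for _ in range(n)]
    def query_label(self, x):
        return +1 if x >= self.theta else -1

random.seed(0)
learned = transfer_reduction(
    tasks=[_ThresholdTask(0.3), _ThresholdTask(0.7)],
    blackbox_learner=lambda task, prior: task.theta,        # stub black box
    update_prior_estimate=lambda est, extra: (est or []) + extra,
    d=2,
)
```

The point of the skeleton is the interface: the black box consumes a prior and a task, and the transfer mechanism touches only the d extra labeled examples per task.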

Estimating the Prior
The advantage of transfer learning in this setting is that each learning problem provides some information about θ⋆, so that after solving several of the learning problems, we might hope to be able to estimate θ⋆. Then, with this estimate in hand, we can use the corresponding estimated prior distribution in the learning algorithm for subsequent learning problems, to help inform the learning process similarly to how direct knowledge of θ⋆ might be helpful. However, the difficulty in approaching this is how to define such an estimator. Since we do not have direct access to the h*_t values, but rather only indirect observations via a finite number of example labels, the standard results for density estimation from i.i.d. samples cannot be applied.
The idea we pursue below is to consider the distributions of the Z_tk(θ⋆) variables. These variables are directly observable, by requesting the labels of those examples. Thus, for any finite k ∈ N, this distribution is estimable from observable data. That is, using the i.i.d. values Z_1k(θ⋆), . . ., Z_tk(θ⋆), we can apply standard techniques for density estimation to arrive at an estimator of P_{Z_tk(θ⋆)}. Then the question is whether the distribution P_{Z_tk(θ⋆)} uniquely characterizes the prior distribution π_θ⋆: that is, whether π_θ⋆ is identifiable from P_{Z_tk(θ⋆)}.
As an example, consider the space of half-open interval classifiers on [0, 1]: C = {h_[a,b) : 0 ≤ a ≤ b ≤ 1}, where h_[a,b)(x) = +1 if a ≤ x < b and −1 otherwise. In this case, π_θ⋆ is not necessarily identifiable from P_{Z_t1(θ⋆)}; for instance, the distributions π_θ1 and π_θ2, characterized by π_θ1({h_[0,1/2)}) = π_θ1({h_[1/2,1)}) = 1/2 and π_θ2({h_[0,0)}) = π_θ2({h_[0,1)}) = 1/2, are not distinguished by these one-dimensional distributions. However, it turns out that for this half-open intervals problem, π_θ⋆ is uniquely identifiable from P_{Z_t2(θ⋆)}; for instance, in the θ1 vs. θ2 scenario, the conditional probability P_{(Y_t1(θi),Y_t2(θi))|(X_t1,X_t2)}((+1, +1)|(1/4, 3/4)) will distinguish π_θ1 from π_θ2, and this can be calculated from P_{Z_t2(θi)}. The crucial element of the analysis below is determining the appropriate value of k to uniquely identify π_θ⋆ from P_{Z_tk(θ⋆)} in general. As we will see, k = d (the VC dimension) is always sufficient, a key insight for the results that follow. We will also see that this is not the case for any k < d.
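This example can be checked numerically. The short script below (our own illustration, not code from the analysis) computes, exactly, the one-point label marginals and the two-point joint probability at (1/4, 3/4) under two priors of the kind described: one uniform over {[0, 1/2), [1/2, 1)} and one uniform over {the empty interval, [0, 1)}. The one-point marginals coincide everywhere, while the two-point joint separates them.

```python
def label(interval, x):
    """Half-open interval classifier h_[a,b): +1 inside [a, b), -1 outside."""
    a, b = interval
    return +1 if a <= x < b else -1

# pi1 puts probability 1/2 on each of [0, 1/2) and [1/2, 1);
# pi2 puts probability 1/2 on each of the empty interval and [0, 1).
PI1 = [(0.0, 0.5), (0.5, 1.0)]
PI2 = [(0.0, 0.0), (0.0, 1.0)]

def marginal_plus(prior, x):
    """P(Y = +1 | X = x) under a uniform prior over the listed intervals."""
    return sum(label(h, x) == +1 for h in prior) / len(prior)

def joint_plus_plus(prior, x1, x2):
    """P(Y1 = +1, Y2 = +1 | X1 = x1, X2 = x2) under the same prior."""
    return sum(label(h, x1) == +1 and label(h, x2) == +1 for h in prior) / len(prior)

# One-point marginals agree everywhere on [0, 1)...
assert all(marginal_plus(PI1, x) == marginal_plus(PI2, x) == 0.5
           for x in [0.1, 0.25, 0.5, 0.75, 0.9])
# ...but the two-point joint at (1/4, 3/4) separates the two priors:
# no interval under pi1 contains both points, while [0, 1) under pi2 does.
assert joint_plus_plus(PI1, 0.25, 0.75) == 0.0
assert joint_plus_plus(PI2, 0.25, 0.75) == 0.5
```

Note that the VC dimension of half-open intervals is 2, matching the claim that k = d points suffice while k = 1 does not.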
To be specific, in order to transfer knowledge from one task to the next, we use a few labeled data points from each task to gain information about θ⋆. For this, for each task t, we simply take the first d data points in the Z_t(θ⋆) sequence. That is, we request the labels Y_t1(θ⋆), . . ., Y_td(θ⋆) and use the points Z_td(θ⋆) to update an estimate of θ⋆.
The following result shows that this technique does provide a consistent estimator of π_θ⋆. Again, note that this result is not a straightforward application of the standard approach to consistent estimation, since the observations here are not the h*_tθ⋆ variables themselves, but rather a number of the Y_ti(θ⋆) values. The key insight in this result is that π_θ⋆ is uniquely identified by the joint distribution P_{Z_td(θ⋆)} over the first d labeled examples; later, we prove this is not necessarily true for P_{Z_tk(θ⋆)} for values k < d. This identifiability result is stated below in Corollary 1; as we discuss in Section 3.1, there is a fairly simple direct proof of this result. However, for our purposes, we will actually require the stronger condition that any θ ∈ Θ with small ‖P_{Z_tk(θ)} − P_{Z_tk(θ⋆)}‖ also has small ‖π_θ − π_θ⋆‖. This stronger requirement adds to the complexity of the proofs. The results in this section are purely concerned with relating distances in the space of P_{Z_td(θ)} distributions to the corresponding distances in the space of π_θ distributions; as such, they are not specific to active learning or other learning protocols, and hence are of independent interest.

Theorem 1. There exists an estimator θ̂_Tθ⋆ = θ̂_T(Z_1d(θ⋆), . . ., Z_Td(θ⋆)), and functions R : N_0 × (0, 1] → [0, ∞) and δ : N_0 × (0, 1] → [0, 1], such that for any α > 0, lim_{T→∞} R(T, α) = lim_{T→∞} δ(T, α) = 0, and for any T ∈ N_0 and θ⋆ ∈ Θ, P(‖π_{θ̂_Tθ⋆} − π_θ⋆‖ > R(T, α)) ≤ δ(T, α) ≤ α.

One important detail to note, for our purposes, is that R(T, α) is independent of θ⋆, so that the value of R(T, α) can be calculated and used within a learning algorithm. The proof of Theorem 1 will be established via the following sequence of lemmas. Lemma 1 relates distances in the space of priors to distances in the space of distributions on the full data sets. In turn, Lemma 2 relates these distances to distances in the space of distributions on a finite number of examples from the data sets. Lemma 3 then relates the distances between distributions on any finite number of examples to distances between distributions on d examples. Finally, Lemma 4 presents a standard result on the existence of a converging estimator, in this case for the distribution on d examples, for totally bounded families of distributions. Tracing these relations back, they relate convergence of the estimator for the distribution of d examples to convergence of the corresponding estimator for the prior itself.
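For intuition about how such an estimator can operate, the following toy sketch (entirely our own construction; the analysis below uses the minimum distance skeleton estimate rather than this simplification) selects, from a finite candidate set, the parameter whose induced distribution over d-tuples of labels is closest in total variation to the empirical distribution observed across tasks. For simplicity it fixes the query points at 1/4 and 3/4 instead of drawing them from D, and estimates each candidate's tuple distribution by Monte Carlo.

```python
import random
from collections import Counter

def min_distance_prior_estimate(observed_tuples, candidates, simulate_tuple,
                                n_sim=20000):
    """Pick the candidate theta whose (Monte Carlo estimated) distribution
    over d-tuples of labels is closest in total variation distance to the
    empirical distribution of the observed tuples Z_1d, ..., Z_Td."""
    obs = Counter(observed_tuples)
    T = len(observed_tuples)
    best, best_dist = None, float("inf")
    for theta in candidates:
        sim = Counter(simulate_tuple(theta) for _ in range(n_sim))
        support = set(obs) | set(sim)
        dist = 0.5 * sum(abs(obs[k] / T - sim[k] / n_sim) for k in support)
        if dist < best_dist:
            best, best_dist = theta, dist
    return best

# Toy usage with the half-open intervals example (d = 2, query points fixed
# at 1/4 and 3/4): "pi1" is uniform over {[0, 1/2), [1/2, 1)} and "pi2" is
# uniform over {the empty interval, [0, 1)}.
random.seed(0)
INTERVALS = {"pi1": [(0.0, 0.5), (0.5, 1.0)], "pi2": [(0.0, 0.0), (0.0, 1.0)]}

def simulate(theta):
    a, b = random.choice(INTERVALS[theta])
    return tuple(+1 if a <= x < b else -1 for x in (0.25, 0.75))

data = [simulate("pi2") for _ in range(200)]   # 200 tasks drawn under pi2
theta_hat = min_distance_prior_estimate(data, ["pi1", "pi2"], simulate)
```

Here the label-tuple distributions of the two candidates are disjoint, so 200 tasks of just d = 2 labels each already identify the generating prior.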
Lemma 1. For any θ, θ′ ∈ Θ and t ∈ N,

Note that since C has finite VC dimension, so does the collection of sets {{x : h(x) ≠ g(x)} : h, g ∈ C}, so that the uniform strong law of large numbers implies that with probability one, ∀h, g ∈ C, ρ_X(h, g) exists and has ρ_X(h, g) = ρ(h, g) [Vap82]. Consider any θ, θ′ ∈ Θ, and any, and similarly for θ′. Any measurable set C for the range of Z_t(θ) can be expressed as Likewise, this reasoning holds for θ′. Then Analogous reasoning holds for h*_tθ′. Thus, we have Combining the above, we have ‖P_{Z_t(θ)}

Proof. The left inequality follows from Lemma 1 and the basic definition of ‖·‖, since The remainder of this proof focuses on the right inequality. Fix θ, θ′ ∈ Θ, let γ > 0, and let B ⊆ (X × {−1, +1})^∞ be a measurable set such that Let A be the collection of all measurable subsets of (X × {−1, +1})^k, over k ∈ N. In particular, since A is an algebra that generates the product σ-algebra, Carathéodory's extension theorem [Sch95] implies that there exist disjoint sets {A_i}_{i∈N} in A such that B ⊆ ∪_{i∈N} A_i and Additionally, as these sums are bounded, there must exist n ∈ N such that and therefore In summary, we have ‖π_θ − π_θ′‖ ≤ lim_{k→∞} ‖P_{Z_tk(θ)} − P_{Z_tk(θ′)}‖ + 3γ. Since this is true for an arbitrary γ > 0, taking the limit as γ → 0 implies In particular, this implies there exists a sequence r_k(θ, θ′) This would suffice to establish the upper bound if we were allowing r_k to depend on the particular θ and θ′. However, guaranteeing the same rates of convergence for all pairs of parameters requires an additional argument. Specifically, let γ > 0 and let Θ_γ denote a minimal subset of Θ such that, ∀θ ∈ Θ. By triangle inequalities and the left inequality from the lemma statement (established above), we also have Defining r_k = inf_{γ>0}(4γ + r_k(γ)), we have the right inequality of the lemma statement, and since r_k(γ) = o(1) for each γ > 0, we have r_k = o(1).
⊓⊔

Lemma 3. ∀t, k ∈ N, ∀θ, θ′ ∈ Θ, Proof. Fix any t ∈ N, and let X = {Xt1, Xt2, . . .} and Y(θ) = {Yt1(θ), Yt2(θ), . . .}, and for k ∈ N let and therefore the result trivially holds. Now suppose k > d. For a sequence z̄ and I ⊆ N, we will use the notation z̄I = {z̄i : i ∈ I}. Note that, for any k > d and x̄k ∈ X k, there is a sequence . Now suppose k > d and take as an inductive hypothesis that there is a measurable set A* ⊆ X∞ of probability one with the property that ∀x̄ ∈ A*, for every finite This clearly holds for ∥ȳI − ȳ(x̄I)∥1/2 = 0, since P YI(θ)|XI(ȳI | x̄I) = 0 in this case, so this will serve as our base case in the inductive proof. Next we inductively extend this to the value k > 0. Specifically, let A*k−1 be the A* guaranteed to exist by the inductive hypothesis, and fix any x̄ ∈ A*, ȳ ∈ {−1, +1}∞, and finite I ⊂ N with |I| > d and ∥ȳI − ȳ(x̄I)∥1/2 = k. Let i ∈ I be such that ȳi ≠ ȳi(x̄I), and let ȳ′ ∈ {−1, +1}∞ have ȳ′j = ȳj for every j ≠ i, and ȳ′i = −ȳi. Then and similarly for θ′. By the inductive hypothesis, this means Therefore, by the principle of induction, this inequality holds for all k > d, for every x̄ ∈ A*, ȳ ∈ {−1, +1}∞, and finite I ⊂ N, where A* has D∞-probability one.
In particular, we have that for θ, θ′ ∈ Θ, Exchangeability implies this is at most To complete the proof, we need only bound this value by an appropriate function of ∥P Ztd(θ) − P Ztd(θ′)∥. Toward this end, suppose for some ỹd. Then either Whichever is the case, let Aε denote the corresponding measurable subset of X d, of probability at least ε/4. Then Therefore, which means
In many contexts (though certainly not all), even a simple maximum likelihood estimator suffices to supply this guarantee. However, to derive results under the more general conditions we consider here, we require a more involved method: specifically, the minimum distance skeleton estimate explored by [Yat85, DL01], specified as follows.

Identifiability from d Points
Inspection of the above proof reveals that the assumption that the family of priors is totally bounded is required only to establish the estimability and bounded minimax rate guarantees. In particular, the implied identifiability condition is, in fact, always satisfied, as stated formally in the following corollary.

Corollary 1. For any priors π
Proof. The described scenario is a special case of our general setting, with Θ = {1, 2}, in which case P Z d (i) = P Z 1d (i). Thus, if P Z d (1) = P Z d (2), then Lemma 3 and Lemma 2 combine to imply that ∥π1 − π2∥ ≤ inf k∈N rk = 0.

⊓ ⊔
Since Corollary 1 is interesting in itself, it is worth noting that there is a simple direct proof of this result. Specifically, by an inductive argument based on the observation (1) from the proof of Lemma 3, we quickly find that for any k ∈ N, P Ztk(θ⋆) is identifiable from P Ztd(θ⋆). Then we merely recall that P Zt(θ⋆) is always identifiable from {P Ztk(θ⋆) : k ∈ N} [Kal02], and the argument from the proof of Lemma 1 shows πθ⋆ is identifiable from P Zt(θ⋆).
It is natural to wonder whether identifiability of πθ⋆ from P Ztk(θ⋆) remains true for some smaller number of points k < d, so that we might hope to create an estimator for πθ⋆ based on an estimator for P Ztk(θ⋆). However, one can show that d is actually the minimum possible value for which this remains true for all D and all families of priors. Formally, we have the following result, holding for every VC class C.

Theorem 2. There exists a data distribution D and priors π1, π2 on C such that, for any positive integer

Proof. Note that it suffices to show this is the case for k = d − 1, since any smaller k is a marginal of this case. Consider a shatterable set of points Sd = {x1, x2, . . ., xd} ⊆ X, and let D be uniform on . However, π1 is clearly different from π2, since even the sizes of the supports are different.

Transfer Learning
In this section, we look at an application of the techniques from the previous section to transfer learning. Like the previous section, the results in this section are general, in that they are applicable to a variety of learning protocols, including passive supervised learning, passive semi-supervised learning, active learning, and learning with certain general types of data-dependent interaction (see [Han09]). For simplicity, we restrict our discussion to the active learning formulation; the analogous results for these other learning protocols follow by similar reasoning. The result of the previous section implies that an estimator for θ⋆ based on d-dimensional joint distributions is consistent, with a bounded rate of convergence R. Therefore, for certain prior-dependent learning algorithms, their behavior should be similar under π θT θ⋆ to their behavior under π θ⋆.
To make this concrete, we formalize this in the active learning protocol as follows. A prior-dependent active learning algorithm A takes as inputs ε > 0, D, and a distribution π on C. It initially has access to X1, X2, . . . i.i.d. D; it then selects an index i1 to request the label for, receives Yi1 = h*(Xi1), then selects another index i2, etc., until it eventually terminates and returns a classifier. Denote by Z = {(X1, h*(X1)), (X2, h*(X2)), . . .}. To be correct, A must guarantee that for h* ∼ π, ∀ε > 0, E[ρ(A(ε, D, π), h*)] ≤ ε. We define the random variable N(A, f, ε, D, π) as the number of label requests A makes before terminating, when given ε, D, and π as inputs, and when h* = f is the value of the target function; we make the particular data sequence Z the algorithm is run with implicit in this notation. We will be interested in the expected sample complexity SC(A, ε, D, π) = E[N(A, h*, ε, D, π)].

We propose the following algorithm Aτ for transfer learning, defined in terms of a given correct prior-dependent active learning algorithm Aa. We discuss interesting specifications for Aa in the next section, but for now the only assumption we require is that for any ε > 0 and D, there is a value sε < ∞ such that for every π and f ∈ C, N(Aa, f, ε, D, π) ≤ sε; this is a very mild requirement, and any active learning algorithm can be converted into one that satisfies it without significantly increasing its sample complexities for the priors it is already good for [BHV10]. We additionally denote by mε = (16d/ε) ln(24/ε), and

Algorithm 1 Aτ(T, ε): an algorithm for transfer learning, specified in terms of a generic subroutine Aa.

Recall that θ(t−1)θ⋆, which is defined by Theorem 1, is a function of the labels requested on previous rounds of the algorithm; R(t − 1, ε/2) is also defined by Theorem 1, and has no dependence on the data (or on θ⋆). The other quantities referred to in Algorithm 1 are defined just prior to Algorithm 1. We suppose the algorithm has
access to the value SC(Aa, ε/4, D, πθ) for every θ ∈ Θ. This can sometimes be calculated analytically as a function of θ, or else can typically be approximated via Monte Carlo simulations. In fact, the result below holds even if SC is merely an accessible upper bound on the expected sample complexity.
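Since Algorithm 1 is specified formally in the paper, the following is only a rough structural sketch of how Aτ interleaves the d per-task label requests with runs of the subroutine Aa at accuracy ε/4. All of the callables here (`request_labels`, `estimate_prior`, `run_Aa`) are hypothetical interfaces standing in for objects defined in the text: the d-tuples of labels, the Theorem 1 prior estimator built from earlier tasks, and the correct prior-dependent subroutine.

```python
def transfer_learn(T, eps, d, request_labels, estimate_prior, run_Aa):
    """Structural sketch of the transfer algorithm A_tau (Algorithm 1).

    request_labels(t, d): the first d labels of task t (the "extra" queries).
    estimate_prior(history): an estimated prior built from the d-tuples of
        earlier tasks (hypothetical stand-in for the Theorem 1 estimator).
    run_Aa(t, eps, prior): runs the correct prior-dependent subroutine on
        task t at accuracy eps; returns (classifier, num_label_requests).
    Returns all task classifiers and the total number of label requests.
    """
    history, classifiers, total = [], [], 0
    for t in range(1, T + 1):
        d_tuple = request_labels(t, d)        # d labels requested up front
        total += d
        prior_hat = estimate_prior(history)   # built from tasks 1..t-1 only
        h_t, n_t = run_Aa(t, eps / 4, prior_hat)
        total += n_t
        classifiers.append(h_t)
        history.append(d_tuple)               # task t's d-tuple joins history
    return classifiers, total
```

With dummy stand-ins, the total label count is T · (d + per-task requests), matching the "+d" overhead discussed below Theorem 3.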
Theorem 3. The algorithm Aτ is correct. Furthermore, if ST(ε) is the total number of label requests made by Aτ(T, ε), then lim sup T→∞ E[ST(ε)]/T ≤ SC(Aa, ε/4, D, πθ⋆) + d.

The implication of Theorem 3 is that, via transfer learning, it is possible to achieve almost the same long-run average sample complexity as would be achievable if the target's prior distribution were known to the learner. We will see in the next section that this is sometimes significantly better than the single-task sample complexity. As mentioned, results of this type for transfer learning have previously appeared when Aa is a passive learning method [Bax97]; however, to our knowledge, this is the first such result where the asymptotics concern only the number of learning tasks, not the number of samples per task; this is also the first result we know of that is immediately applicable to more sophisticated learning protocols such as active learning.
The algorithm Aτ is stated in a simple way here, but Theorem 3 can be improved with some obvious modifications to Aτ. The extra "+d" in Theorem 3 is not actually necessary, since we could stop updating the estimator θtθ⋆ (and the corresponding R value) after some o(T) number of rounds (e.g., √T), in which case we would not need to request Yt1(θ⋆), . . ., Ytd(θ⋆) for t larger than this, and the extra d · o(T) number of labeled examples vanishes in the average as T → ∞. Additionally, the ε/4 term can easily be improved to any value arbitrarily close to ε, by running Aa with that value and using it in the SC calculations in the definition of θtθ⋆ as well. In fact, for many algorithms Aa (e.g., with SC(Aa, ε, D, πθ⋆) continuous in ε), combining the above two tricks yields lim sup T→∞ E[ST(ε)]/T ≤ SC(Aa, ε, D, πθ⋆).

Returning to our motivational remarks from Subsection 2.1, we can ask how many extra labeled examples are required from each learning problem to gain the benefits of transfer learning. This question essentially concerns the initial step of requesting the labels Yt1(θ⋆), . . ., Ytd(θ⋆). Clearly this indicates that from each learning problem, we need at most d extra labeled examples to gain the benefits of transfer. Whether these d label requests are indeed extra depends on the particular learning algorithm Aa; that is, in some cases (e.g., certain passive learning algorithms), Aa may itself use these initial d labels for learning, so that in those cases the benefits of transfer learning are essentially gained as a by-product of the learning processes, and essentially no additional labeling effort need be expended to gain these benefits. On the other hand, for some active learning algorithms, we may expect that at least some of these initial d labels would not be requested by the algorithm, so that some extra labeling effort is expended to gain the benefits of transfer in these cases.
One drawback of our approach is that we require the data distribution D to remain fixed across tasks (this contrasts with [Bax97]). However, it should be possible to relax this requirement in the active learning setting in many cases. For instance, if X = R^k, then as long as we are guaranteed that the distribution Dt for each learning task has a strictly positive density function, it should be possible to use rejection sampling for each task to guarantee the d queried examples from each task have approximately the same distribution across tasks. This is all we require for our consistency results on θT θ⋆ (i.e., it was not important that the d samples came from the true distribution D, only that they came from a distribution under which ρ is a metric). We leave the details of such an adaptive method for future consideration.
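The rejection-sampling idea mentioned above can be sketched as follows. The densities, the bound M on their ratio, and the sampling interface are all illustrative assumptions of this sketch, not part of the paper's formal development: accepted points follow a common reference density, regardless of which task's distribution they were drawn from.

```python
import random

def rejection_sample(draw, density, ref_density, M, rng):
    """Draw X from the task's own distribution via draw() (with density
    `density`), and accept it with probability
    ref_density(x) / (M * density(x)), so that accepted points are
    distributed according to the common reference density ref_density.
    Requires M >= sup_x ref_density(x) / density(x)."""
    while True:
        x = draw()
        if rng.random() < ref_density(x) / (M * density(x)):
            return x
```

For example, resampling Uniform[0, 1] draws toward the reference density 2x on [0, 1] (ratio bound M = 2) yields accepted points with mean near 2/3.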

Application to Self-Verifying Active Learning
In this section, we examine a specific sample complexity guarantee achievable by active learning, when combined with the above transfer learning procedure. As mentioned, [BHV10] found that there is often a significant gap between the sample complexities achievable by good active learning algorithms in general and the sample complexities achievable by active learning algorithms that themselves adaptively determine how many label requests to make for any given problem (referred to as self-verifying active learning algorithms). Specifically, while the former can always be strictly superior to the sample complexities achievable by passive learning, there are simple examples where this is not the case for self-verifying active learning algorithms. We should note, however, that all of the above considerations were proven for a learning scenario in which the target concept is considered a constant, and no information about the process that generates this concept is available to the learner. Thus, there is a natural question of whether, in the context of the transfer learning setting described above, we might be able to close this gap, so that self-verifying active learning algorithms are able to achieve the same type of guaranteed strict improvements over passive learning that are achievable by their non-self-verifying counterparts.
The considerations of the previous section indicate that this question is in some sense reducible to an analogous question for prior-dependent self-verifying active learning algorithms. The quantity SC(Aa, ε/4, D, πθ⋆) then essentially characterizes the achievable average sample complexity among the sequence of tasks. We will therefore focus in this section on characterizing this quantity, for a particularly effective active learning algorithm Aa.

Related Work on Prior-dependent Learning
Prior-dependent learning algorithms have been studied in depth in the context of passive learning. In particular, [HKS92] found that for any concept space of finite VC dimension d, for any prior and data distribution, O(d/ε) random labeled examples are sufficient for the expected error rate of the Bayes classifier produced under the posterior distribution to be at most ε. Furthermore, it is easy to construct learning problems for which there is an Ω(1/ε) lower bound on the number of random labeled examples required to achieve expected error rate at most ε, by any passive learning algorithm; for instance, the problem of learning threshold classifiers on [0, 1] under a uniform data distribution and uniform prior is one such scenario.
In contrast, relatively little is known about prior-dependent active learning. [FSST97] analyze the Query By Committee algorithm in this context, and find that if a certain information gain quantity for the points requested by the algorithm is lower-bounded by a value g, then the algorithm requires only O((d/g) log(1/ε)) labels to achieve expected error rate at most ε. In particular, they show that this is satisfied for constant g for linear separators under a near-uniform prior, and a near-uniform data distribution over the unit sphere. This represents a marked improvement over the results of [HKS92] for passive learning, and since the Query By Committee algorithm is self-verifying, this result is highly relevant to the present discussion. However, the condition that the information gains be lower-bounded by a constant is quite restrictive, and many interesting learning problems are precluded by this requirement. Furthermore, there exist learning problems (with finite VC dimension) for which the Query By Committee algorithm makes an expected number of label requests exceeding Ω(1/ε). To date, there has not been a general analysis of how the value of g can behave as a function of ε, though such an analysis would likely be quite interesting.
In the present section, we take a more general approach to the question of prior-dependent active learning. We are interested in the broad question of whether access to the prior bridges the gap between the sample complexity of learning and the sample complexity of learning with verification. Specifically, we ask the following question.
Can a prior-dependent self-terminating active learning algorithm for a concept class of finite VC dimension always achieve expected error rate at most ε using o(1/ε) label requests?

Prior-Independent Learning Algorithms
One may initially wonder whether we could achieve this o(1/ε) result merely by calculating the expected sample complexity of some prior-independent method, thus precluding the need for novel algorithms. Formally, we say an algorithm A is prior-independent if the conditional distribution of the queries and return value of A(ε, D, π) given Z is functionally independent of π. Indeed, for some C and D, it is known that there are prior-independent active learning algorithms A that have E[N(A, h*, ε, D, π) | h*] = o(1/ε) (always); for instance, threshold classifiers have this property under any D, homogeneous linear separators have this property under a uniform D on the unit sphere in k dimensions, and intervals with positive width on X = [0, 1] have this property under D = Uniform([0, 1]) (see e.g., [Das05]). It is straightforward to show that any such A will also have SC(A, ε, D, π) = o(1/ε) for every π; in particular, this follows from the law of total expectation and the dominated convergence theorem. In these cases, we can think of SC as a kind of average-case analysis of these algorithms. However, as we discuss next, there are also many C and D for which there is no prior-independent algorithm achieving o(1/ε) sample complexity for all priors. Thus, any general result on o(1/ε) expected sample complexity for π-dependent algorithms would indicate that there is a real advantage to having access to the prior, beyond the apparent smoothing effects of an average-case analysis.
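For concreteness, here is the standard prior-independent binary-search learner for threshold classifiers, which makes only O(log(1/ε)) label requests for every target (so certainly E[N | h*] = o(1/ε) under any prior); the labeling-oracle interface is an assumption of this sketch.

```python
def learn_threshold(label, eps):
    """Active learner for threshold classifiers on [0, 1], where
    h(x) = +1 iff x >= theta. `label(x)` returns the target's label at x.
    Halves the interval known to contain theta until its width is at
    most eps, using ceil(log2(1/eps)) label requests."""
    lo, hi = 0.0, 1.0  # invariant: theta lies in (lo, hi]
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if label(mid) == +1:
            hi = mid   # mid labeled +1 means theta <= mid
        else:
            lo = mid   # mid labeled -1 means theta > mid
        queries += 1
    return (lo + hi) / 2, queries
```

The returned estimate is within eps of the true threshold after logarithmically many queries, for every target, with no reference to any prior.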

Prior-Dependent Learning: An Example
We begin our exploration of π-dependent active learning with a concrete example, namely interval classifiers under a uniform data density but arbitrary prior, to illustrate how access to the prior can make a difference in the sample complexity. Specifically, consider X = [0, 1], D uniform on [0, 1], and the concept space C of interval classifiers specified in the previous subsection. For each classifier h ∈ C, define w(h) = D(x : h(x) = +1) (the width of the interval h). Note that because we allow a = b in the definition of C, there is a classifier h− ∈ C with w(h−) = 0. For simplicity, in this example (only) we will suppose the algorithm may request the label of any point in X, not just those in the sequence {Xi}; the same ideas can easily be adapted to the setting where queries are restricted to {Xi}. Consider an active learning algorithm that sequentially requests the labels h*(x) for points x at 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, 1/16, 3/16, etc., until (case 1) it encounters an example x with h*(x) = +1 or until (case 2) the set of classifiers V ⊆ C consistent with all observed labels so far satisfies E[w(h*) | V] ≤ ε (whichever comes first). In case 2, the algorithm simply halts and returns the constant classifier that always predicts −1: call it h−; note that ρ(h−, h*) = w(h*). In case 1, the algorithm enters a second phase, in which it performs a binary search (repeatedly querying the midpoint between the closest two −1 and +1 points, taking 0 and 1 as known negative points) to the left and right of the observed positive point, halting after log2(4/ε) label requests on each side; this results in estimates of the target's endpoints up to ±ε/4, so that returning any classifier among the set V ⊆ C consistent with these labels results in error rate at most ε; in particular, if ĥ is the classifier in V returned, then E[ρ(ĥ, h*) | V] ≤ ε.
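The two-phase procedure just described can be sketched as follows, with the prior-dependent stopping test E[w(h*) | V] ≤ ε abstracted into a caller-supplied predicate `posterior_width_ok` (a hypothetical interface, since that test depends on the prior).

```python
import math

def interval_learner(label, eps, posterior_width_ok):
    """Sketch of the two-phase interval learner described above.
    `label(x)` returns h*(x) for a target interval [a, b] in [0, 1];
    `posterior_width_ok(negatives)` stands in for E[w(h*) | V] <= eps.
    Returns an estimated interval (a_hat, b_hat); a_hat == b_hat means
    the all-negative classifier h-."""
    queried, hit, depth = [], None, 1
    while hit is None:                        # phase 1: dyadic grid
        for k in range(1, 2 ** depth, 2):
            x = k / 2 ** depth
            queried.append(x)
            if label(x) == +1:
                hit = x                       # case 1: positive point found
                break
        if hit is None and posterior_width_ok(queried):
            return (0.0, 0.0)                 # case 2: halt, predict all -1
        depth += 1
    negs = [x for x in queried if x != hit]
    steps = math.ceil(math.log2(4 / eps))     # binary-search budget per side
    lo, hi = max([0.0] + [x for x in negs if x < hit]), hit
    for _ in range(steps):                    # localize left endpoint a
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if label(mid) == +1 else (mid, hi)
    a_hat = (lo + hi) / 2
    lo, hi = hit, min([1.0] + [x for x in negs if x > hit])
    for _ in range(steps):                    # localize right endpoint b
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if label(mid) == +1 else (lo, mid)
    b_hat = (lo + hi) / 2
    return (a_hat, b_hat)
```

After ceil(log2(4/ε)) halvings per side, each endpoint estimate is within roughly ε/4 of the truth, matching the accounting in the text.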
Denoting this algorithm by A[], and by ĥ the classifier it returns, we have E[ρ(ĥ, h*)] = E[E[ρ(ĥ, h*) | V]] ≤ ε, so that the algorithm is definitely correct. Note that case 2 will definitely be satisfied after at most 2/ε label requests, and if w(h*) > ε, then case 1 will definitely be satisfied after at most 2/w(h*) label requests, so that the algorithm never makes more than 2/max{w(h*), ε} label requests before satisfying one of the two cases. Abbreviating N(h*) = N(A[], h*, ε, D, π), The third and fourth terms in (9) are o(1/ε). Since P(0 < w(h*) ≤ √ε) → 0 as ε → 0, the second term in (9) is o(1/ε) as well. If P(w(h*) = 0) = 0, this completes the proof. We focus the rest of the proof on the first term in (9), in the case that P(w(h*) = 0) > 0: i.e., there is nonzero probability that the target h* labels the space all negative. Letting V denote the subset of C consistent with all requested labels, note that on the event w(h*) = 0, after n label requests (for n + 1 a power of 2) we have max h∈V w(h) ≤ 1/n. Thus, for any value γ ∈ (0, 1), after at most 2/γ label requests, on the event that w(h*) = 0, Now note that, by the dominated convergence theorem, If we define γε as the largest value of γ for which E[w(h*)½[w(h*) ≤ γ]] ≤ εP(w(h*) = 0) (or, say, half the supremum if the maximum is not achieved), then we have γε ≫ ε. Combined with (10), this implies Thus, all of the terms in (9) are o(1/ε), so that in total E[N(h*)] = o(1/ε).
In conclusion, for this concept space C and data distribution D, we have a correct active learning algorithm A[] achieving a sample complexity SC(A[], ε, D, π) = o(1/ε) for all priors π on C.

A General Result for Self-Verifying Bayesian Active Learning
In this subsection, we present our main result for improvements achievable by prior-dependent self-verifying active learning: a general result stating that o(1/ε) expected sample complexity is always achievable for some appropriate prior-dependent active learning algorithm, for any (X, C, D, π) for which C has finite VC dimension. Since the known results for the sample complexity of passive learning with access to the prior are typically Θ(1/ε) [HKS92], and since there are known learning problems (X, C, D, π) for which every passive learning algorithm requires Ω(1/ε) samples, this o(1/ε) result for active learning represents an improvement over passive learning.
The proof is simple and accessible, yet represents an important step in understanding the problem of self-termination in active learning algorithms, and the general issue of the complexity of verification. Also, since there are problems (X, C, D) where C has finite VC dimension but for which no (single-task) prior-independent correct active learning algorithm (of the self-terminating type studied here) can achieve o(1/ε) expected sample complexity for every π, this also represents a significant step toward understanding the inherent value of having access to the prior in active learning. Additionally, via Theorem 3, this result implies that active transfer learning (of the type discussed above) can provide strictly superior sample complexities compared to the known results for passive learning (even compared to passive learning algorithms having direct access to the prior πθ⋆), and often strictly superior to the sample complexities achievable by (prior-independent) active learning without transfer.
First, we have a small lemma.

Lemma 5. For any sequence of functions φ
Proof. For any constant γ ∈ (0, ∞), we have (by Markov's inequality and the dominated convergence theorem) Therefore (by induction), there exists a diverging sequence ni in N such that Inverting this, let in = max{i ∈ N : ni ≤ n}, and define φn(h

Theorem 4. For any VC class C, there is a correct active learning algorithm Aa that, for every data distribution D and prior π, achieves expected sample complexity SC(Aa, ε, D, π) = o(1/ε).

Our approach to proving Theorem 4 is via a reduction to established results about (prior-independent) active learning algorithms that are not self-verifying. Specifically, consider a slightly different type of active learning algorithm than that defined above: namely, an algorithm Ab that takes as input a budget n ∈ N on the number of label requests it is allowed to make, and that after making at most n label requests returns as output a classifier ĥn. Let us refer to any such algorithm as a budget-based active learning algorithm. Note that budget-based active learning algorithms are prior-independent (have no direct access to the prior). The following result (Lemma 6 below) was proven by [Han09] (see also the related earlier work of [BHV10]). That is, equivalently, for any fixed value for the target function, the expected error rate is o(1/n), where the random variable in the expectation is only the data sequence X1, X2, . . .. Our task in the proof of Theorem 4 is to convert such a budget-based algorithm into one that is correct, self-terminating, and prior-dependent, taking ε as input. The value nπ,ε (defined following Lemma 6 below) is accessible based purely on access to π and D. Furthermore, we clearly have (by construction) E[ρ(ĥnπ,ε, h*)] ≤ ε. Thus, letting Aa denote the active learning algorithm taking (D, π, ε) as input, which runs Ab(nπ,ε) and then returns ĥnπ,ε, we have that Aa is a correct learning algorithm (i.e., its expected error rate is at most ε).

Proof (Theorem 4). Consider
As for the expected sample complexity SC(Aa, ε, D, π) achieved by Aa, we have SC(Aa, ε, D, π) ≤ nπ,ε, so that it remains only to bound nπ,ε. By Lemma 5, there is a π-dependent function E(n; π, D) such that

Theorem 4 implies that, if we have direct access to the prior distribution of h*, regardless of what that prior distribution π is, we can always construct a self-verifying active learning algorithm Aa that has a guarantee of E[ρ(Aa(ε, D, π), h*)] ≤ ε and whose expected number of label requests is o(1/ε). This guarantee is not possible for prior-independent (single-task) self-verifying active learning algorithms.
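The reduction just described is simple enough to state as code: given a hypothetical oracle `expected_error(n)` for E[ρ(ĥn, h*)] (a quantity computable from π and D alone, with no label requests) and a runner for the budget-based algorithm, the self-terminating wrapper searches for the smallest adequate budget nπ,ε.

```python
def self_verifying_wrapper(run_Ab, expected_error, eps):
    """Convert a budget-based algorithm A_b into a self-terminating,
    prior-dependent one. `expected_error(n)` is a stand-in for
    E[rho(h_hat_n, h*)] under the prior pi and distribution D;
    `run_Ab(n)` runs A_b with label budget n and returns its classifier.
    Returns (classifier, n_{pi,eps})."""
    n = 1
    while expected_error(n) > eps:
        n += 1   # n_{pi,eps}: smallest budget with expected error <= eps
    return run_Ab(n), n
```

Since the expected error is o(1/n) by Lemma 6, the budget found this way is o(1/ε), which is the bound on nπ,ε used in the proof.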
Additionally, when combined with Theorem 3, Theorem 4 implies that Aτ, with this particular algorithm Aa as its subroutine, has lim sup T→∞ E[ST(ε)]/T = o(1/ε). Again, since there are known cases where there is no prior-independent self-verifying active learning algorithm with sample complexity o(1/ε), this sometimes represents a significant improvement over the results provable for learning the tasks independently (i.e., without transfer).

Dependence on D in the Learning Algorithm
The dependence on D in the algorithm described in the proof of Theorem 4 is fairly weak, and we can eliminate any direct dependence on D by replacing ρ(ĥn, h*) by a 1 − ε/2 confidence upper bound based on Mε = Ω((1/ε²) log(1/ε)) i.i.d. unlabeled examples X′1, X′2, . . ., X′Mε independent from the examples used by the algorithm (e.g., set aside in a pre-processing step, where the bound is calculated via Hoeffding's inequality and a union bound over the values of n that we check, of which there are at most O(1/ε)). Then we simply increase the value of n (starting at some constant, such as 1) until this confidence upper bound falls below ε. The expected value of the smallest value of n for which this occurs is o(1/ε). Note that this only requires access to the prior π, not the data distribution D (the budget-based algorithm Ab of [Han09] has no direct dependence on D); if desired for computational efficiency, this quantity may also be estimated by a 1 − ε/4 confidence upper bound based on Ω((1/ε²) log(1/ε)) independent samples of h* values with distribution π, where for each sample we simulate the execution of Ab(n) for that (simulated) target function in order to obtain the returned classifier. In particular, note that no actual label requests to the oracle are required during this process of estimating the appropriate label budget nπ,ε, as all executions of Ab are simulated.
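This simulation-based estimate of nπ,ε might be sketched as follows. The number of simulations per budget, the doubling schedule, and the exact confidence-bound constants below are illustrative assumptions rather than the paper's prescription; `sample_target` and `simulate_Ab` are hypothetical interfaces.

```python
import math, random

def estimate_budget(sample_target, simulate_Ab, unlabeled, eps,
                    n_max=10 ** 6, rng=random.Random(0)):
    """Estimate the label budget n_{pi,eps} with no real label requests:
    repeatedly simulate A_b(n) on targets h* drawn from the prior,
    measure each returned classifier's disagreement with its simulated
    target on held-out unlabeled points, and raise n until a
    Hoeffding-style upper confidence bound on the mean error is <= eps.
    Both sample_target(rng) and simulate_Ab(n, h_star) return classifiers
    as callables X -> {-1, +1}."""
    m = len(unlabeled)
    reps = max(1, math.ceil(2.0 / eps))                 # simulations per budget
    slack = math.sqrt(math.log(4.0 / eps) / (2.0 * m * reps))
    n = 1
    while n <= n_max:
        errs = []
        for _ in range(reps):
            h_star = sample_target(rng)                 # simulated target ~ pi
            h_hat = simulate_Ab(n, h_star)              # no oracle calls made
            errs.append(sum(h_hat(x) != h_star(x) for x in unlabeled) / m)
        if sum(errs) / reps + slack <= eps:
            return n
        n *= 2                                          # any increasing schedule
    return n_max
```

In a toy setting where the simulated Ab(n) returns a classifier disagreeing with the target on a region of width 0.5/n, the search terminates at a budget inversely proportional to the requested accuracy.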

Inherent Dependence on π in the Sample Complexity
We have shown that for every prior π, the sample complexity is bounded by a o(1/ε) function. One might wonder whether it is possible that the asymptotic dependence on ε in the sample complexity can be prior-independent, while still being o(1/ε). That is, we can ask whether there exists a (π-independent) function s(ε) = o(1/ε) such that, for every π, there is a correct π-dependent algorithm A achieving a sample complexity SC(A, ε, D, π) = O(s(ε)), possibly involving π-dependent constants. Certainly in some cases, such as threshold classifiers, this is true. However, it seems this is not generally the case, and in particular it fails to hold for the space of interval classifiers.

Lemma 6.
[Han09] For any VC class C, there exists a constant c ∈ (0, ∞), a function E(n; f, D), and a budget-based active learning algorithm Ab such that ∀D, ∀f ∈ C, E(n; f, D) ≤ c/n and E(n; f, D) = o(1/n), and E[ρ(Ab(n), h*) | h*] ≤ E(n; h*, D) (always). With Ab, E, and c as in Lemma 6, let ĥn denote the classifier returned by Ab(n), and define nπ,ε = min{n ∈ N : E[ρ(ĥn, h*)] ≤ ε}.