Transfer Learning
Meskatul Islam
ID: 1703210201349
6th Semester, Dept. of CSE
Premier University, Chittagong
Abstract: Transfer learning makes use of the
knowledge gained while solving one problem
by applying it to a different but related
problem. Although most machine learning
systems are designed to handle a single task, the
development of algorithms that facilitate the
transfer of learning is a topic of ongoing
interest in the machine learning community.
This paper provides an introduction to the
objectives and challenges of transfer learning.
It explores current research in this area,
provides an overview of the state of the art, and
explains open issues. The paper also touches on
transfer in reinforcement learning and discusses
the problems of negative transfer and task
mapping.
I. INTRODUCTION
Transfer learning is among the most common
forms of human learning. We rarely learn something
that turns out to be useful for only one task. Suppose
I once learned mathematics, algebra, and statistics to
pass an exam. That knowledge later transferred to
learning machine learning. Studying math, algebra,
and statistics was not in vain: I did not have to start
machine learning from scratch, because what I
already knew helped me learn it. That is why it is
said that although we learn something for one task,
that knowledge is often used later to solve another.
What we learned in the past is now being used to
learn something new; knowledge is being transferred
from one domain to another.
Similarly, if we train a model in one domain,
it is wrong to assume that it cannot be used in another
domain as well. If we train a deep learning model
on image-related tasks, it can be reused to solve
other problems related to that one. The more
similar the two problems are, the easier it is to
transfer the learning between the models. We train
the first model from scratch, but there is no need to
train from scratch for the next related problem.
This is transfer learning.
In the past, machine learning and deep
learning algorithms were designed to perform a
specific task, and the models had to be rebuilt
from scratch whenever their feature-space
distribution changed. Now, even when the feature
space changes, the learning of one model can be
transferred to another.
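To make this concrete, here is a minimal sketch (not from any cited paper; the data, weights, and dimensions are toy assumptions) of the most common deep-learning form of transfer: a feature extractor assumed to come from a source task is frozen, and only a small new classification head is trained on the target task, instead of training the whole model from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights come from a network trained on a large source
# task; in practice they would be loaded from a saved model.
W_frozen = rng.normal(size=(2, 16))

def features(X):
    """Frozen feature extractor, reused unchanged for the target task."""
    return np.maximum(X @ W_frozen, 0.0)  # ReLU hidden layer

# A small labeled target task: classify points by the sign of x + y.
X_tgt = rng.normal(size=(200, 2))
y_tgt = (X_tgt.sum(axis=1) > 0).astype(float)

# Transfer step: train ONLY a new logistic-regression head on top of
# the frozen features; W_frozen is never updated.
H = features(X_tgt)
w_head = np.zeros(H.shape[1])
for _ in range(2000):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(H @ w_head)))
    w_head -= 0.1 * H.T @ (p - y_tgt) / len(y_tgt)

pred = (1.0 / (1.0 + np.exp(-(H @ w_head)))) > 0.5
accuracy = np.mean(pred == y_tgt)
```

Only the small head is fit on the target data, which is exactly why transfer learning needs far less target data and training time than learning from scratch.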
II. UNDERSTANDING TRANSFER LEARNING
I learned to ride a bicycle as a child, so I
did not have much trouble learning to ride a
motorcycle. If someone can ride a motorcycle,
picking up the basics of driving a car should not be
a problem either. When kids who have learned to
use a mobile phone suddenly get an iPad, they
carry their old knowledge over to the new device.
In each case, knowledge from one task is
transferred to another. The same thing happens in
the deep learning concept called 'Transfer
Learning'.
Transfer learning is used to improve a
learner in one domain by transferring
information from a related domain. We can draw
on real, non-technical experience to understand
why transfer learning is possible. Consider the
example of two people who want to learn to play
the piano. One person has no previous experience
of playing music, while the other has extensive
experience playing the guitar. The person with the
broad musical background will be able to learn the
piano more effectively by carrying the previously
learned musical knowledge over into the piano
learning process. One person is able to take
information from a previous task and use it in a
beneficial way to learn a related one.
III. TRANSFER LEARNING STRATEGIES
There are different transfer learning
strategies and techniques, which can be applied
depending on the domain, the task at hand, and the
availability of data. A helpful illustration appears
in the survey paper A Survey on Transfer
Learning. In that figure, inductive learning is
viewed as a directed search through a specified
hypothesis space; inductive transfer uses source-task
knowledge to adjust the inductive bias, which
could involve changing the hypothesis space or
the search steps.
IV. INDUCTIVE TRANSFER LEARNING
Inductive transfer refers to the ability of
a learner to improve performance on the current,
or target, task after having learned a different but
related concept or skill on a previous source task.
Transfer may also take place between two or more
learning tasks carried out simultaneously. The
transferred item may be certain instances, features,
a particular kind of search bias, an action policy,
background knowledge, etc.
In inductive transfer methods, the target-task
inductive bias is chosen or adjusted based on the
source-task knowledge. The way this is done
varies depending on which inductive learning
algorithm is used to learn the source and target
tasks. Some transfer methods narrow the
hypothesis space, limiting the possible models or
removing search steps from consideration. Other
methods broaden the space, allowing the search to
find more complex models, or add new search
steps.
V. TRANSDUCTIVE TRANSFER LEARNING
In this case, there are similarities between the
source and target tasks, but the corresponding
domains are different. In this setting, the source
domain has plenty of labeled data, while the
target domain has none. This can be further
divided into sub-settings according to whether the
feature spaces or the marginal probability
distributions of the two domains differ.
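One standard way to handle this transductive setting is instance importance weighting under covariate shift: labeled source points are reweighted by the target-to-source density ratio so that source statistics match the target domain. The following is a minimal sketch with toy one-dimensional Gaussian domains whose densities are assumed known; in practice the ratio must be estimated, for example with a domain classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same task, different input distributions: labels exist only in the
# source domain, so source points are reweighted to mimic the target.
x_src = rng.normal(0.0, 1.0, size=2000)     # labeled source inputs
mu_src, mu_tgt, sigma = 0.0, 1.0, 1.0       # toy, known densities

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weight = target density / source density.
w = gauss_pdf(x_src, mu_tgt, sigma) / gauss_pdf(x_src, mu_src, sigma)

# A weighted source statistic now estimates the target-domain value,
# even though no target samples were used.
target_mean_estimate = np.average(x_src, weights=w)
```

The weighted mean of the source inputs approximates the target mean (1.0 here), illustrating how a model trained on weighted source data can serve the target domain.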
VI. UNSUPERVISED TRANSFER LEARNING
This setting is similar to inductive transfer
itself, with a focus on unsupervised tasks in the
target domain. The sources and targeted domains
are the same, but the functions are different. In
this case, the label data is not available for any of
the domains.
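As a toy illustration of this setting (entirely invented data; the transferred "representation" is deliberately simple), a preprocessing transform fit on unlabeled source data is reused to cluster unlabeled target data, so no labels are needed in either domain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Source domain: plentiful unlabeled data; fit a standardization
# transform (a very simple learned representation) on it.
src = rng.normal(0.0, 2.0, size=(500, 5)) + 1.0
mu, sd = src.mean(axis=0), src.std(axis=0)

def represent(X):
    # Transfer step: reuse the transform fit on SOURCE data only.
    return (X - mu) / sd

# Target domain: unlabeled data drawn from two well-separated groups.
tgt = np.vstack([rng.normal(-4, 1, (50, 5)), rng.normal(6, 1, (50, 5))])
Z = represent(tgt)

# Tiny k-means (k = 2) with fixed initial centers on the target data.
centers = Z[[0, 99]].copy()
for _ in range(20):
    labels = np.argmin(((Z[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) for k in range(2)])
```

The clustering runs entirely on unlabeled target data; the only thing transferred from the source domain is the representation.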
VII. INSTANCE TRANSFER LEARNING
Reapplying knowledge from the source
domain to the target task is usually an ideal
scenario. In most cases, however, source-domain
data cannot be reused directly. Instead, certain
instances from the source domain can be reused,
together with the target data, to improve results.
In the case of instance transfer, modifications of
AdaBoost such as TrAdaBoost by Dai and
co-authors help to use training instances from the
source domain to improve performance on the
intended task.
VIII. BAYESIAN TRANSFER LEARNING
One area of inductive transfer applies
specifically to Bayesian learning methods.
Bayesian learning involves modeling probability
distributions and using conditional independence
between variables to simplify the model. A
distinctive feature of Bayesian models is the prior
distribution, which describes one's beliefs about
the domain before seeing any training data. Given
data, the Bayesian model makes predictions by
combining the data with the prior distribution to
produce a posterior distribution. Strong priors can
significantly affect these outcomes (see Figure 5).
This gives Bayesian learning methods a natural
way to incorporate prior knowledge, which in the
case of transfer learning is source-task knowledge.
In the figure, Bayesian learning uses a prior
distribution to smooth the estimates from training
data; Bayesian transfer may instead provide a more
informative prior built from source-task knowledge.
IX. HIERARCHICAL TRANSFER LEARNING
Another setting for transfer in inductive
learning is hierarchical transfer. In this setting,
solutions to simple tasks are combined, or provided
as tools, to produce a solution to a more complex
task (see Figure 6). This can involve several tasks
of varying complexity, rather than a single source
and target. The target task might use entire
source-task solutions as components of its own
solution, or it might use them in a more subtle way
to improve learning.
Taylor et al. propose a transfer hierarchy
that orders tasks by difficulty, so that an agent can
learn them in sequence via inductive transfer. By
placing tasks in order of increasing difficulty,
they aim to make transfer more efficient. This
approach can be very useful in a multi-task
learning setting, where the agent can choose the
order in which it learns the tasks, but it could also
be applied to help choose from an existing set of
source tasks.
The figure shows an example of a concept
hierarchy that could be used for hierarchical
transfer, in which solutions from simple tasks are
used to help learn a solution to a more complex
task. Here the simple tasks involve recognizing
lines and curves in images, and the more complex
tasks involve recognizing surfaces, circles, and
finally pipe shapes.
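The line-curve-surface-pipe hierarchy can be sketched as follows. This is purely a toy illustration (the "images" are just sets of stroke labels, not real vision code): the solutions to the simple tasks are ordinary functions, and the complex detectors reuse them as components.

```python
# Toy "images" are sets of primitive stroke labels, for illustration.

def detects_line(img):
    """Solution to a simple source task: find a line."""
    return "line" in img

def detects_curve(img):
    """Solution to another simple source task: find a curve."""
    return "curve" in img

def detects_surface(img):
    """Intermediate task, built hierarchically: a surface needs lines."""
    return detects_line(img)

def detects_circle(img):
    """Intermediate task: a circle is made of curves."""
    return detects_curve(img)

def detects_pipe(img):
    """Most complex task: reuses the simpler solutions whole,
    rather than learning everything from scratch."""
    return detects_surface(img) and detects_circle(img)
```

Each complex detector is assembled from already-solved subtasks, which is exactly the "whole source solutions used as parts" case described above.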
X. HOMOGENEOUS TRANSFER LEARNING
In homogeneous transfer learning, the
feature spaces of the data in the source and target
domains are represented by the same attributes
(Xs = Xt) and labels (Ys = Yt), and the spaces
have the same dimension (ds = dt). As big data
repositories become more widely available, there
is a desire to reuse these resources for machine
learning tasks and so avoid new, time-consuming
data collection. If an available dataset is drawn
from a domain that is related to, but not exactly,
the target domain of interest, then homogeneous
transfer learning can be used to build a predictive
model for the target domain, as long as the input
feature space is the same.
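One simple parameter-based form of homogeneous transfer (the names `biased_ridge` and `w_src`, and all data, are illustrative assumptions, not from the paper) is ridge regression biased toward source-task weights: since both domains share the same feature space, the target model is shrunk toward the source model instead of toward zero, minimizing ||Xw - y||^2 + lam * ||w - w_src||^2.

```python
import numpy as np

rng = np.random.default_rng(3)

def biased_ridge(X, y, w_src, lam):
    """Least squares shrunk toward source weights w_src:
    minimizes ||Xw - y||^2 + lam * ||w - w_src||^2,
    whose closed form is (X'X + lam I)^-1 (X'y + lam w_src)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * w_src)

# Weights assumed to come from a related source task (same features).
w_src = np.array([1.0, -2.0, 0.5])

# Small target dataset drawn from a *similar* linear model.
w_true = np.array([1.2, -1.8, 0.4])
X = rng.normal(size=(15, 3))
y = X @ w_true + rng.normal(0.0, 0.1, size=15)

w_target = biased_ridge(X, y, w_src, lam=1.0)
```

With lam = 0 this reduces to ordinary least squares (no transfer), and as lam grows the solution approaches the source weights, so lam controls how much the source task is trusted.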
XI. CONCLUSION AND DISCUSSION
In this survey article, we have reviewed
several current trends in transfer learning and
presented solutions from the literature that
represent them. Homogeneous transfer learning
papers are surveyed that demonstrate
instance-based, feature-based, parameter-based,
and relational-based knowledge transfer
techniques. This short paper has tried to give a
basic idea of transfer learning.
XII. REFERENCES
1. https://ftp.cs.wisc.edu/machine-learning/shavlik-group/torrey.handbook09.pdf
2. https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
3. https://journalofbigdata.springeropen.com/articles/10.1186/s40537-017-0089-0
4. https://link.springer.com/article/10.1186/s40537-016-0043-6