Atomwise
August 1, 2022

Transfer Learning

Deep Learning (DL) is heavily dependent on pre-existing information: it relies on massive amounts of training data to learn the intrinsic patterns in the data. In some domains, insufficient training data is simply unavoidable. Transfer Learning offers a solution when new training data is limited or inaccessible.(1)

Transfer learning relaxes the assumption that training data must be independent and identically distributed with the test data. Because the model does not need to be trained from scratch, both the training time and the amount of training data required are reduced. Deep transfer learning applies knowledge from other fields through deep neural networks, and it falls into four main categories.(1)

Instance-Based Deep Transfer Learning

This subtype applies a specific weight-adjustment strategy: it selects partial instances from the original source domain and uses them to supplement the training set in the target domain, assigning appropriate weight values to the selected instances. The approach rests on the assumption that “although there are differences between two domains, partial instances in the source domain can be utilized by the target domain with appropriate weights.”(1)
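As a hedged illustration of the weighting idea, the sketch below loosely follows the classic TrAdaBoost scheme (one instance-based method; the survey does not prescribe a specific algorithm): source instances that the current model misclassifies lose weight, while misclassified target instances gain weight, so only compatible source data keeps supplementing the target training set. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def instance_weighted_transfer(X_src, y_src, X_tgt, y_tgt, n_rounds=10):
    """TrAdaBoost-style sketch: reweight source instances so that only
    those compatible with the target task keep influencing training."""
    n_src = len(X_src)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(len(X)) / len(X)                    # uniform initial weights
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))

    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, y, sample_weight=w / w.sum())
        miss = (clf.predict(X) != y).astype(float)  # 1 where misclassified

        # The weighted error on the target portion drives the update.
        eps = np.sum(w[n_src:] * miss[n_src:]) / np.sum(w[n_src:])
        eps = np.clip(eps, 1e-10, 0.499)
        beta_t = eps / (1.0 - eps)

        w[:n_src] *= beta ** miss[:n_src]           # demote incompatible source data
        w[n_src:] *= beta_t ** -miss[n_src:]        # promote hard target data
    return clf
```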

Mapping-Based Deep Transfer Learning

Mapping-based deep transfer learning maps instances from both the source domain and the target domain into a new dataset space. This is based on the assumption that “although there are differences between two original domains, they can become more similar in an elaborated new dataset space.”(1)
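A common way to encourage such a shared space is to penalize the statistical distance between the mapped domains, for example with Maximum Mean Discrepancy (MMD). The PyTorch sketch below is a minimal illustration; the encoder architecture, dimensions, and kernel bandwidth are assumptions, not anything specified in the survey.

```python
import torch

def mmd_loss(h_src, h_tgt, sigma=1.0):
    """Maximum Mean Discrepancy with an RBF kernel: small when the mapped
    source and target batches are hard to tell apart in the new space."""
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (rbf(h_src, h_src).mean() + rbf(h_tgt, h_tgt).mean()
            - 2 * rbf(h_src, h_tgt).mean())

# Hypothetical encoder that maps both domains into the shared space.
encoder = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16)
)
x_src, x_tgt = torch.randn(128, 64), torch.randn(128, 64)  # placeholder data
loss = mmd_loss(encoder(x_src), encoder(x_tgt))  # added to the main task loss
loss.backward()
```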

Network-Based Deep Transfer Learning

This technique reuses parts of a network pre-trained on the source domain, including its structure and parameters, as components of the deep neural network used in the target domain. Models built this way can learn to filter large amounts of information at different scales, with scale-specific learned patterns, enabling transferable learning and fine-tuning. This is supported by the observation that “a neural network is similar to the processing mechanism of the human brain; the front layers of the network can be treated as a feature extractor, which is versatile.”(1)
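In practice this often means freezing the front layers of a pretrained network and retraining only a new task-specific head. A minimal PyTorch/torchvision sketch follows; the target-class count and learning rate are illustrative placeholders.

```python
import torch
import torchvision

# Reuse the front layers of an ImageNet-pretrained network as the
# transferred, versatile feature extractor.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                      # freeze transferred layers

# Replace only the task-specific head for the target domain.
model.fc = torch.nn.Linear(model.fc.in_features, 5)   # 5 target classes

# Fine-tuning updates only the new head's parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```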

Adversarial-Based Deep Transfer Learning

This subtype introduces adversarial technology, based on Generative Adversarial Nets (GANs), to find transferable representations that apply to both the source and target domains. It suits most feed-forward neural models because it only requires augmenting the model with a few standard layers and a new gradient reversal layer. The guiding principle is that “for effective transfer, good representation should be discriminative for the main learning task and indiscriminative between the source domain and target domain.”(1)
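The gradient reversal layer at the heart of this approach is only a few lines of PyTorch. In the sketch below (layer sizes are illustrative), the layer acts as the identity on the forward pass and flips the gradient sign on the backward pass, so the shared features stay discriminative for the task head while becoming indiscriminative for the domain head.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, scaled by lam."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU())
label_head = torch.nn.Linear(32, 10)    # main task: keep discriminative
domain_head = torch.nn.Linear(32, 2)    # source vs. target: make indiscriminative

x = torch.randn(16, 64)                 # placeholder batch
h = features(x)
task_logits = label_head(h)                              # normal gradients
domain_logits = domain_head(GradReverse.apply(h, 1.0))   # reversed gradients
```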

Distant Domain Transfer Learning

Transfer learning still faces another obstacle: the distance between the source and target domains. That distance gave rise to Distant Domain Transfer Learning (DDTL), which can be approached with several algorithms, among them the Selective Learning Algorithm (SLA), which selects useful unlabeled data from intermediate domains to build bridges across the large distribution gaps between distant domains.(2)
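The published SLA couples instance selection with the prediction task through autoencoders; the sketch below is a deliberately simplified illustration of the selection step alone, and its architecture, training loop, and threshold are all placeholder assumptions. An autoencoder fit to source and target data scores the intermediate-domain instances, and the best-reconstructed ones are kept as bridge data.

```python
import torch

# Autoencoder fit to the data we trust (source + target instances).
ae = torch.nn.Sequential(
    torch.nn.Linear(64, 16), torch.nn.ReLU(), torch.nn.Linear(16, 64)
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x_known = torch.randn(256, 64)                 # placeholder source + target
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(ae(x_known), x_known)
    loss.backward()
    opt.step()

# Score the unlabeled intermediate-domain pool by reconstruction error and
# keep the best-fitting half as "bridge" instances.
x_mid = torch.randn(512, 64)                   # placeholder intermediate pool
with torch.no_grad():
    err = ((ae(x_mid) - x_mid) ** 2).mean(dim=1)
bridge = x_mid[err < err.median()]
```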

One interesting example of transfer learning is face mask detection during the COVID-19 pandemic. Deep and classical machine learning models were tested with several algorithms, yielding excellent results once transfer learning was implemented.(3)
