NovaSignal
May 1, 2023

Deep Learning

Slowly but surely, Deep Learning (DL) has become a significant part of our lives. (1) Ever since it was introduced in Science magazine in 2006, DL has been an important research topic in Artificial Intelligence (AI) and Machine Learning (ML). (2) DL is a subfield of ML that applies Artificial Neural Networks with multiple processing (hidden) layers, trained to learn hierarchies of representations from large amounts of data for classification and recognition tasks; classically, this is achieved using an unsupervised pre-training and supervised fine-tuning approach. (2)

The multiple layers of representation are acquired by composing simple but non-linear modules; each transforms the representation at one level (beginning with the raw input) into a higher, more abstract one, allowing very complex functions to be learned. The important aspect of DL is that models learn these features from data using general-purpose learning methods; human engineers do not hand-design the layers of features. (3)
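As a rough illustration of that idea, the sketch below (Python/NumPy; layer sizes and random weights are placeholders of my choosing, not from the cited sources) stacks three such non-linear modules so that the raw input is transformed into progressively more abstract representations:

```python
import numpy as np

def relu(x):
    # Simple non-linearity: max(0, x) element-wise
    return np.maximum(0, x)

def module(x, W, b):
    # One "simple but non-linear module": a linear map plus a non-linearity.
    # Stacking several of these yields increasingly abstract representations.
    return relu(x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 64))          # raw input (e.g., pixel values)

# Three stacked modules; in practice these weights are learned from data,
# not drawn at random as they are here.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 8)),  np.zeros(8)

h1 = module(x, W1, b1)   # low-level representation
h2 = module(h1, W2, b2)  # intermediate representation
h3 = module(h2, W3, b3)  # higher, more abstract representation
```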

This process, which results in a relationship between inputs and outputs, is described as mapping with multiple hidden layers (figure 1). (4) More advanced DL models use backpropagation procedures to achieve this, while simpler DL models use feedforward procedures. (3)
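To make the mapping and training concrete, here is a minimal sketch of a feedforward network trained with backpropagation. It uses PyTorch (my choice of library; the article names none), and the data, sizes, and hyperparameters are toy placeholders:

```python
import torch
import torch.nn as nn

# A small feedforward network: the forward pass maps inputs to outputs
# through two hidden layers; backpropagation computes the gradients
# used to adjust the weights.
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),             # e.g., a two-class output
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 64)            # a toy batch of inputs
y = torch.randint(0, 2, (8,))     # toy class labels

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass + loss
    loss.backward()               # backpropagation of gradients
    optimizer.step()              # weight update
```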

DL has proven effective at uncovering structure in high-dimensional data, so it can be applied in many fields. (1,3) Some of its most significant applications are in image recognition, speech recognition, self-driving cars, object detection, temporal data processing, cancer diagnosis, biomedicine, and several other areas. (1,5)

In medicine-related research, DL models have helped predict the activity of potential drugs, analyze particle accelerator data, reconstruct brain circuits, and predict the effects of mutations in non-coding DNA on gene expression and disease. (3)

DL Examples

There are many types of DL architectures, but the most common are Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks. (1,5) CNNs were mainly developed for large-scale image and video classification and recognition (1,2) and are designed with three types of layers: convolutional, sampling/pooling, and fully connected.

The convolutional and pooling layers resemble the classic simple and complex cells of visual neuroscience, and the overall architecture mimics the hierarchy of the visual cortex ventral pathway. CNNs are excellent at detecting, segmenting, and recognizing objects and regions in images. (3)
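A minimal sketch of the three layer types named above might look like the following (again using PyTorch as an illustrative choice; channel counts, the 28x28 input size, and the class count are arbitrary toy values):

```python
import torch
import torch.nn as nn

# A tiny CNN showing the three layer types:
# convolutional, pooling, and fully connected.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling: 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # convolutional
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling: 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)  # fully connected

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of four 28x28 grayscale images -> scores of shape (4, 10)
out = TinyCNN()(torch.randn(4, 1, 28, 28))
```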

RNNs are DL models that take time into account and are often used for tasks with sequential inputs, like speech and language. (2) An RNN processes an input sequence one element at a time while keeping, in its hidden units, a ‘state vector’ that contains the history of all the past elements of the sequence. RNNs have proven excellent at predicting the next word in a sentence or sequence. (3)

Long Short-Term Memory (LSTM) networks augment RNNs with special hidden units that remember inputs for a long period of time. They are widely used in speech recognition to go from acoustics to the sequence of characters in a text, relying on memorization, ‘learned’ reasoning, and symbol manipulation. (3)
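A next-token predictor of the kind described above can be sketched as follows (PyTorch again as an illustrative choice; the vocabulary size, dimensions, and random token data are placeholders). The LSTM's hidden state plays the role of the ‘state vector’ carrying the sequence history:

```python
import torch
import torch.nn as nn

# A minimal LSTM-based next-token predictor: the hidden state carries
# the history of the sequence, and each step's output scores the
# possible next tokens.
class NextTokenLSTM(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))  # hidden 'state vector' per step
        return self.out(h)                    # scores for each next token

tokens = torch.randint(0, 1000, (2, 5))   # two toy sequences of 5 tokens
logits = NextTokenLSTM()(tokens)          # -> shape (2, 5, 1000)
```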

Using all the data available in the world, these and many other models are being developed to optimize technology and everyday activities. Nevertheless, there is still much to be discovered, as ongoing research continues to work toward understanding, comprehension, and reasoning in Deep Learning. (4)

