Artificial Neural Networks

Artificial Neural Networks (ANNs) are computer models inspired by the structure and behavior of neurons and axons. Like the brain, ANNs can recognize patterns, manage data, and, most significantly, learn.

They learn (or are trained) through experience with appropriate learning exemplars, much as people do, rather than through explicit programming. This ability lets their accuracy improve continuously as they keep performing the task.

The brain is an excellent pattern recognition tool(1) (Fig. 1). When we look at a pen, we know it is a pen because biological neurons in a certain area of our brain have encountered a similar input pattern on previous occasions and have learned to associate that pattern with the object description ‘pen.’ Since our brain contains a vast number of densely interconnected neurons, we can recognize an almost endless variety of input patterns. The same principle, though with far more limited capacity, applies in ANNs.

Components of an ANN

ANNs are also called connectionist models because the connection weights serve as the memory of the system. The neural structure is built from processing elements (PEs), each of which has inputs, a transfer function, and a single output. A PE is essentially an equation that balances inputs and outputs.(2)

Each neuron has one or more inputs and produces a single output, which can then be sent to multiple other neurons. The inputs may be feature values drawn from a sample of external data, such as images or documents, or the outputs of other neurons. The outputs of the network’s final output neurons accomplish the task (e.g., recognizing an object in an image). To calculate a neuron’s output, each input is multiplied by the weight of its connection to the neuron, and the results are summed. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output.(3)
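As a rough sketch of this calculation, the following Python snippet computes one neuron’s output from hypothetical inputs, weights, and a bias, using a sigmoid as the (assumed) activation function:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs, passed through a sigmoid activation."""
    # Weight and sum all of the inputs to the neuron.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Pass the summation through a nonlinear activation function.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Example: a neuron with three inputs (values chosen purely for illustration).
print(neuron_output([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))
```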

The network is made of connections, each of which provides the output of one neuron as an input to another neuron. Each connection is assigned a weight that reflects its importance, and a neuron can have multiple input and output connections. The propagation function carries values from one layer of the network to the next: typically, the incoming values are weighted and summed, then passed to an activation function, which generates the output.
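In matrix form, propagating values from one layer to the next amounts to weighting and summing each neuron’s incoming connections and applying the activation. The sketch below assumes illustrative layer sizes, random weights, and a tanh activation:

```python
import numpy as np

def propagate(layer_inputs, weights, biases):
    """One propagation step: weighted sum per neuron, then a shared activation."""
    # Each row of `weights` holds the connection weights into one neuron.
    summed = weights @ layer_inputs + biases
    # tanh is used here as one common choice of nonlinear activation.
    return np.tanh(summed)

rng = np.random.default_rng(0)
# 4 values from the previous layer feeding 3 neurons in the next layer.
outputs = propagate(rng.random(4), rng.random((3, 4)), rng.random(3))
print(outputs)
```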

Types of ANNs

ANNs have grown into a broad family of techniques that have advanced the state of the art across multiple domains.

Basic types have at least one unvarying component, such as the number of units, the layers, the weights, or the topology. Dynamic types let one or more of these evolve through learning; they are more intricate, but they can shorten learning periods and give better results. Some classes require learning to be “supervised” by the operator, while others operate independently.

Some types are based purely on hardware, while others are software-based and run on general-purpose computers.

Feedforward ANNs are the simplest form: information flows in only one direction, from the input layer to the output layer, and there is no feedback to the input layer.(1)
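A minimal sketch of such a one-directional pass, assuming a small illustrative network with one hidden layer and randomly chosen weights:

```python
import numpy as np

rng = np.random.default_rng(42)

# Weights for a small 4 -> 5 -> 2 feedforward network (illustrative values).
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

def forward(x):
    """Information flows strictly from the input layer toward the output layer."""
    hidden = np.tanh(W1 @ x + b1)      # input layer -> hidden layer
    return np.tanh(W2 @ hidden + b2)   # hidden layer -> output layer

print(forward(np.array([0.2, -0.4, 0.7, 0.1])))
```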

In feedback ANNs, information can travel in both directions, creating loops as it passes between the input and output layers. Processing continues until the information reaches an optimal level. The recurrent neural network is one kind of feedback neural network.(1,3)
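For contrast, here is a minimal sketch of a feedback loop in the recurrent style, where an illustrative unit feeds its own previous state back into the next computation (weights and the input sequence are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Weights for a single recurrent layer: input -> hidden and hidden -> hidden.
W_in = rng.normal(size=(3, 2))   # maps the 2-dimensional input
W_rec = rng.normal(size=(3, 3))  # feeds the previous hidden state back in

hidden = np.zeros(3)
sequence = [np.array([0.1, 0.5]), np.array([0.3, -0.2]), np.array([0.8, 0.0])]

for x in sequence:
    # The loop: the new state depends on the current input *and* the old state.
    hidden = np.tanh(W_in @ x + W_rec @ hidden)
    print(hidden)
```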

Pros, Cons and Perspectives

ANNs have remarkable information-processing characteristics: non-linearity, high parallelism, and tolerance to faults and noise.

They are also easy to optimize, cost-effective, and capable of prediction and inference. However, ANNs have disadvantages too, including architectures that do not transfer well between applications, difficulty in analyzing the trained network, and unpredictable training times.

Combining ANNs with evolutionary algorithms and other techniques shows promise and may achieve better results in medical diagnosis and prediction in the future.
