In our previous article, we discussed the definitions of AI, ML, and DL. Now, we will look more closely at what neural networks in deep learning are.
What are Neural Networks?
Neural networks (NNs) are a set of algorithms designed to recognise patterns. They interpret sensory data through a kind of machine perception, labelling or clustering raw input. The patterns NNs recognise are numerical and contained in vectors, into which all real-world data (images, sound, text, time series) must be translated.
You can think of NNs as a clustering and classification layer on top of the data you store and manage. They group unlabelled data according to similarities among the input examples, and they classify data when they have a labelled dataset to train on.
In the picture above, the concept of NNs is illustrated by a neuron in the top layer. Neurons in the top layers represent abstract features, meaning engineers cannot inspect them directly. The input domain of the neural network, such as images or text, is usually visible and interpretable, however.
4 types of neural networks
As a Machine Learning engineer, you must be able to identify the types of neural networks. There are four types at present, namely:
- Artificial Neural Networks (ANN),
- Convolutional Neural Networks (CNN),
- Recurrent Neural Networks (RNN),
- and Generative Adversarial Networks (GANs).
Briefly, an ANN is a group of multiple perceptrons, or neurons, at each layer. An ANN is also known as a Feed-Forward Neural Network because its inputs are processed only in the forward direction. Adding a looping connection to the hidden layer of an ANN turns it into an RNN. An RNN has a recurrent connection on the hidden state, which ensures that sequential information in the input data is captured. Meanwhile, CNN models are used widely across different applications and domains; CNNs are especially prevalent in image and video processing projects.
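To make the "forward direction" idea concrete, here is a minimal NumPy sketch of a feed-forward pass. The layer sizes, random weights, and ReLU activation are illustrative assumptions, not a real trained network: inputs simply flow layer by layer toward the output.

```python
import numpy as np

def relu(x):
    # ReLU activation: pass positives through, zero out negatives
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One forward pass: data flows through the layers in one direction only."""
    a = x
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)  # weighted sum plus bias, then activation
    return a

# Toy 3-input -> 4-hidden -> 2-output network with random (untrained) parameters
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
out = forward(np.array([1.0, 0.5, -0.2]), weights, biases)
print(out.shape)  # (2,)
```

An RNN would differ from this sketch by feeding each layer's hidden state back into itself at the next time step.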
Meanwhile, GANs are a type of NN usually described as an unsupervised machine learning algorithm. A GAN is a combination of two neural networks: one generates samples, and the other tries to distinguish genuine samples from fake ones. GANs have advanced to a point where they can pick up subtle expressions denoting significant human emotions. For example, the notorious Deep Fakes are built on a GAN architecture that reconstructs faces in three-dimensional space. When you see faces on the Internet, they might look like real people but actually be the output of a GAN.
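The two-network structure can be sketched as follows. This is only a structural illustration with made-up dimensions and untrained random weights (no training loop): the generator maps random noise to a fake sample, and the discriminator maps a sample to a probability that it is real.

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z, W_g):
    # Maps random noise z to a fake sample (tanh keeps outputs in [-1, 1])
    return np.tanh(W_g @ z)

def discriminator(x, w_d):
    # Maps a sample to a probability that it is genuine (sigmoid output)
    return 1.0 / (1.0 + np.exp(-(w_d @ x)))

W_g = rng.normal(size=(4, 2))   # noise dim 2 -> sample dim 4
w_d = rng.normal(size=4)        # sample dim 4 -> scalar score

z = rng.normal(size=2)          # random noise input
fake = generator(z, W_g)
p_real = discriminator(fake, w_d)
print(0.0 <= p_real <= 1.0)  # True: a sigmoid output is always a probability
```

In an actual GAN, the two networks are trained against each other: the discriminator learns to spot fakes while the generator learns to fool it.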
The components of neural networks
While there are different types of NNs, they always consist of the same components. The main components include neurons, synapses, weights, biases, and functions.
- Neurons, also referred to as nodes, are the basic units of a NN. A neuron receives information, performs a simple calculation, and passes the result further. Neurons are organised in layers: an input layer that receives information, a number of hidden layers, and an output layer that provides the result. A neuron's output is typically squashed into a range such as [0, 1] or [-1, 1] by its activation function.
- A synapse connects neurons like an electrical cable, and every synapse has a weight. The weight scales the signal passed along the connection, changing the information received from the previous stage.
- A bias neuron allows more variations of weights to be stored; biases add a richer representation of the input space to the model's weights. A bias plays a vital role by making it possible to shift the activation function to the left or right on the graph.
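The components above come together in a single neuron: a weighted sum of inputs, plus a bias, passed through an activation function. The inputs, weights, and bias values below are made up purely for illustration; the point is that changing only the bias shifts the activation curve, so the same inputs produce a different output.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, squashed by a sigmoid activation
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

x = [0.5, -1.0, 2.0]   # illustrative input features
w = [0.4, 0.3, -0.2]   # illustrative synapse weights

# Same inputs and weights; only the bias differs, shifting the activation
low  = neuron(x, w, bias=-2.0)
high = neuron(x, w, bias=+2.0)
print(low < high)  # True: a larger bias shifts the output upward
```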
How do these neurons work?
Each neuron processes input data to extract a feature. Imagine you have several features and several neurons, each of which is connected to all of the features. Each neuron has its own weights, which are used to weight the features. During training, the network adjusts the weights of each neuron so that it produces the desired output. To perform the transformation and produce an output, every neuron applies an activation function. Taken together, this combination of functions performs a transformation described by a single composed function F, which is the formula behind the NN's "magic".
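The "selecting weights during training" step can be sketched with a single linear neuron and plain gradient descent. The data and target rule (y = 2·x1 − x2) are invented for the example; the sketch only shows how repeated small weight adjustments drive the neuron's output toward the desired values.

```python
import numpy as np

# Tiny regression task whose true rule (an assumption for illustration)
# is y = 2*x1 - 1*x2, so the ideal weights are [2, -1]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]])
y = np.array([2.0, -1.0, 1.0, 5.0])

w = np.zeros(2)   # start with no knowledge
lr = 0.1          # learning rate: size of each adjustment
for _ in range(200):
    pred = X @ w                      # neuron output for every example
    grad = X.T @ (pred - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad                    # nudge weights toward lower error

print(np.round(w, 2))  # converges to [ 2. -1.]
```

Real networks do the same thing at scale, adjusting millions of weights across many layers via backpropagation.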
NNs are useful as function approximators, mapping inputs to outputs in many perception tasks. They can also help achieve more general intelligence when combined with other AI methods.
What do neural networks solve?
Neural networks are a great breakthrough because they can solve complex problems that require analytical calculation. The most common uses of NNs are as follows:
- Classification – for example, a neural network can analyse a loan applicant's parameters, such as age, solvency, and credit history, and decide whether to lend them money.
- Prediction – the algorithm can make predictions, such as foreseeing the rise or fall of a stock based on the situation in the stock market, or whether a company's plan will succeed based on the company's situation and history.
- Recognition – this is the widest application of neural networks. For instance, face biometrics use neural networks.
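The loan-classification use case above can be sketched as a toy decision rule. The feature names, hand-picked weights, and threshold here are all hypothetical; a real network would learn its weights from labelled historical loan data rather than have them chosen by hand.

```python
def approve_loan(age, solvency, credit_history, weights, bias):
    """Toy classifier: weighted score of applicant features (illustrative only)."""
    score = (weights[0] * age + weights[1] * solvency
             + weights[2] * credit_history + bias)
    return score > 0  # True -> approve, False -> decline

# Hand-picked weights and bias for illustration, not learned values
w = [0.01, 0.5, 1.0]
b = -1.0

# A solvent applicant with good credit history is approved...
print(approve_loan(age=0.35, solvency=0.9, credit_history=0.8, weights=w, bias=b))
# ...while a weak application falls below the threshold and is declined
print(approve_loan(age=0.35, solvency=0.1, credit_history=0.2, weights=w, bias=b))
```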