Machine learning was limited for a long time by its inability to process raw data. For decades, machine learning depended heavily on feature engineering, which transformed raw data (for example, pixel values from an image) into an internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. But everything changed in 2012, when a CNN (Convolutional Neural Network) won the ImageNet image recognition competition. A three-decade-old technique, in which massive amounts of data and processing power help computers solve problems that humans solve almost intuitively, caused a ripple effect that is gaining more and more momentum.
Deep learning itself is a revival of an older computing idea: neural networks. The key aspect of deep learning is that the layers of features are not designed by human engineers – they are learned from data by a general-purpose learning procedure, loosely mimicking how the brain learns through connections between neurons.
Fig. 1. Deep Learning (from http://www.atelier.net/)
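To make the idea of learned features concrete, here is a minimal illustrative sketch (not from the original post): a tiny two-layer network trained by plain gradient descent with backpropagation on the toy XOR problem. The hidden-layer weights are the "features" – no human designs them; the general-purpose learning procedure discovers them from data. The architecture, learning rate and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Toy XOR dataset: the classic example a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# Random weight initialisation -- one of the initial parameters we feed
# the network before training ever starts.
W1 = rng.normal(size=(2, 8))   # hidden layer: 8 learned feature detectors
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # output layer
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Error before any learning, for comparison.
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
mse0 = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(10000):
    # Forward pass: the hidden layer computes its own internal features.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # General-purpose update rule: step downhill on the error surface.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse_final = float(np.mean((out - y) ** 2))
print(f"mse before: {mse0:.3f}, after: {mse_final:.3f}")
```

The same weights that start as random noise end up encoding useful feature detectors; nothing about XOR was hand-engineered into the network.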
Deep learning has advanced so quickly that it is now used in many aspects of modern society, from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. It continues to break records in image recognition and speech recognition, and it has beaten other machine learning techniques at predicting the activity of potential drug molecules and the effects of mutations in DNA and RNA on gene expression and disease.
Deep learning neural networks have many advantages:
- They can decompose a large problem into smaller ones.
- They make no assumptions about the distribution of the data.
- There is no need to manually engineer new inputs for each different type of data to be analysed.
- They allow parallel processing and, as a result, fault tolerance: if one element of the network fails, processing can continue along another path.
- They can detect features automatically.
But implementing one is not easy: there is no general method for determining the optimal number of neurons and layers for a given problem, and a deep learning model always depends on the quality and quantity of its input data; without data it is impossible to learn. Neural networks are also very much like black boxes. After feeding the network its initial parameters (weight initialisation, for example), we do not know what it is doing internally. It is difficult to understand how they solve a problem, and to troubleshoot when they do not work as expected.
Despite all this, neural networks have overcome many difficulties and proved themselves with many successes, and deep learning is still in its infancy, with much of its potential yet to be realised. In the years to come, many improvements are expected in deep learning algorithms, bringing advances in areas such as data analytics, computer vision, speech recognition, bioinformatics, natural language processing (topic classification, sentiment analysis, question answering and language translation), pharmaceuticals, healthcare, the automotive industry, and the list goes on.
Neural networks experienced some deep winters in the past, but today deep learning is part of the world of tomorrow.
EAI Consultant and IoT Evangelist at Polarising
Integrating the world for over 10 years and enthusiastic about the Internet of Things.
He helps to spread the word at Polarising about the future that is happening today.
Martial artist and history nerd, he hopes technology will help us get where we need to go.