Machine Learning – Training versus Inference

by Sherwin Jaleel

The Basics

Machine learning (ML), deep learning, and neural networks are all sub-fields of Artificial Intelligence (AI). Deep learning is a sub-field of machine learning, and neural networks form the backbone of deep learning algorithms. Classical ML relies on human intervention to engineer features and tune the learning process; deep learning automates much of that work by learning features directly from data, at the cost of needing far more data and compute. The "deep" in deep learning refers to the depth of layers in a neural network. Learning is the process through which neural networks are 'taught' to perform a prescribed task. The learning process can be split into two key phases: the training phase and the inference phase.

Training

Training (also referred to as learning) is the process of "teaching" a neural network to perform a desired AI task by feeding it data, resulting in a trained model. Let's pretend that we've been tasked with building an AI system that looks at pictures of animals and decides whether each one is a dog or not. This system is what is called a "model". The model is created via a process called "training". The goal of the training process is to create a reliable model that answers our question (Is it a dog?) correctly most of the time. An ML model is created by training a neural network with input training data (pictures of dogs).

During the training process, known data (i.e., sample dog images in our example) is fed to the neural network, which makes a prediction (Is it a dog?) about what the data represents. Errors in those predictions are used to tune the artificial neurons, iteratively improving the network's accuracy until it makes predictions with sufficient accuracy. When the neural network attains an adequate level of accuracy, we have a trained model.
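The feed-predict-tune loop described above can be sketched with a single artificial neuron. Everything here is invented for illustration: each "image" is reduced to two made-up numeric features, the label is 1 for "dog" and 0 for "not a dog", and real systems would use thousands of images and many layers rather than one neuron.

```python
import math

def predict(weights, bias, features):
    # Forward pass: weighted sum squashed to a score between 0 and 1.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=1000, lr=0.5):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in samples:
            # The prediction error is used to tune the neuron, so that
            # accuracy improves iteratively over many passes.
            error = predict(weights, bias, features) - label
            for i, x in enumerate(features):
                weights[i] -= lr * error * x
            bias -= lr * error
    return weights, bias

# Made-up "known data": two numeric features per image, label 1 = dog.
training_data = [([1.0, 0.9], 1), ([0.9, 1.0], 1),
                 ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
weights, bias = train(training_data)
```

After enough iterations the tuned weights classify the training samples correctly, which is the point at which we would call this a trained model.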

In the image above, sample images of dogs (training data) are fed to an untrained neural network. Thousands of such images will be required to train a neural network used for image recognition in the real world. When the neural network is trained to an acceptable accuracy level, the trained neural network becomes the AI model that can be used for inference.

Inference

After training is completed, the neural network is deployed into the field for "inference": classifying data to "infer" a result. In our example, that means deciding whether a presented image is of a dog or not. Put simply, inference is the process of using a trained AI model to make predictions on previously unseen data. An adequately trained neural network will be able to make accurate predictions when presented with data it has not seen before.
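Continuing the dog example, inference can be sketched as a single forward pass through frozen, already-learned parameters. The weight values and feature numbers below are invented for illustration; the key point is that nothing is tuned at this stage, the model is only applied.

```python
import math

# Hypothetical parameters produced by an earlier training phase.
# At inference time they are frozen: no errors are fed back, no tuning.
WEIGHTS, BIAS = [4.2, 3.8], -3.5

def infer(features):
    # One forward pass: weighted sum, squashed to a 0..1 score.
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    score = 1.0 / (1.0 + math.exp(-z))
    return "dog" if score > 0.5 else "not a dog"

# Previously unseen inputs (made-up feature values):
print(infer([0.95, 0.85]))  # prints "dog"
print(infer([0.15, 0.05]))  # prints "not a dog"
```

Because inference is just this forward pass, with no error feedback or weight updates, it needs far less computation than training did.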

In the image above, the trained neural network is presented with a picture (data) of a dog that the model has previously not seen, and the model makes an accurate prediction.

Training + Inference = Machine Learning 

Training a real-world neural network to an acceptable accuracy can take a very long time and often requires substantial processing power. As human beings, we gain our knowledge for the most part from training and real-life experience. A neural network mimics that process. Neural networks require substantial processing power during the training phase, and training speed depends on the model's type and scale and on the amount of computational power available. Just as we don't haul around all our mentors, teachers, a ton of books, and our school building, inference does not require all the computational power and infrastructure needed during the training phase. Compared with training, inference is easy, fast, and uses significantly fewer resources.

From a machine learning perspective, inference cannot happen without training. When a trained neural network is put to work out in the digital world, it uses what it has learned (inference). Today there are trained models that can recognize images, predict that a jet engine is likely to fail, spot abnormalities in tissue samples, or suggest the colour of lipstick that someone is likely to buy next.

Nowadays, inference is being done on devices such as mobile phones and low-powered IoT devices such as the one in your coffee machine. When you use the camera on your phone to take a photo of yourself with funny hairstyles, bunny ears, or a silly pout, that's your phone using a trained model to recognize that there's a human face and applying enhancements to the correct part of your anatomy. We live in a time when the world around us is changing more quickly than ever before, and a new vocabulary is needed to help us grasp what's happening around us. You had better add the words "training" and "inference" to your lexicon, because these two words are going to stick around for a very long time to come and perhaps even shape the destiny of humanity!
