The areas of application for machine learning are very broad and have already found their way into our everyday lives. Classification methods are used to automatically filter spam emails, predict customer churn, segment customers and detect fraud.
Regression methods are used for price prediction and in risk management. Widespread purchase recommendations and personalized suggestions, for example for music and film titles, also rely on machine learning. Although these areas of application are very diverse, building the corresponding models involves a number of essential, common steps.

Machine learning fundamentally distinguishes between two learning approaches. On the one hand there are methods of supervised learning, in which the data is labelled before processing. On the other hand there is unsupervised learning, which works without such labels.

Supervised learning is about finding a function that assigns unseen or unknown observations of a data set to a class or a value. For this purpose, the data is given a so-called label. Typical use cases for supervised learning are regression, classification, recommendation and imputation.
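A minimal sketch of this idea is shown below; scikit-learn, the Iris data set and logistic regression are assumptions chosen purely for illustration, not methods named in the article.

```python
# Minimal supervised-learning sketch (illustrative assumptions:
# scikit-learn, the Iris data set, logistic regression).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labelled data: each observation already carries its class ("label").
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a function that maps observations to classes.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Assign previously unseen observations to a class.
print(model.predict(X_test[:5]))
print("Accuracy:", model.score(X_test, y_test))
```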
The aim of unsupervised learning is to recognize previously unknown patterns in the data and to derive rules from them. Methods such as Gaussian mixture models and the k-Means algorithm are used here.
Unsupervised learning algorithms usually require a lot of data. Without a sufficient amount of data, the algorithms cannot form meaningful clusters and therefore cannot make a corresponding forecast for unknown or unseen data.
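The following sketch shows the k-Means algorithm mentioned above grouping unlabelled data; scikit-learn and the synthetic two-cluster data are assumptions for illustration.

```python
# Unsupervised-learning sketch with k-Means (illustrative assumptions:
# scikit-learn and synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Unlabelled data: two point clouds, no class information given.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

# The algorithm derives the grouping (clusters) purely from the data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)

# A new, unseen observation is assigned to the nearest cluster.
print(kmeans.predict([[2.8, 3.1]]))
```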
- Facebook faces: Computers can learn to distinguish human faces. Facebook uses this for automatic face recognition.
- Machine learning: Contrary to what the picture suggests, machine learning is a sub-area of artificial intelligence, but a very important one.
- AlphaGo: Machine beats human: in 2016, Google's machine learning system AlphaGo defeated the world champion in the game of Go.
- GPUs: The leading companies in machine learning use graphics processors (GPUs), for example from Nvidia, for parallel processing of data.
- Deep learning: Deep learning methods first learn low-level elements such as brightness values, then mid-level elements and finally high-level elements such as entire faces.
- IBM Watson: IBM Watson integrates several artificial intelligence methods: in addition to machine learning, these are algorithms for natural language processing and information retrieval, knowledge representation and automatic inference.
The procedures in supervised learning are easy to understand thanks to their structure. Different methods can be compared, parameterized and tuned to find an optimal solution for the application. Because the results are traceable, interpreting the data is easier than with unsupervised learning methods.
The disadvantage, however, is the often very high manual effort involved in preparing the data.
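To illustrate the comparison and parameterization of different supervised methods mentioned above, here is a sketch based on cross-validation; the two candidate models, the preprocessing and the breast-cancer data set are assumptions for illustration.

```python
# Sketch: comparing several supervised methods on the same labelled data
# (illustrative assumptions: scikit-learn, two candidate models).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}

# The same data and the same metric make the methods directly comparable.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```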
The advantage of unsupervised learning lies in model creation that is, in some cases, fully automated. Such a model can produce very good forecasts for new data or even generate new content. The model learns with each new data record and at the same time refines its calculations and classifications; manual intervention is no longer necessary. Neural networks and the classic understanding of artificial intelligence are based on these self-learning processes.
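A rough sketch of this kind of continuous refinement might look as follows; MiniBatchKMeans and the synthetic data batches are assumptions, not a method prescribed by the article.

```python
# Sketch: a model that refines its clusters with each new batch of data
# (illustrative assumptions: scikit-learn's MiniBatchKMeans, synthetic batches).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
model = MiniBatchKMeans(n_clusters=2, n_init=3, random_state=0)

# New data records arrive over time; each batch updates the cluster
# centres further without any manual intervention.
for _ in range(5):
    batch = np.vstack([
        rng.normal(loc=0.0, size=(50, 2)),
        rng.normal(loc=4.0, size=(50, 2)),
    ])
    model.partial_fit(batch)

print(model.cluster_centers_)
```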
Training adapts the models ever more closely to the input data. Beyond a certain point this leads to so-called overfitting: the model makes good forecasts for the data category it already knows, but no longer assigns new, unknown data correctly. The opposite problem, so-called underfitting, can also occur when too little data was provided for building the model, so that the classification is too imprecise. This likewise leads to poor forecast results.
Whether a model is sufficiently trained, i.e. neither overfitted nor underfitted, can only be found out through trial and testing. This is a very complex process.
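One common way to test this is to compare training and test scores while varying model complexity; the decision tree and its depth parameter in the sketch below are assumptions chosen for illustration.

```python
# Sketch: spotting over- and underfitting by comparing training and test
# scores across different model complexities (illustrative assumptions:
# scikit-learn, a decision tree whose depth serves as the complexity knob).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 4, None):  # too simple, moderate, unrestricted
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"depth={depth}: train {tree.score(X_train, y_train):.2f}, "
          f"test {tree.score(X_test, y_test):.2f}")

# A large gap between the two scores points to overfitting;
# two equally low scores point to underfitting.
```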
Collecting and preparing the data are the first steps in building a model. As a rule, the data used is incomplete and not in a uniform format. To be processed, the data must usually be brought into tabular form. Missing values can be filled in, for example, with the help of imputation.
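A minimal sketch of such an imputation, assuming scikit-learn's SimpleImputer and a mean strategy (both are illustrative choices, not requirements from the article):

```python
# Sketch: filling missing values in tabular data by imputation
# (illustrative assumptions: scikit-learn, mean strategy, toy table).
import numpy as np
from sklearn.impute import SimpleImputer

# Tabular data with gaps (np.nan marks missing values).
table = np.array([
    [25.0, 50000.0],
    [np.nan, 62000.0],
    [40.0, np.nan],
])

# Replace each gap with the mean of its column.
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(table))
```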
The prepared data is then analyzed to find out how it is structured and which dependencies exist. Once the variables that are important for the forecasts have been identified, various statistical models can be applied. Not every model is equally suitable; how suitable a model is must be determined through evaluation, which is usually a very complex process. To find a good forecasting model, various methods should be tested and compared. Once a suitable model has been found, it can usually be optimized further. The model can then be used to generate forecasts on new data.
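The later steps, evaluating a candidate model, optimizing its parameters and applying it to new data, could look roughly like this; the pipeline, the parameter grid and the data set are assumptions for illustration.

```python
# Sketch: evaluating and optimizing a candidate model, then forecasting
# on new data (illustrative assumptions: scikit-learn pipeline with an SVM,
# a small parameter grid, the breast-cancer data set).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_new, y_train, _ = train_test_split(X, y, random_state=0)

pipeline = make_pipeline(StandardScaler(), SVC())
search = GridSearchCV(
    pipeline,
    param_grid={"svc__C": [0.1, 1, 10]},  # parameters to optimize
    cv=5,
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Cross-validated score:", round(search.best_score_, 3))

# The optimized model then produces forecasts on new data.
print(search.predict(X_new[:5]))
```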

The process should be understood as a cycle: as with classic data warehouse and business intelligence projects, new findings can arise during development that lead to changes to the original data or the model.
Thanks to machine learning, the opportunities to improve existing processes and products and to develop new, higher-quality services are enormous. Despite the moderate initial investment, looking into the topic can be worthwhile for many companies. It is important to have a clear objective and to narrow down the use cases, since even small changes in the initial situation can have a major impact on the reliability of the model. A certain tolerance for frustration is also important, because the models usually go through several iterations before they are finished.