Automatic generation of automation applications using explainable and understandable AI methods
In recent years, the use of AI methods has grown rapidly in many fields. Currently, the most widely used AI methods belong to the field of Machine Learning (ML), most prominently Artificial Neural Networks (ANNs).

Many ML algorithms, deep ANNs in particular, are black-box methods: it is not clear to the user how the system arrives at a solution, and it is likewise impossible to predict the system's outputs for previously unseen input data.

This lack of transparency has several disadvantages. Black-box methods are susceptible to targeted data manipulation, so-called adversarial attacks, which can lead to security issues. In the context of automation engineering, the lack of explainability and transparency of ML models reduces their acceptance and hinders their adoption in real production plants.

The field of Explainable Artificial Intelligence (XAI) addresses these problems. Methods from this field aim to strike a balance between model complexity and explainability and offer transparent, white-box AI methods.
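To illustrate what "white-box" means in contrast to a black-box model, the following is a minimal sketch of a fully transparent rule-based classifier whose every prediction can be traced step by step. The feature names, thresholds, and state labels are purely illustrative assumptions, not taken from any specific XAI method or library.

```python
def predict_with_explanation(temperature, pressure):
    """Classify a hypothetical plant state and return the full reasoning chain.

    Unlike a black-box ANN, every decision step is explicit and inspectable,
    so the user can see exactly why a given output was produced.
    """
    trace = []
    if temperature > 80.0:
        trace.append(f"temperature {temperature} > 80.0 -> check pressure")
        if pressure > 2.5:
            trace.append(f"pressure {pressure} > 2.5 -> state 'critical'")
            return "critical", trace
        trace.append(f"pressure {pressure} <= 2.5 -> state 'warning'")
        return "warning", trace
    trace.append(f"temperature {temperature} <= 80.0 -> state 'normal'")
    return "normal", trace

state, explanation = predict_with_explanation(temperature=85.0, pressure=3.0)
print(state)  # -> critical
for step in explanation:
    print(step)
```

The decision logic and the explanation are one and the same structure, which is the essential property that black-box models lack; real XAI methods aim to retain this traceability while handling far more complex models.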

Such methods can be used to support or automate the generation of solutions for automation tasks. Use cases for explainable AI methods in automation include the automatic engineering of high-level process control applications and the automatic networking and orchestration of modular systems.