Robust AI: Lecture with Daniel Neider (TU Dortmund) on Neural Network Verification

On Thursday, Feb 15, Prof. Daniel Neider from Technische Universität Dortmund gave a talk titled “A Gentle Introduction to Neural Network Verification” as part of the SAIL Lecture Series on Robust AI. Nearly 100 people attended the lecture online and on site at CITEC, Bielefeld University, and we were especially happy about participants who are not part of our network.

💡 What was the lecture about? Neural networks have become a crucial component in image and speech recognition, natural language processing, and autonomous driving. As these systems increasingly impact our lives, ensuring that they function correctly and make decisions that align with human values is essential. This necessity has prompted the development of methods for verifying that neural networks satisfy specific properties, such as safety, robustness, and fairness.
In his talk, Daniel Neider gave a gentle introduction to neural network verification, focusing on two key aspects. First, he discussed typical properties that one may want to verify. Second, he presented an overview of the most common verification techniques, including deductive verification and abstract interpretation. The goal was to provide a compact yet comprehensive introduction to this burgeoning field, enabling participants to understand the challenges and opportunities in verifying neural networks and equipping them with tools for developing more reliable and trustworthy AI systems. Here at SAIL, we think this goal was achieved, and we thank Daniel for his inspiring lecture.
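To give a flavour of what verification by abstract interpretation looks like in practice, here is a minimal, illustrative sketch (not taken from the lecture): interval bound propagation through a tiny ReLU network to check local robustness, i.e. that all inputs within an L-infinity ball of radius eps around a point receive the same classification. The network weights and the input point are arbitrary example values chosen for this sketch.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    # Interval arithmetic for y = W @ x + b: positive weights map lower
    # bounds to lower bounds; negative weights swap the roles.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def verify_robustness(x, eps, W1, b1, W2, b2):
    """Soundly check local robustness of a 2-layer ReLU net around x."""
    # Propagate the box [x - eps, x + eps] through the network.
    lo, hi = x - eps, x + eps
    lo, hi = affine_bounds(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = affine_bounds(lo, hi, W2, b2)
    # The prediction at the centre point x itself.
    pred = np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2)
    # Robust if the predicted class's lower bound beats every other
    # class's upper bound. (Sound but incomplete: a False result may
    # just mean the interval bounds are too loose.)
    return all(lo[pred] > hi[j] for j in range(len(hi)) if j != pred)

# Example 2-2-2 network with hand-picked weights.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 0.0], [0.0, 1.0]]);  b2 = np.zeros(2)
x = np.array([1.0, 0.0])
print(verify_robustness(x, 0.05, W1, b1, W2, b2))  # prints True
```

Note that interval propagation is sound but incomplete: if the answer is `False` (as it is for this network with a large radius such as `eps = 0.5`), the network may still be robust, and a refined abstraction or a complete method such as deductive verification with an SMT solver would be needed to decide.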