Robust Training based on Semantic Adversarials

Researcher: Julian Knaup
Research Theme: R2 Prosilience & Robustness

ML models are susceptible to small perturbations of their inputs that cause them to produce misclassifications. If such a perturbed input additionally lies close to the original input, it is referred to as an adversarial example. Semantic Adversarials extend this concept to larger, more natural perturbations that modify high-level features of the input while retaining its semantic context, for example a change in rotation, lighting, or color rather than imperceptible pixel noise. This threat needs to be counteracted, especially in safety-critical domains. The aim of this project is to transfer Semantic Adversarials to an industrial domain and to work towards robustness against this vulnerability.
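
To illustrate the underlying idea (not the project's actual method), the sketch below searches over semantic transformation parameters for one that flips a classifier's prediction, and then folds the resulting input back into training. It assumes a PyTorch image classifier; the transformation set (rotation and brightness), the random-search budget, and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def find_semantic_adversarial(model, x, y, n_trials=50,
                              max_angle=30.0, brightness_range=(0.7, 1.3)):
    """Randomly search over semantic parameters (rotation angle,
    brightness factor) for a transformation that makes `model`
    misclassify the batch `x`. Returns the transformed batch,
    or None if no misclassifying transformation is found."""
    model.eval()
    with torch.no_grad():
        for _ in range(n_trials):
            angle = (torch.rand(1).item() * 2 - 1) * max_angle
            factor = torch.empty(1).uniform_(*brightness_range).item()
            # Apply the semantic perturbation: the content of the image
            # (and hence its label) is preserved, only high-level
            # appearance changes.
            x_sem = TF.adjust_brightness(TF.rotate(x, angle), factor)
            pred = model(x_sem).argmax(dim=1)
            if (pred != y).any():  # at least one sample is misclassified
                return x_sem
    return None


def robust_training_step(model, optimizer, x, y):
    """One adversarial-training step: augment the batch with a
    semantic adversarial (if one is found) and minimize the loss
    on clean and perturbed inputs jointly."""
    x_adv = find_semantic_adversarial(model, x, y)
    model.train()
    inputs = x if x_adv is None else torch.cat([x, x_adv])
    targets = y if x_adv is None else torch.cat([y, y])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Random search is used here only to keep the sketch independent of whether the transformations are differentiable; gradient-based attacks over the semantic parameters are a common alternative in the literature.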