R2: Prosilience and Long-term Robustness through Human-centered Design

AI technologies can be supported by mathematical guarantees, yet these guarantees hold only for the well-defined settings of the introduction phase and for 'technical' objectives rather than human-centered ones (e.g., fairness of an ITS towards users of different ethnicities or genders). Complex ITS violate the conditions of such mathematical guarantees over their life-cycle, since the behavior of human partners, the social context, and the technical environment can be subject to unexpected variation, and the long-term societal and technological impact of ITS can hardly be foreseen. Within SAIL, we aim for prosilience, i.e., proactive action to guarantee resilience over time, as a fundamental design principle for robust behavior in unexpected situations with respect to diverse (technology-centered and human-centered) requirements. We address three overarching research questions:

  • How do humans perceive restrictions and errors of ITS? How can the limitations and uncertainties of ITS be modeled and communicated? (A minimal sketch of one possible uncertainty mechanism follows this list.)
  • How can emerging risks of ITS be identified, including technological malfunction and possible harm to individual humans or to social relations? What self-stabilizing strategies within an interactive, human-centered design can counter such long-term deteriorating effects? (See the second sketch below.)
  • How can life-long flexibility of ITS in evolving environments be realized so that changing preferences are integrated? How can specialized ITS be safely deprecated, and how should updates that break with established interaction patterns be handled?
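To make the first question concrete, one established technique that could serve as an uncertainty model, sketched below under illustrative assumptions, is split conformal prediction: it wraps any classifier and outputs prediction sets whose size is itself a communicable signal of uncertainty (a singleton set reads as a confident answer, a larger set as a cue for the ITS to defer or explain). The toy data, the random-forest model, and the 90% coverage target are assumptions for this example, not part of SAIL's design.

```python
# A minimal, hypothetical sketch of modeling and communicating ITS uncertainty
# via split conformal prediction. All data and parameters are illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for an ITS decision task.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Calibration: nonconformity score = 1 - predicted probability of true class.
alpha = 0.1  # target miscoverage rate, i.e., 90% coverage
cal_probs = model.predict_proba(X_cal)
scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]
# Conformal quantile with the usual finite-sample correction.
level = np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores)
q = np.quantile(scores, level, method="higher")

# Prediction sets: every class whose nonconformity score is below threshold.
test_probs = model.predict_proba(X_test)
pred_sets = test_probs >= 1.0 - q  # boolean matrix of shape (n_test, n_classes)

# A set of size 1 can be communicated as a confident answer; larger sets
# signal that the ITS should defer or surface its uncertainty to the user.
set_sizes = pred_sets.sum(axis=1)
print(f"mean set size: {set_sizes.mean():.2f}, "
      f"ambiguous rate (size > 1): {(set_sizes > 1).mean():.2%}")
```

The set size offers a human-interpretable handle on uncertainty, which is one reason conformal methods are a natural candidate here; any other calibrated uncertainty estimate could play the same role.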
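For the second question, the sketch below illustrates one plausible trigger for self-stabilizing action, again under stated assumptions rather than as SAIL's actual mechanism: a two-sample Kolmogorov-Smirnov test compares a reference window of an interaction signal against the most recent window, and a significant distribution shift flags a potentially deteriorating effect. The monitored signal (a simulated user-override rate), the window size, and the significance threshold are all hypothetical choices for this example.

```python
# A hedged sketch of drift monitoring as a trigger for self-stabilization.
# The signal, windows, and threshold are illustrative assumptions.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Simulated interaction signal: stable at first, then drifting upward,
# as if users increasingly override the ITS over time.
stable = rng.normal(loc=0.10, scale=0.03, size=600)
drifting = rng.normal(loc=0.18, scale=0.03, size=400)
signal = np.concatenate([stable, drifting])

window = 200                 # size of reference and current windows
reference = signal[:window]  # behavior observed at introduction time

for t in range(window, len(signal) - window, 50):
    current = signal[t:t + window]
    stat, p_value = ks_2samp(reference, current)
    if p_value < 0.01:
        # In a deployed ITS this could trigger recalibration, a fallback to
        # a conservative policy, or escalation to a human supervisor.
        print(f"t={t}: drift suspected (KS={stat:.2f}, p={p_value:.1e})")
```

Such monitoring only detects statistical change; whether a detected shift constitutes harm to individuals or social relations remains a judgment that the interactive, human-centered design must support.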