R1: Human agency to shape cooperative intelligence

Human agency and the capability to shape systems according to human needs, rather than vice versa, are key requirements for sustainable human-centric design. AI technologies have contributed greatly to the development of autonomous systems for well-specified tasks (e.g., playing games). Yet, replacing machine autonomy with human-machine partnership can yield better results in less well-defined environments. Further, human intervention and human-machine interaction are crucial cornerstones for the long-term reliability of AI in human-centered environments and intelligent assistance systems. In SAIL, we address the following key challenges to enable human agency:

  • Which orchestration of verbal and non-verbal signals enables natural modes of human interaction with ITS? Taking into account not only short-term effects but also long-term dynamics, we investigate how natural interaction patterns are established and how they lead to trust in ITS.
  • How can dialogues serve as a natural and flexible source of information about human preferences? To what extent does this enable intuitive model individualization and error correction by a human partner while respecting privacy concerns?
  • What risks arise from the increasing capability of ITS for proactive behavior? How can powerful but opaque ITS be enhanced with system transparency that provides insight particularly into critical aspects such as conflicting interests of different stakeholders, individual risks, and biases?