STAI Workshop
The Safe and Trustworthy AI Workshop
Aims and Focus
The STAI workshops are focused on the broad area of safe and trustworthy AI.
In recent years, there have been considerable advances in the capabilities of AI systems. However, guaranteeing that these systems are safe and trustworthy remains a challenge. An AI system is considered safe when we can provide some assurance about its behaviour, and trustworthy when the average user can have well-placed confidence in the system and its decision-making.
The STAI workshops take a broad view of safety and trustworthiness, covering areas such as the following:
Formal verification of system behaviour
Explainable and interpretable AI
Knowledge representation and reasoning
Neurosymbolic AI
Safe multi-agent systems
Coordination and cooperative AI
Fairness, bias, and algorithmic discrimination
AI ethics and value alignment
Robustness and failures of generalisation
AI policy and regulation, including the use of agent-based modelling to better understand the consequences of such policies
The use of norms to align multi-agent systems with specified values
Past Workshops
On 9 July 2023, a STAI workshop was held at the International Conference on Logic Programming; see STAI 23 @ ICLP.
On 2 November 2022, a STAI workshop was held at Imperial College London; see STAI 22 @ ICL.