The Safe and Trustworthy AI Workshop (ICLP 2023)
London, July 9th or 10th
Location: Imperial College London
Hosted by: ICLP 2023
Paper Submission: 22nd May, extended from 19th May (anywhere on Earth)
Notification: 12th June, extended from 9th June (anywhere on Earth)
Workshop: 9th/10th July (TBC)
Aims and Focus
The second STAI workshop is focused on the broad area of safe and trustworthy AI.
Call for Papers
In recent years, there have been considerable advances in the capabilities of AI systems. However, guaranteeing that these systems are safe and trustworthy remains a challenge. An AI system is considered safe when we can provide some assurance about its behaviour, and trustworthy if the average user can have well-placed confidence in the system and its decision-making.
This workshop takes a broad view of safety and trustworthiness, covering areas such as the following.
Formal verification of system behaviour
Explainable and interpretable AI
Knowledge representation and reasoning
Safe multi-agent systems
Coordination and cooperative AI
Fairness, bias, and algorithmic discrimination
AI ethics and value alignment
Robustness and failures of generalisation
AI policy and regulation, including the use of agent-based modelling to better understand the consequences of such policy
The use of norms for ensuring alignment of multi-agent systems with certain values
Best Paper Awards
There will be an award pool (amount TBC) for the best papers and posters accepted to this workshop. We strongly encourage submissions in all areas of safe and trustworthy AI, spanning both contemporary and future concerns. The award pool will be divided among the winning papers.
STAI23 aims to encourage discussion and sharing of ideas, and to build connections between people working in related areas. There will not be any published proceedings for STAI23; this allows us to welcome not only original, unpublished papers, but also papers that have been published in a relevant conference or journal, and work that is under review at other relevant venues.
STAI23 offers three types of submissions.
Regular original papers (8 pages + references, in TPLP format) present more mature work that includes some (perhaps preliminary) results, and that has not been previously published or accepted for publication and is not currently under review by another conference or journal.
Short original papers (4 pages + references, in EPTCS format) are intended for less well-developed work, where results may still be forthcoming, and that has not been previously published or accepted for publication and is not currently under review by another conference or journal.
Published papers or papers under review (15 pages + references, in the original submitted format) report on interesting and relevant work that has been published (or accepted for publication) in the last 18 months, or that is currently under review at another venue.
It is the authors’ responsibility to ensure that submitting a published paper, or paper under review, does not violate the conditions of the venue where that work has previously been submitted/published.
Authors of accepted papers may be invited to give a talk or present a poster. There will be a prize for the best paper and a prize for the best poster. We will not give prizes to submissions in the published papers track.
All submissions must be written in English.
The reviewing process is double-blind, so submissions should be anonymised and should not contain information that could identify the authors.
Use the button below to submit; it will take you to the EasyChair submission form. Choose 'make a new submission' and then 'WORKSHOP: Safe and Trustworthy AI (STAI)'.
Call for PC
In addition to advancing research in the area of Safe and Trustworthy AI, this workshop aims to give early career researchers (ECRs) working in relevant fields the opportunity to gain experience of serving on a PC, and will provide training and support for this. We invite both experienced reviewers and ECRs to join our PC (ensuring that all papers receive at least one review from an experienced PC member).
If you are an ECR who works in a field relevant to the safety and/or trustworthiness of AI and would like to be considered for this opportunity, please register your interest below, no later than 15 May. No prior experience in reviewing is necessary.
EDIT: Due to an overwhelming number of registrations of interest, we can accept no more.
Registration and Waivers
Registration for the workshop is via ICLP2023. At least one author of each accepted workshop paper is required to register for ICLP2023.
ICLP offers some support options for students.
We hope to be able to provide some financial support for those who may otherwise find it difficult to attend STAI23. (Details TBC.)