With the current advances in artificial intelligence, AI systems can now be seen in almost every field, from education to transportation, and from manufacturing to smart cities. Current practices raise ethical and trustworthiness concerns for these emerging applications. According to the Ethics Guidelines for Trustworthy AI, developed by the High-Level Expert Group on AI, there are generally accepted criteria for an AI system to be considered trustworthy, such as being accountable, non-discriminatory and fair, explainable, robust and secure, transparent, and environmentally friendly.
The topic of Trustworthy AI is currently receiving significant interest from academia, industry, policy-makers, and society. The technical focus to date has been on machine learning and data analytics approaches to AI. However, constraint programming has a critical role to play, and this needs to be explored and discussed. This workshop will provide a forum for such discussions, and a position paper will be a key outcome of the event.
Our main target audience for this workshop is researchers working in either academia or industry. Attendees are expected to be experts in one or more of the following disciplines: constraint programming, computational social choice, algorithmic decision-making, ethics, psychology, sociology, law, or artificial intelligence.
In addition to the normal presentation of papers and discussions, a major objective of this workshop will be the development of a position paper on the opportunities for constraint programming in the development of Trustworthy AI. This position paper will be created in a world café style involving workshop participants and invitees, and will be published on an open-access repository such as arXiv.
Constraint programming and constraint satisfaction have a special role to play in Trustworthy AI. The many opportunities for constraint programming in this area include, but are not limited to:
The workshop will bring together researchers from the constraint programming community, researchers from other complementary domains, and practitioners, to explore the opportunities for CP in ethical and Trustworthy AI. We are also interested in tools and techniques that can be used to ensure the trustworthiness of CP systems themselves, e.g., debugging and robustness testing.
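As a concrete, if deliberately simplified, illustration of the kind of opportunity we have in mind, the sketch below encodes a fairness requirement as a hard constraint in a toy task-allocation problem and solves it by exhaustive enumeration. The model, the variable names, and the particular fairness criterion (workers' loads differ by at most one task) are our own illustrative assumptions, not a prescribed method; a realistic model would use a CP solver rather than brute force.

```python
# Toy sketch (illustrative assumptions only): fairness as a hard
# constraint in a task-allocation CSP, solved by enumeration.
from itertools import product

TASKS = ["t1", "t2", "t3", "t4"]   # hypothetical tasks
WORKERS = ["alice", "bob"]         # hypothetical workers

def is_fair(assignment):
    """Fairness constraint: workers' loads differ by at most one task."""
    loads = [sum(1 for w in assignment.values() if w == worker)
             for worker in WORKERS]
    return max(loads) - min(loads) <= 1

def solve():
    """Enumerate every task-to-worker assignment; keep the fair ones."""
    solutions = []
    for choice in product(WORKERS, repeat=len(TASKS)):
        assignment = dict(zip(TASKS, choice))
        if is_fair(assignment):
            solutions.append(assignment)
    return solutions

if __name__ == "__main__":
    sols = solve()
    print(len(sols), "fair assignments out of", len(WORKERS) ** len(TASKS))
```

The point of the sketch is that a fairness requirement becomes a first-class, declarative constraint that the solver enforces by construction, rather than a property checked after the fact.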
Work that extends papers recently published in, or already accepted for publication at, journals and major conferences is welcome. In such cases, your submission should clearly indicate the extension and also include the accepted paper. Workshop participation will be by invitation only. If you would like to participate, submit either:
Short papers may address an important problem for further research or describe a practical problem or an interesting lesson learned. In addition, we solicit proposals for short demonstrations (at most 3 pages with demonstrations taking at most 15 minutes). All submissions should conform to CP’s formatting guidelines using LNCS format (more details are available here). Authors can submit multiple papers.
starts at 11:00 CEST (UTC+2)
11:05 - 11:20, CEST (UTC+2)
11:20 - 11:45, CEST (UTC+2)
by Alexander Schiendorfer, Guido Tack, Alexander Knapp and Wolfgang Reif
Read the paper.
11:45 - 12:10, CEST (UTC+2)
by Stephan Gocht, Ciaran McCreesh and Jakob Nordström
Read the paper.
12:10 - 12:35, CEST (UTC+2)
by Arthur Gontier, Charlotte Truchet and Charles Prud'Homme
Read the paper.
starts at 13:00 CEST (UTC+2)
13:00 - 13:10, CEST (UTC+2)
13:10 - 13:20, CEST (UTC+2)
13:20 - 14:00, CEST (UTC+2)
14:00 - 14:30 break
14:30 - 14:40, CEST (UTC+2)
14:40 - 14:50, CEST (UTC+2)
14:50 - 15:30, CEST (UTC+2)
15:30 - 15:45, CEST (UTC+2)
Francesca Rossi, IBM Research
Michela Milano, DISI, University of Bologna
Michele Lombardi, DISI, University of Bologna
Roland Yap, National University of Singapore
Steve Prestwich, University College Cork
Susan Leavy, University College Dublin
Toby Walsh, University of New South Wales
Virginia Dignum, Umeå University