The AEQUITAS project invites scholars to submit original research on fairness and bias in AI to a special track of JAIR (the Journal of Artificial Intelligence Research) between December 1, 2023 and April 1, 2024.
The track's primary focus is to highlight the importance of responsible and human-centred approaches to addressing these issues. As articles are accepted, the contents of the special track will be made available. Furthermore, this JAIR Special Track will feature a curated selection of papers in extended form from the 1st AEQUITAS Workshop on Fairness and Bias in AI, which was held in Kraków, October 2023, in conjunction with ECAI 2023.
AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking to guide decisions in important societal spheres, including hiring decisions, university admissions, loan granting, medical diagnosis, and crime prediction.
As our society faces a dramatic increase in inequalities and intersectional discrimination, we must prevent AI systems from amplifying this phenomenon and instead employ AI to mitigate it. As we use automated decision support systems to formalize, scale, and accelerate processes, we have the opportunity, as well as the duty, to revisit existing processes for the better, detecting, diagnosing, and repairing patterns of injustice rather than perpetuating them. For these systems to be trusted, domain experts and stakeholders must be able to trust the decisions they produce.
Despite the increased amount of work in this area in the last few years, we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI and which socio-technical options to combat bias and discrimination are both realistically possible and normatively justified.
Submissions should address topics related to fairness and bias in AI.
Call for Submissions
Two types of articles will be published:
• Regular journal articles, aiming to advance the state of the art of fairness and bias in AI.
• Viewpoint articles: short articles of up to 2000 words, dedicated to technical views and opinions on fairness and bias in AI, in which positions are substantiated by facts or principled arguments.
Regular journal articles in the domain of fairness in AI should present innovative research that contributes to the field. The novelty may stem from various characteristics, including:
i) the development and presentation of new AI techniques specifically designed for fairness in AI,
ii) the application of existing AI techniques to previously unexplored domains within fairness in AI,
iii) conducting novel experimental comparisons of different fairness AI techniques through computational experiments or user studies,
iv) introducing fresh analyses, theories, or models that enhance our understanding of fairness in AI.
Viewpoint papers are dedicated to technical and critical views and opinions on the field of fairness and bias in AI and should present a novel viewpoint on a problem or a novel solution to one. They need not contain primary research data, but positions should be substantiated by facts or principled arguments that bring new insights or opinions to a debate.
Submission period: December 1, 2023 - April 1, 2024
Anticipated date of completion for handling the submissions, including papers invited for resubmission with a second round of reviews: April 2025
Roberta Calegari, University of Bologna (Contact for Enquiries)
Andrea Aler Tubella, Umeå University
Virginia Dignum, Umeå University
Michela Milano, University of Bologna
Get involved: submit here.