Call for papers

AIMMES 2025: AI Fairness and Bias Measurements, Mitigation, Explanation

Background

Bridging Disciplines: Social, Legal, and Technical Perspectives & Solutions

In recent years, research in Fairness and Bias in Artificial Intelligence (AI) has become increasingly critical, especially with the rapid advances in generative AI technologies and the deployment of AI across varied fields such as finance, recruitment, security, healthcare, and public administration. As AI continues to permeate all aspects of society, the complexities surrounding its ethical implications have grown, highlighting the importance of a multifaceted exploration of fairness and bias.

The EU AI Act and other regulatory efforts across the globe reflect the urgency of addressing these issues, as policy frameworks struggle to keep pace with technological innovation. Understanding, measuring, and mitigating bias in AI are no longer solely technical challenges but are deeply intertwined with societal values, legal frameworks, and ethical considerations. This workshop will explore these complexities through interdisciplinary perspectives, also bringing in insights from the social sciences, humanities, and other fields traditionally underrepresented in AI discourse.

This workshop seeks to create an inclusive space for dialogue and knowledge exchange, aimed at deepening the understanding of how diverse perspectives can contribute to the responsible and ethical development of AI technologies. We invite contributions from researchers, practitioners, and thought leaders working in AI fairness, including work that discusses the role of disciplines such as sociology, psychology, law, philosophy, ethics, history, and cultural studies in shaping our understanding of AI fairness and bias. Topics of interest include, but are not limited to:

1. Explaining Bias

  • Interdisciplinary Approaches: How can insights from humanities and social sciences enhance understanding, visualization, and communication of bias in AI systems and datasets?
  • Contextualizing Bias in Social Systems: Examination of how societal structures and historical contexts inform and influence bias in data and algorithms.
  • Domain-specific Case Studies: Illustrations of bias explainability across different fields, leveraging insights from sociology, law, and communication studies.
  • Human-centric XAI: Contributions on Explainable AI (XAI) tools that consider social and cognitive dimensions in bias communication.

2. Measuring Bias

  • New Definitions and Frameworks: Proposals for alternative or refined definitions of fairness and bias that incorporate social, cultural, and legal considerations.
  • Human Factors in AI Bias: Studies on how cognitive and societal biases influence AI biases, including research from behavioral sciences.
  • Inclusion of Marginalized Perspectives: Approaches to involving underrepresented or vulnerable communities in the definition and measurement of bias, with case studies from fields such as anthropology and social justice studies.
  • Complexity in Measurement: Addressing the challenge of bias measurement in multi-attribute and multimodal datasets, e.g. images, graphs, etc.

3. Mitigating Bias

  • Trade-offs in Fairness: Exploration of methods for balancing multiple notions of fairness, with insights from ethics, philosophy, and political science on equity and justice.
  • Synthetic Data Use in Mitigation: Examination of synthetic data generation as a tool for bias mitigation, including discussions on its ethical implications.
  • Sector-specific Mitigation Tools and Strategies: Application-focused research and tools for addressing bias in critical areas such as hiring, credit scoring, and facial recognition, with perspectives on regulatory and societal impacts.
  • Fairness in AI Systems Design: Studies on how incorporating social science and humanities knowledge into AI design processes can lead to more ethical and socially responsible systems.

Call for Submissions

Submission Details

Submissions may include new methods and algorithms, empirical studies, theoretical frameworks, case studies, or methodologies that bridge the gap between technical and social science approaches to AI fairness and bias. We particularly encourage submissions that incorporate collaborative research efforts across disciplines.

We encourage the submission of original and non-original contributions. In particular, authors can submit:

  • Regular papers (max. 12 pages + references – CEUR.ws format);
  • Short/Position/Discussion papers (max. 6 pages + references – CEUR.ws format).

Submissions should be made via EasyChair at the following link.

For papers that present original work, authors will have the option (not mandatory) of including their work in the workshop proceedings (to be published through ceur-ws.org/) or opting out. Non-original work will not be included in the workshop proceedings. Authors of accepted papers are expected to attend the workshop in person to present their work.

All submitted papers will be evaluated by at least two members of the program committee, based on originality, significance, relevance, and technical quality. Submissions of full research papers must be in English, in PDF format in the CEUR-WS conference format available at this link or at this link if an Overleaf template is preferred.

Submissions should be single-blind, i.e. authors’ names should be included in the submissions. Submissions must be made through the EasyChair conference system prior to the specified deadline (all deadlines refer to GMT). Discussion papers are extended abstracts presenting recent application work (published elsewhere), a position, or open problems with clear and concise formulations of current challenges.

Organisers and Program Co-Chairs

Roberta Calegari – University of Bologna – coordinator of AEQUITAS (E-mail: roberta.calegari@unibo.it)

Carlos Castillo – ICREA and Universitat Pompeu Fabra – coordinator of FINDHR

Symeon Papadopoulos – CERTH/ITI – coordinator of MAMMOth

Roger Soraa – Norwegian University of Science and Technology – coordinator of BIAS

Key Dates

Papers Due: February 15, 2025

Acceptance Notifications: February 28, 2025

Workshop in Barcelona (In-person): March 20, 2025

Get involved.

Submit here.