For AEQUITAS, October started with an important milestone: the project hosted its first workshop at the 26th European Conference on Artificial Intelligence (ECAI), held on the campus of the Jagiellonian University in Kraków, Poland. The workshop provided a forum for exchanging ideas and presenting results and preliminary work in all areas related to fairness and bias in AI, and offered its 60 participants a multidisciplinary panel covering a wide range of topics in the field.
AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking to guide decisions in important societal spheres, including hiring, university admissions, loan granting, medical diagnosis, and crime prediction. As our society faces a dramatic increase in inequalities and intersectional discrimination, we need to prevent AI systems from amplifying this phenomenon and instead use them to mitigate it. As we use automated decision support systems to formalize, scale, and accelerate processes, we have the opportunity, as well as the duty, to revisit existing processes for the better, avoiding the perpetuation of existing patterns of injustice by detecting, diagnosing, and repairing them. For these systems to be trusted, domain experts and stakeholders need to be able to trust their decisions. Despite the growing body of work in this area in recent years, we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI, and of which socio-technical options for combating bias and discrimination are both realistically possible and normatively justified.
Assistant Professor of Computer Science at University College Dublin (UCD)
Are we being fair about fairness in machine learning?
This talk gave a brief high-level overview of what fairness in machine learning currently looks like, while exploring some of the gaps in the literature where more focussed effort is needed. The idea was not to criticise, but rather to advocate that, to be fair about fairness research, we need to go beyond existing mathematical definitions and approaches to what it means to be fair: specifically, moving towards a more holistic view in which fairness is embedded in a complex system of different actors, goals, and (un-)achievable trade-offs. The talk aimed to give a comprehensive synopsis of currently unsolved problems and underexplored topics in the fairness literature.
Women in AI (#WAI) Poland Ambassador
“Towards an inclusive and equitable AI ecosystem”
Our world is made up of biases. Despite our best efforts, we will remain biased human beings, but AI doesn’t have to reflect that. We simply need to act on multiple levels – by raising awareness, educating, promoting diversity (of gender, age, religion, etc.), and re-evaluating how the AI ecosystem is designed and used for the benefit of a global society. This talk analysed data and examples, including some very fundamental issues, and discussed what more can be done to ensure an inclusive and equitable AI ecosystem.
This workshop held huge significance for the AEQUITAS project on multiple fronts. Firstly, it provided an invaluable occasion to showcase our preliminary findings and gather essential feedback from the research community. Secondly, the workshop acted as a catalyst for fostering collaborations with experts and enthusiasts alike, deepening our collective understanding of the topic. Thirdly, it served as a forum to collect valuable suggestions on addressing the challenges that AEQUITAS faces, offering a diverse range of perspectives. Finally, through our discussions and insights, we actively contributed to raising awareness about the critical issues of fairness and bias in AI, amplifying the importance of our mission.
“Within our AI fairness and bias workshop, we distilled the multifaceted elements of this critical subject. This interdisciplinary exploration enlightened the complex interplay of social, legal, and technical dimensions, providing invaluable insights essential to forging ethically sound AI systems”, said Roberta Calegari, project coordinator.