THE PROJECT

Action Plan

AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking. As our society faces a dramatic increase in inequalities and intersectional discrimination, we must prevent AI systems from amplifying social inequalities and instead use them to mitigate those inequalities, starting with the AI developers.

For these systems to be trusted, domain experts and stakeholders need to trust their decisions. AEQUITAS facilitates the auditing of machine learning models for discrimination and bias, and supports informed, equitable decisions around the development and deployment of predictive risk-assessment tools.

How?

The AEQUITAS controlled testing and experimentation environment aims to assess existing AI systems for fairness and to provide mitigation and reparation methodologies.
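To make the idea of a fairness assessment concrete, a minimal sketch in Python is shown below. It computes two common group-fairness indicators, demographic parity difference and equal opportunity difference, over a model's predictions. This is purely illustrative: the metrics, data, and group labels are hypothetical examples, not the AEQUITAS methodology itself.

    import numpy as np

    def demographic_parity_diff(y_pred, group):
        # Gap in positive-prediction rates across groups.
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return max(rates) - min(rates)

    def equal_opportunity_diff(y_true, y_pred, group):
        # Gap in true-positive rates across groups.
        tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
        return max(tprs) - min(tprs)

    # Hypothetical predictions and a binary protected attribute.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
    print("Equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))

A gap close to zero on indicators like these is a necessary but not sufficient signal of fairness; a full audit also considers the data, its provenance, the deployment context, and the groups affected.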

Enterprises, public bodies, associations, representative bodies, and citizens provide input to AEQUITAS on the individual dimension, along with existing tools and datasets.

Case studies from companies, public bodies, and hospitals, combined with input from the Consortium's association partners, will provide the indicators used to develop the AEQUITAS technology.

AEQUITAS will also engage with related projects, initiatives, and networks at the national and international levels to design new AI systems by applying anticipatory fairness-by-design practices and methodologies.

Our Target Groups

Key Players

AI developers

Companies in public and private sectors

Broader scientific community

Underrepresented minority groups

Advocates

Scientists working on AI

General public

National and local associations

Context Setters

European platforms, committees, and agencies

European policymakers and EC directorates

Regional/national authorities