Authors: Roberta Calegari (UNIBO)

NEWS 19-08-2025

AEQUITAS Experimenter: A New Tool to Build Fair and Legally Aligned AI Systems

Are you ready to test your AI system for fairness compliance?

Visit the AEQUITAS Experimenter and begin your journey toward building fair, trustworthy, and
legally aligned AI—guided by socio-legal principles and supported by robust technical automation.

The AEQUITAS Experimenter is a novel software platform developed within the Horizon Europe
project AEQUITAS – Assessment and Engineering of eQuitable, Unbiased, Impartial and
Trustworthy AI Systems (G.A. 101070363). It is designed to operationalize fairness in Artificial
Intelligence (AI) by bridging the gap between socio-legal principles and technical development
practices. Rooted in a robust, flexible meta-methodology, the Experimenter enables the co-creation
and deployment of fair-by-design AI systems across diverse application domains.

The AEQUITAS Experimenter code is open-source and accessible on GitHub, promoting
transparency, auditability, and extensibility. Developed using clean architecture and domain-
driven design principles, it features a modular deployment pipeline based on Docker, ensuring
scalability and ease of integration across different environments. The platform is ready to use and
already deployed as a live service at http://aequitas.apice.unibo.it, where organizations, researchers,
and developers can interactively explore its capabilities.

At its core, the AEQUITAS Experimenter implements a meta-methodology: a framework that is not
limited to a single use case but adapts to different domains and evolving societal contexts. This
meta-methodology provides a rigorous process for:

  • Translating socio-legal requirements into actionable technical steps.
  • Structuring fairness considerations throughout the AI lifecycle.
  • Supporting both technical and non-technical users in developing fair systems.

The foundation of this approach is a Question–Answering (Q/A) mechanism, modeled as a
directed graph that guides users through a dynamic, context-sensitive flow of questions. Each
user's path is personalized based on the answers given, allowing the system to tailor technical
actions, such as fairness metric selection or bias mitigation strategies, to the specific project
domain and stakeholder needs.
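
To make the mechanism concrete, the sketch below shows in a few lines of Python how an
answer-driven directed graph of questions can be represented and traversed. The node names,
questions, and actions are hypothetical placeholders, not the Experimenter's actual
questionnaire content.

```python
# Illustrative sketch only: nodes, questions, and actions are invented,
# not the Experimenter's real questionnaire.
QA_GRAPH = {
    "start": {
        "question": "Does your dataset contain sensitive attributes (e.g., sex, age)?",
        "edges": {"yes": "risk_check", "no": "done"},
    },
    "risk_check": {
        "question": "Is your use case high-risk under the EU AI Act?",
        "edges": {"yes": "strict_audit", "no": "standard_audit"},
        "action": "suggest_fairness_metrics",
    },
    "strict_audit": {"question": None, "edges": {}, "action": "run_full_bias_audit"},
    "standard_audit": {"question": None, "edges": {}, "action": "run_basic_bias_audit"},
    "done": {"question": None, "edges": {}, "action": None},
}

def walk(graph, node="start"):
    """Traverse the directed graph, asking each question and following
    the edge that matches the user's answer."""
    while True:
        entry = graph[node]
        if entry.get("action"):
            print(f"-> scheduling technical step: {entry['action']}")
        if not entry["question"]:
            return
        answer = input(entry["question"] + " [yes/no] ").strip().lower()
        node = entry["edges"].get(answer, node)  # re-ask on unrecognized input

walk(QA_GRAPH)
```

Because each answer selects the outgoing edge, two users starting from the same node can end
up with entirely different sequences of technical steps, which is exactly what makes the flow
context-sensitive.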

One of the platform’s most innovative features is its ability to translate abstract legal and ethical
requirements—such as those found in the EU’s AI Act—into concrete technical implementations.
The Experimenter achieves this through:

  • A domain-aware questionnaire co-designed by legal, ethical, and technical experts.
  • Automation scripts that compute fairness metrics, detect biases, and apply mitigation
    algorithms.
  • A modular software architecture using an event-driven system based on REST APIs and
    Kafka brokers, ensuring extensibility and responsiveness (a sketch of this pattern
    follows the list).
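
As a rough illustration of the event-driven pattern, the following Python sketch consumes
dataset-upload events with the kafka-python client. The topic name, broker address, and
message schema are invented for illustration and are not the Experimenter's actual contract.

```python
# Hypothetical event-driven worker: topic, broker, and payload fields are
# assumptions for this sketch, not the platform's real API.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dataset-uploaded",                   # hypothetical topic name
    bootstrap_servers="localhost:9092",   # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for event in consumer:
    payload = event.value
    # React to the upload by kicking off a fairness analysis job.
    print(f"Received dataset {payload.get('dataset_id')}; "
          f"sensitive features: {payload.get('sensitive_features')}")
```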

For instance, the platform supports fairness metrics like Statistical Parity Difference and Disparate
Impact, and provides mitigation strategies such as Disparate Impact Remover and Learned Fair
Representations. These operations are executed automatically in response to specific user
interactions, such as dataset uploads or sensitive feature identification.
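
For readers new to these metrics, here is a minimal, self-contained Python sketch of how the
two are defined; the toy data and function names are invented for illustration and stand in
for, rather than reproduce, the platform's own implementation.

```python
# Minimal sketch of the two metric definitions on toy data.

def selection_rate(labels, groups, group):
    """P(label == 1) within the given group."""
    in_group = [y for y, g in zip(labels, groups) if g == group]
    return sum(in_group) / len(in_group)

def statistical_parity_difference(labels, groups, unprivileged, privileged):
    # SPD = P(Y=1 | unprivileged) - P(Y=1 | privileged); 0 means parity.
    return (selection_rate(labels, groups, unprivileged)
            - selection_rate(labels, groups, privileged))

def disparate_impact(labels, groups, unprivileged, privileged):
    # DI = P(Y=1 | unprivileged) / P(Y=1 | privileged); 1 means parity, and
    # values below 0.8 are a common red flag (the "four-fifths rule").
    return (selection_rate(labels, groups, unprivileged)
            / selection_rate(labels, groups, privileged))

# Toy example: 1 = favorable outcome; "A" privileged, "B" unprivileged.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(labels, groups, "B", "A"))  # 0.25 - 0.75 = -0.5
print(disparate_impact(labels, groups, "B", "A"))               # 0.25 / 0.75 ≈ 0.33
```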

The Experimenter was developed using a participatory design process involving legal scholars,
developers, civil society actors, and representatives of underrepresented groups. This ensures that
fairness is not treated solely as a technical metric, but as a shared societal goal. Validation of the
platform was carried out through multiple focus groups and co-design sessions. These sessions
highlighted user priorities such as the need for human oversight, support for intersectionality,
transparency in automation, and the capacity to personalize AI models rather than applying one-
size-fits-all logic.
