A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
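As a rough illustration of what toolkits like this compute, and not the API of any particular project, two common group fairness metrics (statistical parity difference and disparate impact) can be written in a few lines of plain Python. The function names and toy data below are hypothetical.

```python
# Minimal sketch (not tied to any specific library): two common group
# fairness metrics computed directly from predictions and a protected attribute.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_pred=1 | group=0) - P(y_pred=1 | group=1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact(y_pred, group):
    """Ratio of positive rates; values far below 1.0 suggest disparate impact."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Toy example: the model approves 2/4 of group 0 and 3/4 of group 1.
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))  # -0.25
print(disparate_impact(y_pred, group))               # ~0.67
```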
Bias Auditing & Fair ML Toolkit
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
A curated list of Robust Machine Learning papers/articles and recent advancements.
Examples of unfairness detection for a classification-based credit model
Article for Special Edition of Information: Machine Learning with Python
⚖️ A bias audit tool for binary decision-making systems
BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems
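To make the metamorphic-testing idea concrete (this is an illustrative sketch, not BiasFinder's actual implementation): a sentiment prediction should not change when only a protected attribute, such as gendered wording, is swapped in the input. The `predict_sentiment` callable and word list below are hypothetical stand-ins.

```python
# Sketch of metamorphic bias testing for a sentiment model:
# swapping gendered terms should leave the predicted label unchanged.
GENDER_SWAPS = {"he": "she", "him": "her", "his": "her", "man": "woman"}

def mutate_gender(text: str) -> str:
    """Swap gendered words to produce a metamorphic 'twin' of the input."""
    return " ".join(GENDER_SWAPS.get(tok, tok) for tok in text.lower().split())

def check_bias(predict_sentiment, texts):
    """Return inputs whose predicted label changes after the gender swap."""
    failures = []
    for text in texts:
        mutant = mutate_gender(text)
        if predict_sentiment(text) != predict_sentiment(mutant):
            failures.append((text, mutant))
    return failures

# Toy model under test: unfairly predicts "negative" whenever "she" appears.
biased_model = lambda t: "negative" if "she" in t.split() else "positive"
print(check_bias(biased_model, ["he wrote a great review", "the food was fine"]))
```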
Machine Learning Bias Mitigation
Implementation of the Fair Dummies method
FaireduPlus: Enhancing Intersectional Fairness in Education-Focused Machine Learning Using Synthetic Data
Personal repository containing work folders, group projects, and other materials
Search-Based Fairness Testing
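Search-based fairness testing, roughly, explores the input space for "individual discrimination" instances: inputs whose prediction flips when only the protected attribute changes. The sketch below uses plain random search with hypothetical `model` and feature ranges; it is a generic illustration under those assumptions, not any specific tool's algorithm.

```python
# Random search for individual discrimination instances: pairs of inputs that
# differ only in the protected attribute but receive different predictions.
import random

def search_discrimination(model, feature_ranges, protected_idx, n_trials=1000, seed=0):
    rng = random.Random(seed)
    found = []
    for _ in range(n_trials):
        x = [rng.randint(lo, hi) for lo, hi in feature_ranges]
        lo, hi = feature_ranges[protected_idx]
        for alt in range(lo, hi + 1):
            if alt == x[protected_idx]:
                continue
            x_alt = list(x)
            x_alt[protected_idx] = alt
            if model(x) != model(x_alt):  # same features, different protected value, different outcome
                found.append((x, x_alt))
                break
    return found

# Toy model that (unfairly) conditions on the protected attribute at index 2.
toy_model = lambda x: int(x[0] + x[1] > 10 and x[2] == 1)
print(len(search_discrimination(toy_model, [(0, 10), (0, 10), (0, 1)], protected_idx=2)))
```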
An official implementation for "Fairness Evaluation in Deepfake Detection Models using Metamorphic Testing"
Agent Indoctrination – AI Safety, Bias, Fairness, Ethics & Compliance Testing Framework 🚀
Final project for the course "Equidad en Aprendizaje Automático" (Fairness in Machine Learning) in the Data Science degree program at UNSAM, first semester of 2025
Analysis of student and parent datasets for bias mitigation and fairness
Automatic Location of Disparities (ALD) for algorithmic audits.
Add a description, image, and links to the fairness-testing topic page so that developers can more easily learn about it.
To associate your repository with the fairness-testing topic, visit your repo's landing page and select "manage topics."