FaUCI (Fairness Under Constrained Injection)

Overview

FaUCI (Fairness Under Constrained Injection) is an in-processing fairness technique that incorporates fairness constraints directly into the model’s loss function. It works by adding a fairness regularization term to the standard loss function, allowing the model to optimize for both prediction accuracy and fairness simultaneously. This approach provides a flexible framework for incorporating various fairness metrics as regularization terms.

What Problem Does It Solve?

FaUCI addresses the challenge of creating fair machine learning models without sacrificing too much predictive performance. By incorporating fairness metrics directly into the training process, it allows for explicit control over the trade-off between accuracy and fairness. This approach is particularly useful when specific fairness constraints must be satisfied while maintaining model performance.

Key Concepts

How It Works

  1. Loss Function Design:
    • The loss function is a weighted combination of a standard loss (e.g., MSE, BCE) and a fairness regularization term;
    • The formula is: total_loss = (1 - weight) * base_loss + weight * regularizer_loss;
    • The weight parameter controls the importance of fairness vs. accuracy.
  2. Fairness Metrics as Regularizers:
    • Various fairness metrics can be used as regularization terms;
    • Common options include Statistical Parity Difference (SPD) or Disparate Impact (DI);
    • The fairness metric is computed on mini-batches during training.
  3. Training Process:
    • The model is trained using standard gradient-based optimization;
    • Gradients flow through both the prediction loss and the fairness regularization term;
    • The model learns to make predictions that are both accurate and fair.
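The weighted combination in step 1 can be sketched in a few lines of plain Python (helper names here are hypothetical; FairLib's own implementation operates on PyTorch tensors so that gradients can flow through both terms):

```python
import math

def bce(y_true, y_pred, eps=1e-7):
    """Standard binary cross-entropy: the base loss."""
    return -sum(
        yt * math.log(max(yp, eps)) + (1 - yt) * math.log(max(1 - yp, eps))
        for yt, yp in zip(y_true, y_pred)
    ) / len(y_true)

def spd(groups, y_pred):
    """Statistical Parity Difference on a mini-batch:
    |mean prediction for group 1 - mean prediction for group 0|."""
    g1 = [p for g, p in zip(groups, y_pred) if g == 1]
    g0 = [p for g, p in zip(groups, y_pred) if g == 0]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

def fauci_loss(y_true, y_pred, groups, weight=0.5):
    """total_loss = (1 - weight) * base_loss + weight * regularizer_loss"""
    return (1 - weight) * bce(y_true, y_pred) + weight * spd(groups, y_pred)

# Toy mini-batch: predictions favour group 1, so the SPD term is non-zero.
y_true = [1, 0, 1, 0]
y_pred = [0.9, 0.8, 0.2, 0.1]
groups = [1, 1, 0, 0]
print(fauci_loss(y_true, y_pred, groups, weight=0.5))  # ≈ 0.779 for this batch
```

Setting `weight=0` recovers the plain base loss, while `weight=1` optimizes fairness alone; intermediate values trade one off against the other.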

Implementation Details

The FairLib implementation is flexible: it can wrap any PyTorch model and accepts various fairness metrics as the regularization term.
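To make the training process concrete, the toy run below trains a logistic model on data where the sensitive attribute predicts the label, once with `weight=0` (accuracy only) and once with `weight=0.8`. This is an illustrative sketch only, using finite-difference gradients in plain Python rather than FairLib's actual PyTorch autograd machinery:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, X):
    return [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x in X]

def spd(groups, preds):
    """Statistical Parity Difference between the two groups' mean predictions."""
    g1 = [p for g, p in zip(groups, preds) if g == 1]
    g0 = [p for g, p in zip(groups, preds) if g == 0]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

def total_loss(w, X, y, groups, weight):
    """(1 - weight) * BCE + weight * SPD, as in the FaUCI loss formula."""
    preds = predict(w, X)
    eps = 1e-7
    bce = -sum(yt * math.log(max(p, eps)) + (1 - yt) * math.log(max(1 - p, eps))
               for yt, p in zip(y, preds)) / len(y)
    return (1 - weight) * bce + weight * spd(groups, preds)

def train(X, y, groups, weight, lr=0.5, steps=500, h=1e-5):
    """Gradient descent with numeric gradients (for brevity of the sketch)."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = []
        for i in range(len(w)):
            up, down = w[:], w[:]
            up[i] += h
            down[i] -= h
            grad.append((total_loss(up, X, y, groups, weight)
                         - total_loss(down, X, y, groups, weight)) / (2 * h))
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Feature 0 is the sensitive attribute; feature 1 is an ordinary feature.
X = [[1, 1], [1, 0], [1, 1], [1, 0], [0, 1], [0, 0], [0, 1], [0, 0]]
y = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [x[0] for x in X]

w_base = train(X, y, groups, weight=0.0)  # accuracy only
w_fair = train(X, y, groups, weight=0.8)  # mostly fairness

print(spd(groups, predict(w_base, X)))  # noticeably above 0
print(spd(groups, predict(w_fair, X)))  # pushed toward 0
```

With `weight=0` the model freely exploits the sensitive feature, producing a large SPD; raising the weight pushes the gradient to suppress that dependence, shrinking the disparity at some cost in fit.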

Example Usage

import fairlib as fl
from fairlib.inprocessing.fauci import Fauci
from torch import nn

# FairLib DataFrame with one protected attribute
df = fl.DataFrame(...)
df.targets = "label"
df.sensitive = "gender"

# Create a base PyTorch model
base_model = nn.Sequential(
    nn.Linear(df.shape[1], 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid()
)

# Wrap with Fauci
model = Fauci(
    model=base_model,
    fairness_metric="statistical_parity_difference",
    weight=0.5  # Balance between accuracy and fairness
)

# Train the model
model.fit(df, num_epochs=20, batch_size=64)

# Make predictions
y_pred = model.predict(df)

Advantages and Limitations

Advantages

  • Explicit, tunable control over the accuracy–fairness trade-off through a single weight parameter;
  • Works with any PyTorch model and supports multiple fairness metrics as regularizers;
  • Fairness is enforced during training itself, so no separate pre- or post-processing step is needed.

Limitations

  • Requires (re)training the model, so it cannot be applied to an existing black-box predictor;
  • The fairness metric must be computable on mini-batches, and batch-level estimates can be noisy when a protected group is rare;
  • The weight hyperparameter must be tuned: too high a value can noticeably degrade predictive performance.

References

Magnini, M., Ciatto, G., Calegari, R., & Omicini, A. (2024). Enforcing Fairness via Constraint Injection with FaUCI. Aachen: CEUR-WS.