Description

Spoken (by a human) version of this article.

When checking for fairness in our algorithmic systems (including processes, models, and rules), we often ask:

What are the personal characteristics or attributes that, if used, could lead to discrimination?

This article provides a basic framework for identifying and categorising these attributes.

About this podcast

A podcast for Financial Services leaders, in which we discuss fairness and accuracy in the use of data, algorithms, and AI.

Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).