Fairness: Types of bias

Machine learning (ML) models are not inherently objective. ML practitioners train models by feeding them a dataset of training examples, and human involvement in the provision and curation of this data can make a model's predictions susceptible to bias.

When building models, it's important to be aware of common human biases that can manifest in your data, so you can take proactive steps to mitigate their effects.

Reporting bias

Reporting bias occurs when the frequency of events, results, or properties captured in a dataset does not accurately reflect their real-world frequency. This bias can arise because people tend to document circumstances that are unusual or especially memorable, assuming that the ordinary can go without saying.

Historical bias

Historical bias occurs when training data reflects inequities that existed in the world at the time the data was collected, causing a model trained on that data to learn and perpetuate those inequities.

Automation bias

Automation bias is a tendency to favor results generated by automated systems over results generated by non-automated systems, irrespective of the error rates of each.

Selection bias

Selection bias occurs when a dataset's examples are chosen in a way that does not reflect their real-world distribution. Selection bias can take several forms, including coverage bias, non-response bias, and sampling bias, each defined below; a short diagnostic sketch follows the definitions.

Coverage bias

Coverage bias occurs when data is not selected in a representative fashion. For example, a survey that reaches only existing customers will miss the preferences of potential customers who chose a competitor's product.

Non-response bias

Non-response bias (also called participation bias) occurs when data ends up being unrepresentative due to participation gaps in the data-collection process; for example, when one group of respondents opts out of a survey at a higher rate than others.

Sampling bias

Sampling bias occurs when proper randomization is not used during data collection, so some members of the intended population are less likely to be included than others.
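
All three forms leave the same fingerprint: the distribution of some attribute in the dataset diverges from its distribution in the population the model will serve. The following minimal sketch compares the two distributions directly. The group names, proportions, and counts are all hypothetical; in practice you would substitute a reference distribution you trust, such as census figures.

    from collections import Counter

    # Hypothetical real-world distribution of a demographic attribute.
    # In practice, substitute a reference you trust (e.g., census figures).
    population_dist = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

    # Hypothetical training examples, each tagged with the same attribute.
    training_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

    counts = Counter(training_groups)
    total = len(training_groups)

    # Compare the dataset's share of each group against the population's.
    print(f"{'group':<10}{'dataset':>10}{'population':>12}{'gap':>8}")
    for group, expected in population_dist.items():
        observed = counts[group] / total
        print(f"{group:<10}{observed:>10.2f}{expected:>12.2f}{observed - expected:>8.2f}")

In this made-up example, group_c makes up 5% of the dataset but 20% of the population, a gap that should prompt a review of how the examples were collected.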

Group attribution bias

Group attribution bias is a tendency to generalize what is true of individuals to the entire group to which they belong. It often manifests in the following two forms, both defined below; a short auditing sketch follows the definitions.

In-group bias

In-group bias is a preference for members of a group to which you also belong, or for characteristics that you also share.

Out-group homogeneity bias

Out-group homogeneity bias is a tendency to stereotype individual members of a group to which you do not belong, or to see their characteristics as more uniform than they really are.
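
One place group attribution bias can surface is in human-assigned labels. The sketch below computes positive-label rates per group from a set of (group, label) records; every name and value is hypothetical. A skew by itself does not prove bias, but a large gap is a cue to audit how the labels were produced.

    from collections import defaultdict

    # Hypothetical human-labeled records: (group of subject, binary label).
    examples = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
    ]

    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in examples:
        totals[group] += 1
        positives[group] += label

    # Report the positive-label rate for each group.
    for group in sorted(totals):
        rate = positives[group] / totals[group]
        print(f"{group}: positive-label rate = {rate:.2f} ({totals[group]} examples)")

If one group's positive-label rate is far from the others', the next step is to check whether annotators were judging individuals or generalizing from group membership.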

Implicit bias

Implicit bias occurs when assumptions are made based on one's own mental models and personal experiences that do not necessarily apply more generally.

Confirmation bias

Confirmation bias is a form of implicit bias in which model builders unconsciously process data in ways that affirm their preexisting beliefs and hypotheses.

Experimenter's bias

Experimenter's bias occurs when a model builder keeps training a model until it produces a result that aligns with their original hypothesis.

Exercise: Check your understanding

Which of the following types of bias could have contributed to the skewed predictions in the college admissions model described in the introduction?
In-group bias
Automation bias
Confirmation bias
Historical bias