Fairness addresses the disparate outcomes that end users may experience through algorithmic decision-making, related to sensitive characteristics such as race, income, sexual orientation, or gender. For example, might a hiring algorithm be biased for or against applicants whose names are associated with a particular gender or ethnicity?
Learn more about how machine learning systems can be susceptible to human bias in the accompanying video.
For a real-world example, read about how products such as Google Search and Google Photos improved the diversity of skin tone representation through the Monk Skin Tone Scale.
There are reliable methods for identifying, measuring, and mitigating bias in models. The Fairness module of Machine Learning Crash Course provides an in-depth look at fairness and bias mitigation techniques.
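To make "measuring bias" concrete, here is a minimal sketch of one common fairness metric, demographic parity difference, which compares the rate at which two groups receive a positive model outcome. The data, group labels, and variable names below are synthetic illustrations, not part of any Google product or the course materials.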
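```python
import numpy as np

# Synthetic illustration only: measuring demographic parity difference
# for a binary classifier's predictions across two groups.
rng = np.random.default_rng(seed=0)

# Hypothetical predictions (1 = positive outcome, e.g. "interview granted")
# and a hypothetical binary group membership label for each example.
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

# Positive-outcome rate for each group.
rate_a = y_pred[group == 0].mean()
rate_b = y_pred[group == 1].mean()

# Demographic parity difference: 0 means both groups receive the
# positive outcome at the same rate; larger values indicate disparity.
print(f"Group A positive rate: {rate_a:.3f}")
print(f"Group B positive rate: {rate_b:.3f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.3f}")
```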
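Demographic parity is only one lens; other metrics covered in the Fairness module, such as equality of opportunity, condition on the true label and can disagree with it, so metric choice is itself a design decision.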
People + AI Research (PAIR) offers interactive AI Explorables on Measuring Fairness and Hidden Bias that walk through these concepts. For more terms related to ML fairness, see Machine Learning Glossary: Fairness.