Fairness
Fairness addresses the possible disparate outcomes end users may experience, related to sensitive characteristics such as race, income, sexual orientation, or gender, as a result of algorithmic decision-making. For example, might a hiring algorithm be biased for or against applicants whose names are associated with a particular gender or ethnicity?
Watch the following video to learn more about how machine learning systems can be susceptible to human bias:
For a real-world example, read about how products such as Google Search and Google Photos improved the diversity of skin tone representation through the Monk Skin Tone Scale.
There are reliable methods for identifying, measuring, and mitigating bias in models. The Fairness module of Machine Learning Crash Course provides an in-depth look at fairness and bias mitigation techniques.
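As a minimal sketch of what "measuring bias" can look like in practice, the following Python snippet compares a model's positive-prediction rate across groups defined by a sensitive attribute (one simple fairness check, sometimes described as demographic parity). The predictions, group labels, and the idea of flagging a large gap are illustrative assumptions, not part of the original article or any specific Google API.

```python
# Hypothetical sketch: compare positive-prediction rates across groups.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive (1) predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Illustrative binary outputs of a hiring model and each applicant's group.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'A': 0.6, 'B': 0.4}
print(f"parity gap: {gap:.2f}")   # a large gap suggests groups are treated differently
```

In practice you would compute this (and related metrics such as equal opportunity difference) on held-out evaluation data, and the Fairness module linked above discusses how to interpret and mitigate such gaps.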
People + AI Research (PAIR) offers interactive AI Explorables on Measuring Fairness and Hidden Bias to walk through these concepts.
For more terms related to ML fairness, see Machine Learning Glossary: Fairness | Google for Developers.