Fairness
Fairness addresses the possible disparate outcomes end users may experience, through algorithmic decision-making, related to sensitive characteristics such as race, income, sexual orientation, or gender. For example, might a hiring algorithm be biased for or against applicants whose names are associated with a particular gender or ethnicity?
Learn more about how machine learning systems can be susceptible to human bias in this video:
For a real-world example, read about how products such as Google Search and Google Photos improved the diversity of skin tone representation through the Monk Skin Tone Scale.
There are reliable methods of identifying, measuring, and mitigating bias in models. The Fairness module of Machine Learning Crash Course provides an in-depth look at fairness and bias mitigation techniques.
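As a minimal sketch of what "measuring" bias can look like in practice, the snippet below computes the demographic parity difference: the gap in positive-prediction rates between groups defined by a sensitive attribute. The arrays, group labels, and function name here are hypothetical placeholders, not an API from the resources linked above.

```python
# Sketch: measuring demographic parity difference for a binary classifier.
# All data below is hypothetical; in practice y_pred and groups would come
# from your model's predictions and your dataset's sensitive attribute.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Return the largest gap in positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = positive decision, e.g. "interview")
# and a sensitive attribute value for each applicant.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```

A large gap like this would be a signal to investigate the training data and model before deployment; the linked Fairness module covers these metrics and mitigation techniques in depth.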
People + AI Research (PAIR) offers interactive AI Explorables on Measuring Fairness and Hidden Bias to walk through these concepts. For more terms related to ML fairness, see Machine Learning Glossary: Fairness on Google for Developers.