Fairness
Fairness addresses the possible disparate outcomes end users may experience through algorithmic decision-making, related to sensitive characteristics such as race, income, sexual orientation, or gender. For example, might a hiring algorithm have biases for or against applicants with names associated with a particular gender or ethnicity?
Learn more about how machine learning systems can be susceptible to human bias in this video:
For a real-world example, read about how products such as Google Search and Google Photos improved the diversity of skin tone representation through the Monk Skin Tone Scale.
There are reliable methods for identifying, measuring, and mitigating bias in models. The Fairness module of the Machine Learning Crash Course provides an in-depth look at fairness and bias mitigation techniques.
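To make "measuring bias" concrete, here is a minimal sketch (not taken from the course material) that computes one common group fairness metric, the demographic parity difference, for the hiring scenario above. The data, column names, and pandas usage are illustrative assumptions only; real audits use richer metrics and data.

```python
# Minimal sketch (illustrative only): measuring demographic parity for a
# binary classifier's predictions across a sensitive attribute.
# The column names "group" and "predicted_hire" are hypothetical.
import pandas as pd

# Toy predictions from a hypothetical hiring model, with a sensitive group label.
df = pd.DataFrame({
    "group":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_hire": [ 1,   1,   0,   1,   0,   1,   0,   0 ],
})

# Selection rate per group: the share of applicants the model would hire.
selection_rates = df.groupby("group")["predicted_hire"].mean()
print(selection_rates)   # A: 0.75, B: 0.25

# Demographic parity difference: gap between the highest and lowest selection rates.
# A value near 0 suggests similar outcomes across groups; a large gap is a signal
# to investigate the model and its training data for bias.
dp_difference = selection_rates.max() - selection_rates.min()
print(f"Demographic parity difference: {dp_difference:.2f}")  # 0.50
```

A gap like this does not by itself prove unfairness, but it flags where to look; the course module linked above covers when such metrics apply and how to mitigate the underlying bias.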
People + AI Research (PAIR) offers interactive AI Explorables on Measuring Fairness and Hidden Bias that walk through these concepts.
For more terms related to ML fairness, see Machine Learning Glossary: Fairness | Google for Developers.