# Accountability
**Accountability** means owning responsibility for the effects of an AI system. Accountability typically involves **transparency**, or sharing information about system behavior and organizational process, which may include documenting and sharing how models and datasets were created, trained, and evaluated. The following sites explain two valuable modes of accountability documentation:

- [Model Cards](https://modelcards.withgoogle.com/about)
- [Data Cards](https://sites.research.google/datacardsplaybook/)
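In practice, this kind of documentation can be produced alongside the model itself. The following is a minimal sketch of model-card-style metadata as a plain Python dataclass; the field names are illustrative, not the official Model Cards schema, and a real workflow might use a dedicated toolkit instead.

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative model-card-style record. Field names are hypothetical and only
# mirror the kinds of sections a model card typically documents.
@dataclass
class ModelCardSketch:
    name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)
    limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the card so it can be published alongside the model."""
        return json.dumps(asdict(self), indent=2)

card = ModelCardSketch(
    name="toxicity-classifier",  # hypothetical model for illustration
    version="1.2.0",
    intended_use="Flag potentially toxic comments for human review.",
    training_data="Public comment corpus, 2015-2023, English only.",
    evaluation_data="Held-out 10% split, stratified by comment source.",
    metrics={"auc": 0.94, "false_positive_rate": 0.03},
    limitations=["Not evaluated on non-English text."],
)
print(card.to_json())
```

Publishing a record like this with each model release turns the transparency described above into a concrete, reviewable artifact rather than an afterthought.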
Another dimension of accountability is **interpretability**, which involves understanding ML model decisions, where humans are able to identify the features that lead to a prediction. Moreover, **explainability** is the ability for a model's automated decisions to be explained in a way humans can understand.
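As a concrete illustration of interpretability, the sketch below uses scikit-learn's permutation importance on synthetic data to surface which features a trained model's predictions depend on. This is just one common feature-attribution technique, assumed here for illustration; methods such as SHAP or integrated gradients serve the same purpose.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset: 6 features, only 3 of which carry signal.
X, y = make_classification(
    n_samples=1000, n_features=6, n_informative=3, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's test score drops. A larger drop means predictions depend more
# heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```

Surfacing feature attributions like these is one way to give humans insight into which inputs drive a model's predictions.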
Read more about building user trust in AI systems in the [Explainability + Trust](https://pair.withgoogle.com/chapter/explainability-trust/) section of the [People + AI Guidebook](https://pair.withgoogle.com/guidebook). You can also check out [Google's Explainability Resources](https://explainability.withgoogle.com/) for real-world examples and best practices.