安全分
透過集合功能整理內容
你可以依據偏好儲存及分類內容。
AI 安全性包含一套設計和操作技巧,可避免及控制可能造成傷害的動作,無論是蓄意或無意皆然。舉例來說,即使面臨安全性違規或鎖定攻擊,AI 系統是否仍能正常運作?AI 系統是否夠健全,即使在受到干擾時也能安全運作?您如何事先規劃,避免或降低風險?AI 系統是否可靠且穩定?
其中一種安全技術是對抗測試,也就是嘗試「破壞」自己的應用程式,瞭解應用程式在使用者輸入惡意提示或無意間輸入有害提示時,會有什麼行為。負責任的生成式 AI 工具包進一步說明安全性評估,包括對抗性測試。如要進一步瞭解 Google 在這個領域的努力,以及從中學到的經驗,請參閱「關鍵字」網誌文章「Google AI 技術紅隊:讓 AI 技術更安全的駭客」或「SAIF:Google 的 AI 安全指南」。
除非另有註明,否則本頁面中的內容是採用創用 CC 姓名標示 4.0 授權,程式碼範例則為阿帕契 2.0 授權。詳情請參閱《Google Developers 網站政策》。Java 是 Oracle 和/或其關聯企業的註冊商標。
上次更新時間:2025-07-27 (世界標準時間)。
[null,null,["上次更新時間:2025-07-27 (世界標準時間)。"],[[["\u003cp\u003eAI safety encompasses design and operational techniques to prevent harm, ensuring AI systems behave as intended, even under pressure or attack.\u003c/p\u003e\n"],["\u003cp\u003eAdversarial testing is a key safety technique where AI systems are intentionally challenged with malicious or harmful input to assess their robustness.\u003c/p\u003e\n"],["\u003cp\u003eGoogle's Responsible AI Practices provide recommendations for protecting AI systems, including methods for adversarial testing and safeguarding against attacks.\u003c/p\u003e\n"]]],[],null,["# Safety\n\n\u003cbr /\u003e\n\nAI **safety** includes a set of design and operational techniques to follow to\navoid and contain actions that can cause harm, intentionally or unintentionally.\nFor example, do AI systems behave as intended, even in the face of a security\nbreach or targeted attack? Is the AI system robust enough to operate safely\neven when perturbed? How do you plan ahead to prevent or avoid risks? Is the AI\nsystem reliable and stable under pressure?\n\nOne such safety technique is [adversarial testing](/machine-learning/guides/adv-testing),\nor the practice of trying to \"break\" your own application to learn how it\nbehaves when provided with malicious or inadvertently harmful input. The\n[Responsible Generative AI Toolkit](https://ai.google.dev/responsible/docs/evaluation)\nexplains more about safety evaluations, including adversarial testing. Learn\nmore about Google's work in this area and lessons\nlearned in the Keyword blog post, [Google's AI Red Team: the ethical hackers\nmaking AI\nsafer](https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer/)\nor at [SAIF: Google's Guide to Secure AI](https://saif.google/)."]]