The next challenge
The advent of large, generative models
introduces new challenges to implementing Responsible AI practices due to their
potentially open-ended output capabilities and many potential downstream uses. In addition to the AI Principles, Google has a Generative AI Prohibited Use Policy
and Generative AI Toolkit for Developers.
Google also offers guidance about generative AI models on:
Summary
Assessing AI technologies for fairness, accountability, safety, and privacy is
key to building AI responsibly. These checks should be incorporated into every
stage of the product lifecycle to ensure the development of safe, equitable, and
reliable products for all.
Further learning
Why we focus on AI – Google AI
Google Generative AI
PAIR Explorable: What Have Language Models Learned?
Responsible AI Toolkit | TensorFlow
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-02-25 UTC.
[null,null,["Last updated 2025-02-25 UTC."],[[["Generative AI models present new challenges to Responsible AI due to their open-ended output and varied uses, prompting the need for guidelines like Google's Generative AI Prohibited Use Policy and Toolkit for Developers."],["Google provides further resources on crucial aspects of generative AI, including safety, fairness, prompt engineering, and adversarial testing."],["Building AI responsibly requires thorough assessment of fairness, accountability, safety, and privacy throughout the entire product lifecycle."],["Google emphasizes the importance of Responsible AI and offers additional resources like the AI Principles, Generative AI information, and toolkits for developers."]]],[]]