The advent of large generative models introduces new challenges for implementing Responsible AI practices, because these models can produce open-ended output and have many possible downstream uses. In addition to the AI Principles, Google has a Generative AI Prohibited Use Policy and a Generative AI Toolkit for Developers.
Google also offers further guidance on working with generative AI models.
Summary
Assessing AI technologies for fairness, accountability, safety, and privacy is key to building AI responsibly. These checks should be incorporated into every stage of the product lifecycle to ensure the development of safe, equitable, and reliable products for all.
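As one concrete illustration of such a lifecycle check, the short Python sketch below computes accuracy and false positive rate per subgroup so that gaps between groups can be flagged before release. The column names ("group", "label", "prediction") and the function name are illustrative assumptions, not part of the course material; in practice, dedicated tooling such as TensorFlow's Fairness Indicators (part of the Responsible AI Toolkit linked below) covers this kind of evaluation more thoroughly.

# Minimal sketch of a per-subgroup evaluation check.
# Assumes a pandas DataFrame with columns "group" (a sensitive attribute),
# "label" (ground truth), and "prediction" (model output) -- all names
# are illustrative, not taken from the source.
import pandas as pd

def subgroup_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Compute accuracy and false positive rate for each subgroup."""
    rows = []
    for group, g in df.groupby("group"):
        accuracy = (g["prediction"] == g["label"]).mean()
        negatives = g[g["label"] == 0]
        # False positive rate: predicted positive among actual negatives.
        fpr = (negatives["prediction"] == 1).mean() if len(negatives) else float("nan")
        rows.append({"group": group, "n": len(g), "accuracy": accuracy, "fpr": fpr})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Toy data only; a real check would run on a held-out evaluation set.
    df = pd.DataFrame({
        "group": ["a", "a", "b", "b", "b", "a"],
        "label": [1, 0, 1, 0, 0, 1],
        "prediction": [1, 0, 0, 1, 0, 1],
    })
    print(subgroup_metrics(df))

Running a check like this at each release gate, rather than only once before launch, is one way to make the fairness and safety assessments described above a routine part of the product lifecycle.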
Further learning
Why we focus on AI – Google AI
PAIR Explorable: What Have Language Models Learned?
Responsible AI Toolkit | TensorFlow
AI Principles Review Process | Google AI