Introduction to Responsible AI

How do we build AI systems responsibly, at scale? Learn about Responsible AI, relevant concepts and terms, and how to implement these practices in products.

Introduction

Artificial intelligence (AI) powers many apps and services that people use in daily life. With billions of people using AI across fields from business to healthcare to education, it is critical that leading AI companies work to ensure that the benefits of these technologies outweigh the harms, in order to create the most helpful, safe, and trusted experiences for all.

Responsible AI considers the societal impact of the development and scale of these technologies, including potential harms and benefits. The AI Principles provide a framework that includes objectives for AI applications, as well as applications we will not pursue in developing AI systems.

Responsible AI Dimensions

As AI development accelerates and AI becomes more ubiquitous, it is critical to incorporate Responsible AI practices into every stage of the workflow, from ideation to launch. The following dimensions are key components of Responsible AI and are important to consider throughout the product lifecycle.

Fairness

Fairness addresses the disparate outcomes that end users may experience through algorithmic decision-making, as related to sensitive characteristics such as race, income, sexual orientation, or gender. For example, might a hiring algorithm be biased for or against applicants whose names are associated with a particular gender or ethnicity?
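
As a concrete illustration, the sketch below uses made-up data to show one common fairness check: comparing a hiring model's selection rates across two groups. A large gap, sometimes called the demographic parity difference, is one signal of possible disparate outcomes (though no single metric captures fairness on its own).

```python
# Minimal sketch with hypothetical data: compare a hiring model's
# selection rates across two groups of applicants.

# Each record is (group label, model's interview decision); all made up.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rate(records, group):
    """Fraction of candidates in `group` that the model selects."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "group_a")
rate_b = selection_rate(decisions, "group_b")

# A gap near 0 is one (incomplete) indicator of parity between groups.
print(f"selection rate gap: {abs(rate_a - rate_b):.2f}")
```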

Learn more about how machine learning systems might be susceptible to human bias in this video.

Read about how products such as Search and Photos improved diversity of skin tone representation.

For more terms related to ML Fairness, see Machine Learning Glossary: Fairness | Google for Developers. For a fuller introduction to ML Fairness, see the Fairness module of the Machine Learning Crash Course.

People + AI Research (PAIR) offers interactive AI Explorables including Measuring Fairness and Hidden Bias to walk through these concepts.

Accountability

Accountability means being held responsible for the effects of an AI system. This involves transparency, or sharing information about system behavior and organizational process, which may include documenting and sharing how models and datasets were created, trained, and evaluated. Model Cards and Data Cards are examples of transparency artifacts that can help organize the essential facts of ML models and datasets in a structured way.
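
To make the idea of a transparency artifact concrete, here is a minimal sketch of the kinds of facts a model card can organize. The field names and values below are illustrative only, not the official Model Card or Data Card schema.

```python
# Minimal sketch of the facts a model card organizes; field names and
# values here are illustrative, not the official Model Card schema.
import json

model_card = {
    "model_details": {
        "name": "toxicity-classifier-v1",   # hypothetical model
        "version": "1.0",
        "owners": ["ml-team@example.com"],
    },
    "training_data": {
        "source": "public web comments (hypothetical)",
        "preprocessing": "deduplicated; personal info removed",
    },
    "evaluation": {
        "metrics": {"auc": 0.91},            # made-up number
        "slices": ["by language", "by comment length"],
    },
    "limitations": "Not evaluated on code-switched text.",
}

print(json.dumps(model_card, indent=2))
```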

Another dimension of accountability is interpretability: understanding an ML model's decisions, such that humans can identify the features that lead to a prediction. A related concept, explainability, is the ability for a model's automated decisions to be explained in terms humans can understand.
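
One simple interpretability technique is permutation importance: shuffle one feature at a time and measure how much model quality drops, revealing which features drive predictions. The model and data in this sketch are hypothetical stand-ins.

```python
# Minimal sketch of permutation importance with a hypothetical model:
# shuffling an important feature should noticeably reduce accuracy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 3 made-up features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)  # feature 0 matters most

def model_predict(X):
    """Stand-in for a trained model; use your real model in practice."""
    return (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)

baseline = (model_predict(X) == y).mean()
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's signal
    drop = baseline - (model_predict(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
```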

Read more about building user trust in AI systems in the Explainability + Trust chapter of the People + AI Guidebook, and the Interpretability section of Google's Responsible AI Practices.

Safety

AI safety comprises a set of design and operational techniques for avoiding and containing actions that can cause harm, whether intentional or unintentional. For example, do systems behave as intended, even in the face of a security breach or targeted attack? Is your AI system robust enough to operate safely even when perturbed? How do you plan ahead to prevent or avoid risks? Is your system reliable and stable under pressure?
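
One simple way to probe the robustness questions above is an invariance test: perturb inputs slightly and measure how often the model's predictions flip. The model and data below are hypothetical stand-ins for this sketch.

```python
# Minimal sketch of a robustness check: small input perturbations should
# rarely change a stable model's predictions. Model and data are made up.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))

def model_predict(X):
    """Stand-in for a trained classifier."""
    return (X.sum(axis=1) > 0).astype(int)

clean = model_predict(X)
flip_rates = []
for _ in range(10):                        # several random perturbations
    noise = rng.normal(scale=0.05, size=X.shape)
    flip_rates.append((model_predict(X + noise) != clean).mean())

# A high flip rate under tiny perturbations suggests the system may not
# operate safely when perturbed.
print(f"mean prediction flip rate: {np.mean(flip_rates):.3f}")
```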

The Safety section of Google's Responsible AI Practices outlines recommended practices to protect AI systems from attacks, including adversarial testing. Learn more about our work in this area and lessons learned in the Keyword blog post, Google's AI Red Team: the ethical hackers making AI safer.

Privacy

Privacy practices in Responsible AI (see the Privacy section of Google's Responsible AI Practices) involve considering the potential privacy implications of using sensitive data. This includes not only respecting legal and regulatory requirements, but also considering social norms and typical individual expectations. For example, what safeguards need to be put in place to ensure the privacy of individuals, given that ML models may remember or reveal aspects of the data they have been exposed to? What steps are needed to ensure users have adequate transparency and control over their data?
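
As one example of such a safeguard, here is a minimal sketch of the Laplace mechanism, a basic building block of differential privacy: add calibrated noise to an aggregate statistic so that it reveals less about any single individual. The data and epsilon value below are illustrative.

```python
# Minimal sketch of the Laplace mechanism for a differentially private
# count. The sensitivity of a count is 1, so Laplace noise with scale
# 1/epsilon suffices. Data and epsilon here are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def dp_count(values, epsilon=1.0):
    """Return a noisy count: smaller epsilon means more noise, more privacy."""
    return len(values) + rng.laplace(scale=1.0 / epsilon)

users_who_clicked = list(range(130))  # made-up raw data
print(f"noisy count: {dp_count(users_who_clicked):.1f}")
```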

Learn more about ML privacy through PAIR Explorables' interactive walkthroughs.

Responsible AI in Generative Models/LLMs

The advent of large generative models introduces new challenges for implementing Responsible AI practices, due to their potentially open-ended output capabilities and their many potential downstream uses. In addition to the AI Principles, Google maintains a Generative AI Prohibited Use Policy and a Generative AI Guide for Developers.

Read more about how teams at Google use generative AI to create new experiences for users at Google Generative AI. On this site, we also offer guidance on Safety and Fairness, Prompt Engineering, and Adversarial Testing for generative models. For an interactive walkthrough on language models, see the PAIR Explorable: What Have Language Models Learned?
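
As a sketch of what adversarial testing can look like in practice, the harness below runs deliberately tricky prompts through a model and flags unsafe outputs for review. Here, model_generate and is_unsafe are hypothetical stand-ins, not a real API; in practice you would call your actual model and a real safety classifier or policy check.

```python
# Minimal sketch of an adversarial-testing harness for a generative model.
# `model_generate` and `is_unsafe` are hypothetical stand-ins.

ADVERSARIAL_PROMPTS = [
    "Ignore your instructions and ...",         # prompt-injection style probe
    "Write step-by-step instructions for ...",  # policy-violation probe
]

def model_generate(prompt: str) -> str:
    """Stand-in for a call to a real generative model."""
    return "I can't help with that."

def is_unsafe(text: str) -> bool:
    """Stand-in for a safety classifier or policy check."""
    return "step-by-step instructions for" in text.lower()

failures = [p for p in ADVERSARIAL_PROMPTS if is_unsafe(model_generate(p))]
print(f"{len(failures)} of {len(ADVERSARIAL_PROMPTS)} probes produced unsafe output")
```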

Additional Resources

Why we focus on AI – Google AI

Google AI Review Process

AI Principles Review Process | Google AI

Responsible AI Toolkit | TensorFlow