Accountability
Accountability means owning responsibility for the effects of an AI system. Accountability typically involves transparency: sharing information about system behavior and organizational processes, which may include documenting and sharing how models and datasets were created, trained, and evaluated. Two valuable modes of accountability documentation are model cards, which describe how a model was built and evaluated, and data cards, which provide the same kind of documentation for datasets.
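In practice, a model card can ship as structured, machine-readable metadata alongside the model artifact. The Python sketch below shows one minimal way to do this; the ModelCard fields and example values are hypothetical illustrations, not a standard schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal, hypothetical model-card metadata (field names are illustrative)."""
    model_name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="toxicity-classifier",
    version="1.2.0",
    intended_use="Flagging potentially toxic comments for human review.",
    training_data="Public comment corpus, collected 2020-2022.",
    evaluation_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Performance degrades on code-switched text."],
)

# Publish the card alongside the model so consumers can audit it.
print(json.dumps(asdict(card), indent=2))
```

Keeping this metadata in machine-readable form lets downstream consumers inspect a model's intended use and known limitations programmatically, not just in prose.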
Another dimension of accountability is interpretability: the degree to which humans can understand an ML model's decisions, for example by identifying the features that lead to a given prediction. A related concept is explainability: the ability to present a model's automated decisions in terms humans can understand.
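As one concrete illustration of interpretability, the sketch below uses permutation importance from scikit-learn, a common feature-attribution technique, to surface which features a model's predictions depend on most. The dataset and model here are assumptions chosen for demonstration, not prescribed by the guidance above.

```python
# A minimal interpretability sketch: permutation importance ranks features
# by how much shuffling each one degrades the model's test performance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and measure the drop in test accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# The features whose corruption hurts accuracy most are the ones the
# model's predictions rely on, which helps humans audit its decisions.
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```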
Read more about building user trust in AI systems in the Explainability + Trust section of the People + AI Guidebook and the Interpretability section of Google's Responsible AI Practices. You can also check out Google's Explainability Resources for real-world examples and best practices.