Principles for Trustworthy AI: The Building Blocks Toward Effective AI Risk Management

The rapid adoption of artificial intelligence (AI) is unlocking unprecedented value for organizations that can effectively harness its potential. However, as AI continues to evolve, so does the need for robust risk management to ensure its ethical and transparent use. With forthcoming oversight from industry, government, and international bodies, maintaining trustworthy AI is critical to unlocking its full value while ensuring safety and accountability. Here’s where to start on the journey toward building AI that can be trusted, valued, and guided responsibly.
8 Principles for Trustworthy AI
Organizations should establish an AI governance framework grounded in the principles of Trustworthy AI, which guide people, processes, and technology throughout the development and deployment of AI. The core principles include:


- Accountability: The obligation and responsibility to ensure systems operate ethically, fairly, transparently, and compliantly (e.g., traceable actions, decisions, outcomes).
- Contestability: Ensuring system outputs and actions can be questioned and challenged.
- Explainability (XAI): The ability to describe AI’s output and decision-making.
- Fairness: Relatively equal treatment of individuals and groups.
- Reliability: Ensuring systems behave as expected (e.g., perform intended functions consistently and accurately, especially with unseen data).
- Robustness: Systems maintain functionality and perform accurately in a variety of circumstances (e.g., new environments, unseen data, against adversarial attacks).
- Safety: Minimizing potential harm to individuals, society, and the environment.
- Transparency: Ensuring information about the system is available to stakeholders.
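Several of these principles can be made measurable. As a minimal sketch, the "Fairness" principle is often operationalized with a demographic parity check, which compares a model's positive-outcome rates across groups. The group data and any acceptable-gap threshold here are illustrative assumptions, not part of any standard.

```python
# Minimal sketch of a fairness metric: the demographic parity gap,
# i.e., the absolute difference in positive-outcome rates between groups.
# The example data below is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group's decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical model decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate = 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate = 3/8 = 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.250
```

A gap near zero suggests similar treatment across groups; what counts as acceptable is a risk-based judgment the organization must make and document, which is where the tradeoff discussion below comes in.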
While it may not be possible to maximize every characteristic of trustworthy AI at once, organizations still need to weigh and accept tradeoffs in a risk-based manner. Effectively balancing risk is key to implementing trustworthy AI and unlocking its value while ensuring safety and accountability.
Implementing Trustworthy AI
So where do you start? Organizations struggling to operationalize trustworthy AI, or seeking a health check on their existing framework, may benefit from a baseline risk assessment or audit. Relevant assessment and audit types include program assessments, development workflow assessments, and model assessments.