"Exploring AI Trust: Beyond Moral Implications and Ethical Challenges"

The Untapped Economic Potential of AI: Trust as the Key to Success

While the economic potential of artificial intelligence (AI) is well-recognized, a staggering 87% of AI projects fail to deliver results. This widespread shortfall is not merely a technology, business, culture, or industry issue; recent evidence points to a more fundamental problem: trust.

Strengthening Trust in AI Systems

Recent research indicates that almost two-thirds of C-suite executives believe that trust in AI significantly influences revenue, competitiveness, and customer success. Establishing that trust, however, is challenging: just as trust between people is not instant, trust in AI systems must be earned.

A lack of trust is stifling AI's economic benefits, and conventional recommendations for building trust often appear too abstract or impractical. To address this, we propose a new framework: the AI Trust Equation.

The AI Trust Equation Defined

Originally conceived for interpersonal trust, the Trust Equation from The Trusted Advisor by David Maister, Charles Green, and Robert Galford is expressed as:

Trust = (Credibility + Reliability + Intimacy) / Self-Orientation

However, this framework does not effectively translate to human-machine relationships. The revised AI Trust Equation is:

Trust = (Security + Ethics + Accuracy) / Control

1. Security is the first foundational element. Organizations must ask, "Will my information remain secure if shared with this AI system?" Ensuring robust security measures is essential.

2. Ethics brings moral considerations into the evaluation, not just technical ones. Leaders should reflect on factors such as:

- Treatment of individuals involved in model development.

- Explainability of the model and mechanisms to address harmful outputs.

- Awareness of biases in the model, as surfaced by initiatives like the Gender Shades research.

- Business models and compensation for contributors to AI training data.

- Alignment of company values with actions, exemplified by OpenAI's controversies.

3. Accuracy assesses how reliably an AI system delivers correct answers in relevant contexts. It's crucial to evaluate both model sophistication and data quality.

4. Control encapsulates the degree of operational oversight desired. Relevant questions include whether the AI system will act as intended and whether control over intelligent systems is ever at risk.
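To make the arithmetic concrete, here is a minimal sketch of how a team might turn the equation into a single comparable score, assuming each component is rated on a 0 to 10 scale. The function name, the scale, and the example values are illustrative assumptions, not part of the original framework:

```python
# Illustrative scoring of the AI Trust Equation:
#   Trust = (Security + Ethics + Accuracy) / Control
# The 0-to-10 scales and example values are assumptions for demonstration only.

def ai_trust_score(security: float, ethics: float, accuracy: float, control: float) -> float:
    """Combine the four components into a single score.

    Higher security, ethics, and accuracy raise trust; a larger control
    requirement (the denominator) lowers it.
    """
    if control <= 0:
        raise ValueError("control must be positive")
    return (security + ethics + accuracy) / control

# Example: strong on security and accuracy, but demanding heavy oversight.
print(f"{ai_trust_score(security=8, ethics=6, accuracy=7, control=4):.2f}")  # 5.25
```

The exact weighting would be tuned to each organization; the value of the equation is that it forces all four components to be scored explicitly rather than assumed.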

5 Steps to Implementing the AI Trust Equation

1. Assess Usefulness: Determine whether the AI platform creates value before exploring its trustworthiness.

2. Evaluate Security: Investigate data handling practices on the platform, ensuring compliance with your security standards.

3. Set Ethical Standards: Define clear ethical thresholds and assess all systems against these criteria for explainability and fairness.

4. Define Accuracy Goals: Establish acceptable accuracy benchmarks and resist the temptation to settle for subpar performance.

5. Determine Required Control Levels: Define how much control your organization needs over AI systems, ranging from fully autonomous to semi-autonomous options.
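One way to operationalize these five steps is as a simple pass/fail checklist that an evaluation team runs against each candidate system. The field names and thresholds below are hypothetical illustrations, not prescriptions from the framework:

```python
from dataclasses import dataclass

@dataclass
class AIEvaluation:
    """Hypothetical checklist mirroring the five steps above."""
    creates_value: bool        # Step 1: does the platform create value at all?
    meets_security_bar: bool   # Step 2: data handling meets your standards
    meets_ethics_bar: bool     # Step 3: clears your ethical thresholds
    accuracy: float            # Step 4: measured accuracy in your context
    accuracy_goal: float       # Step 4: your benchmark; do not settle below it
    control_offered: str       # Step 5: e.g. "fully autonomous" or "semi-autonomous"
    control_required: str      # Step 5: the level your organization needs

    def passes(self) -> bool:
        return (
            self.creates_value
            and self.meets_security_bar
            and self.meets_ethics_bar
            and self.accuracy >= self.accuracy_goal
            and self.control_offered == self.control_required
        )

candidate = AIEvaluation(
    creates_value=True,
    meets_security_bar=True,
    meets_ethics_bar=True,
    accuracy=0.93,
    accuracy_goal=0.95,
    control_offered="semi-autonomous",
    control_required="semi-autonomous",
)
print(candidate.passes())  # False: accuracy falls short of the stated goal
```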

In the rapidly evolving AI landscape, searching for best practices may be tempting, but no definitive solutions exist yet. Instead, take the initiative. Form a dedicated team, customize the AI Trust Equation for your organization, and critically assess AI systems against it.

Some tech companies recognize these evolving market dynamics and are enhancing transparency, such as Salesforce's Einstein Trust Layer, while others may resist. Ultimately, your organization must decide how much trust to place in AI outputs and the companies behind them.

The potential of AI is immense, but realizing it hinges on cultivating and maintaining trust between AI systems and the organizations that employ them. The future of AI depends on it.

Brian Evergreen is the author of “Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence.”
