Building Trust in AI: The Certification Approach

“AI represents the future, and trust is the bridge that will take us there.”

— K.L.

Trustworthy AI

With the release of NVIDIA’s Q1 2024 financial report, the public has enthusiastically lauded the exceptional performance of the leading company in the AI-chip manufacturing and services sector. This accomplishment reinforces the current and projected robust growth of the AI ecosystem.

Concurrently, the global landscape of AI is witnessing a dynamic evolution in regulation and policy. The EU’s AI Act has established a nuanced legal architecture, and an array of frameworks, guidelines, statutes, and regulations is emerging across various jurisdictions. These developments will continue to evolve, exerting a profound and enduring influence on industries, communities, and societal structures.

As AI progresses on its trajectory of expansion and advancement, professionals from legal, social, and ethical spheres are demonstrating a heightened interest in critical issues. These include ensuring safety, upholding ethical standards, enforcing accountability, guaranteeing trustworthiness, and protecting privacy, among others.

Google’s CEO Sundar Pichai offered seven objectives for AI applications that became core beliefs for the entire company.

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Building trust in the AI era

The chasm between theoretical principles and tangible application is a pivotal subject that demands attention. To address this, Alan Winfield of the University of Bristol and Marina Jirotka from Oxford University propose a comprehensive, four-layer governance framework for AI systems, which could effectively narrow this divide.

This governance framework encompasses the following elements:

(1) reliable systems based on sound software engineering practice;

(2) safety culture through proven business management strategies;

(3) trustworthy certification by independent oversight; and

(4) regulation by government agencies.

Central to the concept of independent oversight is the reinforcement of legal, moral, and ethical tenets that underpin human or organizational accountability and liability for their products and services. Nonetheless, the notion of responsibility is inherently intricate, with subtleties and complexities that are especially pronounced in the face of rapid technological progression and the concurrent evolution of regulatory frameworks.

To navigate these complexities, it is imperative to adopt a dynamic governance approach that not only reflects the current technological landscape but also anticipates future developments. This involves continuous dialogue among technologists, ethicists, legal experts, and policymakers to ensure that AI systems are governed responsibly and ethically, aligning with societal values and expectations.

From privacy trust to AI trust

Certifications, as outlined in Articles 42 and 43 of the General Data Protection Regulation (GDPR), have been acknowledged and incorporated as integral components of compliance mechanisms.

Certain marks and seals have garnered official recognition from European supervisory authorities, exemplified by EuroPriSe and Europrivacy.

Beyond these, entities such as TrustArc are proactively delivering independent assurance and compliance verification services, which are pivotal in substantiating an organization’s adherence to regulatory standards.

The question arises whether the successful establishment of privacy trust can be analogously applied to the realm of AI trust. The affirmative response is warranted, given that the majority of AI operations are fundamentally predicated on data.

We do not need to reinvent the wheel; rather, it is crucial to leverage the accumulated experience and successful outcomes from the domain of data protection and privacy governance.

By building upon the established frameworks and practices, we can effectively extend these principles to AI governance, ensuring that AI systems are developed and deployed with a strong foundation of trust, compliance, and ethical considerations.

In April, TrustArc announced the first client to be certified under the newly launched TRUSTe Responsible AI Certification. “This certification marks an important step for the industry towards greater accountability and trust in the technologies shaping our future,” said Noël Luke, Chief Assurance Officer at TrustArc.