AI Governance and Audit

OneCompliance is a regulatory compliance consulting firm that helps clients navigate today’s rapidly digitalizing world.

OECD AI Principles overview

The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values.

U.S.

“Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”

E.U.

What is the AI Act?

The AI Act is the first-ever comprehensive legal framework on AI. It addresses the risks of AI and positions Europe to play a leading role globally.

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).

Who will the AI Act apply to?

It applies, in varying degrees, to providers, deployers, product manufacturers, importers and distributors of AI systems, depending on the level of risk involved.

The AI Act casts a wide net, aiming to ensure that AI systems are used responsibly and ethically by any organization or government in the EU, regardless of where these systems are developed or deployed. This means that even businesses outside the EU will need to follow the Act's rules if they want to operate in Europe.

What is the Risk-Based Approach?

The AI Act takes a proportionate, risk-based approach that imposes regulatory burdens only where an AI system is likely to pose high risks to fundamental rights and safety. Targeting specific sectors and applications, the regulation sets out a four-tiered risk framework:

  • ‘unacceptable risk’: practices that are prohibited outright;
  • ‘high risk’: systems subject to a set of stringent obligations, including a conformity assessment;
  • ‘limited risk’: systems subject to transparency obligations;
  • ‘minimal risk’: stakeholders are encouraged to adopt voluntary codes of conduct.

These rules apply irrespective of whether the stakeholders are established in the EU or in a third country.
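The four-tier framework above can be sketched as a simple lookup table. The tier labels and obligation summaries below are illustrative paraphrases for this page, not legal definitions from the Act:

```python
# Illustrative mapping of the AI Act's four risk tiers to their headline
# obligations, paraphrased from the description above (not legal text).
RISK_TIERS = {
    "unacceptable": "prohibited practice: the system may not be placed on the EU market",
    "high": "stringent obligations, including a conformity assessment",
    "limited": "transparency obligations, e.g. disclosing that users are interacting with AI",
    "minimal": "no mandatory requirements; voluntary codes of conduct encouraged",
}

def obligation_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[tier.strip().lower()]
```

For example, `obligation_for("high")` returns the conformity-assessment summary, mirroring how a compliance workflow might first classify a system and then branch on its tier.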

What is Human Oversight?

The European Commission has emphasized the importance of adopting AI systems with a human-centric approach to ensure their safe deployment. This approach requires implementing AI systems safely and reliably for the benefit of humanity, protecting human rights and dignity by keeping a ‘human-in-the-loop’. Specifically, the EU AI Act requires AI designers to allow human control of, or interference with, an AI system so that effective human oversight can be achieved. Under Article 14(1), systems must be designed and developed in such a way that they can be ‘effectively overseen by natural persons during the period in which the AI system is in use’.

What are the penalties for infringements of the AI Act?

  • up to €35m or 7% of global annual turnover for infringements of the prohibited practices or non-compliance with the requirements on data;
  • up to €15m or 3% of global annual turnover for breaches of other requirements or obligations of the AI Act, including the rules on general-purpose AI models;
  • up to €7.5m or 1% of global annual turnover for supplying incorrect, incomplete or misleading information;
  • for SMEs and start-ups, each of the above fines is capped at the same maximum percentage or amount, whichever is lower.

Our Service

OneCompliance is dedicated to helping companies develop ethical, trustworthy and responsible AI solutions. Our services guide you through identifying use cases, assessing readiness, and navigating the ethical impact and risks of AI development. We support SMEs with tailored workshops, assessments, and consulting to promote responsible AI adoption and industry-standard compliance.

Schedule a call with one of our experts