A little over a year after ChatGPT took the world by storm, the EU AI Act has come into force. Many standards and regulations have suddenly arrived on the scene, so we thought this was a good time to review them and discuss their impacts and overlaps.
The EU AI Act sets out a specific definition of what counts as AI and applies to all businesses that provide, deploy, import, distribute, or manufacture AI systems with links to the EU market. The EU's definition of AI is very specific, but its definition of connected businesses runs rather broad, so the introduction of this legislation should prompt businesses that know they interact with AI to examine their connections with EU markets and evaluate whether those connections bring them within the scope, and the responsibilities, of the act.
Unacceptable risk: AI practices deemed incompatible with EU human rights and values, e.g. deploying subliminal, manipulative, or deceptive techniques to distort behaviour.
High-risk AI systems: Applications that could negatively affect people's health and safety, their fundamental rights, or the environment, e.g. employment screening or public assistance screening.
To be placed on the market and operated in the EU, AI systems in this risk class must meet certain requirements. Key obligations for high-risk providers include setting up an extensive quality management system, keeping logs, preparing detailed technical documentation that serves as a basis for audits, undergoing a conformity assessment, and implementing human oversight, among others (a minimal code sketch of the logging and transparency duties follows the risk tiers below).
Limited risk: AI systems that carry a risk of manipulation or deceit. AI systems in this category must be transparent: humans must be informed that they are interacting with an AI (unless this is obvious), and any deep fakes must be labelled as such.
Minimal risk: All other AI systems that do not fall under the above categories, such as spam filters. Minimal-risk AI systems carry no restrictions or mandatory obligations, though organizations are encouraged to follow general principles such as human oversight, non-discrimination, and fairness.
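To make the high-risk record-keeping and limited-risk transparency duties concrete, here is a minimal sketch of what they might look like in code. Everything here is illustrative: the record fields, disclosure wording, and metadata keys are our own inventions, not text from the act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger supporting the high-risk record-keeping duty;
# the record fields are illustrative, not prescribed by the EU AI Act.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("inference_audit.jsonl"))

def logged_inference(model, model_version: str, user_input: str) -> str:
    """Run inference and append an audit record for later conformity review."""
    output = model(user_input)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": user_input,
        "output": output,
    }))
    return output

# Hypothetical limited-risk transparency measures: disclose the AI
# interaction and label synthetic media. Wording and keys are invented.
AI_DISCLOSURE = "You are interacting with an AI system, not a human."

def respond(reply: str, first_turn: bool) -> str:
    # Inform the user they are interacting with AI (unless already obvious).
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply

def label_synthetic_media(metadata: dict) -> dict:
    # Mark generated images/video so deep fakes are denoted as such.
    return {**metadata, "synthetic": True}

# Example usage with a stand-in "model":
print(respond(logged_inference(str.upper, "v1.0.0", "screen this cv"), True))
```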
The EU AI Act's penalties are by far the most stringent for noncompliance with its prohibitions on certain uses of AI, stipulating a maximum penalty of €35 million or 7% of worldwide turnover, whichever is higher. The act also provides penalties for noncompliance with other provisions, for providing incorrect or misleading information, and for failing to provide access to a GPAI model when requested. These fines range from €7.5 million to €15 million, or between 1% and 3% of worldwide turnover.
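Because each penalty tier applies the higher of a fixed cap or a share of worldwide turnover, exposure for a large firm is driven by turnover. A quick sketch of the arithmetic (the tier values come from the act; the helper function is our own):

```python
def max_fine(fixed_cap_eur: float, turnover_share: float,
             worldwide_turnover_eur: float) -> float:
    """EU AI Act fines take the higher of a fixed cap or a turnover share."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# Prohibited-use tier for a firm with €2bn worldwide turnover:
# max(€35m, 7% of €2bn) = €140m.
print(max_fine(35e6, 0.07, 2e9))  # 140000000.0
```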
The EU AI Act entered into force on 1 August 2024, but its provisions take effect in stages over the following years. Businesses trading with or linked to EU markets should track this timeline so they can keep on top of their changing responsibilities under the act. The most heavily penalized obligations, those on prohibited uses of AI, take effect in early 2025, followed by the reporting and other obligations on general-purpose AI (GPAI) models later in 2025. The final deadlines, covering certain high-risk AI systems, run into 2027, but companies are encouraged to follow the high-risk provisions in the interim.
The ISO is the globally recognized and adopted developer and publisher of technical standards for technology, manufacturing, and related fields. These are not legislation, but its guidelines routinely function as the gold standard in the areas they cover, and its certifications now serve as useful shorthand for businesses and organizations that want to demonstrate a high level of competence. As such, with the emergence of AI systems as a significant tool for businesses, it naturally follows that the relevant ISO standard (ISO/IEC 42001, on AI management systems) has become a significant guide for how businesses should operate with regard to AI systems.
Management systems:
The standard provides that systems should be established to monitor and improve AI systems, ensuring both that they remain in line with the company's goals and ethical standards and that they are used in a way that complies with privacy, security, and ethical values.
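In practice, "monitor and improve" often comes down to tracking operational metrics against thresholds the organization has tied to its goals and ethical standards. A minimal sketch, with invented metric names and thresholds:

```python
# Hypothetical monitoring check for an AI management system; the metric
# names and thresholds are illustrative only.
THRESHOLDS = {"accuracy": 0.90, "complaint_rate": 0.01}

def check_metrics(metrics: dict) -> list[str]:
    """Return alerts for any metric breaching its agreed threshold."""
    alerts = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        alerts.append("accuracy below target: trigger model review")
    if metrics["complaint_rate"] > THRESHOLDS["complaint_rate"]:
        alerts.append("complaint rate elevated: escalate to oversight board")
    return alerts

print(check_metrics({"accuracy": 0.87, "complaint_rate": 0.004}))
# ['accuracy below target: trigger model review']
```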
Data Privacy and Security:
The guidelines set out how compliant organizations must ensure that their AI systems operate within applicable data protection laws, regulations, and standards. Compliant organizations must also develop and implement security measures to protect AI systems from malicious access, breaches, and other cybersecurity threats, all while documenting security practices transparently in order to demonstrate responsibility and accountability.
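One common control in this area is redacting obvious personal identifiers before data reaches an AI system or its logs. The sketch below is deliberately simplified; a real deployment would rely on a vetted data protection library and cover many more identifier types:

```python
import re

# Simplified redaction of common PII patterns, shown for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact Jane at jane@example.com or +44 20 7946 0958."))
```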
Bias Mitigation:
Compliant organizations are required to combat AI bias both by employing diverse data sets for the training of AI systems and by continuously monitoring AI systems for the emergence of bias.
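Continuous monitoring for bias typically means computing fairness metrics per subgroup on live or held-out data. Below is a minimal sketch of one such metric, the demographic parity gap; the subgroup labels and data are invented for illustration:

```python
from collections import defaultdict

def positive_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Share of positive (e.g. 'hire') outcomes per subgroup."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Invented example data: binary predictions for two subgroups, "a" and "b".
rates = positive_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a: ~0.67, b: ~0.33, demographic parity gap ~0.33
```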
Risk and Impact Assessments:
Compliant organizations are required to regularly conduct risk assessments to identify potential risks to their users, the organization, and the wider public. Similarly, compliant organizations should perform impact assessments to understand the consequences of the various uses and deployments of their AI systems. Having undertaken these assessments, organizations should develop and implement practices to minimize any identified risks or potential negative impacts.
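A simple way to make these assessments repeatable is a risk register that scores each risk by likelihood and impact and flags anything above an agreed tolerance for mitigation. The scores and threshold below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

RISK_TOLERANCE = 9  # illustrative threshold for mandatory mitigation

register = [
    Risk("Discriminatory screening outcomes", likelihood=3, impact=5),
    Risk("Training data leakage", likelihood=2, impact=4),
]

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "mitigate" if risk.score > RISK_TOLERANCE else "monitor"
    print(f"{risk.name}: score {risk.score} -> {action}")
```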
ISO standards and certification are voluntary and must be applied for and worked towards. An applicant organization must go through an external audit and an application for certification, a process that can prove lengthy and expensive. It may prove worth it for many organizations, however, as ISO standards are widely recognized and will signal competence and security to peer organizations, customers, and even local regulatory bodies, which often use ISO standards as a basis for developing their own regulations.
NIST is a US organization, so its guidance is most relevant to US businesses; however, it is intended as a global framework for managing the risks of developing and using generative AI (GAI) systems, to be used similarly to NIST's existing cybersecurity and privacy frameworks. It is intended, largely, as a framework upon which businesses and organizations can build out and model their own risk management frameworks when deciding to utilize GAI.
NIST's guidance lists the following risks as unique to GAI systems, or significantly worsened by their use:
CBRN Information or Capabilities:
Eased access to or synthesis of materially nefarious information or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) weapons or other dangerous materials or agents.
Confabulation:
The production of confidently stated but erroneous or false content (known colloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.
Dangerous, Violent, or Hateful Content:
Eased production of and access to violent, inciting, radicalizing, or threatening content as well as recommendations to carry out self-harm or conduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging or stereotyping content.
Data Privacy:
Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or sensitive data.
Environmental Impacts:
Impacts due to high compute resource utilization in training or operating GAI models, and related outcomes that may adversely impact ecosystems.
Harmful Bias or Homogenization:
Amplification and exacerbation of historical, societal, and systemic biases; performance disparities between sub-groups or languages, possibly due to non-representative training data, that result in discrimination, amplification of biases, or incorrect presumptions about performance; undesired homogeneity that skews system or model outputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful biases.
Human-AI Configuration:
Arrangements of or interactions between a human and an AI system which can result in the human inappropriately anthropomorphizing GAI systems or experiencing algorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI systems.
Information Integrity:
Lowered barrier to entry to generate and support the exchange and consumption of content which may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale dis- and mis-information campaigns.
Information Security:
Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber operations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may compromise a system’s availability or the confidentiality or integrity of training data, code, or model weights.
Intellectual Property:
Eased production or replication of alleged copyrighted, trademarked, or licensed content without authorization (possibly in situations which do not fall under fair use); eased exposure of trade secrets; or plagiarism or illegal replication.
Obscene, Degrading, and/or Abusive Content:
Eased production of and access to obscene, degrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse material (CSAM), and nonconsensual intimate images (NCII) of adults.
Value Chain and Component Integration:
Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not processed and cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users.
To mitigate these risks, NIST recommends a corresponding set of actions, organized under the AI Risk Management Framework's four core functions: govern, map, measure, and manage.
NIST provides a comprehensive process for implementing, governing, reviewing, giving feedback on, and training against its guidelines, making it as simple as possible for businesses to use them or to build their own frameworks from them. These guidelines are not legislation or an accreditation, so they are not binding on organizations, but, as with the NIST cybersecurity and privacy frameworks, they are likely to influence how future legislation, accreditations, and frameworks are developed, so businesses and organizations will find them a valuable model for their own approaches.
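For teams building their own framework on top of NIST's, a useful starting point is simply to represent the profile's risks and the actions addressing them as data keyed to the AI RMF's four functions. The specific entries below are our own placeholders, not NIST text:

```python
# Skeleton of an organization-specific risk framework built on the NIST
# AI RMF functions; the example entries are placeholders, not NIST text.
FRAMEWORK = {
    "Confabulation": {
        "Govern": ["assign an owner for output-quality incidents"],
        "Map": ["list use cases where false output causes material harm"],
        "Measure": ["sample outputs weekly and rate factual accuracy"],
        "Manage": ["add human review for high-stakes responses"],
    },
    "Information Security": {
        "Govern": ["extend existing security policy to model assets"],
        "Map": ["inventory model weights, training data, and endpoints"],
        "Measure": ["run periodic red-team exercises against the model"],
        "Manage": ["rotate credentials and patch serving infrastructure"],
    },
}

for risk, functions in FRAMEWORK.items():
    for function, actions in functions.items():
        print(f"{risk} / {function}: {'; '.join(actions)}")
```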
The OECD is a supranational organization that governs some international taxation rules and publishes various works on international economic developments. Its publication of guidance on AI highlights the importance of AI going forward, especially in OECD nations, which tend to be highly developed and highly technological. The OECD aims its recommendations mostly at policymakers rather than directly at businesses, but it will still prove a useful channel to tune into.
Investing in AI research and development (Principle 2.1):
Governments should consider long-term public investment, and encourage private investment, in research and development and open science, including interdisciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.
Governments should also consider public investment and encourage private investment in open-source tools and open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of harmful bias and to improve interoperability and use of standards.
Fostering a digital ecosystem for AI (Principle 2.2):
Governments should foster the development of, and access to, an inclusive, dynamic, sustainable, and interoperable digital ecosystem for trustworthy AI. Such an ecosystem includes inter alia, data, AI technologies, computational and connectivity infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.
Shaping an enabling policy environment for AI (Principle 2.3):
Governments should promote an agile policy environment that supports transitioning from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate. They should also adopt outcome-based approaches that provide flexibility in achieving governance objectives and co-operate within and across jurisdictions to promote interoperable governance and policy environments, as appropriate.
Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
Building human capacity and preparing for labour market transformation (Principle 2.4):
Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.
Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, including through social protection, and access to new opportunities in the labour market.
Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers, the quality of jobs and of public services, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.
International co-operation for trustworthy AI (Principle 2.5):
Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI. Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI. Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI. Governments should also encourage the development, and their own use, of internationally comparable indicators to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.
Adoption: OECD standards are notable for being aimed specifically at member governments, seeking to inform how they shape the legislative and social structures that will, or should, emerge around an economy and labor market heavily impacted by AI systems. Businesses should be aware of these standards, as there is a high chance they will at least be considered as those legislative and social frameworks are developed.
Clearly there is some coherence between these standards, guidelines, and regulations. For example, the EU AI Act imposes an obligation to have a system that can be audited, ISO provides an audit framework, and within that NIST supplies a set of risks and controls. However, these are very early days, and we do not yet have a body of case law providing guidance on whether conformance at one level will be treated as conformity at the next. We look forward to that case law and will survey and review it with our audience when it becomes available.