In terms of risk management, the answer to the question posed is, quite simply, nothing.  The process of analysing the threats posed to any particular organisation or infrastructure, assessing how vulnerable that organisation is to those threats, and applying controls to bring the risk down to an acceptable level remains pretty much the same.
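
To put that process in concrete terms, here is a minimal sketch of the classic assess-and-treat loop.  The scoring scales, threat names and appetite threshold are illustrative assumptions of mine, not taken from any particular standard:

```python
# A minimal sketch of the risk-management loop: score each threat, apply
# controls, and check residual risk against an acceptable level. The 1-5
# scales and the threshold of 6 are illustrative, not from any standard.

ACCEPTABLE_RISK = 6  # illustrative risk-appetite threshold on a 1-25 scale

def risk_score(likelihood: int, impact: int) -> int:
    """Classic qualitative scoring: likelihood (1-5) x impact (1-5)."""
    return likelihood * impact

def residual_risk(likelihood: int, impact: int, control_reduction: int) -> int:
    """Here controls reduce likelihood; impact would need separate mitigation."""
    return risk_score(max(1, likelihood - control_reduction), impact)

threats = {
    "phishing": (4, 4),          # (likelihood, impact)
    "model poisoning": (2, 5),   # an AI-specific threat, same process applies
}

for name, (likelihood, impact) in threats.items():
    inherent = risk_score(likelihood, impact)
    residual = residual_risk(likelihood, impact, control_reduction=2)
    status = "acceptable" if residual <= ACCEPTABLE_RISK else "treat further"
    print(f"{name}: inherent={inherent}, residual={residual} -> {status}")
```

The point is that an AI-specific threat drops into exactly the same loop as a conventional one; only the inputs change.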

Of course, the threat itself is changing considerably.  Billions are being invested globally in AI, driving huge advances in technology that bring great benefits but also new risks, potentially more dangerous than those associated with current IT systems.

Guidance on these risks, and how to address them, is emerging from several sources internationally, but the EU has gone one step further and is producing the first comprehensive continental legislation on AI: the EU AI Act.  Unlike most countries' guidance, it is not voluntary; it will become law and has real teeth.  It wouldn't be a shock to find other countries following suit.

The EU AI Act focuses on impacts to the rights, freedoms and safety of the public within the EU but is nevertheless a landmark legislative proposal by the European Union aimed at regulating artificial intelligence across its member states. Proposed in April 2021, the Act seeks to establish a comprehensive legal framework for AI that ensures the technology is developed and used in a way that respects fundamental rights, safety, and democratic values.

Here are the key points of the EU AI Act:

1. Risk-Based Approach

The Act adopts a risk-based classification system that categorises AI systems into four risk levels (a simplified sketch follows the list):

  • Unacceptable Risk: AI systems deemed harmful (e.g., social scoring by governments) are banned outright.
  • High Risk: AI systems with significant potential to impact safety, rights, or wellbeing (e.g., biometric identification, critical infrastructure) must meet strict requirements regarding transparency, accuracy, oversight, and documentation.
  • Limited Risk: Systems with moderate risk must comply with transparency obligations (e.g., AI chatbots must inform users they are interacting with AI).
  • Minimal Risk: Systems with negligible or no risk (e.g., spam filters, AI in video games) are largely unregulated.
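
As a mental model, the tiering can be sketched as a simple lookup.  The example mappings below are my own illustrative reading of the categories above; the Act's annexes define the actual classifications:

```python
# A simplified sketch of the Act's four-tier classification. Treat the
# mappings as a mental model, not a legal determination.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements: transparency, accuracy, oversight, documentation"
    LIMITED = "transparency obligations only"
    MINIMAL = "largely unregulated"

EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "critical infrastructure control": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} - {tier.value}")
```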

2. High-Risk AI Regulation

For high-risk AI systems, the EU AI Act imposes stringent regulatory requirements.  These include the following (sketched as a simple checklist after the list):

  • Thorough risk assessments before deployment.
  • Ongoing monitoring during use.
  • Ensuring traceability and transparency in the system’s decision-making processes.
  • Compliance with technical documentation and human oversight standards.
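
One hypothetical way to track those obligations internally is a simple compliance checklist.  The field names below are my own shorthand for the bullets above, not the Act's legal terminology:

```python
# A hypothetical pre-deployment checklist for a high-risk system, mirroring
# the obligations listed above. Field names are illustrative shorthand.
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    risk_assessment_done: bool = False     # thorough assessment before deployment
    monitoring_in_place: bool = False      # ongoing monitoring during use
    decisions_traceable: bool = False      # traceability and transparency
    technical_docs_complete: bool = False  # technical documentation standards
    human_oversight_defined: bool = False  # human oversight standards

    def gaps(self) -> list[str]:
        """Return the obligations that are still outstanding."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

record = HighRiskCompliance(risk_assessment_done=True, monitoring_in_place=True)
print("outstanding obligations:", record.gaps())
```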

3. Prohibited Practices

Certain AI uses are banned outright because they are considered to violate fundamental rights. Examples include:

  • Real-time remote biometric identification in public spaces for law enforcement purposes (with some exceptions).
  • AI systems that exploit vulnerabilities of specific groups, such as children or the elderly.

4. Governance and Enforcement

A new European Artificial Intelligence Board (EAIB) will be created to oversee the implementation of the AI Act. This body will work alongside national regulators to enforce compliance across the EU.

5. Penalties

Non-compliance with the AI Act can result in hefty fines, with penalties of up to €30 million or 6% of global annual turnover, whichever is higher, for serious violations.
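
The “whichever is higher” rule is easy to work through.  Using the draft figures quoted above, a sketch for an illustrative company:

```python
# The "whichever is higher" rule, using the EUR 30m / 6% figures from the
# Commission's proposal quoted above. The turnover is an invented example.
def max_fine(global_annual_turnover_eur: float) -> float:
    """Up to EUR 30m or 6% of global annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# For a company turning over EUR 2bn, the percentage cap dominates:
print(f"EUR {max_fine(2_000_000_000):,.0f}")  # EUR 120,000,000
```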

6. Promoting Innovation

While the AI Act imposes strict controls on high-risk systems, it also includes provisions to encourage innovation in the AI sector. It proposes the creation of regulatory sandboxes, controlled environments where companies and public institutions can test AI systems under the supervision of regulators before full deployment.

7. Scope

The AI Act has a broad scope, applying not just to companies and institutions based in the EU, but also to non-EU organisations that place AI systems on the European market or whose AI systems affect individuals within the EU.

The EU AI Act is significant because it represents the first major attempt globally to create a legal framework that balances the benefits and risks of AI. It aims to position the EU as a global leader in AI regulation, prioritising ethical AI development while promoting safety, transparency, and accountability.

As I said earlier, other sets of guidance are being issued, but they are not enforceable and can be adopted in whole, adopted in part, or ignored.  The US Department of Commerce’s National Institute of Standards and Technology (NIST) and the UK National Cyber Security Centre (NCSC) have both issued such guidance.  The NIST guidance, for example, covers Harm to People, Harm to an Organisation and Harm to an Ecosystem, but it remains just guidance.  On the upside, it is all based on sound risk management, and for those of us who have been steeped in that culture for almost as long as information security has been part of the IT sphere, that is music to our ears.

If you want to know more, or to chat over the issues, drop me a message.  I’d be only too pleased.  If you are interested in knowing a bit more about risk management, then this article might be of interest to you: https://hah2.co.uk/still-on-the-subject-of-cyber-resilience/.
