What to do now to prepare for the EU AI Act coming into effect?
The EU AI Act entered into force on 1 August 2024. It has a transition period running until August 2027, but for the majority of companies the transition effectively ends on 2 August 2026, when most of the controls and technology need to be in place across the organisation.
As some have noticed, the EU has had its AI Act since 1 August 2024. It has a transition period with provisions becoming applicable in stages: the majority take effect on 2 August 2026, with full compliance required by 2 August 2027. Don't be fooled by the full-enforcement date; unless otherwise specified, the earlier date is the one that counts, which effectively turns the 3 years into 2 - of which roughly 1.5 years remain at the time of writing. And if you are a company that (i) has employees inside the EU or (ii) serves customers in the EU, this is your problem.
The AI Act
It is a full framework for both developing and deploying AI systems within the EU, as well as systems offered to EU citizens. It sets specific requirements for trustworthiness, transparency and respect for fundamental rights, and requires guardrails to be implemented. This makes sense at an individual level: after all, as kids we are taught how to talk to each other, and given a set of values and laws to adhere to throughout our lives; one doesn't hand anyone a sports car without rules of the road.
The AI Act requires all AI systems to be classified into four risk categories (unacceptable, high, limited and minimal risk), each of which comes with its own set of requirements to fulfil.
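To make the classification concrete, here is a minimal sketch of how an internal AI inventory might tag systems by risk tier. The example systems and names are illustrative assumptions; the actual tier of any given system must come from the Act's annexes and your own legal assessment.

```python
# Minimal sketch of an AI-system inventory tagged by AI Act risk tier.
# The example systems below are hypothetical, not taken from the Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5) - may not be placed on the EU market"
    HIGH = "high risk - full provider/deployer obligations apply"
    LIMITED = "limited risk - transparency obligations apply"
    MINIMAL = "minimal risk - no specific obligations"

ai_inventory = {
    "cv-screening-model": RiskTier.HIGH,   # recruitment use cases are high risk
    "customer-chatbot": RiskTier.LIMITED,  # must disclose it is an AI system
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} - {tier.value}")
```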
Fines
As with GDPR, companies that do not fulfil the requirements of the AI Act can be fined. There was similar unclarity when GDPR was introduced, and to date the highest GDPR fines have been levied on Meta (twice, totalling roughly 1.6B EUR), followed by Amazon (roughly 0.75B EUR).
The penalties are defined at high, medium and low levels. At the highest level, for breaches related to the prohibited AI practices defined in Article 5, fines can reach up to 35M EUR or 7% of total worldwide annual turnover, whichever is higher (EU AI Act, Article 99). This is steep, of course, depending on one's development budget. But 30M+ EUR would probably buy quite a decent setup to cater for the AI Act requirements - one could even save if it can be done with 15M EUR only - making it quite simple to calculate the compliance risk and budget accordingly.
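For illustration, the Article 99 top-tier calculation fits in a few lines of Python; the turnover figures below are made up.

```python
# Top-tier AI Act fine per Article 99: the higher of 35M EUR and
# 7% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

print(f"{max_fine_eur(100_000_000):,.0f}")    # 100M turnover -> 35,000,000 (the floor applies)
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 2B turnover  -> 140,000,000 (7% dominates)
```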
Who is impacted
The EU AI Act touches anyone working with AI - both developers (providers) and deployers. Providers, i.e. the companies that offer AI systems on the EU market, are required to have:
- Risk management
- Data governance
- Technical documentation
- Transparency and Information provision
- Human oversight
- Accuracy, robustness and cybersecurity
- Conformity assessment
Together, these must provide deployers with clear instructions and the required information. Deployers, in turn, are required to ensure:
- Usage in Accordance with Instructions: Operate AI systems following the provider's instructions to ensure proper functionality and compliance.
- Monitoring and Reporting: Monitor AI system performance and report any serious incidents or malfunctions to the provider and relevant authorities.
- Transparency to Affected Persons: Inform individuals when they are interacting with an AI system, especially in contexts like recruitment or credit scoring, to ensure transparency.
- Data Management: Ensure input data is relevant, representative, and used in compliance with data protection laws.
- Record-Keeping: Maintain logs of the AI system's activities for a specified period to facilitate traceability and accountability (a minimal logging sketch follows this list).
- Human Oversight Implementation: Establish measures for human oversight to promptly address any issues arising from the AI system's operation.
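As referenced above, here is a minimal sketch of what deployer-side record-keeping could look like: one structured log entry per GenAI interaction, tied to an authenticated user. The field names, file location and retention value are assumptions, not prescribed by the Act.

```python
# Minimal sketch: append each GenAI interaction to a structured JSONL log.
import json
import time
from pathlib import Path

LOG_FILE = Path("genai_interactions.jsonl")
RETENTION_DAYS = 183  # illustrative assumption; set per your legal requirements

def log_interaction(user_id: str, app: str, prompt: str, response: str) -> None:
    """Record one prompt/response pair with user identity and timestamp."""
    record = {
        "ts": time.time(),
        "user_id": user_id,  # keeps usage traceable to an authenticated user
        "app": app,
        "prompt": prompt,
        "response": response,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("alice@example.com", "copilot", "Summarise Q3 sales", "Q3 sales rose ...")
```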
These requirements are not easily met, especially those on data management and record-keeping, which are the foundation on which the human oversight requirement can be fulfilled.
Prepare now
To get ready, your organisation should already be working on the following activities:
- Implement the required processes through an AI task force (see the AI task force charter turned into a process, discussed in our previous blog by my colleague Antti: https://www.nrocsecurity.com/blog/how-to-drive-gen-ai-adoption-example-ai-task-force-charter)
- Implement technical solutions able to prove and provide human oversight capability, in line with the Article 15 legal requirements of the AI Act:
- Mitigate attacks on the AI model / app in use
- Automatically log communication and operations to identify potential breaches
- Protect against unauthorized 3rd-party alterations (see the hash-chain sketch after this list)
- Ensure resilience to faults and errors
- Maintain a risk management system to address potential cybersecurity threats
- Whilst implementing the EU AI Act requirements, it is worth considering the other things needed to put company proprietary data to work: transparency, guardrails for prompts and responses, guardrails for data flows, and ensuring all activities are authenticated and traceable.
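One way to approach the logging and anti-tampering points above is a hash-chained, tamper-evident log: each entry's hash covers the previous entry, so any later alteration breaks verification. This is a minimal sketch with illustrative names; a production setup would also sign entries or ship them to a separate system.

```python
# Minimal sketch of a tamper-evident (hash-chained) log.
import hashlib
import json

def chain_entry(prev_hash: str, payload: dict) -> dict:
    """Create a log entry whose hash covers the previous entry's hash."""
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"payload": payload, "prev": prev_hash, "hash": digest}

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any alteration anywhere breaks the chain."""
    prev = "genesis"
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = [chain_entry("genesis", {"event": "prompt", "user": "alice"})]
log.append(chain_entry(log[-1]["hash"], {"event": "response", "user": "alice"}))
print(verify(log))                      # True
log[0]["payload"]["user"] = "mallory"   # simulate an unauthorized alteration
print(verify(log))                      # False - tampering detected
```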
Understanding your exposure to the requirements starts with visibility - knowing what your users are using (be it approved apps, or personal accounts and computers for GenAI apps) - and being able to act on it, so you can prepare for the requirements and have the full process built by the time it is needed:
- Ensure visibility and insight
- Identify and visualize Gen AI app usage patterns
- Ensure sufficient logging of all GenAI interactions, both prompts and responses
- Prompt and response guardrails
- Protect against risks like data leakage and overreliance
- Data flow guardrails
- Keep company-sensitive content and PCI/PII away from any AI (see the guardrail sketch after this list)
- Use granular-enough policy settings specific to user group and app combinations
- User authentication and guidance
- Ensure all GenAI usage is authenticated, even on personal IDs
- Target for real-time guidance that drives accountability
- Keep information traceable to the source(s)
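To tie the guardrail points together, here is a minimal sketch of a prompt guardrail that detects PCI/PII with crude patterns and applies a policy per (user group, app) combination. The patterns, policy table and function names are illustrative assumptions; real deployments use proper detectors and richer policies.

```python
# Minimal sketch: block or redact PCI/PII in prompts, with policy
# granularity per (user group, app) combination. Patterns are crude
# illustrations, not production-grade PII/PCI detection.
import re

PII_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Hypothetical policy table: action per (user_group, app).
POLICY = {
    ("finance", "public-chatbot"): "block",
    ("engineering", "approved-copilot"): "redact",
}

def apply_guardrail(user_group: str, app: str, prompt: str) -> str | None:
    """Return the prompt to forward, a redacted version, or None if blocked."""
    action = POLICY.get((user_group, app), "redact")  # cautious fallback
    if not any(p.search(prompt) for p in PII_PATTERNS.values()):
        return prompt
    if action == "block":
        return None  # the caller should refuse and guide the user in real time
    redacted = prompt
    for name, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[{name} removed]", redacted)
    return redacted

print(apply_guardrail("engineering", "approved-copilot",
                      "Email bob@example.com a summary"))
```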
Waiting until the last minute will certainly leave you in breach of the EU AI Act: visibility is not collected overnight, and it is the first requirement on which everything else builds.
NROC Security is the only solution that comprehensively addresses all potential causes of GenAI copilot security and compliance vulnerabilities - user access, user behavior, and prompt and response content - to fully secure them while maximizing their potential for business productivity.