AI Law 2025: EU AI Act and the Implications for Marketing
The EU AI Act and speculation around future AI law
One year on from the EU AI Act, the world’s first comprehensive law governing the development and use of AI, the water is still murky. The UK is at a crossroads with AI legislation: with America’s unregulated, guns-blazing attitude on one side and the EU’s measured, legislative approach on the other, it is difficult for British innovators to know which path to take.
But before we dive into implications, let’s understand the EU’s AI Act. Within the EU there are now four categories for AI applications based on risk:
Unacceptable
High Risk (including many employment contexts such as hiring or promoting)
Limited
Minimal
High-risk AI applications face numerous requirements, including continuous risk management systems, strict data governance to prevent AI training on biased data, and detailed technical documentation covering capabilities, limitations and human oversight. If you operate within the EU, it is important to understand how these requirements affect your AI integration; high-risk uses may demand more careful implementation.
Moving up the AI food chain, providers of general-purpose AI systems must comply with EU copyright law and publish detailed summaries of the data their models are trained on. The aim is to surface, and help mitigate, biases that may have been present in those training data sets.
Transparency implications
Under Article 50, if an AI system interacts directly with someone, it must inform them that they are interacting with an AI system unless it is obvious to a reasonably well-informed, observant and circumspect person. This information should be provided to the human in a clear and distinguishable manner at the time of first interaction or exposure at the latest. These transparency duties start applying from 2 August 2026.
Deepfake implications
Deepfakes are defined as image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear authentic or truthful. Under the 2024 Act, such content must be disclosed as generated or manipulated. In addition, providers of AI systems that generate synthetic media must implement machine-readable marking (such as watermarking or metadata) to help make detection possible. This element of the Act is imperative to stem the flow of misinformation that is becoming increasingly difficult to detect.
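To make the marking requirement concrete, here is a minimal sketch of a machine-readable provenance record for a piece of synthetic media. The field names and structure are illustrative assumptions, not any standard; real-world implementations would typically follow an established provenance scheme such as C2PA rather than an ad-hoc manifest like this.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_ai_provenance_manifest(media_bytes: bytes, generator: str) -> str:
    """Build a machine-readable provenance record for synthetic media.

    Illustrative only: the field names below are assumptions for this
    sketch, not a recognised standard such as C2PA.
    """
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The hash ties the manifest to the exact media file it describes,
        # so the label cannot simply be copied onto different content.
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, indent=2)

# Example: mark a dummy image payload as AI-generated.
record = build_ai_provenance_manifest(b"\x89PNG...", "example-model-v1")
```

A sidecar manifest like this is the simplest approach; embedding the same information directly in the file’s metadata (or as an invisible watermark) makes the marking harder to strip, which is closer to what the Act envisages.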
Email marketing implications
If you’re in London, you may well have seen adverts for “AI-First Platforms” which offer AI employees who can do anything from writing copy to automating personalised emails. Many marketers might jump for joy at the idea of an AI that can find them leads while connecting with people in a meaningful way. Some companies claim that their AI bots can even use web scraping to hyper-personalise emails and make references to events the recipient has posted about on social media.
Sentiment about this was mixed within our primary research: while some respondents would hold back, others were willing to go for it. However, the EU AI Act has set some guidelines if you’re looking to make contacts across the Channel.
If the email clearly indicates that it is generated by an AI system (“This is an automated message”) and the recipient could reasonably infer this, that disclosure may be sufficient. If it is not apparent, a company must inform recipients that they are interacting with an AI system. Misrepresenting the amount of AI input is also non-compliant: if humans did not review the email, this should be stated. Presenting an AI as a real individual can mislead recipients, which violates the transparency obligations.
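As a rough illustration of the disclosure pattern described above, the sketch below assembles an automated email with the AI notice placed at the top of the body. This is a minimal example using Python’s standard library; the `X-AI-Generated` header name is an illustrative assumption, not a standard header.

```python
from email.message import EmailMessage

DISCLOSURE = "This is an automated message generated by an AI system."

def build_ai_marketing_email(to_addr: str, subject: str,
                             ai_body: str) -> EmailMessage:
    """Assemble a marketing email with an explicit AI disclosure.

    The 'X-AI-Generated' header is an illustrative, non-standard
    machine-readable hint; the human-readable disclosure goes first
    in the body so it is seen at the point of first exposure.
    """
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg["X-AI-Generated"] = "true"  # non-standard, for illustration only
    msg.set_content(f"{DISCLOSURE}\n\n{ai_body}")
    return msg

email = build_ai_marketing_email(
    "recipient@example.com", "Your weekly update",
    "Here is this week's personalised round-up...",
)
```

Placing the disclosure first, rather than in a footer, tracks the Act’s requirement that the information be provided “at the time of first interaction or exposure at the latest”.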
It’s worth noting that beyond the AI Act, GDPR and e-privacy rules (such as PECR in the UK) govern the collection, use and sharing of personal data, as well as the sending of marketing emails. Web scraping for personalisation without a clear lawful basis may breach those rules even if the AI Act were satisfied.
Penalties can range up to 7% of global annual turnover or €35 million (for the most serious infringements), with lower tiers capped at €15 million or 3%.
What this means for UK businesses
Although experts suggested that this law would prompt similar regulation from countries such as the UK and the US, more than a year on, this has not happened. The UK AI Regulation Bill, reintroduced in the House of Lords in March 2025, could create an AI authority with regulatory principles, but it does not have the support of the government. Even if the bill were passed, it would likely be less stringent than the EU version.
Similarly, the UK and the US did not sign the February 2025 Paris declaration on inclusive and sustainable AI for people and the planet, which supports the United Nations’ Sustainable Development Goals. Although the UK said its decision was still under consideration, Sir Keir Starmer has said that he wants the UK to be at the forefront of AI.
In the US, the administration pushed for looser federal rules through the “One Big Beautiful Bill” package, but the final law dropped a proposed moratorium on state-level AI regulation. The American approach remains more deregulatory and fragmented compared to the EU’s centralised regime.
The immediate takeaway is to exercise caution when implementing AI in EU-based or EU-serving systems. Beyond this, it would be smart to take precautions with sensitive data, not just out of duty to your customers and clients, but in anticipation of possible future legal requirements. It may be prudent to avoid putting information you wouldn’t be willing to share with others into AI systems, unless it is clearly stated that the data will not be scraped or used to train future iterations of the model.
Author: Oakley Webb.
This article was published in August 2025. Further legislative changes may not be reflected in this piece. Content in this article does not constitute legal advice and should not be relied upon as such.
FAQs about the EU AI Act 2024
What is the EU AI Act?
The EU AI Act is the first comprehensive piece of legislation in the world that regulates the development and use of artificial intelligence. It classifies AI systems into risk categories — from prohibited to minimal risk — and sets obligations for providers and users. For marketers, this means understanding where AI tools used in campaigns fit within these categories and ensuring compliance with transparency and data governance rules.
How does the EU AI Act affect marketing?
The EU AI Act introduces transparency obligations that impact marketing activities such as automated emails, chatbots, and AI-generated content. In certain markets, marketers may need to disclose when AI is being used, avoid misleading practices, and ensure data used for personalisation is fair and unbiased. This is critical for brand trust and compliance when operating in or targeting customers in EU markets.
What are the penalties for breaching the EU AI Act?
Breaches of the EU AI Act can lead to fines of up to €35 million or 7% of global annual turnover, depending on the severity of the infringement. Lesser breaches carry penalties of up to €15 million or 3% of turnover, which is still significant for most businesses. For marketing teams, non-compliance could not only mean financial risk but also reputational damage.
Do businesses need to disclose AI-generated marketing emails?
Yes. If it is not obvious to a reasonably well-informed recipient that an email is generated by AI, businesses must disclose this fact clearly. Passing off AI-generated emails as human-authored misleads recipients, which violates the AI Act’s transparency requirements and could also fall foul of GDPR and PECR rules on fair communication.
How does the EU AI Act define deepfakes?
The EU AI Act defines deepfakes as image, audio, or video content that falsely appears to depict real people, events, or places. Under the act, such content must be clearly labelled as AI-generated or manipulated so audiences are not misled. In addition, providers of AI systems that create synthetic media must implement technical measures such as watermarking or metadata tagging to support detection.
Does the UK have an equivalent to the EU AI Act?
Currently, the UK does not have a direct equivalent to the EU AI Act. A Private Members’ Bill — the AI Regulation Bill — was reintroduced in March 2025, but it does not have government backing and is unlikely to become law in its current form. The UK remains focused on a “pro-innovation” and sector-specific regulatory approach, meaning businesses working across both the EU and UK need to navigate two very different regimes.