The EU AI Act is the world's first comprehensive horizontal regulation of artificial intelligence. It is also the most misread piece of EU legislation for freelancers since the GDPR. Most coverage in 2024-2025 treated the Act as either an existential threat to AI use in business or as a developer-only problem that knowledge workers could ignore. Both readings miss what actually applies.
The Act entered into force on August 1, 2024, with a four-phase rollout (European Commission, AI Act overview; AI Act explorer, official timeline):
- February 2, 2025: prohibited AI practices enforceable (social scoring, real-time biometric ID in public spaces, manipulative AI, certain emotion recognition).
- August 2, 2025: General-Purpose AI (GPAI) provider obligations effective; GPAI Code of Practice in force.
- August 2, 2026: most obligations for high-risk AI systems and AI governance rules effective.
- August 2, 2027: obligations extend to high-risk AI embedded in products regulated under Annex I (machinery, medical devices, and similar), completing the rollout.
For a freelancer who uses ChatGPT, Claude, Gemini, or Copilot in the course of client work for EU clients, the question is which of those obligations actually land on you. Most of them do not. A small number do.
What the Act actually classifies you as
The AI Act distinguishes between *providers* of AI systems, *deployers* of AI systems, *importers*, *distributors*, and *affected persons*. The category that matters most for freelancers is deployer — defined as any natural or legal person using an AI system under its own authority in the course of a professional activity (European Commission, AI Act FAQ).
A freelance copywriter who uses ChatGPT to draft a campaign for a Berlin SaaS client is a deployer. A freelance designer who uses Midjourney to generate concept boards for a Paris agency is a deployer. A freelance developer who uses Copilot to write production code for a Madrid fintech is a deployer.
The good news: deployer obligations are dramatically lighter than provider obligations. The bad news: they are not zero, particularly when the AI system is being used in a "high-risk" context — which a surprising number of freelance use cases are.
What rules actually apply to a freelancer-as-deployer
There are three layers of obligation worth understanding.
Layer 1: prohibited practices (effective February 2025). A freelancer must not deploy AI systems in the prohibited categories. For most knowledge-work freelancers this is irrelevant — the prohibited list covers social scoring, mass biometric surveillance, manipulative subliminal techniques, and emotion recognition in workplaces or schools. A freelance copywriter or developer is essentially never building or deploying those systems.
Layer 2: high-risk AI obligations (effective August 2026). High-risk AI systems include those used in employment decisions (CV screening, performance evaluation), credit scoring, biometric identification, critical infrastructure, education access decisions, and law enforcement. A freelancer building or deploying AI for any of those use cases lands in the high-risk regime, which carries documentation, transparency, human oversight, and quality management requirements.
Layer 3: transparency obligations (effective August 2, 2026). This is where most freelancers actually encounter the Act. Article 50 requires that:
- Users interacting with an AI system (chatbot, voice agent) must be informed they are interacting with AI.
- Synthetic audio, image, video, or text content must be labelled as AI-generated, with limited exceptions (European Commission, AI Act transparency obligations).
- Deepfakes must be clearly labelled.
- Text published "with the purpose of informing the public on matters of public interest" that has been AI-generated or substantially AI-assisted must be disclosed.
For a freelance copywriter producing AI-assisted blog content for an EU client's marketing site, the transparency obligation is a real consideration. The "matters of public interest" carve-out is meant to catch journalism and certain civic content; standard marketing copy generally does not fall under it. But editorial content, expert commentary, or anything published to inform the public should be labelled as AI-generated or AI-assisted where applicable.
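The Article 50 triggers above can be sketched as a simplified decision helper. This is an illustration, not legal logic: the function name and flags are my own, and the Act's exceptions (for example, human editorial review of public-interest text) are not modelled.

```python
def disclosure_required(
    interactive_ai: bool = False,       # user interacts with a chatbot or voice agent
    synthetic_media: bool = False,      # AI-generated audio, image, or video
    deepfake: bool = False,             # realistic depiction of real people or events
    public_interest_text: bool = False, # AI text informing the public on public-interest matters
) -> list[str]:
    """Return the Article 50-style disclosure duties a piece of work triggers.

    Simplified paraphrase for illustration only; the Act contains
    exceptions (e.g. human editorial responsibility) not modelled here.
    """
    duties = []
    if interactive_ai:
        duties.append("inform users they are interacting with AI")
    if synthetic_media or deepfake:
        duties.append("label content as AI-generated")
    if public_interest_text:
        duties.append("disclose AI generation or assistance in the published text")
    return duties

# Standard marketing copy with no synthetic media triggers nothing:
print(disclosure_required())                           # []
# An AI-assisted explainer published to inform the public does:
print(disclosure_required(public_interest_text=True))
```

The point of the sketch is the asymmetry: most routine freelance deliverables trigger no duty at all, and the duties that do fire are disclosure, not prohibition.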
What the GPAI Code of Practice changes
The GPAI obligations from August 2, 2025 fall primarily on the *providers* of foundation models — OpenAI, Anthropic, Google, Meta, Mistral. Not on freelancers. The practical effect for freelancers is indirect: the providers must publish summaries of training data, support copyright opt-outs, and implement systemic-risk mitigations for "GPAI models with systemic risk" (above the 10^25 FLOPs threshold). These obligations are operationalised through the GPAI Code of Practice, which major providers signed in mid-2025.
For freelancers, the downstream effect is that the LLMs you use should disclose more about how they were trained and what content they will and will not regurgitate. If your client asks "does the AI you used train on copyrighted material," the answer in late 2026 will be more documented than it was in late 2024.
What this means in practice for freelance contracts
Six practical implications for freelancers signing EU client contracts in 2026:
1. Disclose AI assistance in your deliverables. If you used ChatGPT, Claude, or another LLM in producing client work, say so in the deliverable scope or methodology. EU clients increasingly expect this regardless of legal requirement; for high-risk or public-interest content it is mandatory.
2. Keep records of which AI tools you used per engagement. A deployer's documentation obligation is light, but record-keeping is the cheapest insurance against a future client or regulator question. A simple "AI tools used" line in your project notes is enough.
3. Watch for high-risk use cases. If a client engages you to build CV-screening logic, credit-decision systems, biometric ID, or anything in the high-risk Annex III list, the rules tighten dramatically from August 2026. Pricing those engagements should reflect compliance overhead.
4. Avoid the prohibited categories entirely. No emotion recognition in workplace or school contexts, no social scoring, no untargeted scraping for biometric databases. These are bright lines.
5. Update your service agreement to address AI use. A short clause noting that AI tools may be used to support delivery, with quality assurance and human review applied, addresses both the legal question and the trust question covered in our piece on AI-assisted client communication.
6. Read your AI provider's terms. OpenAI, Anthropic, and Google have all updated their EU terms to reflect Act compliance. Your obligations as a deployer flow downstream from theirs as providers; knowing the chain matters.
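Point 2 above needs no special tooling. As a minimal sketch, an append-only JSON-lines log per engagement is enough; the file name and field names here are illustrative assumptions, not anything the Act prescribes.

```python
import json
from datetime import date

def log_ai_usage(path: str, engagement: str, tools: list[str], notes: str = "") -> None:
    """Append one JSON line recording which AI tools were used on an engagement."""
    record = {
        "date": date.today().isoformat(),
        "engagement": engagement,
        "ai_tools": tools,
        "notes": notes,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical engagement names for illustration:
log_ai_usage("ai_tools_log.jsonl", "Berlin SaaS campaign", ["ChatGPT"], "drafting; human-edited")
log_ai_usage("ai_tools_log.jsonl", "Paris agency concept boards", ["Midjourney"])
```

A plain-text note in your project folder does the same job; the only design choice that matters is writing the record at delivery time, not reconstructing it when a client asks.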
What the AI Act does *not* require of most freelancers
Three reassurances worth stating clearly:
- No CE marking, conformity assessment, or registered AI system documentation for a freelancer using off-the-shelf LLMs in standard knowledge work. Those obligations apply to providers of high-risk systems, not deployers.
- No AI literacy certification. Article 4 (applicable since February 2, 2025) requires deployers to ensure adequate "AI literacy" among their staff. For a solo freelancer this is a self-education obligation, not a paperwork one: read the platform docs and the AI Act summary.
- No registration with an EU national competent authority. Standalone deployers using third-party AI systems are not subject to registration.
The fines for non-compliance with the Act are substantial — up to €35 million or 7% of global annual turnover for prohibited-practice violations, up to €15 million or 3% for high-risk obligations, with SMEs capped at the lower figure (AI Act Article 99 (Penalties); European Commission). For freelancers operating below those scales, the realistic enforcement risk is via client contract, not direct regulator action. Your EU clients will pass compliance obligations down the supply chain through their procurement and vendor management, not by reporting you to the Commission directly.
Related: how to write EU-client-friendly proposals in 2026, and our take on AI agents for freelance ops.
Delivvo gives freelancers a branded EU-compliant portal for proposals, contracts, deliverables, and invoices, with audit-ready logs of every client artefact. When the EU client asks "show me which AI tools were used on this engagement," the documentation is already structured rather than scraped from five email threads. See how it works →
The takeaway
The EU AI Act is not the existential compliance burden that 2024 coverage suggested, nor is it a developer-only problem. For freelancers using off-the-shelf LLMs in standard EU client work, the live obligations are: disclose AI use where applicable, avoid prohibited and high-risk categories, keep light records, and stay AI-literate. Most freelancers will spend more time reading their AI provider's terms than complying with their own deployer obligations. That is the right ratio. The Act is meant to constrain providers and high-risk deployers, not knowledge workers using assistants in the course of normal client work.
Written by The Delivvo team · May 16, 2026
More from the blog →