For five years, the question every freelance designer asked was: "is my work in the training set?" In 2026 the question has moved on. The training sets exist. The models are trained. What freelancers can now influence is whether future training rounds include their work — and whether they get credit, compensation, or at least an opt-out for the rounds that already happened.
The opt-out landscape in 2026 is more developed than most designers realise. It is also genuinely uneven. Some controls do something. Some controls perform doing something. Telling the two apart is the first move.
Where Adobe actually sits
Adobe's official position is that Firefly is trained on Adobe Stock content, openly-licensed content, and public-domain content — explicitly not on the open web (Adobe AI overview; Adobe Stock contributor FAQ).
That is a meaningfully different position from OpenAI, Stability, and Midjourney — none of whom have ever published a clear training-data list. For freelance designers whose primary worry is "did my Instagram portfolio get scraped into a generative model," Adobe is structurally less of a threat than the other three.
The controls Adobe actually offers in 2026:
- Content Credentials' "Do Not Train" preference. Set on any image you push through Adobe Content Authenticity, this signal asks Adobe (and any aligned model trainer) to exclude that image from training. The catch: it is a request, not a hard block. Adobe honours it for Firefly Custom Models, Style Reference, and Structure Reference. Third parties that respect Content Credentials honour it too. The image still loads on the open web and other model trainers can — and do — ignore the signal.
- Adobe Stock contributor opt-out (none, as of May 2026). Adobe explicitly does not allow Adobe Stock contributors to opt out of having their stock content used for Firefly training. The mechanism is a one-time bonus payment to contributors whose work was used in training, paid out September 17, 2025, with future bonus cycles still being worked out (Adobe Stock contributor FAQ).
- Enterprise opt-out. Adobe Enterprise Creative Cloud customers can opt out of having their cloud-stored work used in any model training. Personal users cannot, by default.
- Firefly Custom Models, retrained on owned data. Enterprise users can train Firefly on their own brand library, isolated from the foundation model. This is the path for studios who want generative output that matches their style without feeding the public model (Adobe Firefly Custom Models retraining).
The honest read: Adobe has built more opt-out plumbing than any other major model trainer. The Adobe Stock contributor situation is the weakest link — a freelance illustrator who put years of work into Adobe Stock has effectively no opt-out for past training, only a small share of a contributor bonus pool.
What Content Credentials actually does
Content Credentials is the C2PA-based provenance signal Adobe co-developed. It is now embedded in Photoshop, Lightroom, Firefly, and (as of 2026) in iPhone's Camera app under iOS 18+ for Pro models. The signal travels with the file: who made it, what tools touched it, whether it was AI-generated, and whether the creator has set a "do not train" preference (Adobe Content Authenticity training preference guide).
The strengths: the signal is genuinely cryptographic, hard to forge, and machine-readable. Model trainers who respect it can filter out flagged content with a pre-processing step.
The weaknesses: not every trainer respects it. The signal is stripped by some image hosts during compression. The "do not train" flag is voluntary on the trainer's side; there is no legal force behind it in most jurisdictions.
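On the trainer's side, honouring the flag is a straightforward pre-processing step. A minimal sketch, assuming manifests have already been parsed into dicts by a C2PA SDK (such as c2patool) — the assertion label and entry names below follow the C2PA "training and data mining" assertion, but treat the exact shape as an assumption and check it against your SDK's output:

```python
# Trainer-side filter that honours a C2PA "do not train" preference.
# The manifest dicts stand in for parsed Content Credentials manifests.

def allows_training(manifest: dict) -> bool:
    """Return False if the manifest reserves the work against AI training."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.training-mining":
            continue
        entries = assertion.get("data", {}).get("entries", {})
        use = entries.get("c2pa.ai_training", {}).get("use")
        if use == "notAllowed":
            return False
    return True

def filter_training_set(items):
    """Drop (path, manifest) pairs whose provenance forbids training."""
    return [path for path, manifest in items if allows_training(manifest)]

flagged = {"assertions": [{"label": "c2pa.training-mining",
                           "data": {"entries": {"c2pa.ai_training": {"use": "notAllowed"}}}}]}
unflagged = {"assertions": []}  # no credentials at all -> nothing to honour
print(filter_training_set([("a.jpg", flagged), ("b.jpg", unflagged)]))  # only b.jpg survives
```

Note the default: an image with no credentials at all passes the filter, which is exactly why the flag only protects work that carries it intact.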
For a freelance designer in 2026, the practical advice is: turn on Content Credentials with "do not train" set by default, publish images with credentials intact (use platforms that preserve them — your own portfolio site, Adobe-aligned platforms, increasingly Instagram and Threads), and accept that the signal raises the friction for unauthorised training without eliminating it.
Glaze and Nightshade — the active defences
The University of Chicago's Glaze (passive disruption — makes a model's output less faithful to your style if it tries to learn from your images) and Nightshade (active poisoning — corrupts what the model learns from your images, with broader effects) are the freelance designer's offensive tools.
The 2026 reality: both work, both leave only minimal visible artefacts when used correctly, and both meaningfully reduce the value of a designer's images to a non-consenting trainer. The catch is the runtime: Glaze takes 20-60 minutes per image on a consumer GPU, Nightshade similar. For a designer publishing 200 images a year, that is real time.
Use cases where Glaze/Nightshade are worth the effort:
- A signature illustration style that defines your freelance pricing leverage.
- Concept art and unreleased work shown in pitch decks or social previews.
- Stock-like assets you license for individual use but do not want absorbed at scale.
Use cases where they are not worth the effort:
- General portfolio images you have already published in plain form (the originals are out there; protecting the upload-to-LinkedIn version does nothing).
- Quick everyday social content where the visual signature is incidental rather than valuable.
What the EU AI Act adds to the picture in 2026
The EU AI Act's general-purpose AI model provisions came into full effect on August 2, 2025. Among them: AI model trainers must publish a summary of the training data they used, and rights-holders may issue opt-out reservations under the EU's text-and-data-mining exception (EUR-Lex, EU AI Act).
In practice for freelance designers in 2026:
- Major model trainers (OpenAI, Stability, Anthropic) have published summary descriptions of their training corpora. These summaries are vague but better than nothing.
- The TDM opt-out is real but the mechanism is per-trainer. Each model provider has its own opt-out page; there is no central registry. Adobe Content Credentials' signal is one of the most-used mechanisms in practice for designers because it travels with the file.
- Enforcement is staged. The first enforcement actions against models that ignored a properly-issued opt-out are expected in late 2026 and early 2027.
For non-EU freelance designers, the EU AI Act still matters because most major models are trained on globally-sourced content and deploy in the EU market. An EU-respecting opt-out flag has cross-border effect.
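Alongside per-provider forms, the most widely honoured machine-readable reservation is a robots.txt block on your own portfolio domain. A minimal example — the user-agent tokens below are the crawler names the major trainers document as of writing, and they do change, so verify each against the vendor's documentation:

```text
# robots.txt at the root of your portfolio domain
User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended   # opts out of Gemini training, not Search indexing
Disallow: /

User-agent: CCBot             # Common Crawl, a frequent training-data source
Disallow: /

User-agent: ClaudeBot         # Anthropic's crawler
Disallow: /
```

This only deters crawlers that identify themselves and comply; like Content Credentials, it raises the friction for unauthorised training without guaranteeing exclusion.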
The compensation question
The hardest question for freelance designers in 2026 is not "how do I opt out." It is "how do I get paid for past training." The honest answers are still narrow.
- Adobe Stock contributor bonuses — paid in September 2025 to contributors whose work was used in Firefly training. Future cycles are expected but not committed.
- Class-action lawsuits against Stability AI, OpenAI, and Midjourney — multiple ongoing, with the Getty Images vs Stability case as the bellwether. The economic outcomes for individual freelance designers are speculative; even a favourable settlement is unlikely to produce per-image payments.
- License auditing companies — Spawning's Have I Been Trained tool lets designers check whether their work appears in LAION-5B and similar public training sets. The "Source.Plus" mechanism aims to compensate creators whose work is used by paying-customer models. Adoption is small but growing.
The realistic 2026 expectation: meaningful compensation for past training is unlikely for individual freelance designers. The leverage is in opt-out for future training, plus market-rate pricing for direct licensing arrangements with model trainers who want specific styles.
A practical workflow for 2026
A freelance designer protecting their work in 2026 faces a setup of a few hours and a small ongoing tax. Specifically:
- Turn on Content Credentials in Photoshop, Lightroom, and Firefly. Set "do not train" as the default preference. Time: 5 minutes once.
- Publish portfolio work to a platform that preserves Content Credentials. Your own site, Behance (Adobe-owned, preserves the signal), Instagram (preserves on most uploads), Threads (partial). Avoid platforms that strip metadata on upload — historically Twitter/X and some image hosts.
- For genuinely signature work that defines your pricing leverage, run it through Glaze before social posting. Time: 30-60 minutes per image, one-off.
- File a TDM opt-out reservation per major trainer. OpenAI, Stability AI, Anthropic, Google all have public mechanisms in 2026. Time: 60-90 minutes one-off to find and submit each.
- Negotiate AI-training rights explicitly in every client contract. "Client may not use the deliverable to train or fine-tune generative models without separate paid license." Time: a one-time clause add, zero ongoing.
Total setup: about 4 hours. Ongoing: a few minutes per work cycle. This is genuinely cheap protection compared to the alternative of leaving everything fully open.
The takeaway
The "is my work being scraped" question is mostly answered for 2025-era models. The 2026 question is "what protections do I want for the next training round." The answer is layered: Content Credentials handles the floor, Glaze/Nightshade handle the ceiling, the EU AI Act creates the legal scaffolding, and contract clauses cover what your direct clients are allowed to do.
None of these tools is a panacea. Together, they put a freelance designer in 2026 in a meaningfully stronger position than in 2024.
Delivvo is the branded client portal where designers deliver files with intact Content Credentials, AI-training clauses sitting in the signed contract, and full payment history in one place — so the protections you set up upstream actually survive the hand-off downstream. From $15/mo, free for 7 days.

Written by The Delivvo team · May 11, 2026