January 16, 2026

Do law firms have to disclose AI use in their privacy policy? CPRA, GDPR, and sample language for 2025

People keep asking a simple thing with a lot behind it: does a law firm need to say, right in its privacy policy, that it uses AI? Short answer: if the AI touches personal data, yes—say it.

In 2025, this ties directly into California’s CPRA rules and GDPR/UK GDPR duties around profiling and automated decisions. The point isn’t to scare you. It’s to help you give clear, honest info that clients and regulators expect—and keep your AI projects moving.

Here’s the plan for what follows and how to put it to work fast.

What you’ll learn:

  • When CPRA and GDPR/UK GDPR require disclosure, including profiling and automated decisions, and who counts as a “business.”
  • What a solid 2025 privacy notice includes: purposes, data types, retention, sell/share, model training, human review, rights, and opt-outs.
  • How ethics rules fit in, and what to put in engagement letters versus your website.
  • Vendor and model guardrails: DPAs, SCCs/UK addendum, training limits, deletion testing.
  • A practical rollout plan plus copy-and-paste sample language.

Why this matters in 2025: quick answer and who this guide is for

If your AI tools touch personal info—analytics on your site, intake triage, marketing profiles, drafting assistants—put that in your privacy notice. That’s the crossroads where CPRA transparency and GDPR disclosure meet your daily operations.

Regulators have been active. California’s enforcement around adtech “sharing” and the Global Privacy Control made it clear you can’t hide the ball. EU authorities have been pressing AI companies on transparency and rights too. Meanwhile, clients now ask in RFPs how your firm uses AI and protects their data. Saying nothing is a trust problem.

Don’t just write “we use AI.” Spell out where it shows up, what data it touches, why you’re using it, how long you keep it, and where humans stay in charge. Add one small section on what you won’t do (no solely automated client screening, no training public models on client files). This guide is for managing partners, firm GCs, ops leaders—anyone who needs a policy that’s accurate, simple, and defensible.

Does my law firm qualify as a “business” under CPRA?

CPRA covers a firm that does business in California and meets at least one of these thresholds: over $25M in annual gross revenue; buying, selling, or sharing the personal info of 100,000+ consumers or households; or deriving 50%+ of revenue from selling or sharing personal info. Many mid‑size and large firms meet the revenue bar.

Since 2023, employee and B2B data are fully covered. That means hiring, recruiting, and prospecting flows count, not just your public website. A 60‑lawyer firm with about $30M revenue and California‑facing marketing typically qualifies—even without a California office—if it targets California matters.

One twist: you may be a “service provider/processor” when handling client‑matter data, but you’re a “business/controller” for your own HR, website, and marketing. Your CPRA privacy notice has to cover that “business” side, even if your engagement letters lock down client‑matter data separately.

On the fence about the 100,000 threshold because of heavy site traffic and adtech? Plan as if you’re in scope. It’s cheaper to build the notice and opt‑out now than scramble after a regulator email—or a picky client questionnaire—lands.

CPRA transparency requirements that touch AI today

CPRA says you must disclose what categories of personal info you collect, why you collect it, how long you keep it, whether you “sell” or “share” it (think cross‑context ads), and how people can use their rights. For AI, translate that into plain English about analytics, intake triage, and drafting assistants that process personal data.

Regulators treat many adtech signals as “sharing,” which triggers the “Do Not Sell or Share My Personal Information” link and honoring the Global Privacy Control. That’s not theoretical—it’s already been enforced. Typical law firm AI activities to cover in your notice include:

  • Building marketing audiences from site behavior (profiling)
  • Scoring or routing intake to prioritize follow‑ups
  • Chatbots that collect contact info or guide visitors
  • Drafting/research tools that use firm content

If you share for targeted ads, say it and offer a clean opt‑out path. If you don’t, say that too. For retention, skip vague lines like “as long as necessary.” Give a timeframe or criteria (e.g., analytics logs for 13 months). And make sure your cookie banner, notice at collection, and privacy policy all tell the same story.
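Honoring the Global Privacy Control is mostly a plumbing task: browsers and extensions with GPC enabled send a “Sec-GPC: 1” request header (and expose navigator.globalPrivacyControl on the client). Below is a minimal server‑side sketch in TypeScript; the resolveOptOut function, flag names, and the manual opt‑out input are illustrative assumptions, not a standard API.

```typescript
// Minimal sketch: treating the Global Privacy Control (GPC) signal as an opt-out.
// "Sec-GPC: 1" and navigator.globalPrivacyControl come from the public GPC spec;
// everything else (function and field names) is hypothetical, for illustration.

interface OptOutDecision {
  suppressAdSharing: boolean; // stop "sharing" for cross-context behavioral ads
  reason: "gpc-signal" | "manual-opt-out" | "none";
}

function resolveOptOut(
  headers: Record<string, string | undefined>,
  hasManualOptOut: boolean, // set when the visitor used the "Do Not Sell or Share" link
): OptOutDecision {
  // Browsers with GPC enabled send "Sec-GPC: 1" on every request.
  if (headers["sec-gpc"] === "1") {
    return { suppressAdSharing: true, reason: "gpc-signal" };
  }
  if (hasManualOptOut) {
    return { suppressAdSharing: true, reason: "manual-opt-out" };
  }
  return { suppressAdSharing: false, reason: "none" };
}

// Example: a request carrying the GPC header should disable ad-tech tags and pixels.
console.log(resolveOptOut({ "sec-gpc": "1" }, false));
// { suppressAdSharing: true, reason: "gpc-signal" }
```

Testing end to end means confirming the flag actually reaches your tag manager and adtech vendors, not just your own logs.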

Automated decision-making and profiling under California rules

California’s privacy agency has proposed rules for automated decision‑making technology (ADMT). They point toward pre‑use notices, rights to access info, and opt‑outs for certain profiling, especially in areas like employment or decisions that really affect people.

What to do now:

  • Find where profiling happens: marketing audiences, intake scoring, resume screening, chatbot routing.
  • Draft a short pre‑use note you can reuse: purpose, inputs, any weighting, and where a human reviews the outcome.
  • Build a playbook: when to offer an opt‑out, how to respond to access requests about these tools, and how to flip a feature off if rules kick in tomorrow.

Law firm hot spots are applicant tracking and intake triage. If anything is auto‑rejecting, add a human checkpoint with the power to override. It’s fair, it’s good practice, and it gives you a clear story even if the ADMT rules shift.

GDPR/UK GDPR: AI, profiling, and automated decisions

Under GDPR, if you target or monitor folks in the EU/UK (even via your site), you need to say when you use profiling or automated tools, explain the purpose, the data involved, and the rights people have. Article 22 limits solely automated decisions with legal or similarly significant effects unless strict conditions are met.

Most firm use cases don’t hit Article 22. Ranking leads or suggesting clauses? Probably fine. Automatically rejecting an inquiry or applicant with no human review? That could cross the line.

Regulators in Europe and the UK have pushed AI providers on transparency and rights. Your move: disclose profiling, pick a solid legal basis (legitimate interests is common for analytics/marketing, with an easy opt‑out), and confirm a human steps in before any decision that truly matters.

Tip: write a short DPIA‑lite for AI that touches EU/UK users. It keeps your notices, cookie banner, and intake forms consistent and speeds up security reviews on RFPs.

What to disclose about AI in a 2025-ready law firm privacy notice

Your notice should answer what people actually wonder. Use clear, direct lines:

  • Which AI‑assisted tools you use and why (analytics, intake triage, drafting help).
  • What data is involved (contact info, interaction data, matter descriptors).
  • How humans stay in charge (no solely automated decisions with legal or similarly significant effects).
  • Whether you sell or share data for ads, and how to opt out.
  • How long you keep each category or the criteria you use.
  • Your stance on model training (e.g., no training public models on confidential client materials).
  • Cross‑border transfers and safeguards (SCCs/UK addendum).
  • How to exercise rights and how you verify identity.

One line that helps a lot: note that client files live in separate, secured systems and are not used to train public models—even if your site has a chatbot. It cuts confusion between marketing tech and client‑matter work.

Ethics overlay: competence, confidentiality, and client communication

Ethics rules sit next to privacy law. Model Rule 1.1 asks you to understand tech risks and benefits. Rule 1.6 covers confidentiality, including vendors. Rule 5.3 requires supervising nonlawyers—yes, that includes AI vendors.

In practice, that means vet tools, limit training uses, and explain material effects to clients when it touches their matters. Use your website privacy notice for public info, and handle representation‑specific details in your engagement letters.

One more operational habit: label time entries when AI speeds a task (e.g., “AI‑assisted research (reviewed) – 0.4”). It shows supervision and value without diving into tech jargon. It also lines up with what your policy promises.

Vendor and model governance for law firms

Most risk lives with vendors and your settings. Your data processing agreements should:

  • Classify the vendor correctly (service provider/processor vs. third party).
  • Forbid training on your data.
  • Require approval for subprocessors.
  • Define retention and deletion timelines.
  • Require solid security: encryption, access controls, and logs.

Under CPRA, get the “service provider vs. third party” call right—mislabeling adtech can wreck your “we don’t sell/share” stance.

  • Test deletion: send a mock DSAR, verify prompt/output caches are erased, save proof.
  • Watch model updates: defaults change, including retention windows.
  • Ask about isolation: single‑tenant or strong logical isolation lowers leakage risk.

Inside the firm, teach “prompt hygiene”—no client identifiers in prompts, use placeholders, keep to approved templates. It sharply lowers accidental disclosure risk and makes client security reviews easier.
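As a concrete illustration, here is a minimal TypeScript sketch of that habit; the scrubPrompt helper and its patterns are assumptions for demonstration, not a complete de‑identification tool.

```typescript
// Minimal sketch of prompt hygiene: swap obvious identifiers for placeholders
// before a prompt leaves the firm. Patterns and names are illustrative only.

function scrubPrompt(prompt: string, clientNames: string[]): string {
  let scrubbed = prompt;
  // Replace known client names with generic placeholders.
  clientNames.forEach((name, i) => {
    scrubbed = scrubbed.split(name).join(`[CLIENT_${i + 1}]`);
  });
  // Mask email addresses and US-style phone numbers (rough patterns, an assumption).
  scrubbed = scrubbed.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]");
  scrubbed = scrubbed.replace(/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[PHONE]");
  return scrubbed;
}

// Example: the legal question survives, the identifiers do not.
console.log(
  scrubPrompt(
    "Draft a demand letter for Jane Doe (jane@example.com, 415-555-0100).",
    ["Jane Doe"],
  ),
);
// "Draft a demand letter for [CLIENT_1] ([EMAIL], [PHONE])."
```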

Data retention and deletion for AI workflows

Regulators want specifics, not vague promises. CPRA asks for how long you keep each category (or your criteria). GDPR’s storage limitation rule pushes you to justify it. AI creates new artifacts—prompts, embeddings, outputs, vendor caches. They all need a place in your retention schedule.

Examples that many firms adopt: analytics logs for 13 months; intake triage logs for 90 days; drafting outputs per your matter schedule. Ask vendors for per‑feature retention settings; a lot of them cache for reliability by default.
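One way to keep those durations auditable is to put them in a single schedule that your deletion jobs and DSAR responses both read from. A minimal sketch, assuming illustrative category names and an isExpired helper (the vendor cache window should come from your DPA, not this file):

```typescript
// Minimal sketch: a retention schedule that includes AI artifacts.
// Category names and durations are illustrative; align them with your notice and DPAs.

const retentionDays = {
  analyticsLogs: 396,        // roughly 13 months
  intakeTriageLogs: 90,      // intake scoring/routing records
  promptAndOutputCache: 30,  // assumed vendor cache window; confirm per vendor
} as const;

function isExpired(
  category: keyof typeof retentionDays,
  createdAt: Date,
  now: Date = new Date(),
): boolean {
  const ageDays = (now.getTime() - createdAt.getTime()) / 86_400_000;
  return ageDays > retentionDays[category];
}

// Example: an intake triage log from early 2025 is well past its 90-day window by 2026.
console.log(isExpired("intakeTriageLogs", new Date("2025-01-02"), new Date("2026-01-16"))); // true
```

A few habits keep the schedule honest: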

  • Make sure AI artifacts inherit legal holds from the matter.
  • Run quarterly deletion tests and keep screenshots or attestations.
  • When risk is higher (special category data, large‑scale profiling), do a DPIA and record mitigations.

Even a lightweight DPIA clarifies necessity and safeguards. It’s gold when a regulator or a client asks, “Why did you use this tool and how did you reduce risk?”

Cross-border issues and special categories

If your site or intake form reaches the EU/UK, using cloud AI often means restricted transfers. Use Standard Contractual Clauses and, for the UK, the IDTA/Addendum. Do transfer impact assessments that look at vendor access, encryption, and government access risk. Regional hosting or customer‑managed keys help.

Special category data (health, beliefs, etc.) and criminal data can pop up in matter descriptions. Keep that out of AI systems unless it’s truly necessary. If you must process it, identify a valid Article 9 condition (often legal claims), and document it in your records and notices.

A pattern that works: keep raw EU data in‑region, do sensitive analysis locally or in the client’s environment, and send only de‑identified features or embeddings to global services. It’s easier to defend and reassures clients.

Step-by-step implementation plan

  • List every AI touchpoint: marketing, website, intake, HR, matter work.
  • Note purposes, data types, legal bases; flag profiling/automated decisions (a simple inventory sketch follows this list).
  • Run DPIAs where risk is higher (intake triage, analytics, drafting).
  • Update your CPRA notice with AI purposes, retention, sell/share, and rights. Sync your cookie banner and notice at collection.
  • Add the “Do Not Sell or Share” link and honor Global Privacy Control. Test it end‑to‑end.
  • Paper the vendor stack: DPAs, SCCs/UK addendum, training bans, deletion SLAs, subprocessor lists.
  • Set internal guardrails: prompt hygiene, approved tools, a quick off‑switch for risky features.
  • Train teams; run a tabletop on DSARs and opt‑outs involving AI.
  • Review quarterly as rules and tools change.
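To make the first two steps concrete, here is a minimal sketch of an inventory record in TypeScript; the field names are illustrative, not drawn from any statute or regulation.

```typescript
// Minimal sketch: one record per AI touchpoint, feeding notices and DPIAs.
// Field names and values are assumptions for illustration.

type LegalBasis = "legitimate-interests" | "consent" | "contract" | "legal-obligation";

interface AiTouchpoint {
  name: string;             // e.g., "website analytics", "intake triage"
  purpose: string;          // plain-English purpose for the privacy notice
  dataCategories: string[]; // contact info, interaction data, matter descriptors...
  legalBasis: LegalBasis;   // EU/UK basis, if the touchpoint reaches those users
  profiling: boolean;       // flags the item for ADMT/Article 22 review
  humanReview: boolean;     // is there a human checkpoint before outcomes?
  retention: string;        // duration or criteria, mirrored in the notice
}

const inventory: AiTouchpoint[] = [
  {
    name: "intake triage",
    purpose: "prioritize and route new inquiries",
    dataCategories: ["contact info", "matter description"],
    legalBasis: "legitimate-interests",
    profiling: true,
    humanReview: true,
    retention: "90 days",
  },
];

// Anything that profiles without human review is a candidate for an opt-out or redesign.
const needsAttention = inventory.filter((t) => t.profiling && !t.humanReview);
console.log(needsAttention.length); // 0
```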

Nice extra: publish a short changelog of privacy updates. Clients and regulators appreciate the transparency, and marketing has something useful to share.

Sample privacy policy language (copy-and-adapt)

Website analytics and marketing
“We use analytics and advertising tools that may build interest‑based audiences from your interactions with our site. We disclose limited identifiers and interaction data to our service providers for these purposes. We do not sell personal information. If we ‘share’ personal information for cross‑context behavioral advertising, you can opt out via the ‘Do Not Sell or Share My Personal Information’ link and by enabling a Global Privacy Control in your browser.”

Client intake triage
“We use AI‑assisted routing to help prioritize and respond to inquiries. Human reviewers make final decisions; we do not rely on solely automated decisions that have legal or similarly significant effects.”

Drafting and research support
“We may use AI‑assisted tools to support drafting and research. Attorneys review and approve all outputs. We do not permit vendors to use confidential client materials to train public models.”

Retention and rights
“We retain analytics logs up to 13 months and intake records per our retention schedule. You may have rights to access, correct, delete, or object to certain processing; see How to Contact Us.”

Common mistakes to avoid

  • Claiming “we do not use AI” while running analytics or a chatbot—people notice, and it hurts credibility.
  • Skipping retention periods or pointing only to vendor policies—CPRA expects your durations or criteria.
  • Calling adtech a “service provider” while saying you don’t sell/share—regulators watch for this mismatch.
  • Overlooking profiling in intake or HR systems—these are likely targets for new rules.
  • Promising “no AI ever”—it blocks sensible, supervised uses. Promise guardrails instead: human review, no training on client materials, opt‑outs for marketing profiles.

Quick tune‑up path: map profiling flows, add retention by category, fix your sell/share stance, and add one sentence on human review. Many firms get most of the benefit with one thoughtful revision.

FAQs for managing AI disclosures

  • Do we need consent to use AI tools? Usually not for core operations. In the EU/UK, marketing profiling and non‑essential cookies often need consent. In the U.S., give easy opt‑outs for targeted ads.
  • How detailed should we be about the logic? Give meaningful info: inputs considered, main factors, and where humans review. No need to publish trade secrets.
  • Are cookie banners enough? No. Banners handle device‑level collection. Your privacy notice must connect analytics/profiling to purposes, retention, and rights.
  • What triggers Article 22? Solely automated decisions with legal or similarly significant effects (like auto‑rejecting applicants without human review). Add a human checkpoint.
  • Can we stop vendors from training on our data? Yes. Put it in your DPA, confirm admin settings, and get it in writing.

How LegalSoul helps operationalize compliant AI disclosures

LegalSoul takes on the heavy lifting so your team can focus on client work. It maps your AI data flows, flags profiling and automated decision spots, and produces notice language tied to your actual purposes, data types, retention, and opt‑outs.

It also centralizes vendor governance—building and tracking DPAs for AI tools, enforcing training bans, and attaching Standard Contractual Clauses or the UK addendum where needed. You get deletion test checklists, DSAR playbooks that cover AI scenarios, and a website‑ready changelog.

On the day‑to‑day side, LegalSoul records human‑review checkpoints, keeps prompt hygiene templates handy, and stores evidence (screenshots, vendor attestations) you can hand to a client or regulator. As California and EU rules evolve, your privacy notice and cookie banner stay in sync without last‑minute fire drills.

Key Points

  • If AI at your firm processes personal data, disclose it. CPRA expects details on categories, purposes, retention, and sell/share status (and that you honor opt‑outs/GPC). GDPR/UK GDPR require transparency about profiling and automated decisions, with meaningful info and human review for impactful calls.
  • Your 2025 notice should clearly cover AI purposes, data types, retention by category, human‑in‑the‑loop, model training limits, cross‑border safeguards, rights, and a clean “Do Not Sell or Share” opt‑out.
  • Build sturdy vendor and model governance: correct role classification under CPRA, DPAs with training bans and deletion SLAs, SCCs/UK addendum, periodic deletion tests, and DPIAs for higher‑risk AI. Align your cookie banner, notice at collection, and privacy policy; back it up in engagement letters and supervision practices.
  • Fast rollout: inventory data flows, update notices and opt‑out UX, train staff, set a review cadence. LegalSoul helps you map AI use, draft CPRA/GDPR‑ready language, manage vendor agreements, and keep everything current.

Conclusion

If your AI tools touch personal data, say so in your privacy policy. CPRA wants clear categories, purposes, retention, and sell/share opt‑outs. GDPR/UK GDPR want transparency about profiling and automated decisions, plus meaningful info and human review.

Cover the basics—AI purposes, data types, retention, human checks, training restrictions, cross‑border safeguards, and rights—and tie it to your ethics and client communication. Want to move quickly? LegalSoul maps your AI use, drafts the language, sets up opt‑outs, and handles the vendor contracts and SCCs. Book a 20‑minute demo and ship a policy you can stand behind.
