January 17, 2026

Do law firm AI chatbots have to disclose they’re bots? California Bot Law, FTC guidance, and EU AI Act requirements for 2025

Your intake chat is the front door to your firm now. And yes, regulators are peeking through that door. As AI helpers handle more marketing and intake, the question isn’t just what they can do—it’s what you need to say about them.

Do law firm chatbots have to admit they’re bots? For 2025, California’s Bot Disclosure Law, the FTC’s rules on deception, and the EU AI Act’s transparency duties push the answer toward “yes” in many situations. At minimum, it’s the safer path and it builds trust with serious clients.

In this guide, you’ll learn:

  • What counts as a “bot” and which law firm interactions are covered
  • California Bot Law scope and “clear and conspicuous” disclosure requirements
  • FTC expectations, dark patterns to avoid, and “not legal advice” disclaimers
  • EU AI Act timelines and who must comply
  • Ethics and UPL guardrails (Model Rule 7.1 and Rule 5.3)
  • Where, when, and how to present disclosures, with wording examples
  • Special channels (SMS, voice) and accessibility/multilingual considerations
  • Data governance, consent, and audit logs to prove compliance
  • A practical roadmap to implement jurisdiction-aware bot disclosures and human handoff

Executive summary: do law firm AI chatbots have to disclose they’re bots?

If your firm runs a chatbot for intake or marketing, plan on disclosing it. Three currents steer the risk: California’s Bot Disclosure Law (B&P Code §§17940–17943), the FTC’s Section 5 standards on deception (think “clear and conspicuous”), and the EU AI Act’s transparency rules landing around mid‑2026.

Together, they set a practical baseline for law firm AI chatbot disclosure requirements. Even B2B firms feel it. The FTC’s Business Blog has flagged human‑like bots without labels since 2023, and the agency targets “dark patterns” where notices are hidden or hard to read.

On top of that, big clients now ask vendors and outside counsel to confirm bot labeling and “not legal advice” messaging during intake. That turns disclosure into table stakes.

Two easy moves cover most scenarios: label the launcher and the first message (“AI assistant”), and keep a small, always‑visible “AI” badge in the chat header. Add a path to a human with honest response times. Bonus: consistent wording makes transcripts and audit logs easier to defend if anyone questions what your bot said.

What counts as a “bot,” and which interactions are covered

Regulators don’t care what you call the tool; they look at the experience. If software chats on its own—text or voice—answering questions, collecting info, booking time, or triaging documents, it’s a bot for disclosure purposes.

That includes web chat, SMS or WhatsApp, client portal assistants, and IVR with an AI voice. Even “copilot” tools that draft replies for staff can trigger expectations if AI text reaches a user without real human review.

Typical law firm uses: marketing Q&A, eligibility checks, fee ranges, appointment scheduling, “what to bring,” and FAQ snippets. Internal tools that never face prospects are a different story, but once the AI speaks to a potential client, plan to disclose.

The EU AI Act keeps it simple: people should be told they’re interacting with AI unless it’s obvious. On a phone screen, it’s rarely obvious. If your chat looks or reads like a human—names, avatars, typing dots, emojis—label it.

And if the bot shares legal information, pair the label with scope language and a “not legal advice” reminder to lower unauthorized practice of law risk with chatbots.

California Bot Disclosure Law (B&P Code §§17940–17943): scope and obligations

California makes it unlawful to use a bot to talk with a person in the state with the intent to mislead them about the bot’s identity and push a purchase or influence voting. For law firms, intake and marketing chat aimed at Californians often counts as commercial persuasion.

The fix is a “clear and conspicuous” disclosure where the conversation happens—inside or right next to the chat box, not buried in a privacy policy or footer.

  • Placement: show it at or before the first message; persistent labeling is smart.
  • Wording: plain and short (“I’m an AI assistant, not a lawyer.”).
  • Presentation: readable on mobile with good contrast.

There aren’t many public cases citing this exact statute yet, but the California AG keeps pushing interface transparency, and plaintiffs’ lawyers poke at site flows. For California Bot Law compliance for lawyers, watch edge cases like social DMs (tight character limits) and auto‑opening lead widgets on California IPs.

Practical move: detect location and show a stronger disclosure for California users. Log when it’s displayed so you can prove it later.
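
Here’s a minimal sketch of what that could look like behind a web chat widget, in TypeScript. The geo‑IP lookup and audit sink are hypothetical stubs; swap in your own provider and storage.

```typescript
// Minimal sketch: choose a disclosure variant by visitor location and log the
// impression. lookupRegion and logImpression are hypothetical stand-ins for
// your geo-IP provider and audit storage.

type Region = "CA" | "EU" | "OTHER";

interface DisclosureVariant {
  id: string; // stable version ID so logs can cite the exact copy shown
  text: string;
}

const VARIANTS: Record<Region, DisclosureVariant> = {
  CA: { id: "ca-v3", text: "I'm an AI assistant, not a lawyer. General information only." },
  EU: { id: "eu-v3", text: "You are chatting with an AI system. Ask for a human at any time." },
  OTHER: { id: "default-v3", text: "I'm an AI assistant. I can share general info and book time." },
};

// Hypothetical geo-IP lookup; call your geolocation service here.
async function lookupRegion(ip: string): Promise<Region> {
  return "OTHER"; // stubbed
}

// Hypothetical audit sink; in practice, write to append-only storage.
async function logImpression(record: {
  sessionId: string;
  variantId: string;
  text: string;
  shownAt: string;
}): Promise<void> {
  console.log("disclosure-impression", JSON.stringify(record));
}

export async function disclosureFor(ip: string, sessionId: string): Promise<DisclosureVariant> {
  const variant = VARIANTS[await lookupRegion(ip)];
  await logImpression({
    sessionId,
    variantId: variant.id,
    text: variant.text,
    shownAt: new Date().toISOString(),
  });
  return variant;
}
```

Logging the variant ID alongside the full text means you can prove both what was shown and when, even after the copy changes.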

FTC guidance and Section 5 risk beyond California

Even outside California, the FTC Act bans deceptive or unfair practices. Since 2023, the FTC has warned that unlabeled, human‑like chatbots can mislead reasonable consumers—especially if you add cues like human names or avatars, or claims about expertise.

The test is whether your design is likely to mislead and affect decisions, not whether one person might be confused. Keep your notices close to the interaction, easy to see on mobile, and written in everyday language.

  • Don’t hide disclosures behind icons or expandable text.
  • Avoid labels like “Concierge” if you don’t say it’s AI.
  • Back up any performance claims; skip “this boosts case value” unless you can prove it.
  • Never imply the bot is a lawyer; pair the label with “not legal advice” if it shares legal info.

FTC cases on hidden fees and negative options show how the agency judges prominence, placement, and presentation, and the same logic applies to chat. To stay within FTC Section 5 guidance on chatbots and deception, keep a simple review checklist: the first screen says AI, a clear human option exists, and transcripts store the exact disclosure text.

Many firms test a tiny “AI” badge and see no drop in leads. Often trust goes up.

EU AI Act transparency duties and timelines

The EU AI Act says people must be told when they’re interacting with AI unless it’s obvious. This covers conversational AI broadly, not only high‑risk systems. The law took effect in 2024, but most transparency pieces land around mid‑2026.

If you serve or market to EU residents—or your clients do—expect procurement teams to ask about this well before the deadline.

  • Label AI at the first touch and keep a visible tag in the header.
  • Translate for EU languages and keep it simple.
  • Offer easy escalation to a human and say how long that might take.
  • If you use biometrics or emotion recognition (rare for law firms), stricter rules apply.

Extraterritorial reach matters. EU authorities coordinate and can act where effects are felt in the EU. If geofencing isn’t reliable, use EU‑level disclosure as your default. It also keeps your UX consistent worldwide.

Many firms bundle this with GDPR work: update privacy pages, consent prompts, and retention notes to cover chat transcript storage.

Professional responsibility and UPL considerations for AI chat in law firms

This isn’t only consumer law—it’s ethics. Model Rule 7.1 bans false or misleading communications. Don’t let your bot promise results or suggest a lawyer is speaking. Model Rule 5.3 requires supervising nonlawyer assistants, which includes AI.

Turn that into practice: review prompts, set escalation rules, monitor outputs, and document oversight. Keep your bot from stepping into advice.

Big risk: accidentally forming an attorney‑client relationship. If the chat answers specific legal questions without clear limits, someone could think they got legal advice. Use a legal intake chatbot “not legal advice” disclaimer, set scope (“general info only”), and offer a quick path to a human.

In multistate practices, screen for location to avoid unauthorized practice of law risk with chatbots. Let the bot share general info, but route state‑specific advice questions to licensed attorneys.

Also think privilege and confidentiality. People will share sensitive stuff. Use encrypted channels, limit vendor access, and explain how transcripts are stored. If users upload files, ask for consent before analyzing anything with health or financial data, and put that warning where uploads happen.

Where, when, and how to present disclosures in legal intake and marketing flows

Picture a regulator on a phone screen. Your disclosure should show up before or right as the chat starts and stay easy to find.

  • Launcher label: “Chat with our AI assistant.”
  • First message: “I’m an AI assistant, not a lawyer. I can share general information and help schedule time.”
  • Persistent header/badge: a small “AI” or “Automated” tag.

Follow the “clear and conspicuous” standard: high contrast, body‑size text or larger, no hover‑only tooltips. On mobile, make sure the disclosure is visible without scrolling and can’t be dismissed before someone sees it.
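
For the web channel, here’s a rough TypeScript sketch of wiring those three touchpoints into a widget. Element IDs and structure are illustrative, not any particular vendor’s API.

```typescript
// Rough sketch: apply the three disclosure touchpoints to a web chat widget.
// Element IDs and widget structure are illustrative, not a real vendor API.

function applyDisclosures(root: HTMLElement, disclosureText: string): void {
  // 1. Launcher label: visible before the chat ever opens.
  const launcher = root.querySelector<HTMLButtonElement>("#chat-launcher");
  if (launcher) launcher.textContent = "Chat with our AI assistant";

  // 2. Persistent header badge: stays on screen for the whole conversation.
  const header = root.querySelector<HTMLElement>("#chat-header");
  if (header) {
    const badge = document.createElement("span");
    badge.className = "ai-badge"; // style with WCAG-level contrast
    badge.textContent = "AI";
    header.appendChild(badge);
  }

  // 3. First message: shown before any user input, not dismissible.
  const thread = root.querySelector<HTMLElement>("#chat-thread");
  if (thread) {
    const notice = document.createElement("p");
    notice.textContent = disclosureText; // e.g. "I'm an AI assistant, not a lawyer..."
    thread.prepend(notice);
  }
}
```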

Dial up the caution at high‑friction moments—fee quotes, eligibility screens, outcome talk—by repeating “not legal advice” and nudging to a human if needed.

UX testing in pro services shows labeling rarely hurts engagement. Clarity often helps. If you see drop‑offs after the disclosure, the copy may be too stiff. Add a quick value line after the label (“I can book you with an attorney in under two minutes”) to keep things moving.

Wording examples and templates for law firms

Keep it short and clear. Adjust for your practice and risk level:

  • Bot identity: “I’m an AI assistant that can share general information and help with scheduling.”
  • Scope and UPL guardrail: “I don’t provide legal advice. For advice about your situation, I can connect you with an attorney.”
  • Human handoff: “Prefer a person? Tap ‘Talk to a human.’ Our team typically replies within 1 business hour.”
  • Records and privacy: “We may store this chat to improve our services. Don’t share sensitive details unless asked. See our Privacy Notice.”

Channel tweaks:

  • SMS/WhatsApp: “AI assistant here (not a lawyer). Info only. Reply HUMAN for our team.”
  • Voice IVR: “This system uses AI to assist you. You can ask for a human at any time.”

These cover the legal intake chatbot “not legal advice” disclaimer while keeping conversions healthy. For multilingual audiences, use professional translation and check reading level—auto‑translation can turn careful limits into promises.

Best practice: manage disclosure text from one central snippet owned by compliance. Updates then hit web chat, SMS, voice, and portals at once, and your logs show version history for each conversation.
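
A minimal sketch of that central snippet in TypeScript; the shape and field names are assumptions, not a prescribed schema.

```typescript
// Minimal sketch of a central disclosure registry with version history.
// Channels pull copy from here instead of hard-coding it; names are illustrative.

interface DisclosureSnippet {
  version: string;       // e.g. "2025-06-v4"
  text: string;
  approvedBy: string;    // the compliance owner who signed off
  effectiveFrom: string; // ISO date the wording went live
}

class DisclosureRegistry {
  private history: DisclosureSnippet[] = [];

  publish(snippet: DisclosureSnippet): void {
    this.history.push(snippet); // append-only: old versions stay for audit
  }

  current(): DisclosureSnippet | undefined {
    return this.history[this.history.length - 1];
  }

  // Resolve the copy that was live at a given timestamp, so any transcript
  // can be matched to the exact wording users saw.
  atTime(iso: string): DisclosureSnippet | undefined {
    return [...this.history].reverse().find((s) => s.effectiveFrom <= iso);
  }
}
```

The append‑only history is the point: paired with your impression logs, it answers “what wording was live on this date?” without digging through deploys.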

Special channels and edge cases

Disclosures need to follow the conversation everywhere. On SMS and WhatsApp, space is tight, so lead with a short label (“AI assistant, info only”) and add a link to details.

On social DMs, long messages can collapse. Put the AI label first and keep any link visible. For email autoresponders, add a one‑liner if the reply was generated by AI and include a direct human contact.

Voice needs a different approach. Start with “You’re speaking with an AI system,” and repeat after long pauses or transfers. If a human joins, say so. For live chat where humans use AI to draft responses, disclose that the agent may use AI but is accountable.

Accessibility isn’t optional. Follow WCAG: good contrast, proper labels, ARIA roles so screen readers announce “AI assistant.” Offer multilingual disclosures and keep the language simple. If your practice touches minors, avoid targeting them and route guardians to contact the firm.

Accessible and multilingual chatbot disclosures (WCAG, ARIA) aren’t just nice to have—they show your notices were truly “clear and conspicuous” for everyone.
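
As a sketch of the screen‑reader side, following common ARIA practice (the class name and structure are illustrative):

```typescript
// Sketch of screen-reader support for the disclosure, following common ARIA
// practice. The "visually-hidden" class is illustrative; pair it with CSS
// that clips the element off-screen without hiding it from assistive tech.

function announceDisclosure(thread: HTMLElement, badge: HTMLElement, text: string): void {
  // Put the first-message disclosure in a polite live region so screen
  // readers announce it when the chat opens, without interrupting the user.
  const notice = document.createElement("p");
  notice.setAttribute("role", "status"); // implies aria-live="polite"
  notice.textContent = text;
  thread.prepend(notice);

  // Give the short visible "AI" badge hidden expansion text, since aria-label
  // is unreliable on generic inline elements.
  const expansion = document.createElement("span");
  expansion.className = "visually-hidden";
  expansion.textContent = " (automated AI assistant, not a person)";
  badge.appendChild(expansion);
}
```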

Jurisdiction detection, consent, and data governance

Jurisdiction‑aware bot disclosures help you show the right message to the right person. Use IP geolocation as a first cut, then ask users to confirm location if needed (“Are you currently in California or the EU?”). If yes, raise the prominence and add consent prompts where required.
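
A tiny sketch of that fallback logic; both inputs are hypothetical, and the idea is to fail closed to the strictest variant rather than the weakest.

```typescript
// Tiny sketch: combine the geo-IP guess with an in-chat confirmation answer.
// With neither input, default to the strictest disclosure variant.

type Region = "CA" | "EU" | "OTHER";

function effectiveRegion(geoGuess: Region | null, userConfirmed: Region | null): Region {
  // An explicit user answer beats the IP-based guess.
  return userConfirmed ?? geoGuess ?? "EU";
}
```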

For uploads, ask before analyzing files. Link to your privacy notice and data retention policy inside the chat. Spell out what you collect (chat content, metadata), why (intake, scheduling, service improvement), where it lives, who can access it, and how long you keep it.

Offer simple controls: download the transcript, request deletion. If you honor “do not sell/share” signals under state privacy laws, make sure the chat respects those choices.

Document consent with timestamps. And keep training data separate—don’t pour prospect chats into global AI training without explicit permission. It protects confidentiality and reduces accidental disclosure risk later.
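
A small sketch of a consent record with timestamps and locale; field names are illustrative, and records should live alongside the transcript.

```typescript
// Small sketch of a consent record with timestamps and locale.
// Field names are illustrative; store records alongside the transcript.

interface ConsentRecord {
  sessionId: string;
  kind: "transcript-storage" | "file-analysis";
  promptText: string; // the exact consent wording the user saw
  granted: boolean;
  locale: string;     // e.g. "en-US"
  recordedAt: string; // ISO timestamp
}

function captureConsent(
  sessionId: string,
  kind: ConsentRecord["kind"],
  promptText: string,
  granted: boolean,
): ConsentRecord {
  return {
    sessionId,
    kind,
    promptText,
    granted,
    locale: typeof navigator !== "undefined" ? navigator.language : "en-US",
    recordedAt: new Date().toISOString(),
  };
}
```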

Logging, audits, and incident response

If you can’t show that users saw the disclosure, it’s as if it never happened. Keep audit logs and disclosure versioning for AI chat that capture the exact text, placement, time, device, language, and user locale. Note if the user acknowledged it.

Archive full transcripts with timestamps and hashed integrity checks. Limit access and log every read or export. Sample transcripts regularly for mistakes, hallucinations, and UPL red flags, then track fixes.
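
One way to sketch the hashed integrity check, using Node.js’s built‑in crypto module (the record shape is an assumption):

```typescript
// Sketch of a tamper-evidence check for archived transcripts, using Node.js's
// built-in crypto module. The record shape is an assumption; serialization
// must keep a stable field order for hashes to be comparable.

import { createHash } from "node:crypto";

interface ArchivedTranscript {
  sessionId: string;
  disclosureVersion: string; // ties the transcript to the exact copy shown
  messages: string[];
  archivedAt: string;        // ISO timestamp
}

function integrityHash(t: ArchivedTranscript): string {
  return createHash("sha256").update(JSON.stringify(t)).digest("hex");
}

// Store the hash next to the transcript; recompute on read to detect edits.
function verifyTranscript(t: ArchivedTranscript, storedHash: string): boolean {
  return integrityHash(t) === storedHash;
}
```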

Maintain an “AI Disclosure Change Log” with dates, reasons, and approvals. If something goes wrong—misadvice, data exposure—know how to pause the bot, notify where appropriate, and update wording or prompts. Map model and prompt dependencies so you can roll back risky changes fast.

A simple RACI helps: marketing writes copy, compliance/legal approves, IT implements, ops monitors. Run a quarterly drill: pretend a regulator asked for evidence and see how fast you can produce it. You’ll find gaps before they do.

Penalties, enforcement trends, and risk mitigation

What if you miss? The FTC can bring a Section 5 case for deception. That can lead to consent orders with reporting and monitoring, which is expensive even without penalties. State AGs can act under UDAP laws, and California can add Bot Law claims.

Reputation damage is quick—screenshots spread. Trends to watch:

  • Dark pattern scrutiny: notices hidden behind taps or styled to be ignored.
  • Vulnerable users: consumer practices (immigration, PI, housing) face extra eyes if bots overpromise.
  • Platform rules: ad and social platforms increasingly require AI labeling.

Risk playbook:

  • Keep disclosures visible and close to the chat; skip human names and avatars for bots.
  • Offer a human path with realistic timing.
  • Avoid outcome claims; substantiate anything you assert.
  • Review regularly and keep records.

Insurers are starting to ask about AI disclosures and oversight. A documented program can help with coverage and enterprise contracts. That’s a real upside, not just risk avoidance.

Implementation roadmap for 2025–2026

Weeks 0–4: List every channel where an automated agent talks to prospects (site chat, SMS, DMs, voice, portals). Draft standard disclosure and “not legal advice” copy. Add launcher labels and first‑message notices. Turn on “Talk to a human” with response SLAs. Start logging disclosure impressions with timestamps and versions.

Months 2–3: Add a persistent “AI” badge in the chat header. Localize top languages. Turn on geolocation plus self‑attestation to show California/EU variants. Update privacy pages to cover chat transcripts, consent, and retention. Train intake staff on escalation.

Months 4–6: Extend to SMS and voice. Make sure you have accessible and multilingual chatbot disclosures. Begin quarterly transcript reviews and keep an AI Disclosure Change Log. Tie consent capture into your CRM so records follow the contact.

Months 6–12: Automate redaction and retention. Separate training data from prospect chats. Add monitoring for hallucinations and misstatements. Prepare a one‑pager on your program for client due diligence.

2026 readiness: Track EU AI Act milestones and be ready to attest to transparency on AI interactions. If geofencing isn’t reliable, use EU‑level disclosure everywhere. Aim for steady state: managed disclosure copy, jurisdiction‑aware logic, audit‑ready logs, and a routine review cycle.

How LegalSoul supports compliant chatbot disclosures

LegalSoul makes disclosure practical without rebuilding your stack. Flip on jurisdiction‑aware bot disclosures that automatically show stronger notices for California and EU users. Use multilingual templates tuned for clarity and reading level. First‑message and header labels are built in, along with a small “AI” badge that stays visible.

Compliance features include:

  • Not‑legal‑advice modules you can tailor by practice area.
  • Human handoff controls with stated response times and queue visibility.
  • Consent capture for transcripts and uploads, with timestamps and locale.
  • Retention and redaction policies applied to chat content.
  • Audit logs and disclosure versioning for AI chat: exact copy, location, time, and device.
  • Accessibility tools (contrast checks, ARIA labels) so notices are truly conspicuous.

One place to manage disclosure copy means updates hit web chat, SMS, voice IVR, and portals together, with an approval trail. When a client or regulator asks for proof, you can produce it fast. If you want to meet FTC expectations and prep for EU AI Act transparency obligations, LegalSoul helps you do it while keeping the client experience polished.

Quick checklist

Use this quick pass to validate your program across channels:

  • Bot identity: Do the launcher and first message clearly state “AI assistant”? Is there a persistent “AI” badge in the header?
  • Proximity and prominence: Is the disclosure visible at or before first interaction on mobile, with sufficient contrast and plain language?
  • Scope guardrails: Does the chat include a concise “not legal advice” statement and avoid implying a lawyer is speaking?
  • Human option: Is “Talk to a human” always available with accurate response‑time expectations?
  • Jurisdiction logic: Do California and EU users see elevated disclosures via jurisdiction‑aware bot disclosures? Are language/localization handled?
  • Consent and privacy: Are chatbot consent notices, privacy, and data retention explained in‑chat, with timestamped records?
  • Logging: Do you capture disclosure versioning, impressions, transcripts, and access logs?
  • Monitoring: Do you routinely sample conversations for hallucinations, misrepresentations, and UPL risks—and document fixes?
  • Special channels: Are SMS, social DMs, and voice flows adapted with concise labels and links/spoken notices?
  • Accessibility: Do disclosures meet WCAG contrast and screen reader requirements (e.g., ARIA labels)?
  • Contracts and insurance: Can you attest to these controls in vendor/client questionnaires and to your carrier?

If you can check these boxes, you’re aligned with today’s expectations and in good shape for 2026.

Key Points

  • Disclosure is required or strongly advisable: California’s Bot Law applies when you chat with Californians for commercial purposes; the FTC can treat unlabeled human‑like bots as deceptive nationwide; the EU AI Act will require telling users they’re interacting with AI (phasing in toward mid‑2026).
  • Act now: label the launcher and first message, keep a visible “AI” badge, add a plain “not legal advice” line, and give users a quick human handoff with realistic response times. Adapt for mobile, SMS/voice, multilingual readers, and accessibility.
  • Governance counts: use jurisdiction‑aware logic (California/EU), include consent/privacy and data retention notes in the chat, and maintain audit logs showing disclosure text, placement, and timestamps. Supervise under Rules 7.1 and 5.3 to avoid misrepresentation and UPL risk.
  • Roadmap and tooling: land UI changes in 30–90 days, then add logging, redaction, monitoring, and EU readiness. LegalSoul helps with built‑in disclosure templates, human handoff, consent capture, and audit‑ready logs.

Conclusion

Bottom line: if your firm uses conversational AI, say so. California’s Bot Law, FTC Section 5, and the EU AI Act expect clear, nearby bot labels and a “not legal advice” guardrail.

Ship the basics—launcher and first‑message disclosures, a small AI badge, easy human handoff, and California/EU variants. Back it up with consent notices, retention policies, and logs that prove what users saw. Want this running without slowing intake? Try LegalSoul. You get jurisdiction‑smart disclosures, multilingual templates, human handoff, and audit‑ready logging in one place. Book a 20‑minute demo and make your chatbot transparent, ethical, and client‑ready.
