November 12, 2025

Do lawyers have to disclose using AI for client intake and consultations?

Clients keep asking about AI. Regulators are paying attention. Your team is testing tools and trying not to step on a rake. So, when do you actually have to tell a client you’re using AI?

Here’s the plan: we’ll answer the core question—Do lawyers have to disclose using AI for client intake and consultations?—and walk through the rules that matter (competence, communication, confidentiality, supervision, and fees). We’ll hit intake versus real legal work, when informed consent kicks in, what to say about billing, and how to keep a lawyer in the loop without slowing everything to a crawl.

We’ll share sample language, a checklist you can actually use, and trends worth tracking. Then we’ll show how a private, firm-controlled “mind clone” of your practice—like CaseClerk—can move faster while keeping client data locked down and your disclosures simple.

Quick Takeaways

  • Tell clients about AI if it shapes your advice or deliverables, sends their info to a third-party system, or affects what they pay. Quiet, internal use under attorney supervision usually doesn’t need a separate notice.
  • For intake, label your bot as AI, add a “not legal advice” note, block jurisdictional mismatches, get consent before collecting personal info, and log which disclosure each visitor saw.
  • Keep a lawyer on review duty and put guardrails around data: vendor vetting, no training on client data by default, tight access controls, redaction, and audit logs. That lines up with Model Rules 1.1, 1.4, 1.6, and 5.3.
  • Be upfront about costs and don’t double bill. Fixed or flat fees often work well. A private, firm-controlled mind clone (e.g., CaseClerk) keeps data in your environment and boosts consistency under attorney oversight.

Quick answer: When must lawyers disclose AI use?

Short version: disclose when AI meaningfully affects the work, touches confidentiality, or changes the bill. Always supervise. If you’ll lean on AI for research, drafting, or analysis that shapes the outcome, say so. If client info goes to a third-party system, get informed consent. If you’ll pass through usage fees, tell them up front.

Courts have made the supervision point painfully clear. In Mata v. Avianca (S.D.N.Y. 2023), the court sanctioned counsel for fake citations from a tool that wasn’t checked. Bottom line, a lawyer must review and adopt the output. A practical gut check: would a reasonable client want to know this? If yes, disclose. If you’re on the fence and there’s any confidentiality or billing angle, disclose anyway. Most clients who care about quality and speed respect the transparency.

The ethics framework that governs AI use

The rules you already know apply here. Model Rule 1.1 (competence) includes understanding tech risks and benefits. Rule 1.4 (communication) says tell clients about material pieces of the representation, which can include AI that shapes advice or cost. Rule 1.6 (confidentiality) matters whenever data leaves your systems. Rule 5.3 (supervision) treats vendors and tools like nonlawyers—you’re responsible for policies, training, and oversight.

Rule 1.5 (fees) requires reasonable, transparent billing if you pass through any usage charges. Rules 7.1–7.2 bar misleading statements; don’t imply a bot is a lawyer. Recent guidance from California (2023) and New York (2024) turns this into action: label client-facing AI, do vendor diligence, document lawyer review. When you buy tools, map every feature to a duty: accuracy helps competence, access controls support confidentiality, and audit logs support supervision of technology vendors under Rule 5.3.

What “AI” means in this context

“AI” in law covers a few buckets. Generative tools draft, summarize, and analyze documents, emails, and transcripts. Predictive or classification tools triage intake or help with conflicts. Client-facing chat handles Q&A and form-fill during intake. Risk levels aren’t the same for each bucket.

Big divider: firm-controlled systems (your tenant, your access controls) versus third-party hosted tools that process client data offsite. Know where the data goes, who can see it, and whether it’s used for training. For privacy and security, look for settings to disable training on your data, data residency options, and usable audit logs. Another divider is internal thinking aids versus client-facing content. The latter needs tighter accuracy checks and, often, disclosure. Tag each use case by data exposure (none, de-identified, or identifiable) and reliance level (background or material). That simple matrix tells you fast where to get consent and where to require lawyer-in-the-loop review.
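
If you want to make that matrix concrete, here's a minimal sketch in Python. The categories mirror the ones above; the policy it encodes (consent when identifiable data goes to an outside vendor, lawyer review whenever reliance is material) is an illustrative assumption, not a rule citation.

```python
# Minimal use-case matrix sketch. Categories mirror the article: data exposure
# (none / de-identified / identifiable) and reliance (background / material).
# The triage rule below is an illustrative policy, not legal advice.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    data_exposure: str   # "none", "de-identified", or "identifiable"
    reliance: str        # "background" or "material"
    external_vendor: bool

def triage(uc: UseCase) -> dict:
    needs_consent = uc.external_vendor and uc.data_exposure == "identifiable"
    needs_review = uc.reliance == "material"
    return {
        "use_case": uc.name,
        "informed_consent": needs_consent,
        "lawyer_in_the_loop": needs_review or needs_consent,
    }

if __name__ == "__main__":
    for uc in [
        UseCase("intake chatbot triage", "identifiable", "background", True),
        UseCase("internal research drafting", "none", "material", False),
    ]:
        print(triage(uc))
```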

Disclosure triggers unique to client intake

Intake blends marketing with risk control. Label your intake assistant as AI, say it isn’t a lawyer, and avoid legal advice in chat. California’s 2023 guidance and New York’s 2024 materials encourage clear notices and jurisdiction screens to avoid UPL.

Before you collect PII, show terms that explain data use and retention, and get consent—especially with third-party processing. Use location filters and gating questions, and hand off nuanced legal issues to a human fast. A good pattern: let the bot triage topics, capture contact info, and schedule time, but hold back free-text narratives until the person checks a consent box. If the script nudges users toward legal conclusions, dial it back. Try “progressive disclosure”: short AI notice up front, link to details for those who want it. And log the exact disclosure shown; that record can be as valuable as the lead if your intake disclosures are ever questioned.
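
Here's a rough sketch of that intake pattern: record exactly which disclosure version a visitor saw, and hold back free-text narratives until consent is given. The function and field names are hypothetical; adapt them to whatever intake stack you actually run.

```python
# Hypothetical intake helpers: log the exact disclosure version shown, and
# refuse free-text narratives until the consent box is checked.

import datetime
import json

DISCLOSURE_VERSION = "intake-banner-v3"  # bump whenever the wording changes

def log_disclosure(visitor_id: str, version: str = DISCLOSURE_VERSION) -> dict:
    record = {
        "visitor": visitor_id,
        "disclosure_version": version,
        "shown_at": datetime.datetime.utcnow().isoformat() + "Z",
    }
    # In production this goes to an append-only store; printing stands in here.
    print(json.dumps(record))
    return record

def accept_narrative(consented: bool, text: str) -> str:
    # Hold back free-text details until the visitor accepts the terms.
    if not consented:
        return "Please review and accept our terms before sharing details."
    return f"Thanks — a member of our team will review: {text[:200]}"
```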

Disclosure triggers in consultations and substantive work

Once you move from intake to advice, two things matter: reliance and confidentiality. If AI materially shapes analysis, drafts, or recommendations, tell the client and explain that a lawyer reviews everything. If any client data hits an external vendor, get informed consent under Rule 1.6. When sharing AI-assisted content, set expectations clearly and invite questions.

Internal tools that run on firm-controlled systems and don’t expose client info usually don’t need a separate disclosure, provided a lawyer supervises and adopts the work. Some judges now ask for certifications about AI help in filings, so check local rules (certain courts in Texas, for example). Consider tiered notices: a short engagement letter clause for routine drafting speed-ups, and a one-paragraph, matter-specific note for heavier reliance (think strategy memos or complex research).

Confidentiality, data security, and vendor management

Rule 1.6 is the anchor. ABA Formal Opinion 477R (2017) expects reasonable safeguards when using tech. If client data will be processed by a third-party AI system, many bars expect informed consent plus vendor diligence. At minimum, look for encryption in transit and at rest, SSO/MFA, role-based access, data residency choices, audit logs, and a way to turn off training on your data.

Get clear data processing terms, know the subcontractors, and insist on retention/deletion commitments. Cross-border processing can add disclosure steps. For protective orders or regulated data (HIPAA, GLBA), isolate environments and use de-identified or synthetic data for testing. Ask vendors for immutable logs of prompts, outputs, and reviewers tied to a matter—huge for supervision and incident response. Aim for least exposure: mask identifiers before analysis and unmask after lawyer approval. Also, treat your prompt library like work product; it’s valuable and deserves access controls.
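
To picture the "least exposure" flow, here's a bare-bones mask/unmask sketch. Real redaction needs much stronger entity detection than a string swap; this only shows the shape of the step, with hypothetical names throughout.

```python
# Illustrative mask/unmask flow: swap identifiers for placeholder tokens before
# analysis, and restore them only after a lawyer approves the output.

import re

def mask(text: str, identifiers: list[str]) -> tuple[str, dict[str, str]]:
    mapping = {}
    for i, ident in enumerate(identifiers):
        token = f"[PARTY_{i}]"
        mapping[token] = ident
        text = re.sub(re.escape(ident), token, text)
    return text, mapping

def unmask(text: str, mapping: dict[str, str], lawyer_approved: bool) -> str:
    if not lawyer_approved:
        raise PermissionError("Output must be reviewed before identifiers are restored.")
    for token, ident in mapping.items():
        text = text.replace(token, ident)
    return text

masked, key = mask("Jane Roe seeks damages from Acme Corp.", ["Jane Roe", "Acme Corp"])
# ...send `masked` to the analysis tool, review the output, then:
# unmask(output, key, lawyer_approved=True)
```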

Fees, costs, and billing transparency

Rule 1.5 says fees must be reasonable and explained. Several bars warn against charging like you did it all by hand when AI saved time. If you plan to bill clients for AI tool usage, say so in the engagement letter. Spell out whether usage is included in a flat fee, reflected in time entries with efficiency baked in, or passed through at cost.

Usage-based vendor fees? Disclose and put reasonable caps in place. Many clients prefer fixed or subscription models when AI improves turnaround. Share periodic summaries—time saved, quality benefits—so the conversation centers on value. Avoid line items that look like software resale; tie everything to legal services. Your engagement letter can also reserve the right to use AI under supervision and promise that a lawyer will review all work and protect client information per your privacy policy.

Supervision, competence, and staff training

Under Rule 5.3, you have to supervise technology providers like you would nonlawyer staff. Rule 1.1 expects basic tech competence—know where these tools help and where they misfire. Put the oversight in writing and make it routine: two-person review the first time you use AI on a matter, checklists for quotes and citations, and “red flag” prompts that kick a task to manual research when it’s tricky.

Train your team to spot hallucinations, use approved prompts, and scrub sensitive details before anything leaves your environment. Sample outputs monthly and keep an error log by use case. Ask vendors for change logs and confirm privacy settings after updates. “Prompt preclearance” helps—approved templates tied to workflows so junior staff aren’t improvising on sensitive topics. It’s auditable, scalable, and it satisfies both supervision and technology competence without making everyone miserable.
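
"Prompt preclearance" can be as simple as a registry of approved templates keyed by workflow, with anything off the list routed back to manual work. A minimal sketch, with made-up template text:

```python
# Small registry of precleared prompt templates keyed by workflow. Template
# names and wording are hypothetical; anything not registered falls back to
# manual drafting.

APPROVED_PROMPTS = {
    "deposition_summary": (
        "Summarize the attached deposition transcript. List admissions, "
        "inconsistencies, and open questions. Do not speculate beyond the text."
    ),
    "engagement_letter_draft": (
        "Draft an engagement letter for a {matter_type} matter in {jurisdiction} "
        "using our standard template language."
    ),
}

def build_prompt(workflow: str, **fields: str) -> str:
    if workflow not in APPROVED_PROMPTS:
        raise ValueError(f"'{workflow}' has no precleared prompt; route to manual drafting.")
    return APPROVED_PROMPTS[workflow].format(**fields)

# build_prompt("engagement_letter_draft", matter_type="employment", jurisdiction="New York")
```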

Public communications, advertising, and disclaimers

Rules 7.1–7.2: don’t mislead. If your site has a chatbot, call it an AI assistant and say it doesn’t give legal advice. Use jurisdiction filters, clear notices, and a human handoff when things turn substantive. Good intake disclaimers cover who you are, what the AI does (collects info and schedules), what it doesn’t do (no advice), and how you use data.

Watch ad platforms that auto-generate text. Don’t let algorithms brag about results you can’t promise or claim specialties you don’t hold. If you publish AI-assisted content, have a lawyer review and adopt it. One simple control: require a lawyer approval checkbox in the CMS before anything goes live. Add “here’s how to reach us now” prompts for urgent issues so no one mistakes a widget for real-time counsel. That aligns with state bar ethics opinions on AI disclosure.

Jurisdictional trends to watch

No single national rule yet, but the themes are the same: competence, confidentiality, supervision, fees, and truthful communications. California’s 2023 guidance emphasizes labeling public-facing AI, vendor diligence, and bias awareness. The New York State Bar’s 2024 report recommends internal controls and transparency when AI materially contributes to work product.

Florida’s 2024 proposed opinion highlights supervision, consent for external processing, and honest billing. Some courts and judges now want certifications about AI help in filings—check local rules first. Also watch privacy laws like CCPA/CPRA and GDPR; they can shape your intake disclosures. The trend favors firm-controlled deployments with documented review. Get these patterns in place now and you’ll spend less time renegotiating AI terms with clients later.

Where a private, firm-controlled “mind clone” fits

A private mind clone is an AI assistant trained on your templates, briefs, tone, and playbooks—running in your environment with your controls. Because it learns from your materials and stays under your governance, it cuts down on third-party exposure and reduces disclosure triggers tied to outbound data.

You still disclose when its output materially shapes advice, but you often avoid separate consent for external processing because there isn’t any. CaseClerk supports this model: bring your precedent library, set data residency, disable training on client data without opt-in, and require lawyer review. The big win is consistency. It can hold your preferred authorities and negotiation posture, so first drafts land closer to final. You can even segment by practice group or client to respect walls. Procurement likes it too—SOC 2 style controls, audit trails, and key management—plus a simple story about competence and confidentiality.

Sample disclosures and engagement letter language

Use plain English and link to details. Try these and tweak for your firm:

  • Intake banner: “You’re chatting with our AI assistant. It collects information so our team can evaluate your matter and schedule a consultation. It isn’t a lawyer and doesn’t provide legal advice. By continuing, you agree to our terms and privacy policy.”
  • Scheduling screen: “Please don’t share sensitive details until we send a secure form. We’ll review your information and a licensed attorney will follow up.”
  • Engagement letter (technology clause): “Our firm may use AI-assisted tools under attorney supervision to draft, analyze, and organize materials. A lawyer will review and approve all work before it is shared with you or a court. We will protect your information as described in our confidentiality and privacy disclosures. We do not allow vendors to train on your data without your consent.”
  • Engagement letter (fees clause): “We may incur usage-based charges for AI services. We will bill reasonably and transparently, and we will not charge for time saved by automation. If we anticipate material AI-related costs, we will discuss them with you in advance.”

These plain-language examples fit common ethics expectations and set a helpful tone with clients.

Implementation checklist for compliant AI use

  • Map use cases: intake, triage, drafting, research, transcript analysis. Tag each by data exposure (none/de‑identified/identifiable) and reliance (background/material).
  • Vendor diligence: disable training on your data, require encryption and MFA, confirm data residency, sign a DPA, and test deletion. Demand matter-linked audit logs.
  • Website and intake: add clear AI labels, consent flows, jurisdiction filters, and “no legal advice” notices. Log which disclosure version each visitor saw.
  • Supervision controls: require lawyer approval before AI output reaches clients, maintain approved prompt templates, and sample outputs monthly. Track error rates by use case.
  • Confidentiality hygiene: redact PII before external processing, use synthetic data for testing, and restrict access by matter. Keep a working breach response plan.
  • Billing practices: update engagement letters, set reasonable caps on usage fees, and avoid double billing.
  • Training: teach prompt craft, verification, bias awareness, and privacy basics. Record attendance.
  • Audits: run quarterly reviews against supervision requirements and your privacy controls. Keep a remediation log.
  • Kill switch: be able to shut off any tool instantly if it misbehaves. Rollbacks build trust.
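
A kill switch doesn't need to be fancy. Here's a minimal sketch where every AI feature checks a shared flag before running, so one setting can disable a misbehaving tool firm-wide; the JSON file is a stand-in for whatever config store you actually use.

```python
# Minimal kill-switch sketch: every AI feature checks a shared flag before it
# runs. The JSON flag file is a stand-in for a real config store.

import json
from pathlib import Path

FLAGS_FILE = Path("ai_feature_flags.json")  # e.g. {"intake_bot": true, "draft_assist": false}

def is_enabled(feature: str) -> bool:
    try:
        flags = json.loads(FLAGS_FILE.read_text())
    except FileNotFoundError:
        return False  # fail closed: no flag file means no AI features run
    return bool(flags.get(feature, False))

def disable(feature: str) -> None:
    flags = json.loads(FLAGS_FILE.read_text()) if FLAGS_FILE.exists() else {}
    flags[feature] = False
    FLAGS_FILE.write_text(json.dumps(flags, indent=2))

if not is_enabled("intake_bot"):
    print("Intake bot is disabled; routing visitors to the contact form.")
```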

FAQs and edge cases

  • Do I need consent for internal, on‑prem tools? Usually no if no client data leaves firm systems and a lawyer supervises. Disclose if outputs materially shape advice.
  • Can I pass AI costs to clients? Yes, if reasonable and disclosed. Avoid double billing and surprise line items; consider flat fees.
  • What about courts? Some require certifications about AI help in filings. Check local rules and standing orders before you file.
  • Cross‑border processing? Disclose and get consent if data may be processed outside your jurisdiction. Offer localization when needed.
  • Intake conflicts and confidentiality? Use disclaimers and filters to avoid sensitive details before conflicts checks. Push complex issues to a human early.
  • Highly sensitive matters? Use de-identified datasets or segregated environments. Tighten access and logging.
  • Can an AI tool create an attorney–client relationship? Disclaimers and gating help prevent it, but consistent implementation and recordkeeping matter most.

Key takeaways

  • Disclose when AI affects confidentiality, fees, or substantive advice—and supervise it every time.
  • Client-facing intake needs clear labels, consent, and UPL-aware routing. Hold back narratives until terms are accepted.
  • Firm-controlled tools reduce disclosure triggers, but lawyer-in-the-loop review still applies.
  • Build security into buying: disable training on your data, lock down access, and keep auditable, matter-linked logs.
  • Charge for value, not duplicated time. Be transparent, avoid double billing, and consider flat fees where AI helps.
  • Make it a workflow: prompt libraries, output sampling, and a kill switch.
  • A private, firm-controlled mind clone (e.g., via CaseClerk) keeps your voice consistent and protects client data.

Conclusion

You don’t have to announce every background use of AI. You do need to disclose when it shapes advice, touches client confidentiality, or changes the bill—and you must keep a lawyer in the loop. For intake, label the bot, get consent before PII, and route real legal questions to a human to avoid UPL.

Want the easy path? Do vendor diligence, block training on your data by default, tighten access, and keep audit trails. Ready to put this into practice? Use CaseClerk to build a private, firm‑controlled mind clone that speeds up intake and drafting while keeping data safe and disclosures simple. Book a quick demo and kick the tires.
