Do lawyers need client consent to use AI on client matters? Ethics opinions, engagement letters, and sample disclosures for 2025
Clients, judges, even your carrier keep asking the same thing: how are you using AI on my matter? It’s not just about software. It’s about trust, confidentiality, ethics, and what shows up on the bill.
In 2025, new bar guidance and court certifications mean you need a clear answer. Sometimes you must get informed consent. Other times, careful safeguards and lawyer supervision are enough.
Below is a plain‑English roadmap: when consent is required vs. recommended, which ethics rules matter most, sample engagement letter clauses, ready‑to‑send disclosures, billing tips, and a quick look at court rules. You’ll also see practical safeguards, vendor vetting ideas, and how LegalSoul helps you do this the right way without training on client data.
Quick Takeaways
- When to ask for consent: if confidential info goes to an AI vendor, if AI will change strategy, cost, or outcomes in a real way, or if a court or client contract says so. If none of those apply, you can still use AI—just protect confidentiality, supervise the work, and document what you did.
- Ethics and billing basics: follow Model Rules 1.1, 1.4, 1.5, 1.6, and 5.3. Review everything a model produces, confirm citations, and tell clients about material uses. Pass through AI charges at actual cost, don’t bill “learning time,” and keep proof of your verification steps.
- Engagement and disclosures: add opt‑in/opt‑out AI language to your letters, use short approval emails for ongoing matters, track client preferences by matter, and match any client notice to judge‑specific certification rules when required.
- Safeguards and rollout: choose enterprise tools with a DPA, no training on client data, encryption, RBAC, retention controls, and audit logs. Keep inputs minimal or redacted, and use a “prompt staging” habit. Put all this into a firm policy with consent capture, audits, and checklists. LegalSoul supports this with per‑matter controls, consent logging, trails, redaction, and billing exports.
Short answer and why this matters in 2025
Short version: yes, you can use AI in client work. Get informed consent when client confidences might be seen or kept by a vendor, when AI meaningfully changes the plan or price, or when rules or contracts tell you to disclose. Florida Bar Advisory Opinion 24‑1 (2024) allows generative AI with supervision, bars sending confidences to tools that train on inputs, and stresses fee transparency. California’s 2023 guidance echoes this: protect confidentiality, supervise, and communicate material impacts. Several judges now ask lawyers to certify that a human verified any AI‑assisted filing, partly in response to Mata v. Avianca (S.D.N.Y. 2023), where fake citations led to sanctions.
Why now? GCs are adding AI questions to outside counsel guidelines. More courts have new orders in 2024–2025. Even if you’re using AI “behind the scenes,” Model Rule 1.4 can require disclosure if it shifts cost or strategy. The fix: write a policy, update your engagement letters, and use enterprise tools that don’t train on client data. Then when asked, you can answer clearly and confidently.
Do you need client consent to use AI? A decision framework
- Will confidential info leave your walls? If a vendor can access, keep, or train on client data, get explicit consent or use a no‑training enterprise setup backed by a data processing addendum (DPA).
- Will AI change strategy, cost, or results in a meaningful way? Disclose and get written consent for critical drafting, large‑scale diligence, or analytics that shape your approach.
- Is disclosure required by a court, a client agreement, or industry rules (HIPAA, GDPR, CPRA)? Follow those rules and make your client notice match them.
Example: a public chatbot that trains on inputs usually triggers consent or heavy redaction. An enterprise platform that does not train on client data, with data isolation, retention controls, and audit logs, reduces the need for express consent, but you still must supervise.
Treat consent as living, not one‑and‑done. If you move from light research to AI‑driven document review that changes cost, refresh the client’s okay. Buyers notice that kind of candor.
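To make the framework concrete, here is a minimal Python sketch of the three questions as a decision helper. The field and function names are illustrative assumptions, not a bar‑endorsed standard; treat it as a checklist in code form, not legal advice.

```python
# Minimal sketch of the consent decision framework above.
# Field and function names are illustrative, not a standard API.

from dataclasses import dataclass

@dataclass
class AIUsePlan:
    data_leaves_firm: bool        # vendor can access, retain, or train on client data
    material_impact: bool         # AI meaningfully changes strategy, cost, or outcome
    disclosure_mandated: bool     # court order, client contract, or industry rule applies
    no_training_enterprise: bool  # vetted enterprise tool with a DPA and no-training terms

def consent_action(plan: AIUsePlan) -> str:
    """Map the three framework questions to a recommended action."""
    if plan.disclosure_mandated:
        return "Disclose and obtain consent per the governing rule or contract"
    if plan.data_leaves_firm and not plan.no_training_enterprise:
        return "Get explicit informed consent, or redact / switch tools"
    if plan.material_impact:
        return "Disclose and get written consent"
    return "Consent optional; document safeguards and supervision"

# Example: light research in a no-training enterprise setup
print(consent_action(AIUsePlan(False, False, False, True)))
# -> "Consent optional; document safeguards and supervision"
```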
Ethics rules implicated and what they require
- Model Rule 1.1 (competence): understand what the tool can and cannot do, and stay current.
- Model Rule 1.4 (communication): tell clients about material uses that affect cost or decisions.
- Model Rule 1.6 (confidentiality): don’t expose client info without solid protections.
- Model Rule 1.5 (fees): keep fees reasonable; explain any pass‑through AI costs.
- Model Rule 5.3 (supervision): treat AI providers like nonlawyer assistants; set rules and review the work.
- Model Rules 3.3/3.4 (candor/fairness): confirm facts and citations—remember Avianca.
Florida AO 24‑1 ties these duties directly to AI use. California’s guidance focuses on risk reviews, vendor diligence, and human oversight. Many expect the ABA to keep moving in this direction: use AI if you protect confidences, supervise the outputs, and communicate costs and material impacts. Put these requirements into checklists and templates so your team follows them even on tight deadlines.
Confidentiality, data handling, and vendor safeguards
Think like a privacy lead and lock in three layers:
- Contracts: a data processing addendum (DPA) with no‑training guarantees, clear subprocessor lists, and fast breach notice.
- Security: encryption in transit/at rest, SSO/MFA, role‑based access, audit logs, and retention/deletion settings. Mind data residency too.
- Practice: minimize or redact inputs, try synthetic facts for early prompts, and upload only what you need.
Several bar opinions warn against dropping client facts into consumer tools that train on inputs. Enterprise setups typically disable training and offer per‑matter data isolation. That difference often decides whether you need express consent.
A handy habit: “prompt staging.” First ask general legal questions with no identifiers. If you must use specifics, run them in a no‑training environment with redactions. You cut risk and keep a clean audit trail.
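Here is a minimal Python sketch of that staging habit. The redaction patterns are illustrative placeholders only; real redaction needs a much richer rule set plus human review before anything leaves your systems.

```python
# Minimal sketch of "prompt staging": strip identifiers before a prompt
# leaves the firm. Patterns are illustrative, not a complete redaction set.

import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US SSN-shaped strings
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(Acme Corp|Jane Doe)\b"), "[PARTY]"),  # matter-specific names
]

def stage_prompt(text: str) -> str:
    """Apply every redaction rule before sending a prompt out."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

raw = "Jane Doe (jane@acme.com) disputes Acme Corp's non-compete."
print(stage_prompt(raw))
# -> "[PARTY] ([EMAIL]) disputes [PARTY]'s non-compete."
```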
Engagement letter language lawyers can adopt
Your engagement letter sets expectations. Consider:
- Opt‑in for sensitive work: “With your informed consent, we may use AI tools for research, drafting, and analysis. A licensed attorney will supervise all outputs. Your information will not be used to train public models. We may pass through usage‑based AI costs at actual cost.”
- Opt‑out for routine matters: “Our firm uses vetted AI tools under strict confidentiality and supervision. Tell us if you prefer we not use AI on your matter.”
- Billing clarity: no charges for our internal tool exploration; disclose pass‑through costs up front.
Florida AO 24‑1 warns against billing clients for the firm’s experimentation and urges clear fee communication. Tie this to Model Rule 1.5 to avoid billing fights. Add a “client preference” field at intake with options like “OK to use AI,” “Only with approval,” or “No AI.” It respects client choice and makes later consent decisions easy.
Sample client disclosures and scripts (for new and existing matters)
Short email for existing matters:
We propose using a confidentiality‑protected AI system to speed up document drafting and review. A lawyer will supervise and verify all outputs. Your data will not be used to train public models. Estimated cost impact: [range]. Please reply “Approved” or share any concerns.
Matter‑specific disclosure for high‑stakes work:
For this phase, we plan to use AI to analyze [volume] documents to prioritize issues and reduce review time by ~[estimate]. We will verify all findings. Costs include platform usage estimated at [$X–$Y], billed at actual cost. This should shorten timelines by [X%].
Aligning with 2025 court rules and certifications for AI‑assisted filings:
If the court requires an AI‑use certification, we will comply and keep cite‑checked workpapers. Our filing will be verified by counsel and supported by cited authorities.
Two small touches clients like: link to your short AI policy and explain your no‑training setup. Sample disclosure language like this lets busy in‑house lawyers approve quickly and move on.
Billing and pricing: ethical and practical guidance
Treat AI like any other efficiency tool under Model Rule 1.5. Be fair and transparent:
- Pass through actual usage costs with simple descriptions clients understand.
- Do not bill “learning” time—Florida AO 24‑1 flags this.
- No double billing for time saved. ABA Opinion 93‑379 warns against that. If AI cuts hours, consider a value‑based fee agreed in advance.
Example entry: “Used supervised AI for first‑pass drafting and case list; attorney revised and verified citations. AI usage billed at actual cost ($XX). Result: faster delivery with fewer attorney hours.” Many GCs want predictability. An AI‑assisted flat fee with built‑in verification can help you win the work.
Decide whether AI usage is a matter expense or an administrative cost. Either is fine if you’re consistent and disclose it. Put the policy in your rate letter and matter plan to reduce write‑offs and demonstrate disciplined, transparent AI billing.
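To keep the pass‑through math honest and auditable, something as simple as the sketch below works. The per‑token pricing model and field names are assumptions for illustration, not any vendor’s actual billing API.

```python
# Minimal sketch of a pass-through AI line item billed at actual cost.
# Pricing model and field names are hypothetical.

def passthrough_line_item(tokens_used: int, vendor_rate_per_1k: float) -> dict:
    """Bill AI usage at actual vendor cost: no markup, no 'learning time'."""
    actual_cost = round(tokens_used / 1000 * vendor_rate_per_1k, 2)
    return {
        "description": "Supervised AI drafting assistance (usage at actual cost)",
        "amount": actual_cost,
        "markup": 0.0,  # Model Rule 1.5: pass through at cost unless agreed otherwise
    }

print(passthrough_line_item(tokens_used=250_000, vendor_rate_per_1k=0.03))
# -> {'description': '...', 'amount': 7.5, 'markup': 0.0}
```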
Court rules and certifications about AI-assisted filings
Courts vary, but the trend is obvious. Some judges, including N.D. Tex. Judge Starr and E.D. Pa. Judge Baylson, require a certification that a human verified any AI‑assisted filing and its citations. Others warn against filing AI‑generated content without checking. This picked up speed after Mata v. Avianca (S.D.N.Y. 2023), where fake citations from a public chatbot led to sanctions.
Your move:
- Check local rules, standing orders, and your judge’s procedures before drafting.
- If a certification is needed, keep a verification workpaper: sources, quotes, and a cite‑check log.
- Even without a disclosure rule, you still must verify for candor and competence.
Adopt a “source‑of‑truth” habit to prevent hallucinated or unverified citations: every proposition must tie to an original authority pulled from a trusted database. That habit makes certifications easy to sign.
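A cite‑check log can be as simple as a CSV stored with the matter. This Python sketch shows the idea; the file name and column names are illustrative, so adapt them to your DMS.

```python
# Minimal sketch of a cite-check log: one row per verified proposition.
# Field names and file layout are illustrative.

import csv
import datetime

LOG_FIELDS = ["proposition", "citation", "database", "verified_by", "verified_on"]

def log_verification(path: str, proposition: str, citation: str,
                     database: str, reviewer: str) -> None:
    """Append one verified proposition to the matter's cite-check log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "proposition": proposition,
            "citation": citation,
            "database": database,
            "verified_by": reviewer,
            "verified_on": datetime.date.today().isoformat(),
        })

log_verification("matter-1234-citecheck.csv",
                 "Sanctions are available for fabricated citations",
                 "Mata v. Avianca (S.D.N.Y. 2023)",
                 "Westlaw", "A. Attorney")
```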
Supervision and quality control of AI outputs
AI is a power tool. You still drive. Build review steps into the workflow:
- Research: pull the sources yourself from trusted databases; compare holdings before relying on summaries.
- Drafting: let AI help with structure and speed, then edit for tone, jurisdiction quirks, and citation style.
- Diligence: use AI to cluster and flag; lawyers confirm the important parts with spot checks.
After Avianca, many firms added a two‑step review: a lawyer verifies sources, then a second reviewer signs off on critical filings. Protect privilege and work product: store prompts and outputs in your DMS under the matter number, not in random browser tabs.
Bonus: keep an internal “prompt‑and‑response” appendix for major filings. It speeds partner review and shows your Model Rule 5.3 supervision if anyone ever asks.
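If you want that appendix machine‑readable, a JSON‑lines log works well. This sketch is illustrative; the field names and file layout are assumptions, not a specific DMS format.

```python
# Minimal sketch of a prompt-and-response appendix: one JSON line per
# supervised AI interaction, keyed to the matter number.

import datetime
import json

def log_ai_interaction(matter_id: str, prompt: str, response: str,
                       reviewer: str, path: str = "ai-appendix.jsonl") -> None:
    """Append one supervised AI interaction to the matter's appendix."""
    entry = {
        "matter": matter_id,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt": prompt,
        "response": response,
        "reviewed_by": reviewer,  # evidence of Model Rule 5.3 supervision
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_interaction("2025-0042", "Outline a motion-to-dismiss structure",
                   "[model output]", "A. Partner")
```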
Vendor diligence and selection criteria
Vet AI vendors like you would an eDiscovery provider. Document everything. Look for:
- Security posture: SOC 2 Type II or ISO 27001, encryption, SSO/MFA, RBAC, audit trails.
- Data handling: no training on client data, retention controls, data residency options, and a clear DPA.
- Operations: transparent subprocessors, tight incident SLAs, prompt breach notice.
- Product controls: redaction tools, per‑matter workspaces, exportable logs.
Ask scenario questions: “If we paste PII by mistake, how fast can you purge it from all systems and logs?” Get that in the contract. Many firms keep an AI policy and governance checklist and won’t approve a vendor without it.
Try a tabletop exercise with the vendor: simulate a judge’s certification request and a client privacy inquiry. You’ll learn more in an hour than from a long questionnaire, and you’ll find out whether your DPA actually works in practice.
Implementation playbook for firms
Policy only matters if people can follow it. Make it concrete:
- Policy: define allowed use cases, supervision steps, retention rules, billing, and consent triggers. Train partners first.
- Intake: record client AI preferences, data residency needs, and industry constraints.
- Consent tracking: store it in the matter; add fields in your PMS/DMS so it shows up everywhere you work (see the sketch after this list).
- QA: checklists for research, drafting, and diligence; do periodic audits and random cite checks.
- Incidents: spell out how you’ll triage confidentiality issues and notify clients if needed.
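As flagged in the consent‑tracking item above, here is a minimal sketch of a per‑matter consent record. The enum values mirror the intake field suggested earlier; the class and method names are illustrative, not a real PMS/DMS schema.

```python
# Minimal sketch of a per-matter AI consent record enforced at the point of use.
# Names are illustrative, not a specific PMS/DMS schema.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class AIPreference(Enum):
    OK = "OK to use AI"
    APPROVAL_REQUIRED = "Only with approval"
    NO_AI = "No AI"

@dataclass
class MatterConsentRecord:
    matter_id: str
    preference: AIPreference
    captured_on: date
    approvals: list[str] = field(default_factory=list)  # approved task types

    def may_use_ai(self, task: str) -> bool:
        """Check the client's recorded preference before any AI use."""
        if self.preference is AIPreference.NO_AI:
            return False
        if self.preference is AIPreference.APPROVAL_REQUIRED:
            return task in self.approvals  # only pre-approved tasks
        return True

record = MatterConsentRecord("2025-0042", AIPreference.APPROVAL_REQUIRED,
                             date.today(), approvals=["first-pass drafting"])
print(record.may_use_ai("first-pass drafting"))  # True
print(record.may_use_ai("document review"))      # False
```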
Many firms pair this with their info‑governance calendar: quarterly checks, annual refresh, new‑hire training. Track simple KPIs—fewer routine research hours, faster first drafts, zero citation errors—to show leadership the risk and the payoff.
Special scenarios that change the consent calculus
Some matters raise the bar:
- Highly sensitive data: health (HIPAA/BAA), juvenile, immigration, government, export‑controlled. Use redaction, minimization, and strict no‑training environments; often get written consent.
- Cross‑border: GDPR and data residency may limit where processing happens. Map flows and pick EU‑hosted options when needed.
- NDAs/protective orders: they may ban sharing with unnamed vendors. Get permission or skip AI for that data.
- Client says no: offer an AI‑free path with clear timing and cost differences.
Example: in a trade secrets case with a tight protective order, you may need court permission to use an AI vendor, or a fully isolated, on‑prem/no‑training setup with named subprocessors. Here the answer to “do lawyers need client consent to use AI?” is clearly yes, and you may also need the court or opposing counsel to agree. Always have a fallback so the team doesn’t stall.
Malpractice, privilege, and insurance considerations
Privilege can hold when a vendor is necessary to deliver legal services and confidentiality is preserved—think of Kovel (U.S. v. Kovel, 2d Cir. 1961), often used with eDiscovery providers. Apply that logic carefully to AI vendors: use a DPA, limit access, and document lawyer supervision.
Malpractice risk centers on supervision and candor. After Avianca, insurers started asking about AI governance—approved tools, training, verification steps, and incident plans. Show them your policy, your audit logs, and a sample verification checklist.
Two simple moves:
- Protect work product: keep prompts/outputs in your DMS under the matter, not in personal accounts.
- Show reasonableness: note the human review and cite checks when AI speeds the work. If a claim comes, those records help show you met Model Rules 1.1, 1.4, 1.6, and 5.3.
Underwriters are warming to firms that can prove a mature AI program. It often makes renewals smoother.
FAQs
Is disclosure needed for internal research only?
If you use an enterprise tool that doesn’t train on your data and no client info leaves your systems, disclosure may be optional. If it changes cost or approach in a meaningful way, disclose under Model Rule 1.4.
What if the AI tool never stores client data?
Lower risk, yes—verify that promise in the contract. Many tools keep logs or telemetry. A no‑training, no‑retention mode still needs lawyer review.
Can I rely on “general” consent in my terms of engagement?
Good start. For sensitive work or material impacts, get matter‑specific informed consent.
Do I need to disclose AI to opposing counsel or third parties?
Not unless a court order or agreement requires it. Your duties are to your client and the court.
How do I handle a client who refuses AI use?
Offer an AI‑free workflow, explain timing and cost differences, and record the preference. Revisit later if things change.
How LegalSoul supports compliant, client-friendly AI use
LegalSoul is a privacy‑first AI copilot built for law firms. Here’s what matters:
- Data protection: no training on your data, per‑matter isolation, encryption, and retention controls—aligned with SOC 2‑style practices.
- Consent and governance: capture client AI preferences at intake, store consent with the matter, and enforce rules with role‑based access and audit logs.
- Lawyer‑centric workflows: supervised research and drafting, built‑in citation checks, jurisdiction‑aware prompt libraries, and redaction tools.
- Billing clarity: clean usage exports for pass‑through AI costs, plus billing narratives clients can follow.
- Court alignment: generate verification workpapers and disclosure language that match common 2025 court rules and certification requirements for AI‑assisted filings.
LegalSoul helps you put this policy into daily practice so you move quickly without risking confidentiality, supervision, or transparency.
Key takeaways and downloadable templates
- Consent triggers checklist: (1) confidential data leaves firm systems or may be retained/trained by a vendor; (2) AI materially affects strategy, cost, or outcome; (3) court/client rules require disclosure.
- Engagement letter clauses: include opt‑in/opt‑out choices, no‑training assurances, supervision promises, and clear billing terms.
- Client disclosure emails: short approvals for routine tasks; detailed notes for high‑impact phases showing time/cost implications and verification steps.
- Court certification alignment: keep cite‑check logs, verification workpapers, and consistent QA so signing certifications is low‑stress.
Your next step: fold these into intake and matter templates, choose enterprise tools that protect confidentiality by design, and train your team on verification. When a client asks, “How are you using AI on my matter?” you’ll have a simple, credible answer that scales.
You can use AI on client matters—just guard confidentiality, supervise, verify citations, and be straight about fees. Get consent when information leaves your systems, when AI changes cost or strategy, or when rules say so. Bake this into your engagement letters, intake, and billing. Lock down vendor safeguards and record consent by matter. Want this to be easy? Book a LegalSoul demo to run privacy‑first, no‑training AI with per‑matter controls, consent capture, audit trails, and billing integrations—so you move faster without losing client trust.