Do courts require lawyers to disclose AI use in briefs and filings? 2025 local rules, standing orders, and sample certifications
More judges are asking a simple thing: did you use AI to write this brief? In 2025, there isn’t one national rule. You’re dealing with a patchwork—local rules in some places, judge-by-judge standing orders in others.
If you file across venues, that matters. The risks are real too: Rule 11 problems from fake citations, a hit to your credibility, and privacy issues if you feed client facts into public tools without protection.
Here’s the plan. We’ll answer whether courts require disclosure right now, where those rules come from, and what they usually ask for. You’ll get practical certification language you can copy, a tight checklist for citation and quote verification, and a workable process you can drop into your brief workflow.
We’ll also touch on confidentiality and client consent, how to keep up with updates, easy-to-miss pitfalls, and how LegalSoul can help you handle certifications, logging, and rule tracking without slowing your team down.
The short answer and why it matters in 2025
Sometimes yes. There’s no universal rule, but more judges are asking for disclosure or a certification that a human checked the work. After the Avianca mess in 2023 (Mata v. Avianca, S.D.N.Y.), several federal judges rolled out standing orders that say: tell me if you used generative AI, and prove a lawyer verified the quotes and citations.
You’ll see this in districts like N.D. Tex. and N.D. Ill., and a growing number of state trial courts. It’s not consistent, and it changes without much notice. That’s the headache.
Why you should care: Rule 11 risk, reputational stakes, and client expectations. If you want the speed benefits of AI, you also need a clean way to disclose when required and show that a lawyer actually reviewed every line. A simple add-on “AI-use certification” you can flip on or off per judge is usually enough—and it keeps review tight without dragging down your timeline.
What counts as “AI use” for disclosure purposes
Most judges mean generative tools that create or rewrite legal text or analysis. Think: drafting arguments, summarizing cases, or writing a fact section. That’s in scope for many disclosure requirements.
Purely mechanical help—spellcheck, formatting, grammar tweaks—usually doesn’t require disclosure. The gray area is research platforms with AI summaries or drafting suggestions. Some judges treat that as “AI use.” Others don’t. When you’re unsure, a short disclosure plus a human-review certification avoids motion practice and side skirmishes.
Good rule of thumb: if a tool touched the substance, log it. If it only cleaned up style, you can probably skip disclosure. A few judges also ask you to identify the parts of a filing that were AI-assisted. If so, describe categories (e.g., “drafted the first factual background pass”), not line numbers.
The 2025 rule landscape: where requirements come from
These rules show up in four places: local rules, individual judges’ standing orders or “Individual Practices,” administrative orders from court leadership, and practice advisories from tribunals. The fastest growth since 2023 has been at the chambers level—individual judges posting their own expectations.
Appellate courts are dabbling with certificates, but most firm requirements live in trial courts. The tricky part is pace. Orders change quietly and not on a set schedule. Treat this like e-discovery protocols a decade ago: it’s evolving. Build a venue matrix for each matter—disclosure-only, human-review certification, citation-validation emphasis, or nothing explicit—and attach it to your brief shell so you don’t scramble at filing.
Common types of court requirements you’ll see
- Disclosure-only: say yes/no and sometimes describe the scope of AI use.
- Human-review certification: a lawyer verified every fact, quote, and citation against the source.
- Scope limits: some judges bar AI for legal analysis but allow clerical edits without certification.
- Sanctions warnings: many orders call out fake citations and misquotes as Rule 11 territory.
In some courts, you attach a short certification to each brief. Others want you to file copies of cited authorities if AI helped, or to identify where quotes came from. A few flag confidentiality and privilege—uploading client data to public tools can be risky.
Practical move: keep three versions ready in your templates—disclosure-only, full certification, and a limited-use note for clerical edits. You’ll meet most expectations with minimal rework.
Where you’re most likely to encounter requirements
- Federal district courts via judge-specific standing orders posted on Individual Practices pages.
- State trial courts testing pilot guidance, often in larger metro circuits.
- Some appellate courts experimenting with a certificate attached to briefs.
- Administrative tribunals sharing advisories on accuracy and confidentiality.
After the 2023–2024 wave of “AI hallucination” stories, judges in places like N.D. Tex. and N.D. Ill. posted expectations about disclosure and verification. State leadership in several systems circulated reminders about confidentiality when using public tools and encouraged disclosure where text was AI-assisted.
Rule of thumb: trial courts tend to be more prescriptive; appellate courts focus on brief integrity and may ask for a short statement in your certificate of compliance. Keep a venue card in each matter that notes any e-filing steps tied to AI disclosures or certificates.
What an AI-use certification should include
A solid certification usually covers:
- Whether AI was used, and for what (drafting, summarizing, editing).
- A line-by-line human review by a licensed attorney, including factual checks.
- Verification that every case, statute, and quotation is accurate, with subsequent history considered.
- A statement that no confidential or privileged information went into public tools without safeguards.
- Signature, date, case caption, and acceptance of responsibility.
Courts care about accountability, not which tool you used. You can mention how you verified (e.g., official reporters or court websites) without naming vendors. If a judge wants you to identify AI-touched portions, give general categories rather than pinpointing strategy. Drop this language into your brief shells and pull it from a quick usage log so certificates don’t become a fire drill.
Sample certification language (adaptable)
Use these pieces and adjust to your judge’s order:
Disclosure-only
“Counsel states that generative artificial intelligence was [not used / used in a limited fashion] in preparing this filing.”
Human-review and citation-check certification
“If generative AI assisted, a licensed attorney reviewed the filing in full, verified the accuracy of all factual statements, and confirmed that every case, statute, and quotation is correctly cited and quoted from the authoritative source, including subsequent history. Counsel accepts full responsibility for the content.”
Limited-use clarification
“Any AI assistance was limited to clerical edits (formatting, grammar, and style). No AI-generated legal analysis or factual assertions were included.”
Confidentiality assurance
“No confidential or privileged client information was provided to public AI systems without adequate safeguards.”
Combined template
“Counsel certifies that generative AI was [not used / used as follows: drafting summaries, grammar edits]. A licensed attorney performed a line-by-line review, validated all citations and quotations against authoritative sources, and accepts full responsibility. No confidential or privileged client information was submitted to public AI systems without adequate protections.”
Paste one of these into your certificate of compliance or attach it as an exhibit if the local practice calls for that.
Building a compliant workflow inside your firm
Think process, not heroics. Here’s a simple setup:
- Write a firm AI policy with room for matter-specific limits. Spell out what’s allowed and what isn’t.
- Create a quick “AI plan” at opening: approved tools, reviewer, checkpoints before filing.
- Keep a small log of AI-assisted tasks—who used what, and when—to support certifications.
- Save three certificate templates in your brief shells: disclosure-only, full certification, limited-use.
- Train teams to spot red flags (fake cases, misquotes, stale law) and run citator checks.
One small tweak pays off: add an “AI-used?” field in your DMS profile and require it before a document is finalized. That flag sends the document through the right review path and pulls the correct certification language. It also helps partners sleep at night.
Citation and quotation verification process
Courts want proof you checked the sources. A quick, defensible flow:
- Pull the authoritative source for every citation (official reporter or court site).
- Match quotations exactly, confirm pin cites, and read enough context to be fair.
- Run a citator, note subsequent history and any limits.
- Label secondary sources correctly—don’t dress them up as binding law.
- Save a short checklist with your work product.
The Avianca issue wasn’t “using AI.” It was filing fake cases and not verifying. Since then, many orders stress citation and quote checks. Consider a second set of eyes on key authorities and a “no orphan citations” rule: if you can’t retrieve it, don’t cite it.
For foreign sources or translations, attach the certified translation if it matters. Archive the PDFs or links you verified. If anyone questions you months later, you can show what you reviewed and when.
Confidentiality, privilege, and client-consent considerations
Ethics opinions from 2023–2024 generally allow generative AI if you protect client confidences and supervise the output. Public tools may log prompts. If you paste client facts into them, you could risk privilege or breach confidentiality.
Practical steps:
- Prefer enterprise deployments with data isolation, no training on your prompts, and contractual security and update commitments.
- Redact or abstract client identifiers when testing.
- Add short AI language to engagement letters: scope, safeguards, billing clarity.
- Follow outside counsel guidelines that restrict certain tools.
Remember: “confidential” includes strategy and nonpublic context. If you wouldn’t email it to opposing counsel, don’t paste it into a public prompt box. If the court asks, show your secure setup and written controls. That ends most debates.
Monitoring rules and standing orders over time
These requirements shift. Build monitoring into your case plan:
- Maintain a watchlist of courts and assigned judges with links to rules and Individual Practices.
- Calendar checks before major filings—say, 21 and 7 days out—to see if anything changed.
- Assign an owner to snapshot pages and save them to the matter file.
- Subscribe to court notices or RSS feeds when available.
If a case gets transferred or reassigned, refresh immediately. For multi-venue matters, keep a simple matrix: disclosure-only, certification, or no explicit rule. Note which version applied at filing in case the rule shifts mid-briefing.
Document any back-and-forth with chambers about certificate language. Those notes help when co-counsel or clients ask why you disclosed—or didn’t—on a specific filing.
Common pitfalls and how to avoid sanctions
- Skipping disclosure when a standing order requires it, or being inconsistent across filings.
- Citing cases that don’t exist or quotes that don’t match the source.
- Letting AI draft legal analysis without close human review.
- Feeding client facts into public tools without consent or safeguards.
- Signing a certification at the eleventh hour without doing the checks.
Courts have ordered corrective filings and asked lawyers to attach the authorities they cited to prove they exist. Avoid the mess by verifying early, logging your steps, and clearly separating “clerical AI” from “substantive AI.”
If you find an error, move quickly: tell opposing counsel and the court, file a corrected brief, and explain the verification fixes you’ve made. A short checklist tied to your closing routine saves you when deadlines get tight.
FAQs
Do I have to disclose if I only used AI for grammar? Usually no, unless the judge says otherwise. In cautious courts, a one-line “clerical edits only” note is fine.
What about AI features in my research platform? If it summarized cases or suggested arguments, treat it as AI use and disclose if required. If it just improved search, disclosure is typically not expected.
Should associates disclose if partners didn’t use AI? The filing speaks for the firm. Disclose firm use on the matter. Don’t split hairs about roles.
Do I need client consent to use AI? For public tools, informed consent is wise. For secure enterprise tools, cover it in your engagement letter and outside counsel guideline responses.
What if opposing counsel misuses AI? Meet and confer. If the problems persist—fake cites, misquotes—seek focused relief. Judges prefer targeted requests tied to accuracy and verification.
Looking ahead: 2025–2026 trends to watch
- More chambers adopt short certifications centered on human verification and source accuracy.
- Courts shift from naming tools to focusing on outcomes: are your citations and quotes real and current?
- E-filing systems start adding checkboxes or uploads for AI-use certificates in pilot programs.
- Clients bake AI governance into outside counsel guidelines, asking about tools, data handling, and review steps.
- Bar rules refine competence, supervision, and confidentiality guidance for AI work.
The smart move is light documentation that proves accountability. If you can show who used AI, for what, and how it was verified, you’ll resolve questions fast—and keep your filings moving.
How LegalSoul helps you comply
- Policy guardrails: set matter-specific permissions that match local and judge rules. Block unsupervised analysis, allow clerical edits.
- Usage logging: automatic, tamper-evident records of AI-assisted work, who did it, and who reviewed it—ready for certifications.
- Certification generator: one click produces judge-specific disclosures and review statements from your matter metadata.
- Citation and quote checker: pulls sources, compares quotes, checks subsequent history, and flags issues for attorney review.
- Rules monitor: tracks updates to Individual Practices and local rules and alerts your team before key deadlines.
- Templates: drop disclosure-only or full-certification blocks right into your brief shells and e-filing packets.
With LegalSoul, you build a predictable, auditable routine that fits how your team already works—so you can say “yes” to efficiency without taking on avoidable risk.
Key Points
- No single rule in 2025. Disclosure depends on local rules and, more often, the assigned judge. Check again before each major filing.
- Typical asks: a yes/no disclosure, a human-review certification, limits on AI for legal analysis, and warnings about fake cites and misquotes.
- Good certificates say where AI was used, confirm a line-by-line attorney review, verify citations/quotes with subsequent history, and address confidentiality—signed, dated, and captioned.
- Make it routine: a firm policy, matter-level AI plans, a small usage log, a verification checklist, and a rules watchlist. LegalSoul helps with policy controls, logging, one-click certificates, source checks, and rule tracking.
Conclusion
Here’s the bottom line: AI disclosure in 2025 is courtroom-specific. Build a simple playbook—policy, usage logs, real source checks, privacy safeguards, and a quick rules review before you file. Judges care most about human accountability and accurate citations, not which tool drafted your first pass.
LegalSoul helps you run that play without extra drama: policy controls, automatic logs, fast certifications, deep citation/quote checks, and updates when judges change course. Want to move faster and keep the court comfortable? Book a short demo, load the judge-ready templates, and put a clean, auditable process in place across your litigation teams.