November 22, 2025

Are AI intake chatbots considered solicitation under ABA Rule 7.3? Ethics requirements for law firm websites in 2025

Your website greets people with a friendly AI chat. Great for conversion. But is that “solicitation” under ABA Model Rule 7.3, or just plain advertising?

In 2025, bar regulators are looking closely at law firm chat tools. They care about live, real-time pressure, confidentiality, and any claims that sound too good to be true.

This guide breaks down when an AI intake chatbot is treated as advertising versus solicitation. We’ll unpack the Rule 7.3 definition, why targeted outreach is the hot zone, and how to stay on the right side of the rules across states.

We’ll also hit the practical stuff: companion rules (7.1, 1.18, 1.6, 5.3), “Attorney Advertising” labels, risky scenarios like proactive website chat, a straightforward compliance checklist, privacy/security, transcript retention, conflicts and escalation, avoiding legal advice/UPL, accessibility, vendor oversight, a 30/60/90 rollout plan, FAQs, and how LegalSoul helps you do this without headaches.

Key Points

  • Under ABA Rule 7.3, a user-initiated, on-site AI chat is usually advertising, not solicitation—so long as it’s easy to ignore and not pinging specific people. Risk jumps with targeted outreach or offsite texts/DMs without clear consent.
  • You still have to meet the rest of the rules: 7.1 (no misleading claims), 1.18 (prospective clients), 1.6 (confidentiality/security), 5.3 (vendor supervision), and 5.5 (no UPL). Use clear disclaimers, informed consent, minimal data, conflicts pre-screens, and hard stops on advice.
  • States differ on “Attorney Advertising” labels, filing/recordkeeping (often 2–3 years), testimonials, specialization claims, and accessibility expectations. Map your states and follow the strictest one.
  • Practical guardrails: keep chat on-site, block outbound SMS/DM unless you have explicit consent, separate marketing from legal facts, log transcripts and approvals, and audit prompts. LegalSoul bakes these controls in and gives you audit-ready records.

TL;DR — Are AI intake chatbots “solicitation” under ABA Rule 7.3?

Short answer: usually no. If the visitor clicks to open the chat and can close it anytime, bars tend to treat it like advertising, not solicitation.

Rule 7.3 is aimed at “live person-to-person contact” for money—think in-person, live phone, or similar human pressure. The concern is hard-to-ignore sales tactics on people who might be vulnerable. Written or asynchronous messages (websites, emails) don’t fit that mold.

Example: an on-site widget that waits quietly is fine. A bot that scrapes accident reports and fires off texts to victims within hours? That’s a solicitation problem. Courts drawing this line include Florida Bar v. Went For It, Inc., 515 U.S. 618 (1995), and Shapero v. Kentucky Bar Ass’n, 486 U.S. 466 (1988). Keep it user-initiated and non-intrusive and you’re in safer territory.

Why this matters in 2025 for law firm marketing and intake

Clients want instant answers. Bars want honest, non-misleading communications. You sit right in the middle.

Your chat can lift conversions and improve response time. It also creates records that some states treat as “advertising” and can trigger duties to prospective clients.

One more thing: the FTC keeps warning firms not to overhype “AI.” That lines up with Rule 7.1. Keep claims grounded and verifiable. A smart extra move: split the chat into two lanes—basic marketing info first, then (with consent) a narrow legal intake lane. You reduce confidentiality risk and get faster conflicts checks.

What ABA Model Rule 7.3 actually regulates

Rule 7.3 restricts solicitation by live person-to-person contact when the goal is paid work, with limited exceptions. The idea is simple: real-time human pressure is risky.

The Comments note that emails, websites, and other written messages aren’t “live person-to-person,” and people can ignore them. That’s why most on-site chats are treated like a web form.

Targeting still matters. Outreach to someone you know needs a lawyer for a specific issue can trigger restrictions, even if the medium is normally okay. Some states treat “real-time electronic contact” more strictly, so check local rules. Designing around the “human pressure” logic will keep you aligned with the Rule.

Where AI intake chatbots fit within 7.3

Most on-site chats sit quietly until a user clicks. That’s advertising, not solicitation.

Things change when your system initiates individual contact. DM’ing a crash victim on social, or auto-texting without consent, looks like targeted outreach to someone facing a specific problem. That’s the danger zone for Rule 7.3.

Another wrinkle: the Telephone Consumer Protection Act (47 U.S.C. § 227) can bite if you start texting without consent. Best pattern: keep the bot on-site, wait for the user, and don’t start new channels (SMS/DM) unless there’s clear permission.

Related ethics rules that govern AI intake on websites

Even if your chat isn’t solicitation, other rules still apply.

Rule 7.1: don’t exaggerate. No guarantees, no “we’re the best.” Rule 1.18: treat prospects carefully; don’t collect more facts than you need before conflicts. Rule 1.6: protect confidentiality with encryption, access controls, and solid vendor practices.

Rule 5.3: you must supervise your tech and vendors. Rule 5.5: the bot can’t act like a lawyer or give legal advice. A practical setup: two lanes—marketing basics first, then a consent-gated, limited facts lane with an easy handoff to a human.
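The two-lane idea is easy to wire up as a small state machine. Here's a minimal sketch; the lane and event names are ours for illustration, not a standard:

```python
from enum import Enum, auto

class Lane(Enum):
    MARKETING = auto()   # general info only, no fact collection
    CONSENTED = auto()   # limited intake facts, after explicit consent
    HUMAN = auto()       # escalated to a person

def next_lane(current: Lane, event: str) -> Lane:
    """Move between lanes based on chat events (event names are illustrative)."""
    if event == "escalate":
        # Any sensitive trigger jumps straight to a human, from either lane.
        return Lane.HUMAN
    if current is Lane.MARKETING and event == "consent_given":
        # Fact collection only unlocks after an explicit consent event.
        return Lane.CONSENTED
    return current
```

The point of the structure: there is no path into fact collection that skips the consent event, which is exactly the property you want to show an auditor.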

State variations and 2025 trends to watch

States don’t agree on everything. New York has long required “Attorney Advertising” labels in certain contexts and ad retention. Florida and Texas have detailed ad regimes. Some states still require filings or pre-approval for certain materials.

Across the board, expect sensitivity to targeted outreach after major incidents and to overpromising with “AI.” Accessibility is getting more attention, too. The DOJ’s 2024 Title II rule for public entities points to WCAG as the yardstick; many firms follow it as best practice.

Best approach: list every state where you market or practice. Apply the strictest combo of labels, retention, and claims rules. Configure your chat to show state-specific notices automatically. One configuration, fewer surprises.
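To picture the "one configuration" idea, here's a minimal sketch of a per-state notice lookup. The states, labels, and retention values below are illustrative placeholders; verify each against the state's current advertising rules before relying on them:

```python
# Hypothetical per-state notice config. Retention values are placeholders,
# not legal guidance; check each state's actual rules.
STATE_NOTICES = {
    "NY": {"label": "Attorney Advertising", "retention_years": 3},
    "FL": {"label": "Attorney Advertising", "retention_years": 3},
    "TX": {"label": "Attorney Advertising", "retention_years": 2},
}

# Default to the strictest combination for unmapped states.
DEFAULT_NOTICE = {"label": "Attorney Advertising", "retention_years": 3}

def notice_for(state: str) -> dict:
    """Return the label and retention policy to apply for a visitor's state."""
    return STATE_NOTICES.get(state.strip().upper(), DEFAULT_NOTICE)
```

Falling back to the strictest ruleset by default means a missing mapping fails safe instead of silently dropping a required label.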

Risk scenarios to avoid (with examples)

  • Aggressive pop-ups: full-screen, instant prompts that push hard to book can look like pressure. Keep prompts gentle and easy to close.
  • Offsite messages without consent: starting SMS or social DMs from chat info risks Rule 7.3 and TCPA problems.
  • Targeting known victims: contacting people after a crash or arrest is a classic red flag under Florida Bar v. Went For It, Inc.
  • Legal advice or guarantees: “You have a strong case” or “We can get this dismissed” can violate Rule 7.1 and drift into UPL.
  • Over-collecting facts: pulling deep confidential info before conflicts screening runs into Rules 1.18 and 1.6.

Watch for “shadow prompts,” too. If someone adds persuasive lines the ethics team never approved, the bot can wander into risky claims. Treat prompts like ad copy: review, approve, and version-control them.

Compliance checklist for AI intake chat on law firm websites

  • Make chat user-initiated: no auto-opening prompts or full-screen takeovers.
  • Identify the firm and where you practice.
  • Use plain disclaimers: no legal advice; no attorney-client relationship until engagement; link to privacy notice.
  • Get informed consent before collecting facts. Separate consent for SMS/email marketing.
  • Collect the minimum: name, contact, practice area. Save details for later.
  • Add a conflicts gate: quick adverse party screen, then pause if needed.
  • Define human escalation triggers and follow-up timelines.
  • Keep transcripts per state ad record rules (often 2–3 years) and your policy.
  • Test regularly: red-team the bot to confirm it refuses advice and routes emergencies fast.

Bonus: set “channel fences.” Don’t let the bot start SMS/DM unless a consent token exists in your CRM.
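A channel fence can be as simple as a guard that refuses to send unless a consent token is on file. This is a sketch with a made-up CRM record shape, not any particular CRM's API:

```python
def can_send_offsite(crm_record: dict, channel: str) -> bool:
    """Allow an offsite message only if an explicit, unrevoked consent
    token exists for that channel. `crm_record` is a hypothetical contact
    dict; a real system would query the CRM of record and log the decision.
    """
    token = crm_record.get("consents", {}).get(channel)  # e.g. "sms", "dm"
    return bool(token and token.get("granted") and not token.get("revoked"))

def send_followup(crm_record: dict, channel: str, message: str) -> str:
    if not can_send_offsite(crm_record, channel):
        # Hard stop: never initiate SMS/DM without a recorded consent event.
        return "BLOCKED: no consent token on file"
    return f"SENT via {channel}: {message}"
```

Making the check a hard precondition on the send path, rather than a policy document, is what turns "don't text without consent" into something you can audit.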

Data privacy and security requirements for intake chat

Confidentiality and privacy laws overlap here. Use encryption in transit and at rest. Lock down access. Log everything. Keep only what you need, and delete on a schedule.

Vendors should meet recognized standards like SOC 2 Type II or ISO 27001. If you message people, follow the TCPA for texts and CAN-SPAM for email. Map state privacy laws like CPRA, Colorado’s law, and Virginia’s CDPA, especially around opt-outs and sensitive data.

If health info might surface, consider HIPAA risk and avoid detailed medical facts before engagement. For cross-border transfers, use a DPA and, where needed, Standard Contractual Clauses. Pro tip: split marketing metrics from prospective client content so nobody can reuse chat data to train models. That aligns with Rule 1.6 and modern privacy expectations.

Designing disclaimers, consent gates, and intake flows

Put short, plain notices at chat launch with links to the full text. Try something like: “This chat is for information only. It does not create an attorney-client relationship. Don’t share confidential details until we complete a conflicts check. By continuing, you agree to our Privacy Notice.”

Use a checkbox or a clear “Continue” to capture consent—especially before fact questions. Offer separate consent for SMS/email follow-ups. If you practice in multiple states, show “Attorney Advertising” where required.

Start simple: name, contact, practice area, city/state. After consent, ask a few screening questions and escalate if needed. Give a “talk to a human now” option for sensitive matters. Then A/B test wording and placement so the message is clear and still gets read.

Conflicts screening and escalation workflow

Design the chat to support conflicts checks without pulling in a whole narrative. After basics and consent, ask one targeted question: “Is the opposing party one of these?” Tie it to a type-ahead against your conflicts database.

Hit on a name? Pause the chat, tell the user you’re running a quick conflicts review, and stop collecting facts. Route to a conflicts specialist with a tight SLA (say, within an hour during business hours).

If cleared, resume and schedule. If not, send a polite non-engagement note and, when appropriate, refer them to a bar referral service. Keep a standard decline template. Log the decision path and transcript metadata for audits. Add a “hard stop” when the user mentions minors or health info—force a human review before anything else.
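Here's roughly what that gate looks like in code. The exact-match check and field names are simplifications; a real system would use fuzzy matching against the firm's conflicts index and route hits to a specialist with a tight SLA:

```python
def conflicts_prescreen(opposing_party: str, conflicts_db: set[str]) -> dict:
    """Decide the next intake step from a quick adverse-party screen.

    `conflicts_db` stands in for the firm's conflicts index; exact,
    case-insensitive matching here is a placeholder for fuzzy search.
    """
    normalized = opposing_party.strip().lower()
    hit = any(normalized == name.lower() for name in conflicts_db)
    if hit:
        return {
            "action": "pause_intake",   # stop collecting facts immediately
            "message": "We're running a quick conflicts review before continuing.",
            "route_to": "conflicts_specialist",
        }
    return {"action": "continue", "message": None, "route_to": None}
```

The key behavior is that a hit pauses fact collection rather than just flagging the record, which keeps Rule 1.18 exposure to a minimum.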

Preventing legal advice and UPL in AI interactions

The bot should never give legal conclusions or tell someone what to do. Use refusal language like, “I can share general info, but I can’t provide legal advice. Let me connect you with an attorney.”

Detect advice-seeking (“Do I have a case?” “How do I beat this?”) and trigger a handoff. Keep content jurisdiction-neutral unless a lawyer is involved. Monitor with red-team scripts to catch edge cases.

Bars worry about nonlawyers providing legal services; AI falls into that concern. Treat the bot as a triage assistant. Helpful trick: time-box the chat. If a thread goes past a set number of turns on a legal issue, hand it to a human immediately.
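A sketch of the refusal-plus-time-box logic described above. The phrase list and turn limit are illustrative; a production system would use a tuned classifier rather than substring matching:

```python
# Illustrative advice-seeking triggers; real systems need a broader,
# regularly red-teamed set (or a classifier).
ADVICE_PATTERNS = [
    "do i have a case", "how do i beat", "should i sue",
    "can we get this dismissed", "what are my chances",
]
MAX_LEGAL_TURNS = 5  # hypothetical time-box before a forced handoff

REFUSAL = ("I can share general info, but I can't provide legal advice. "
           "Let me connect you with an attorney.")

def triage(message: str, legal_turns: int) -> dict:
    """Return the bot's reply and whether to hand off to a human."""
    text = message.lower()
    if any(p in text for p in ADVICE_PATTERNS):
        return {"reply": REFUSAL, "handoff": True}
    if legal_turns >= MAX_LEGAL_TURNS:
        # Time-box: long legal threads go to a human regardless of content.
        return {"reply": "Let me bring in a member of our team.", "handoff": True}
    return {"reply": None, "handoff": False}
```

The turn limit is the backstop for whatever the pattern list misses: even a conversation that never trips a trigger still lands with a human.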

Recordkeeping, auditing, and regulator-ready documentation

  • Store transcripts with timestamps, shown disclaimers, and consent events.
  • Version-control prompts, disclaimers, and workflows with approval notes.
  • Log model and configuration changes.
  • Track incidents, corrections, and follow-ups.
  • Keep vendor diligence files (SOC 2/ISO, pen tests).

Many states want ad records kept for 2–3 years. Treat chat transcripts and prompt histories as part of that file. If there’s litigation, align retention with your legal hold process to avoid spoliation under FRCP 37(e).

Run periodic audits tied to Rules 7.1, 7.3, 1.6, 1.18, and 5.3. Try a mock regulator review once a year: sample transcripts, check that labels appeared where required, and confirm escalation triggers fired on cue.
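One way to make transcripts regulator-ready is to bundle each session with its consent and disclaimer events plus an integrity hash. The field names here are illustrative, not a fixed schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def transcript_record(session_id: str, events: list[dict]) -> dict:
    """Bundle a chat session into an audit-ready record.

    `events` are dicts like {"type": "disclaimer_shown", "at": "..."}.
    The SHA-256 over the canonicalized event list makes later tampering
    detectable when you sample transcripts in an audit.
    """
    body = json.dumps(events, sort_keys=True).encode("utf-8")
    return {
        "session_id": session_id,
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "events": events,
        "sha256": hashlib.sha256(body).hexdigest(),
    }
```

Recomputing the hash during a mock regulator review is a cheap way to confirm the archive you're sampling matches what was actually stored.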

Accessibility and language access considerations

Expect WCAG 2.1 AA to be the bar, with WCAG 2.2 (published in 2023) increasingly cited. Even private firms get sued over inaccessible sites. For chat, make sure keyboard navigation works, screen-reader labels are clear, color contrast is solid, focus states are visible, and timeouts can be extended.

Offer multilingual disclaimers for common languages in your market and have a human review legal phrasing. Keep reading level plain. Add a “simple language” toggle. Log accessibility tests (NVDA/JAWS, VoiceOver) and fix issues on a schedule. Add a “request accommodation” button that routes to a human and records the request.

Vendor management under Rule 5.3

Your AI vendor is a supervised nonlawyer assistant under Rule 5.3. Do your homework: SOC 2 Type II or ISO 27001, encryption standards, data residency, subprocessors, incident history, and whether they use your data for training.

Lock in contracts: confidentiality, IP ownership of prompts/outputs, deletion SLAs, breach notice timelines, audit rights, and indemnities. Add a DPA and transfer tools if data crosses borders. Review quarterly, audit access, and run red-team tests. Ask for release notes on model changes.

And set a hard control: no SMS/DM unless your CRM shows a consent flag. That one switch avoids a lot of Rule 7.3 and TCPA pain.

Implementation roadmap (30/60/90 days)

  • 30 days: Map your states and pick the strictest ruleset. Draft disclaimers and consent flows. Choose a vendor, finish security/legal reviews, and build a prototype with on-site-only chat, minimal data, and conflicts pre-screen. Write refusal scripts.
  • 60 days: Pilot on a few pages. Train staff on triage and escalation SLAs. Run accessibility checks, red-team for 7.1/7.3 issues, and confirm transcript retention. Connect CRM with consent tokens for SMS/email.
  • 90 days: Go sitewide. Turn on monitoring dashboards (sampling, escalation rates, advice flags). Do a mock regulator audit and finalize your documentation packet. Schedule quarterly reviews and an annual pen test with your vendor.

Track “good friction,” like the percentage of chats that paused at conflicts when they should. You’ll have proof your controls work in practice.

FAQs from lawyers evaluating AI intake chat

Does a proactive chat invite count as solicitation? A gentle on-site nudge is usually advertising. Targeted offsite messages to specific people are the risk.

Do I need “Attorney Advertising” on chat? In some states, yes. Detect location and show labels where required.

Can the bot schedule consultations or quote fees? Scheduling is fine. Fee quotes can get tricky under Rule 7.1—keep it general and let a human confirm.

How long should I keep chat transcripts? Follow state ad record rules (often 2–3 years) and your policy. Save consent and disclaimer events, too.

What if a user shares confidential info before conflicts check? Treat it under Rule 1.18. Stop, pause intake, and escalate to a human.

How do we block the bot from initiating SMS/DM without consent? Use a technical control that requires a CRM consent token before any offsite message, and audit it regularly.

How LegalSoul supports an ethics-first intake program

LegalSoul is built for law firm compliance. You can keep chat user-initiated, add jurisdiction-aware “Attorney Advertising” labels, and show short disclaimers with consent gates.

It splits marketing from legal facts, adds conflicts pre-screens, and auto-escalates when the bot detects advice-seeking, emergencies, or minors. Security includes encryption, role-based access, audit logs, retention controls, and data residency options.

LegalSoul blocks legal advice with guardrails and monitoring flags. It can enforce on-site-only behavior and honor consent tokens for SMS/email. You also get exportable proof for Rule 5.3: transcript archives, version histories for prompts and policies, and quarterly compliance reports. Accessibility features include screen-reader labels, keyboard support, and WCAG-friendly themes.

Bottom line and next steps

A user-initiated, on-site AI intake chatbot is generally advertising—not prohibited solicitation—under ABA Rule 7.3. Avoid targeted, live outreach to specific individuals and keep chat easy to ignore. That’s the bright line.

The rest is execution: honest content (Rule 7.1), confidentiality (1.6), prospective client care (1.18), vendor supervision (5.3), accessibility, and tidy records. Map your states, set clear disclaimers and consent, collect minimal data, gate conflicts, fence off SMS/DM without consent, and document everything.

Want a faster path? Book a LegalSoul demo or ask for a quick site audit. We’ll help you set up disclaimers, consent gates, conflicts checks, and security that fit your jurisdictions—without slowing intake.
