Is Thomson Reuters CoCounsel safe for law firms? Confidentiality, data retention, and admin controls for 2025
Clients and insurers don’t want vague promises about “secure AI.” They want proof that what you use will keep privilege intact, follow outside counsel rules, and not add new risk.
So the big question many firm leaders are asking right now: is a modern AI legal assistant actually safe for law firms in 2025?
Below, we spell out what safe should look like—confidentiality and attorney–client privilege, data retention and deletion, model‑training policies, and the admin controls you need. We’ll talk evidence (SOC 2 Type II, ISO 27001, pen tests, subprocessors), reliability guardrails (source‑linked answers and citation checks), and regulatory fit (DPA/BAA, the EU AI Act, and outside counsel guidelines, or OCGs). You’ll also find a practical checklist, a quick pilot plan, red flags to avoid, and how LegalSoul handles zero retention, no model training on your data, and BYOK.
By the end, you’ll have a clean, workable way to decide what’s safe for your firm.
Key Points
- Safety in 2025 = real controls: zero data retention by default, no training on your data, strong encryption, SSO/MFA, matter‑level RBAC with ethical walls, and answers that cite their sources.
- Ask for proof: SOC 2 Type II or ISO 27001, recent pen tests, data flow diagrams, subprocessor transparency, data residency options, BYOK/CMK, and contracts that cover DPA/SCCs, HIPAA BAA (if needed), and EU AI Act plans.
- Run it like an enterprise tool: granular admin (retention, legal holds, DLP/redaction, model/task allowlists, device/download limits) plus full audit logs. Pilot with scrubbed matters, map to OCGs, and review settings every quarter.
- Know the red flags: default retention, fuzzy “training,” weak RBAC/audit, missing deletion SLAs. Use a due‑diligence checklist, a shared responsibility matrix, and keep a kill switch ready. LegalSoul ships with these safeguards.
What “safe” should mean for AI legal assistants in 2025
“Safe” isn’t a feeling—it’s a set of checks you can verify. At a minimum, an AI assistant should protect confidentiality and privilege, collect the least data possible, and give you admin tools you can actually use.
That looks like encryption in transit and at rest, SSO/MFA, least‑privilege access at the matter level, and real isolation of your data. It also means zero data retention by default, no model training on your content unless you explicitly opt in, and deletion timelines in days, not months. On the proof side, look for SOC 2 Type II or ISO 27001, fresh third‑party pen tests, and a current list of subprocessors.
Many financial‑services OCGs now demand no training on client data, clear deletion windows, and data residency controls. If a vendor can’t answer those in an RFP, it’s not ready for client work.
Easy gut check: would you hold this system to the same privilege, retention, and audit standards you use for your DMS or eDiscovery? If not, pause and close the gap before rollout.
Confidentiality and privilege requirements for client matters
Start with identity and access. Turn on SSO/MFA, keep sessions short, and use matter‑level RBAC so people only see what they need. Extend ethical walls into the AI workspace, not just your DMS.
Set up private tenancy or strong logical isolation so your prompts and outputs never mingle with other customers. Encryption is table stakes, but key management matters too—know who holds the keys and how quickly you can contain an incident.
The ABA pushes risk‑based controls for client communications; hold AI to the same standard, logs included. Think “privilege diary”: can you show who accessed what, when, and under which policy?
Example: an internal investigation under a litigation hold. You should be able to (1) lock the workspace to named custodians, (2) block downloads and copying for everyone else, and (3) pull an access report instantly. Also, make users add a matter code to each prompt; it strengthens governance and billing notes and speeds up audits when a client asks questions.
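For illustration, here is a minimal sketch of those controls working together; the `Workspace` model, its field names, and the matter codes are hypothetical, not any vendor’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Workspace:
    # Hypothetical workspace model for illustration; not any vendor's API.
    matter_code: str
    custodians: set            # named users allowed while the hold is active
    litigation_hold: bool = False
    audit_log: list = field(default_factory=list)

    def submit_prompt(self, user: str, matter_code: str) -> bool:
        """Enforce custodian lock plus mandatory matter tagging, and log every attempt."""
        allowed = (not self.litigation_hold or user in self.custodians)
        allowed = allowed and matter_code == self.matter_code
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "matter": matter_code,
            "allowed": allowed,
        })
        return allowed

ws = Workspace("2025-0142", custodians={"a.partner"}, litigation_hold=True)
print(ws.submit_prompt("a.partner", "2025-0142"))   # True: named custodian, tagged prompt
print(ws.submit_prompt("j.associate", "2025-0142")) # False: not on the custodian list
print(ws.audit_log[-1])                             # the instant access report comes from here
```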
Data retention, deletion, and model training policies to demand
Retention is where risk creeps in. Push for zero retention by default—no storage of prompts, files, or outputs after processing unless you turn it on for a specific reason.
If you do enable retention for QA or audit, keep it short and visible in admin settings. Nail down deletion SLAs (say, primaries within 7–30 days; backups 30–90) and document backup/restore windows.
For models, the baseline is no training on firm data unless you opt in, and ideally only with written approval in the contract. Modern DPAs should cover data residency (U.S./EU), subprocessor disclosure, and transfer mechanisms. Ask whether logs include content snippets and when those are purged.
Don’t forget legal holds. If a matter is on hold, the system should preserve prompts, files, outputs, and citations for that matter while deleting everything else on schedule. Also confirm deletion reaches everywhere—evaluation sets, caches, vector indexes—not just the main database.
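To make “deletion reaches everywhere” concrete, here is a sketch of a hold‑aware sweep across stores; the artifact records, store names, and retention windows below are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Illustrative artifact records spread across stores; one held matter, one routine.
artifacts = [
    {"id": "a1", "matter": "2025-0142", "store": "primary",      "created": datetime(2025, 1, 2, tzinfo=timezone.utc)},
    {"id": "a2", "matter": "2025-0142", "store": "vector_index", "created": datetime(2025, 1, 2, tzinfo=timezone.utc)},
    {"id": "a3", "matter": "2025-0177", "store": "cache",        "created": datetime(2025, 3, 1, tzinfo=timezone.utc)},
]
legal_holds = {"2025-0142"}   # matters whose prompts, outputs, and vectors must survive
retention = {
    "primary": timedelta(days=30),
    "vector_index": timedelta(days=30),
    "cache": timedelta(days=7),
}

def sweep(now):
    """Delete expired artifacts in every store unless their matter is on hold."""
    deleted = []
    for a in list(artifacts):
        if a["matter"] in legal_holds:
            continue          # the hold preserves every artifact type for that matter
        if now - a["created"] >= retention[a["store"]]:
            artifacts.remove(a)
            deleted.append(a["id"])
    return deleted

print(sweep(datetime(2025, 6, 1, tzinfo=timezone.utc)))  # ['a3']: the held matter survives
```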
Enterprise admin and governance controls for law firms
Admins need actual levers, not marketing slides. The basics: SSO/MFA, device and download restrictions, fine‑grained RBAC, retention policies you control, legal holds, and searchable/exportable audit logs. Add DLP and auto‑redaction to catch client names, account numbers, and PHI before it leaves approved boundaries. Device rules (no downloads, clipboard, or screenshots) help in sensitive practices.
Example controls firms like:
- Model/task allowlists by practice (e.g., research and summarization for antitrust; upload redaction for healthcare).
- Policy rules that block uploads marked “Confidential—Client X” unless a matter code is set (sketched just below this list).
- Just‑in‑time privilege elevation for partners or KM with a reason required and auto‑revert.
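A minimal sketch of the upload rule from the second item above; the label convention and matter‑code field are assumptions for illustration:

```python
def allow_upload(doc_labels, matter_code):
    """Block confidential-labeled documents unless the upload carries a matter code."""
    confidential = any(label.startswith("Confidential") for label in doc_labels)
    if confidential and not matter_code:
        return False, "Blocked: confidential document without a matter code"
    return True, "Allowed"

print(allow_upload({"Confidential - Client X"}, matter_code=None))        # blocked
print(allow_upload({"Confidential - Client X"}, matter_code="2025-0142")) # allowed
```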
Keep AI governance aligned with your records and DMS policies so lawyers don’t juggle two playbooks. Watch for “governance drift”—review roles, allowlists, and retention quarterly as OCGs evolve. And build an incident runbook for the AI environment: who to alert, which logs to save, and how to disable specific features fast without shutting everything down.
Independent security assurances and transparency
Trust comes from evidence. Ask for SOC 2 Type II (covering a full year) or ISO 27001 with the current scope. You should also see a pen‑test summary, vulnerability scan cadence, and remediation timelines.
Useful artifacts include:
- A security whitepaper with architecture and data‑flow diagrams, including any third‑party LLMs, vector databases, or telemetry tools.
- A live subprocessor list with change notifications and a right to object.
- Details on crypto key management, including BYOK/CMK and HSM usage.
Ask for a shared responsibility matrix tailored to AI—what the vendor secures vs. what the firm handles. It helps during audits and claims. Also request model/tool version disclosures and evaluation practices so a sudden upstream model swap doesn’t surprise your workflows.
Lock in breach notice terms with clear timelines and cooperation steps that match your insurer’s expectations.
Reliability guardrails and quality assurance
Security locks don’t help if the output is unreliable. Prefer answers that link to their sources and offer citation checks, so attorneys can verify fast.
Retrieval‑augmented generation with authoritative sources reduces hallucinations. Red‑team against your common tasks—privilege calls, depo summaries—so you catch failure modes early.
For high‑risk matters, pair confidence scores with review rules. Lower‑confidence answers get a second reviewer by default. If uploads are allowed, combine them with DLP and automatic redaction to keep PHI or client identifiers from sneaking into prompts.
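One way to wire the review rule and a redaction pass together looks like the sketch below; the confidence threshold and the two patterns are illustrative assumptions, and production DLP would use tuned detectors rather than a pair of regexes:

```python
import re

# Illustrative patterns only; real DLP uses tuned detectors, not two regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\bAcct\s*#?\s*\d{6,}\b", re.IGNORECASE), "[ACCOUNT]"),
]

def redact(prompt):
    """Strip obvious identifiers before the prompt leaves the approved boundary."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

def needs_second_reviewer(confidence, matter_risk):
    """High-risk matters or low-confidence answers get a second set of eyes by default."""
    return matter_risk == "high" or confidence < 0.75   # threshold is an assumption

print(redact("Client SSN 123-45-6789, Acct # 0045789 overdue"))
print(needs_second_reviewer(confidence=0.62, matter_risk="medium"))  # True
```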
Two tips that pay off:
- Use a separate citation checker, independent from the generator. It catches subtle mismatches between quotes and holdings (see the sketch after this list).
- Keep a rolling “gold set” of firm‑approved examples. Track accuracy by task type and matter risk, not just a single score.
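Here is a minimal sketch of that standalone checker; the case name is a placeholder, and real systems would use fuzzy or semantic matching against full source text, while exact substring matching here just keeps the idea visible:

```python
def check_citations(answer_quotes, sources):
    """Verify each quoted passage appears in the source its citation points to."""
    results = {}
    for cite, quote in answer_quotes.items():
        source_text = sources.get(cite, "")
        results[cite] = quote.casefold() in source_text.casefold()
    return results

sources = {"Smith v. Jones, 123 F.3d 456": "...the court held that waiver requires intent..."}
quotes = {"Smith v. Jones, 123 F.3d 456": "waiver requires intent"}
print(check_citations(quotes, sources))  # {'Smith v. Jones, 123 F.3d 456': True}
```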
Perfection isn’t the goal. Predictable behavior with guardrails and an auditable review trail is.
Regulatory and contractual alignment
Your AI setup needs to fit your regulatory world and your clients’ OCGs. Start with a solid DPA/SCCs, a data map, and breach terms. If you touch PHI, you’ll want a HIPAA BAA. For cross‑border work, confirm data residency and transfer mechanisms.
Many OCGs now require vendors to promise no model training on client data and to meet deletion timelines. Put that in the contract. The EU AI Act is coming into focus; look for documented risk assessments, transparency duties, and a plan to meet deadlines as parts of the law roll out.
Examples: public sector or defense might require U.S. residency and export controls; financial services might insist on BYOK and quarterly access reviews. The vendor should show exactly how admin settings meet those asks.
Good pattern: add annexes mapping each OCG line item to a control (e.g., “no data training” → “opt‑in disabled; clause X”). Include a right to audit security controls and change‑notice clauses for subprocessors or model providers.
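One way to keep that annex auditable is to store the mapping as structured data instead of prose, so quarterly reviews can be scripted; every OCG line item, control description, and clause reference below is a placeholder:

```python
# Hypothetical annex; clause numbers, setting names, and OCG wording are placeholders.
ocg_annex = {
    "No model training on client data": {
        "control": "training opt-in disabled tenant-wide",
        "contract": "MSA clause X",
    },
    "Deletion within 30 days": {
        "control": "retention policy: primaries 7d, backups 30d",
        "contract": "DPA schedule 2",
    },
    "U.S. data residency": {
        "control": "tenant region pinned to U.S.",
        "contract": "DPA annex A",
    },
}

def unmet(client_requirements):
    """Flag any OCG line item that has no mapped control yet."""
    return [r for r in client_requirements if r not in ocg_annex]

print(unmet(["No model training on client data", "Quarterly access reviews"]))
# ['Quarterly access reviews'] -> needs a control and a clause before sign-off
```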
Due diligence checklist and RFP questions for firm evaluators
Keep asks short and verifiable. Use an ISO 27001‑style questionnaire as a base and adapt it for AI.
Security and assurance
- Provide SOC 2 Type II or ISO 27001 (with scope), latest pen‑test summary, and remediation cadence.
- Explain encryption, key management, and BYOK/customer‑managed keys.
- State default retention for prompts/files/outputs; deletion SLAs for primaries and backups; data residency options.
Governance and controls
- Describe SSO/MFA, matter‑level RBAC, ethical walls, and device/copy/download restrictions.
- Show audit log schemas, retention, search/export; legal hold coverage (prompts, outputs, vector indexes).
- Explain model/task allowlists, content filters, and policy guardrails.
Reliability and oversight
- Share evaluation methods, gold‑set benchmarks, and error categories.
- Demonstrate source‑linked answers and automated cite checks; red‑team results for legal workflows.
Contracts and operations
- Send DPA/SCCs, breach notice timelines, subprocessor list, and change process.
- Clarify support SLAs, uptime/SLOs, maintenance windows, and roadmap governance.
Two advanced asks worth adding: a “model bill of materials” (all foundation/embedding models and versions) and a shared responsibility matrix. They make audits faster and reduce surprises when upstream providers change things.
Implementation roadmap: piloting, change management, and training
Treat this like rolling out any regulated system. Start with low‑risk, high‑volume tasks—think transcript summaries using public filings—and scrub inputs. Limit access to a small cohort, require matter tags, and review audit logs weekly.
Match data residency to your client base and restrict tasks via allowlists to what you’ve already validated. Use matter‑level RBAC and ethical walls to check privilege boundaries during the pilot.
Example pilot (6–8 weeks):
- Weeks 1–2: Lock down security settings; build a gold set for evaluation; train the pilot group on policies and prompts.
- Weeks 3–5: Run scoped tasks with review gates; track time saved and errors; tune guardrails and DLP rules.
- Weeks 6–8: Expand to a second practice; test legal hold behavior; finalize the admin runbook and training docs.
Create a simple scorecard for accuracy, time saved, review effort, and any risk events by workflow. It keeps the conversation honest when you expand. Fold AI enablement into KM/PD—short monthly clinics with real documents beat one‑and‑done trainings.
Common red flags and how to mitigate them
Some patterns are warning lights. Don’t ignore them.
Red flags
- Default retention or fuzzy “quality improvement” language that really means training on your content.
- No deletion SLAs for backups; no legal holds for AI artifacts.
- Missing SSO/MFA, coarse RBAC, or no device/copy/download limits.
- Opaque subprocessors or silent model swaps; no pen‑test proof.
- Outputs without citations or verification; lots of disclaimers instead of controls.
Mitigations
- Contract for zero retention by default and explicit opt‑in for any model training; add audit and termination rights.
- Document retention across primaries, logs, caches, and vector stores; verify with a real deletion test (sketched after this list).
- Make SSO/MFA, device rules, and matter tagging a go‑live requirement.
- Require change notices for subprocessors and model versions; ask for a quarterly security report.
- Tie fee holdbacks or pilot exit criteria to meeting deletion SLAs and backup timelines.
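Here is what that deletion test might look like as a script; `vendor_api` stands in for whatever admin API or export mechanism the vendor actually provides, so every call on it is an assumption to confirm first:

```python
import time

class FakeVendorAPI:
    """In-memory stand-in so the sketch runs; swap in the vendor's real admin API."""
    def __init__(self):
        self.stores = {s: {} for s in ("primary", "cache", "vector_index", "backups")}
    def upload(self, doc, matter_code):
        for store in self.stores.values():
            store["doc-1"] = doc
        return "doc-1"
    def delete(self, doc_id):
        for store in self.stores.values():
            store.pop(doc_id, None)
    def exists(self, doc_id, store):
        return doc_id in self.stores[store]

def deletion_test(vendor_api, sample_doc, sla_days):
    """Upload a marked canary document, delete it, then confirm no store still has it."""
    doc_id = vendor_api.upload(sample_doc, matter_code="TEST-DELETION")
    vendor_api.delete(doc_id)
    time.sleep(1)  # in practice: wait out the contractual SLA window, not one second
    for store in ("primary", "cache", "vector_index", "backups"):
        if vendor_api.exists(doc_id, store=store):
            print(f"FAIL: {doc_id} still present in {store} past the {sla_days}-day SLA")
            return False
    print("PASS: deletion reached all stores")
    return True

print(deletion_test(FakeVendorAPI(), b"canary document", sla_days=30))  # True
```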
One more safety net: a kill switch that can disable risky capabilities (like document uploads) instantly if a control regresses—no need to shut down the entire system.
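In its simplest form, that kill switch is a feature‑flag check in front of each risky capability; the flag names here are hypothetical:

```python
# Hypothetical flags; in production these live in a config service so an admin
# can flip one without a deploy and without taking the whole system down.
feature_flags = {
    "document_uploads": True,
    "external_model_calls": True,
    "chat": True,
}

def kill(capability):
    """Disable one capability instantly; everything else keeps running."""
    feature_flags[capability] = False

def handle_upload(doc):
    if not feature_flags["document_uploads"]:
        return "Uploads are temporarily disabled by your administrator."
    return "Upload accepted"

kill("document_uploads")       # a control regressed; pull one lever
print(handle_upload(b"..."))   # uploads refuse, while chat and research stay up
```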
How LegalSoul approaches confidentiality, retention, and admin control
LegalSoul is built for firm‑grade needs from day one. By default, it keeps zero data—no prompts, files, or outputs saved after processing—and it doesn’t train on your data unless you explicitly opt in through the contract.
Data stays encrypted in transit and at rest. You can bring your own keys (BYOK/CMK) and run in a private tenant if you want stronger isolation. Admins get SSO/MFA, matter‑level RBAC, ethical walls, retention controls, and exportable audit logs.
Legal holds cover prompts, attachments, outputs, and vector entries for specific matters while normal deletion continues elsewhere. DLP and auto‑redaction help keep PHI or client identifiers from leaving approved boundaries. Model/task allowlists align capabilities with each practice’s comfort level.
For reliability, answers include source links and automated citation checks, plus tools to evaluate workflows before you scale. On assurances, LegalSoul aligns to SOC 2 Type II and ISO 27001 practices, lists subprocessors publicly, supports U.S./EU residency, and offers robust DPAs/SCCs and HIPAA BAA when needed.
Our north star is simple: AI workspaces should meet or beat the controls you trust in your DMS. That’s how you move faster and still pass OCG reviews with less friction.
Bottom line: decision framework for firm leaders
Here’s a fast way to reach a defensible yes or no.
First, set non‑negotiables tied to confidentiality and privilege: zero retention by default, no training on your data without explicit opt‑in, matter‑level RBAC and ethical walls, and source‑linked answers. Next, verify proof—SOC 2 Type II/ISO 27001, pen‑test summaries, data‑flow diagrams, and a live subprocessor list.
Then test it in your environment: SSO/MFA on, guardrails active, audit logs and legal holds working, data residency set. Pilot with scoped workflows and scrubbed inputs. Track accuracy, time saved, and any policy breaks and require fixes before expanding.
Make contracts match operations: DPA/SCCs, HIPAA BAA if needed, deletion SLAs, change notices for models and subprocessors, and clear breach timelines. Map controls to each client’s OCG so approvals go faster.
Keep stewardship going—quarterly reviews of security and governance, evaluation of new models and tasks, and refreshers for users. If a platform meets your bar, proves it on paper, and behaves in a pilot, you’re set. If not, move on. In 2025, you don’t have to compromise on confidentiality or control.
Conclusion
Safety today is something you can verify. Look for zero retention by default, no training on your data, SSO/MFA, matter‑level RBAC with ethical walls, DLP/redaction, legal holds, audit logs, and source‑linked answers. Back it with SOC 2 Type II/ISO 27001, clear deletion SLAs, data residency, BYOK, and transparent subprocessors—then test it with a focused pilot that maps to your OCGs.
Want to see it in action? Request LegalSoul’s security pack and run a 6–8 week pilot on your highest‑value workflows. Book a demo and check how LegalSoul handles confidentiality, governance, and day‑to‑day reliability—without the hand‑waving.