AI is now a matter of legal competence, not convenience

The legal profession is right to be paying close attention to how artificial intelligence is being used in practice. But the real issue is not whether young lawyers are using AI. It is whether the AI being used is fit for legal work at all.

AI is now embedded in the day-to-day reality of legal practice – research, drafting, summarisation, and analysis. As a result, the definition of legal competence is evolving. Today, competence includes not only legal knowledge and judgment, but also the ability to use technology to improve efficiency, reduce risk, develop talent, and deliver strategic value, all in a way that is accurate, accountable and defensible.

Courts and regulators do not prohibit the use of AI. What they require is that lawyers stand behind their work. That means advice must be based on verifiable sources, client confidentiality must be preserved, and professional judgment must remain firmly with the lawyer. Whether a draft submission is prepared by a junior associate or generated by an AI tool, responsibility for the final output still rests with the lawyer.

This is where a critical distinction must be made. Not all AI is equal. And the difference between AI built for general use and AI designed specifically for legal practice is widening.

Many widely available, consumer-grade AI tools were built for general productivity. They are not designed to preserve privilege, do not reliably cite authoritative sources, and may retain or repurpose data in ways that are incompatible with professional obligations. Used without safeguards, these tools can expose firms to real risk – from inaccurate legal analysis to confidentiality breaches.

By contrast, fiduciary-grade AI designed specifically for the legal profession operates within clear constraints. At Thomson Reuters, our 150-year heritage serving the legal profession in Australia has taught us that professional tools require professional standards. The AI we’ve developed reflects this understanding. It is trained on authoritative legal content, produces transparent and verifiable outputs, respects jurisdictional boundaries, and is built with security, auditability and accountability by design.

These are not optional features. They are essential if AI is to be used responsibly in legal practice.

This distinction matters because the legal profession’s trusted role in society depends on maintaining the highest standards of accuracy and accountability. When AI assists legal work, it must meet the same standards that govern human legal professionals.

Furthermore, the challenge facing firms is not generational. Lawyers at every level, not just juniors, are using the tools available to them, often in the absence of clear guidance. The responsibility sits with leadership. Firms must decide which tools are appropriate, how they are governed, and how lawyers are trained to use them competently.

  • The Tech, AI and the Law report revealed that 31% of legal professionals admitted to using ‘unofficial’ AI tools to support their work. In the study, ‘unofficial’ was defined as any tool not approved by the organisation for professional use.
  • The Future of Professionals 2025 report found that only 14% of Australian organisations have a visible AI strategy. Those that do are three times more likely to realise measurable benefits from AI and twice as likely to experience revenue growth.
  • The Future of Professionals 2025 also reported that 46% of organisations identify skills gaps in their teams, with 31% citing gaps in technology and data skills.

Training should not focus on efficiency alone. Effective training programmes should help lawyers understand not just how to use AI tools, but how to evaluate the outputs critically, verify their sources, maintain professional judgment throughout the process, and document their use of the technology. These are longstanding professional obligations applied to new tools.

Used properly, AI can strengthen legal practice – improving consistency, supporting better research, and freeing lawyers to focus on higher-value judgment and client service. Used poorly, it creates avoidable risk.

The future of legal practice will be shaped by AI. But it is the profession’s responsibility to determine which AI shapes it. And the answer is fiduciary-grade AI that upholds the profession’s fundamental obligations of accuracy, confidentiality, and accountability.

The choice is not whether to embrace AI, but which AI to trust when the stakes are highest.
