Facing the high cost of legal representation, a growing number of self-represented litigants (individuals without a lawyer) are turning to generative artificial intelligence (AI) tools like ChatGPT for legal advice, research, and drafting court submissions. Free, accessible legal assistance appears to solve a critical access-to-justice problem, but relying on unvetted AI carries profound risks. Unverified AI outputs frequently contain fictitious case law (“hallucinations”) and procedural errors that can lead to court documents being rejected, valid claims being lost, and, critically, the litigant being ordered to pay the opponent’s legal fees through a costs order. These risks underscore the urgent need for caution, and for genuinely affordable sources of legal help.
The Temptation of Free AI in the Justice System
The legal system is notoriously expensive, creating a severe access-to-justice crisis in which many people with valid claims cannot afford a lawyer. For these self-represented litigants, free or low-cost generative AI tools are an alluring alternative. Navigating cases ranging from property disputes to employment and migration matters, they are using AI to draft legal arguments, summarize facts, and search for precedents.
Judges themselves acknowledge the temptation. Yet this reliance shifts the burden onto the litigant to act as their own lawyer, with the added task of vetting a tool known for making errors. Without the training to critically evaluate the AI’s output, a litigant risks damaging their case beyond repair, turning a cost-saving measure into a costly mistake.
The Danger of Fictitious Case Law (Hallucinations)
The most significant and financially damaging risk of using AI for legal research is the phenomenon of “AI hallucinations.” Unlike professional legal databases, which index verifiable primary sources, consumer AI models frequently fabricate case citations, statutes, or even entire legal passages that do not exist.
In real-world cases involving both self-represented individuals and professional lawyers, courts have been presented with dozens of fictitious legal precedents. When a court discovers that a submission rests on sham authorities, the integrity of the entire case is compromised: documents are rejected, proceedings are delayed, and the litigant ultimately risks losing the case because its foundational legal arguments are unsound.
The Financial and Legal Consequences of Misuse
The consequences of relying on inaccurate AI-generated law extend beyond merely losing the case; they carry severe financial penalties. When a litigant files unverified or false information, the court can issue a costs order against them.
This means the self-represented individual could be ordered to pay the legal fees their opponent incurred in reviewing, responding to, and ultimately debunking the fabricated material, transforming an attempt to save money into a substantial debt. Furthermore, using AI without understanding its procedural limitations can lead to unintentional breaches of court rules, such as disclosing private or confidential information or violating suppression orders, breaches that carry their own serious sanctions.
The Ethics Gap: A Lawyer’s Duty vs. AI’s Disclaimer
A fundamental difference separates a human lawyer from an AI tool: professional ethics and accountability. Trained lawyers are officers of the court with a duty to verify all facts and legal precedents they present. If they misuse AI, they face serious sanctions, including financial penalties, professional admonishment, or suspension.
In contrast, most generative AI tools come with disclaimers warning users not to rely on their outputs for professional advice. The AI bears no responsibility, leaving the self-represented litigant entirely liable for the consequences of the machine’s errors. This places an unreasonable and often insurmountable burden on non-lawyers to perform due diligence that requires years of specialized legal training.
The Real Solution Lies in Affordable Legal Services
The widespread misuse of AI in court is a symptom of a larger systemic problem: the inaccessibility of affordable legal help. While AI holds promise for making legal processes more efficient for lawyers, it is not yet a safe replacement for the judgment and ethical diligence of a human advocate, especially for complex litigation.
The real solution to the access-to-justice gap requires systemic investment in making legal services affordable and accessible: funding legal aid organizations, simplifying court procedures, and, eventually, deploying AI safely within vetted, non-generative tools. Until then, individuals must understand that replacing a lawyer with a chatbot can carry a cost far greater than the fees saved.