Artificial intelligence is no longer an abstract, futuristic concept in arbitration proceedings. It has already entered the hearing room (albeit, in most cases, unannounced), comfortably seated itself at the tribunal table and proceeded to (with alarming speed) hand down the award. It is not leaving anytime soon. So how do we make sure it behaves?
AI passes the bar: What lawyers are actually doing with it in arbitration
AI tools are being used in arbitral practice to review documents, organise evidence, and assist with early-stage drafting and legal research. The main appeal of using AI is speed, accessibility, and cost. Where e-discovery platforms once required dedicated specialists and budgets fit for large-scale litigation, practitioners can now conduct early case assessments across thousands of documents using legal-grade AI tools at a fraction of the cost and time.
However, that accessibility brings its own problems. Arbitral institutions are catching up, but slowly. The Association of Arbitrators (Southern Africa) and the Chartered Institute of Arbitrators (Ciarb) have issued guidelines on the use of AI in arbitration proceedings, but no clear and definitive rules have been issued governing the use of AI in South Africa.
Internationally, where rules governing AI usage in arbitrations exist, they have not always kept up with the pace at which AI technology is developing. This is problematic because where AI usage policies contain gaps, people are left to make their own choices and often turn to tools that were never built for legal work. This gives rise to “shadow AI”, where individuals use prohibited consumer tools on personal devices for legal research, drafting, and document management.
General-purpose AI tools may draft fluently, but they are not designed to understand legal privilege, confidentiality, or the standards of accuracy on which legal outcomes depend, which may end up compromising the integrity of the entire arbitral process.
AI enters the hearing room: The risks
Data security is the most immediate concern. Using a general-purpose AI tool available freely to the public, without proper data processing agreements in place, risks exposing confidential client information. That information could end up feeding the AI's training data and surfacing in someone else's query later. Legal-grade AI tools must meet recognised security standards. This is a professional obligation, not a preference.
Algorithmic bias is subtler but equally serious. AI systems are typically trained on historical data, and that data reflects the assumptions and inequalities of the world from which it is drawn. For example, an AI hiring tool trained mostly on past male applicants may start ranking men higher than equally qualified women. In arbitration, similar distortions could affect everything from how an arbitrator is selected to how facts are analysed, often without anyone realising it.
Hallucination is the generation of inaccurate or fabricated content. This is not a theoretical risk. It has already played out in courtrooms. The most obvious example is fabricated case citations, and South African courts have already seen practitioners file submissions referencing cases that do not exist. But the more insidious risk lies in the subtler errors: a real case cited with the wrong court, a finding slightly mischaracterised, or facts quietly adjusted to fit the argument being advanced. These errors are more difficult to catch because they look correct at first glance.
Skills atrophy may be the least discussed risk of all. If AI does all the thinking and the lawyer simply applies the output to their work, that is not legal advice – it is delegation without accountability. Saving two hours on research should prompt a lawyer to reinvest part of that time in independently interrogating the AI's reasoning within the context of the matter. Lawyers are appointed as trusted advisers accountable for their advice. "My AI said so" is not a defence.
AI issues the award: Now try enforcing it
Under the New York Convention, courts can refuse to enforce an arbitral award where the arbitral process was procedurally unfair. AI usage opens up new avenues for precisely these kinds of challenges. A party could challenge an award on the basis that the tribunal used flawed AI analysis, produced reasoning that cannot be adequately explained, or surrendered judgment that should have been exercised by a human being.
In a recent case in the US Federal Court, LaPaglia v Valve Corp., the claimant submitted a petition to vacate an arbitral award on the grounds, among others, that the arbitrator relied on AI and outsourced his adjudicative role. The award allegedly cited incorrect facts not established at trial or in the record, and the arbitrator had told the parties that he used ChatGPT to assist with writing articles. The Court ultimately held that it lacked jurisdiction, but the matter raises important questions about whether enforcement of an arbitral award can be refused where AI rendered the award.
The enforcement threshold for arbitral awards remains high. Even with party consent to AI use, courts can still refuse enforcement if, for example, the AI's reasoning is opaque or raises credible due process or public policy doubts.
Courts will likely also look to whether the tribunal exercised independent judgment. In this context, where AI is used as a tool (and not as a decision-maker) with proper oversight, it is unlikely to constitute a procedural defect. The emerging consensus is clear: AI can assist arbitrators, but it cannot make decisions for them. The distinction between assistance and delegation is not always obvious in practice, but it is the distinction on which the enforceability of an award may ultimately turn.
All rise: Will AI become the arbitrator?
Not any time soon. The concept of an AI arbitrator refers to the use of an automated system or algorithm to determine the outcome of a dispute, without a human decision-maker reviewing or approving that determination. Current soft law and institutional guidelines treat AI usage in dispute resolution as high risk and specifically require meaningful human oversight at every stage of the process. The American Arbitration Association's International Centre for Dispute Resolution (AAA-ICDR) recently launched its AI arbitrator for documents-only construction disputes, yet even this cutting-edge development underscores the need for a human arbitrator to review and refine the AI-generated award before handing down the final decision.
Even where both parties consent to an AI arbitrator, a court asked to enforce the resulting award could still refuse enforcement, particularly where the AI's reasoning cannot be examined, traced, or adequately explained.
In South Africa, the Association of Arbitrators' guidelines on the use of AI in arbitral proceedings provide that AI can assist with organising, drafting, translating, and summarising, subject always to proper disclosure and verification. But the authority to decide must remain with a human being. Legitimacy in arbitration is not derived from the quality of the output alone; it is derived from the accountability of the person who uses it.
This sentiment is echoed by the Ciarb guidelines, which are unequivocal in the view that arbitrators should not relinquish their decision-making powers to AI. The guidelines view AI's role as supportive, assisting with accurate and efficient processing of submitted information. Guideline 8.4 specifically places responsibility at the arbitrator's door, noting that: "An arbitrator shall assume responsibility for all aspects of an award, regardless of the use of AI to assist with the decision-making process."
Sorry AI: Some things still need the human-lawyer edge
AI will not replace the thinking lawyer. Strategic reasoning, contextual judgment, and accountability for advice remain irreplaceable. What AI can do, when used properly, is amplify legal thinking, identify contradictions in the opposing party's evidence, improve efficiencies, brainstorm counterarguments, refine drafts, and surface patterns in large document sets.
Lawyers who embrace human-AI collaboration will be better equipped, faster, and sharper than those who either outsource their judgment to a machine or refuse to engage with AI technology at all.
So yes – AI has walked into arbitration. It has taken a seat, it has read the submissions, and it has strong views on the outcome. But the decision, the reasoning, and the responsibility? Those remain stubbornly, irreplaceably human.
Written by Priyesh Daya, Partner; Brittany Leroni, Senior Associate; Caitlin Leahy, Candidate Attorney at Webber Wentzel