
JAMS ADR Insights


AI’s Double-Edged Role in Dispute Resolution

Insights From a Simulated Mediation

Recently, practitioners, scholars and enthusiasts of alternative dispute resolution gathered—virtually and in person—at a JAMS Resolution Center to examine one of the most pressing and intriguing questions in the field: What happens when the very technology causing or complicating a dispute is also asked to help solve it? At the event, titled “AI’s Double-Edged Role in Dispute Resolution: When the Machine Tries to Solve the Dispute It Created,” participants witnessed a meticulously staged simulation of a challenging international mediation. Through this scenario, attendees explored how artificial intelligence (AI) could shape the future of conflict resolution and how human mediators might continue to excel despite, or perhaps because of, the new tools at their disposal.

Setting the Stage: A Complex, Cross-Border Dispute

The central case study for this event featured a dispute between two parties: AI Horizon (Horizon) and Quantum Cognition (Quantum). Horizon, a Luxembourg company owned by a Beijing-based conglomerate, had commissioned Quantum, a Silicon Valley startup, to develop a large language model (LLM) intended to rival global conversational AI systems. The final product was supposed to handle multiple languages, comply with diverse legal regimes (notably the EU AI Act and various Chinese regulations) and ultimately enhance Horizon’s brand worldwide.

Yet trouble emerged quickly. Horizon withheld an $8 million payment, claiming that the LLM produced “hallucinations”—factual misstatements or offensive content that could be legally and reputationally hazardous, especially in the Chinese market. Horizon insisted the malfunctions endangered its compliance with strict EU and Chinese regulations. Quantum, on the other hand, argued it delivered exactly what Horizon requested on a constrained budget, using only open-source data. It saw the hallucinations as an inherent artifact of current AI technologies, not a breach of contract. Instead of paying, Horizon demanded $95 million in damages—an astronomical figure that Quantum dismissed as outrageous.

The event’s simulation allowed attendees to follow the dispute from a joint mediation session through private caucuses and back again, showcasing how a human mediator might facilitate a resolution. Attendees were also invited to contrast how an AI-driven mediator—or AI-assisted mediator—might handle such a complex scenario.


Human Mediation in Action: Uncovering Interests and Managing Emotions

Attendees watched as a human mediator navigated the emotional and legal thicket. In the initial joint session, tensions ran high: Horizon’s representatives insisted on compensation for reputational damage and compliance risks, while Quantum’s counsel threatened to leave, calling the demands “nuts.” The human mediator’s role here was to cool tempers, invite a short recess and encourage more constructive dialogue. With gentle but firm interventions, the mediator guided the parties toward separate caucuses.

In private sessions, the mediator dug deeper to understand each side’s underlying interests. Horizon’s team privately admitted they were desperate for a fix—less concerned with the large damages figure than with preventing reputational ruin in China. Quantum’s representatives, behind closed doors, expressed frustration at not being paid and worried about setting a precedent for endless liability. They acknowledged that the hallucinations could be mitigated with better data and more resources, but not eliminated entirely.

For attendees, this underscored a classic mediation lesson: Beneath the bravado and extreme demands lie real interests. Here, Horizon wanted reassurance, brand protection and regulatory compliance; Quantum wanted fair compensation, limited liability and long-term viability.

Introducing AI Into the Equation: Risks and Opportunities

The event did not merely showcase traditional mediation. It framed the dispute as one in which AI was both the source of the problem and a potential tool for its resolution. Attendees learned that, while current AI language models can summarize positions, predict outcomes and propose bargaining ranges, they lack the nuanced empathy and cultural awareness that can defuse emotional standoffs.

When tensions rose, would an AI mediator have recognized the cultural sensitivities at play—Horizon’s Chinese parent company’s emphasis on “face” and reputation—or the softening stance of Quantum’s CEO, who felt “bad” for Horizon’s predicament? Probably not, at least not without sophisticated cultural training and emotional intelligence that current generative models struggle to replicate. By comparing the human mediator’s approach—acknowledging cultural concerns, suggesting creative solutions such as an apology and an insurance policy—to what a purely AI-driven tool might do, participants concluded that human insight remains invaluable.

However, the event did highlight how AI can assist mediators. AI-driven tools could have quickly analyzed the contract language, outlined relevant EU and Chinese regulations, and even reviewed prior precedent on AI-related disputes. They could have provided data-driven risk assessments and suggested best practices for contract amendments. The key is that these capabilities supplement, rather than supplant, the human mediator’s role.

Creative Solutions: From Insurance to Apologies

A turning point in the simulated mediation was the mediator’s suggestion to use specialized insurance to cover reputational harm caused by AI hallucinations. This represented the hallmark of effective dispute resolution: expanding the pie and finding options beyond a simple monetary exchange. By insuring against the risks of offensive hallucinations, Quantum could offer Horizon a level of reassurance without promising the impossible—zero hallucinations.

Similarly, the mediator proposed a carefully crafted apology that would resonate with Horizon’s Chinese stakeholders. Quantum’s representatives, initially hesitant, agreed to a statement that acknowledged the difficulties and consequences without admitting legal liability. This culturally sensitive gesture, coupled with insurance to mitigate future problems, allowed both sides to move beyond their entrenched positions.

This creative problem-solving demonstrated the value of a human mediator. The human element—empathy, creativity, cultural literacy—remains irreplaceable.

Balancing Rationality and Emotion

The event demonstrated the delicate interplay between rational negotiation and emotional satisfaction. Horizon’s large damages claim initially seemed irrational, but in private, it became clear that Horizon needed to signal seriousness and protect its image with its parent company. By the end of the mediation, Horizon was willing to reduce that figure drastically if it could secure data licensing, additional training, hosting agreements and proper safeguards against reputational fallout.

Quantum learned that simply insisting “we met the contract” would not solve the problem, as it ignored Horizon’s deeper interests. The insurer’s involvement and a thoughtful apology helped bridge this emotional and cultural gap. While on purely rational grounds Quantum might have resisted further concessions, addressing Horizon’s emotional needs allowed both sides to reach a workable solution.

Looking Forward: AI and the Future of Dispute Resolution

The event concluded with reflections on the role AI might play going forward. Could AI-driven mediators one day handle such disputes independently? Perhaps, but current AI tools struggle with nonverbal cues, deep cultural contexts and the emotional intelligence required for high-stakes conflict resolution. Instead, a more realistic near-term scenario is a hybrid model: a human mediator supported by AI analytics. Technology could handle data-intensive tasks while the human mediator maintains rapport, injects creativity and displays sensitivity.

Attendees left with a clearer appreciation of both the potential and limits of AI in dispute resolution. Although AI can cause or complicate disputes, it can also provide tools to manage them more efficiently. Still, it is the human mediator’s judgment, empathy and creative problem-solving that ultimately bring parties to a durable, culturally appropriate and mutually satisfactory resolution.

If AI Gains the Ability to Read Emotions: A Future Outlook

In recent developments, AI systems have gained the ability to understand visual data: they can be given access to a computer screen or video camera and interpret what they “see” with considerable accuracy. As of this writing, they do not yet have the ability (or are not yet permitted) to interpret emotional states. However, if AI systems became adept at recognizing nonverbal cues, cultural signals and subtle emotional states, that capability could actually prove beneficial for human mediators.

In such a scenario, human mediators could leverage these advanced tools as an emotional radar of sorts, detecting shifts in tone or stress before disputes escalate. This partnership would enhance mediators’ ability to respond more effectively. However, mediators would likely focus even more on complex, deeply rooted conflicts that require moral judgment, cultural nuance and the ability to address existential questions that machines cannot yet replicate authentically.

AI offers numerous advantages, but it will ultimately be human behavior that dictates its adoption. Just because AI is available doesn’t mean it will always be used. In complex mediations involving significant financial stakes, AI can provide best practice suggestions and informed recommendations. However, when the stakes are highest, people often prefer to speak directly with the top decision-maker or advisor.

While AI might handle more routine cases or assist in reading parties’ emotions, it remains likely that human mediators would distinguish themselves by providing genuine empathy, trust-building and creative problem-solving. Far from rendering human mediators obsolete, emotionally perceptive AI could push them into more specialized, higher-level roles. Human mediators would emphasize what makes them uniquely human—empathy, cultural intelligence, ethical reasoning and the human-to-human, peer-to-peer connection—preserving their indispensable role even in an age of emotionally savvy AI.


Disclaimer:
This page is for general information purposes. JAMS makes no representations or warranties regarding its accuracy or completeness. Interested persons should conduct their own research regarding information on this website before deciding to use JAMS, including investigation and research of JAMS neutrals.
