Wouldn’t it be wonderful if we could outsource to technology the complex, laborious, and often emotionally intense process of negotiating? Until recently, the idea of merging negotiation and AI was only a dream. However, the launch of more sophisticated AI systems has changed how we approach the process and how negotiations could evolve.
Negotiation has traditionally been seen by many as an art—an intensely human task that requires mixing collaborative and competitive moves to overcome complexity, information asymmetry, and suspicion to arrive at an acceptable outcome. Only recently has it evolved into a science focused on codifying a systematic way of problem-solving to achieve success.
The current interplay between AI and negotiation marks a paradigm shift in the latter. Today, agents powered by large language models (LLMs) emulate human behavior based on social science techniques, while other AI tools draw on economics and game theory methods. As the technology advances, it’s helpful to anticipate how it could shape negotiation strategies and to understand the possible risks involved.
Assistance vs. automation
In an article published in the Journal of Strategic Contracting and Negotiation, I explore the evolving interplay between negotiation and various AI technologies. Broadly speaking, there are three main ways in which AI can be used to provide more tailored support for negotiations: assistance, semi-automation (the AI agent closes the deal only after final consultation with a human negotiator), and automation (the AI agent closes the deal without final consultation with a human negotiator).
Generic LLMs like ChatGPT, BERT, and LaMDA can be used during the preparation stage or act as negotiation assistants. They can gather relevant market information, provide advice on best practices, act as a sparring partner in role-playing scenarios, and so on.
These tools have significantly lowered the cost of preparation: gathering relevant information, learning and applying essential advice, and assembling a robust strategy. Employed as a negotiation preparation assistant, a generic LLM can raise the average quality of the output at a fraction of the time and cost. At present, however, the advice can be generic, the information can be wrong, and the role-plays glitchy; future LLMs are expected to address these weaknesses.
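To make the sparring-partner idea concrete, here is a minimal sketch of how a negotiator might run such a role-play against a general-purpose chat model. It assumes an OpenAI-style chat-completions API; the model name, system prompt, and scenario details are illustrative placeholders rather than anything recommended in the article.

```python
# Minimal sketch: using a chat LLM as a negotiation sparring partner.
# Assumes the OpenAI Python SDK and an API key in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are role-playing a tough procurement manager negotiating a "
    "12-month software licence. Push back on price, ask probing "
    "questions, and never reveal your walk-away point."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def sparring_round(user_message: str) -> str:
    """Send the negotiator's latest move and return the counterpart's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # any capable chat model would do
        messages=history,
        temperature=0.7,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(sparring_round("We can offer the licence at $90k per year."))
```

In practice, a negotiator would iterate on the scenario prompt, vary the counterpart's personality, and ask the model for feedback on their own moves once the role-play ends.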
Many negotiators lack the necessary expertise to leverage AI beyond the assistance level. That said, the technology can be used in a semi-automated or automated capacity to drastically reduce decision costs and help negotiators evaluate the variables and rules to reach an ideal outcome.
Corporations that adopt AI-supported negotiation systems gain a competitive advantage and early learning curve. But because this requires investing time and money to change existing processes and systems, it could take some time before we see a marked shift toward semi-automated or automated negotiation agents.
The possibilities of AI
So, what does AI bring to the negotiation team? It has the potential to compensate for human shortcomings, provide machine-efficiency advantages, and reinvent how we negotiate.
Human negotiators are limited by emotions, cognitive biases, and ignorance of best practices, all of which can hinder our ability to craft and agree to optimal solutions. Although AI systems trained on historical data can also develop biases, these can be reduced or eradicated more easily than in humans. Indeed, LLMs have an easier time remaining rational and sticking to best practices because they don’t experience emotions (although they can mimic them).
Negotiations today are seldom recorded, leaving us ignorant of what took place—including potential unethical or illegal practices. Using AI could increase the traceability and transparency of the process and allow for audits and learning loops. This can help organizations improve negotiation skills, outcomes, fairness, and accountability.
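As a simple illustration of the traceability point, a negotiation platform could record every offer and response as a structured, timestamped event so the process can be audited and mined for learning loops afterwards. The schema below is a hypothetical sketch, not a description of any particular system.

```python
# Hypothetical sketch of an append-only audit log for negotiation events.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class NegotiationEvent:
    deal_id: str
    actor: str       # e.g. "buyer", "seller", or "ai_agent"
    action: str      # e.g. "offer", "counteroffer", "accept", "walk_away"
    payload: dict    # issue-by-issue terms of the move
    timestamp: float

def log_event(event: NegotiationEvent, path: str = "negotiation_audit.jsonl") -> None:
    """Append one event as a JSON line; the file becomes the audit trail."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(NegotiationEvent(
    deal_id="D-1042",
    actor="ai_agent",
    action="counteroffer",
    payload={"price": 87_000, "term_months": 12, "support_tier": "gold"},
    timestamp=time.time(),
))
```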
At the moment, those employing AI in negotiations use it to help them negotiate better, but the processes remain essentially the same. A more exciting opportunity is using AI to rethink or redesign how we negotiate. For instance, AI can handle so much data at once that each side could share its interests and preferences with an AI “black box” or mediator, with neither party learning the limits or secrets of the other. The AI can then use this volume of information to produce optimal solutions that humans are unlikely to craft on their own through standard negotiation practices.
The complexity of negotiating too many issues at once can be cognitively overwhelming for humans, which reduces or caps value creation. But with AI’s vast computational ability, negotiations could juggle an enormous number of issues simultaneously to identify trade-offs and find optimal solutions quickly with fewer communication or relationship risks.
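One way to picture such an AI mediator is as an optimizer over complete deal packages: each side privately submits weights for the issues and scores for the options, and the mediator searches for the package that maximizes joint value (here, the product of the two utilities, in the spirit of Nash bargaining) without revealing either side’s preferences to the other. The issues, scores, and weights below are invented purely for illustration.

```python
# Toy sketch of a "black box" AI mediator: both sides submit private
# preferences, and the mediator searches all packages for the one that
# maximizes joint value (the product of the two utilities).
# Issues, options, and weights are invented for illustration.
from itertools import product

ISSUES = {
    "price":    ["high", "medium", "low"],
    "delivery": ["30 days", "60 days", "90 days"],
    "warranty": ["1 year", "2 years", "3 years"],
}

# Each party privately scores every option (0-100) and weights each issue.
buyer = {
    "weights": {"price": 0.5, "delivery": 0.3, "warranty": 0.2},
    "scores": {"price": {"high": 0, "medium": 50, "low": 100},
               "delivery": {"30 days": 100, "60 days": 60, "90 days": 20},
               "warranty": {"1 year": 20, "2 years": 60, "3 years": 100}},
}
seller = {
    "weights": {"price": 0.6, "delivery": 0.1, "warranty": 0.3},
    "scores": {"price": {"high": 100, "medium": 60, "low": 10},
               "delivery": {"30 days": 30, "60 days": 70, "90 days": 100},
               "warranty": {"1 year": 100, "2 years": 60, "3 years": 20}},
}

def utility(party: dict, package: dict) -> float:
    """Weighted sum of a party's private scores for one complete package."""
    return sum(party["weights"][issue] * party["scores"][issue][option]
               for issue, option in package.items())

def mediate(a: dict, b: dict) -> tuple[dict, float, float]:
    """Pick the package that maximizes the product of both parties' utilities."""
    best, best_value = None, -1.0
    for combo in product(*ISSUES.values()):
        package = dict(zip(ISSUES.keys(), combo))
        joint = utility(a, package) * utility(b, package)
        if joint > best_value:
            best, best_value = package, joint
    return best, utility(a, best), utility(b, best)

deal, u_buyer, u_seller = mediate(buyer, seller)
print(deal, u_buyer, u_seller)
```

With only three issues the search can be exhaustive; with dozens of issues the same idea requires smarter search, which is precisely where AI’s computational capacity pays off.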
What’s more, AI may have an easier time sticking with best practices, such as tit-for-tat moves. It can start positively (because it doesn’t feel fear); reciprocate negative moves (not to punish or escalate, but to teach the counterparty); and return to collaboration in response to a positive move (because it doesn’t feel the need for revenge or retribution). AI can also resist bias exploitation, power moves, or manipulation, which could make it a great negotiator against win-lose tactics.
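The tit-for-tat discipline described above is simple enough to state as code. The sketch below assumes each round of the exchange has been labelled “cooperate” or “defect”; the labels and the policy are a textbook rendering of the classic strategy, not a prescription from the article.

```python
# Minimal sketch of a tit-for-tat policy for an automated negotiation agent:
# open cooperatively, mirror the counterparty's previous move, and return to
# cooperation as soon as they do. Move labels are illustrative.
def tit_for_tat(counterparty_history: list[str]) -> str:
    """Return 'cooperate' or 'defect' given the counterparty's past moves."""
    if not counterparty_history:
        return "cooperate"            # start positively: no fear of exploitation
    return counterparty_history[-1]   # reciprocate their last move, then forgive

# Example exchange: the agent punishes one defection, then resumes cooperating.
moves = ["cooperate", "defect", "cooperate"]
for round_number in range(len(moves)):
    print(round_number, tit_for_tat(moves[:round_number]))
```

The appeal for an automated agent is exactly what the paragraph notes: the policy opens positively, punishes only in proportion, and forgives immediately, with no fear or desire for retribution to knock it off course.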
Challenges and pitfalls
Despite these opportunities, there are kinks in the emerging AI and negotiation partnership. For starters, AI-automated negotiations are currently limited to small-value, few-issue, repetitive, and long-tail negotiations. This is to contain the losses and risks from AI glitches and the inability to automate some essential parts of more complex processes, such as trust-building. At the moment, LLMs are still confined to an assistance or training role.
Additionally, as automated negotiations become commonplace, some companies or individuals might be motivated to discover, hack, and exploit virtual agents’ rules, decision trees, patterns, or weaknesses. Semi-automated processes, or those that put the final decision in the hands of human negotiators, may prevent such exploitation, though at the cost of efficiency.
Another hurdle is automated agents intentionally designed to negotiate with win-lose strategies or to exploit collaborative agents and humans. Currently, most designers of automated or semi-automated agents claim to promote value creation and optimization to increase gains for all parties. Unfortunately, such environments could invite exploitation. Companies that claim, accurately or not, that their agents are superior become a tempting proposition for powerful clients, who can then impose their choice of agent on smaller counterparts.
Even if only by accident, AI-powered negotiation agents are likely to develop biases and produce unfair deals or unethical interactions, especially when trained to be purely utilitarian. It is therefore necessary to instill ethical, legal, and optimization principles in upcoming AI algorithms to avoid the negative consequences of such biases.
AI-powered agents can also hallucinate or be overly sensitive. For instance, an agent might stop the conversation at the slightest (mis)perception of an ethical violation, or end negotiations after receiving a threat, an insult, or even just a persistent request it had denied once before. Ending negotiations at the slightest infraction or disagreement might be necessary for compliance purposes and could raise the ethical bar for future negotiations. In the short term, however, it could significantly reduce the number of closed deals, a drop some organizations can’t afford.
Expanding AI’s role in negotiations also raises legal concerns around data privacy, confidentiality, and compliance. For example, informally disclosing confidential details is a common way to build trust or untangle impasses in a negotiation. In a semi-automated or automated negotiation, however, that confidential information may be divulged, leveraged, or exploited at another time without consent.
Another legal concern revolves around liability for AI-issued decisions or AI misbehavior. The technology can make mistakes that result in unacceptable or illegal behavior, or in extremely unprofitable outcomes. In such cases, can an individual sue a company for being discriminated against? If a company’s AI closes an unprofitable deal, can the company blame the AI’s mistake to excuse itself from performing its obligations? Who is responsible for such errors?
In short, AI negotiation agents still have several shortcomings and face significant challenges. But none seem insurmountable. The reliability of technology-based solutions tends to increase with time as problems are continuously identified and addressed to improve the system. Eventually, the balance will likely tilt toward the success of automated and semi-automated processes, even if they might not fully substitute for human-to-human negotiations.
The road ahead
AI has already begun to reshape the negotiation landscape. Although the technology could overcome human limitations and enhance negotiation outcomes, it also introduces challenges related to biases, strategy, trust, ethics, reputation, and the adaptation of human-centered negotiation practices.
As the technology continues to evolve, researchers, practitioners, and developers must navigate these challenges carefully. Integrating AI into negotiation processes requires a balanced approach to harness its advantages while mitigating risks, all while ensuring that the technology is beneficial, ethical, and effective. If this is achieved, this exciting collaboration will continue to blossom.
Published Dec. 10, 2024, by INSEAD.