Multi-Agent Consensus Seeking via
Large Language Models

Westlake University

One-sentence summary: This work demonstrates the potential of LLM-driven agents to achieve zero-shot autonomous planning for multi-robot collaboration tasks and analyzes how the number of agents, agent personality, and network topology affect the consensus-seeking process.

Figure 1: LLM-driven consensus seeking in a multi-robot aggregation task

Background: In recent months, multi-agent systems driven by large language models (LLMs) have received rapidly increasing attention. Several studies report that the problem-solving ability of LLMs can be significantly enhanced through collaboration among multiple agents. Systems such as MetaGPT, CAMEL, and ChatDev break complex tasks down into simpler sub-tasks that are then handled by different agents separately. Such collaboration strategies can, to some extent, reduce hallucinations and enhance the ability to solve complex tasks.

Topic addressed: Our work considers a fundamental problem in multi-agent systems: consensus seeking. When multiple LLMs are used to solve the same task, they may initially propose different solutions, but they can eventually reach the same solution through continued negotiation. This is essentially a consensus-seeking process. Consensus seeking is also widespread in collective decision-making systems such as animal groups and human societies, and it is a core research problem in the fields of multi-robot systems and federated learning.

Research gap: Consensus seeking via LLMs has not been specifically studied so far, and many important questions remain open. For instance, if we use multiple LLMs to assist us in negotiation or problem-solving, we need to know whether they can eventually reach a consensus amongst themselves. If they can, how long does it take, and what factors influence the final consensus outcome? If they cannot, what factors cause the failure? Answering these questions is essential for using LLMs properly: for example, it would be valuable to predict the negotiation outcome before deploying the LLMs, or to steer the negotiation toward a desired outcome by adjusting prompts.

Problem setup: In this work, we study a specific consensus-seeking task. In an LLM-driven multi-agent system, each agent starts from an initial state represented by a numerical value, and the objective is for all agents to continuously adjust their states until they reach the same final state. Throughout this process, each agent can perceive the states of the other agents and, based on this information, formulate a strategy to adjust its own state.
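
To make the setup concrete, below is a minimal sketch of one negotiation round in Python. It is not the authors' code: `query_llm` is a hypothetical stand-in for a real LLM API call, and here it simply mimics a damped averaging behavior so that the convergence across rounds is visible.

```python
def query_llm(prompt: str, own: float, others: list) -> float:
    # A real implementation would send `prompt` to an LLM and parse the
    # numeric reply; this stub just moves halfway toward the group average.
    avg = (own + sum(others)) / (1 + len(others))
    return own + 0.5 * (avg - own)

def consensus_round(states):
    """Each agent observes the others' states and proposes its next state."""
    proposals = []
    for i, own in enumerate(states):
        others = [s for j, s in enumerate(states) if j != i]
        prompt = (f"Your current state is {own}. The other agents' states "
                  f"are {others}. Adjust your state so that all agents "
                  "eventually reach the same value. Reply with one number.")
        proposals.append(query_llm(prompt, own, others))
    return proposals

states = [10.0, 40.0, 70.0]          # initial states of three agents
for k in range(6):                   # a few negotiation rounds
    states = consensus_round(states)
    print(f"round {k + 1}: {[round(s, 1) for s in states]}")
```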

Figure 2: An illustration of the negotiation process of two agents

Significance: This consensus-seeking task is an abstraction of more complex tasks, and understanding it lays the necessary foundation for understanding those tasks. Specifically, in this task the state of each agent is a real number; in more complex tasks, the state of each agent may be a point in a more complex set (e.g., a set of candidate solutions).

Findings:

  1. Consensus strategy: When not explicitly told which strategy to adopt, agents most often use an average strategy for consensus seeking, although they occasionally use other strategies. Under the average strategy, an agent sets its state in the next round to the average of the current states of all agents, i.e., x_i(k+1) = (x_1(k) + ... + x_n(k)) / n. This strategy indicates that the agent is considerate and collaborative. Interestingly, average consensus is a well-studied problem in the field of multi-agent cooperative control, where each agent is modeled as a dynamic system governed by ordinary differential or difference equations (ODEs). This work reveals the similarity between the behavior exhibited by LLM-driven and ODE-driven multi-agent systems, so existing theoretical results for ODE-driven agents can provide a theoretical foundation for understanding LLM-driven agents (see the first sketch after this list).

    Figure 3: Different strategies adopted by agents. (a) The agent counts its own state when calculating the average. (b) The agent does not count its own state when calculating the average. (c) Suggestible strategy. (d) Stubborn strategy. (e) Erroneous strategy.

  2. Impact of personality: A person's personality often plays a significant role in negotiation and collaboration tasks. Motivated by this, we examined two types of personalities: stubborn and suggestible. Compared to suggestible agents, stubborn agents tend to insist on their views and are less likely to change them. We observed that stubborn agents have a dominant influence on the final consensus value of the group, leading the entire system to display a leader-follower structure (the stubborn variant in the first sketch after this list reproduces this effect).

    Figure 4: The impact of personalities. (a) Agent 1 is stubborn; agent 2 is suggestible. (b) Both agents are suggestible. (c) Both agents are stubborn. (d) Agents 1-10 are suggestible. (e) Agents 1-7 are stubborn; agents 8-10 are suggestible.

  3. Impact of topology: The flow of information in a multi-agent system corresponds to a network topology, which plays a pivotal role in negotiation. We examined several typical network topologies. When the network is fully connected, the exchange of information is most efficient, resulting in the fastest consensus convergence; when the network is not fully connected, convergence slows down. In the case of directed graphs, a leader-follower hierarchical structure emerges because some agents have a dominant influence on the final consensus outcome. In some systems, due to the interplay between personality and topology, consensus may not be reached and the agents instead split into clusters (see the second sketch after this list).

    Figure 5: The impact of topologies. (a) Fully connected. (b) Not fully connected. (c) A leader-follower structure. (d) A chain structure.

  4. Impact of agent number: Monte Carlo simulations show that as the number of agents increases, the variance of the final consensus value decreases. This suggests that using multiple agents can alleviate the randomness, or hallucinations, of the system so that a consistent outcome is obtained across different trials. Moreover, while a small number of suggestible agents may cause their states to oscillate, a large number of them suppresses these oscillations, suggesting that increasing the number of agents may stabilize group decision-making (see the third sketch after this list).

    Figure 6: Statistical results of the final consensus values
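
As referenced in findings 1 and 2, here is a minimal numerical sketch (our own illustration, not the authors' code) of the average strategy and a stubborn variant. Each agent is modeled as a simple difference equation rather than an LLM, and the stubbornness weights are assumptions made for illustration.

```python
def step(states, stubbornness):
    """One synchronous round: each agent moves toward the group average.
    A weight of 1.0 means the agent never moves; 0.0 means pure averaging."""
    avg = sum(states) / len(states)
    return [w * x + (1 - w) * avg for x, w in zip(states, stubbornness)]

def run(states, stubbornness, rounds=60):
    for _ in range(rounds):
        states = step(states, stubbornness)
    return [round(x, 1) for x in states]

# Two suggestible agents meet near the midpoint of their initial states.
print(run([10.0, 70.0], [0.2, 0.2]))    # -> [40.0, 40.0]

# A stubborn agent 1 drags the consensus value toward its own initial
# state, reproducing the leader-follower effect of finding 2.
print(run([10.0, 70.0], [0.95, 0.2]))   # -> roughly [13.5, 13.5]
```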
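The same abstraction extends to arbitrary topologies (finding 3): encode who listens to whom as a row-stochastic weight matrix W and iterate x(k+1) = W x(k). The specific matrices below are illustrative assumptions, not taken from the paper.

```python
def iterate(W, x, rounds=50):
    # Repeatedly apply x(k+1) = W x(k); row i of W holds agent i's weights.
    for _ in range(rounds):
        x = [sum(w * xj for w, xj in zip(row, x)) for row in W]
    return [round(v, 1) for v in x]

x0 = [10.0, 40.0, 70.0]

# Fully connected: every agent averages over everyone; convergence is fast.
W_full = [[1 / 3] * 3 for _ in range(3)]

# Directed chain: agent 1 listens to no one, agent 2 listens to agent 1,
# agent 3 listens to agent 2; the leader's value propagates downstream.
W_chain = [[1.0, 0.0, 0.0],
           [0.5, 0.5, 0.0],
           [0.0, 0.5, 0.5]]

print(iterate(W_full, x0))   # -> [40.0, 40.0, 40.0]
print(iterate(W_chain, x0))  # -> [10.0, 10.0, 10.0] (agent 1 dominates)
```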
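Finally, a minimal Monte Carlo sketch of the agent-number effect (finding 4). Modeling LLM randomness as additive Gaussian noise on each agent's average update is our own simplification; the printed standard deviation of the final consensus value shrinks as the number of agents grows.

```python
import random
import statistics

def noisy_consensus(n, rounds=20, noise=1.0):
    """Average-strategy consensus where every update is perturbed by noise,
    a crude stand-in for the randomness of LLM replies."""
    x = [100.0 * i / (n - 1) for i in range(n)]   # fixed initial spread
    for _ in range(rounds):
        avg = sum(x) / n
        x = [avg + random.gauss(0, noise) for _ in x]
    return sum(x) / n   # this trial's final consensus value

random.seed(0)
for n in (3, 10, 30):
    finals = [noisy_consensus(n) for _ in range(200)]
    print(f"n={n}: std of final consensus = {statistics.stdev(finals):.2f}")
```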

Application to multi-robot aggregation: The LLM-driven consensus-seeking framework is further applied as a cooperative planner in a multi-robot aggregation task, in which multiple robots starting from different initial positions plan and move to a common position in the plane. This is a consensus-seeking problem in Euclidean space. The application is important because it shows the potential of LLM-driven agents to achieve zero-shot autonomous task planning from simple verbal commands.

Figure 7: Application to multi-robot aggregation. (a) Simulation framework. (b) Robot trajectory. (c) Control process.
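
A minimal sketch of how the same consensus update carries over to the plane: each robot's state is a 2D position and averaging is applied componentwise. The positions and the damped-average rule are our own illustrative assumptions; in the paper, an LLM proposes each waypoint from a simple verbal command.

```python
def aggregate(positions, alpha=0.5, rounds=10):
    """Each round, every robot moves a fraction alpha toward the centroid."""
    for _ in range(rounds):
        cx = sum(p[0] for p in positions) / len(positions)
        cy = sum(p[1] for p in positions) / len(positions)
        positions = [(x + alpha * (cx - x), y + alpha * (cy - y))
                     for x, y in positions]
    return positions

robots = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
for p in aggregate(robots):
    print(tuple(round(c, 3) for c in p))   # all end near the centroid (2, 1)
```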

BibTeX

@misc{chen2023multiagent,
  title={Multi-Agent Consensus Seeking via Large Language Models},
  author={Huaben Chen and Wenkang Ji and Lufeng Xu and Shiyu Zhao},
  year={2023},
  eprint={2310.20151},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}