Scope and Topics

Various emerging applications involve agents solving complex problems in real-world domains, such as intelligent sensing systems for the Internet of Things (IoT), automated configurators for critical infrastructure networks, and intelligent resource allocation in social domains (e.g., security games for the deployment of security resources, or auctions and procurements for allocating goods and services). Agents in these domains commonly leverage different forms of optimization and/or learning to solve complex problems.
The goal of the workshop is to provide researchers with a venue to discuss models or techniques for tackling a variety of multi-agent optimization problems. We seek contributions in the general area of multi-agent optimization, including distributed optimization, coalition formation, optimization under uncertainty, winner determination algorithms in auctions and procurements, and algorithms to compute Nash and other equilibria in games. Of particular emphasis are contributions at the intersection of optimization and learning. See below for a (non-exhaustive) list of topics.

This workshop invites work from different strands of the multi-agent systems community pertaining to the design of algorithms, models, and techniques for multi-agent optimization and learning problems, or for problems that can be effectively solved by adopting a multi-agent framework.


The workshop organizers invite paper submissions on the following (and related) topics:
  • Optimization for learning (strategic and non-strategic) agents
  • Learning for multi-agent optimization problems
  • Distributed constraint satisfaction and optimization
  • Winner determination algorithms in auctions and procurements
  • Coalition or group formation algorithms
  • Algorithms to compute Nash and other equilibria in games
  • Optimization under uncertainty
  • Optimization with incomplete or dynamic input data
  • Algorithms for real-time applications
  • Cloud, distributed and grid computing
  • Applications of learning and optimization in societally beneficial domains
  • Multi-agent planning
  • Multi-robot coordination

The workshop is of interest both to researchers investigating applications of multi-agent systems to optimization problems in large, complex domains and to those examining optimization and learning problems that arise in systems composed of many autonomous agents. In doing so, the workshop aims to provide a forum for researchers to discuss common issues that arise in solving optimization and learning problems in different areas, to introduce new application domains for multi-agent optimization techniques, and to elaborate common benchmarks for testing solutions.

Finally, the workshop will welcome papers that describe the release of benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.


The workshop will be a one-day meeting. It will include several technical sessions, a poster session where presenters can discuss their work with the aim of further fostering collaborations, and multiple invited talks covering crucial challenges for the field of multi-agent optimization and learning.


Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Important Dates

  • Feb 19, 2024 (23:59 UTC-12) – Submission Deadline
  • Mar 18, 2024 (23:59 UTC-12) – Acceptance Notification
  • May 7, 2024 – Workshop Date



Time (Auckland, GMT+13)   Talk
8:30 Introductory remarks
8:45 Invited Talk by Guillaume Sartoretti: "Towards Learned Cooperation at Scale in Robotic Multi-Agent Systems"
9:45 Coffee Break
Session 1
10:30 Contributed Talk: Optimal Task Assignment and Path Planning using Conflict-Based Search with Precedence and Temporal Constraints
11:00 Contributed Talk: Applying Multi-Agent Negotiation to Solve the Production Routing Problem With Privacy Preserving
11:30 Contributed Talk: Samples Sharing and Prioritization in Multi-Agent Deep Reinforcement Learning
12:00 Contributed Talk: Minimizing Negative Side Effects in Cooperative Multi-Agent Systems Using Distributed Coordination
12:30 Lunch Break
14:00 Invited Talk by Daniel Boley: "Data Driven Deep Learning in the Presence of Hard Constraints"
Session 2
15:00 Contributed Talk: Exploiting Approximate Symmetry in Dynamic Games for Efficient Multi-Agent Reinforcement Learning
15:30 Contributed Talk: Satisfaction and Regret in Stackelberg Games
16:00 Coffee Break
Session 3
16:30 Contributed Talk: Active Value Querying to Minimize Additive Approximate Error in Superadditive Set Function Learning
17:00 Contributed Talk: Efficient Decision-Focused Learning for Public Health Intervention Planning
17:30 Closing Remarks

Accepted Papers

  • Applying Multi-Agent Negotiation to Solve the Production Routing Problem With Privacy Preserving.
    Luiza P Biasoto, Vinicius R de Carvalho, and Jaime S Sichman.
  • Minimizing Negative Side Effects in Cooperative Multi-Agent Systems Using Distributed Coordination.
    Moumita Choudhury, Sandhya Saisubramanian, Hao Zhang, and Shlomo Zilberstein.
  • Optimal Task Assignment and Path Planning using Conflict-Based Search with Precedence and Temporal Constraints.
    Yu Quan Chong, Jiaoyang Li, and Katia P Sycara.
  • Satisfaction and Regret in Stackelberg Games.
    Langford White, Duong D Nguyen, and Hung Nguyen.
  • Active Value Querying to Minimize Additive Approximate Error in Superadditive Set Function Learning.
    Filip Úradník, David Sychrovský, Jakub Cerny, and Martin Cerny.
  • Efficient Decision-Focused Learning for Public Health Intervention Planning.
    Sanket Shah, Arun Sai Suggala, Milind Tambe, and Aparna Taneja.
  • Exploiting Approximate Symmetry in Dynamic Games for Efficient Multi-Agent Reinforcement Learning.
    Batuhan Yardim and Niao He.
  • Samples Sharing and Prioritization in Multi-Agent Deep Reinforcement Learning.
    Thibault Lahire.

Invited Talks

Towards Learned Cooperation at Scale in Robotic Multi-Agent Systems

Guillaume Sartoretti, National University of Singapore
With the recent advances in sensing, actuation, computation, and communication, the deployment of large numbers of robots is becoming a promising avenue to enable or speed up complex tasks in areas such as manufacturing, last-mile delivery, search-and-rescue, or autonomous inspection. My group strives to push the boundaries of multi-agent scalability by understanding and eliciting emergent coordination/cooperation in multi-robot systems as well as in articulated robots (where agents are individual joints). Our work mainly relies on distributed (multi-agent) reinforcement learning, where we focus on endowing agents with novel information and mechanisms that can help them align their decentralized policies towards team-level cooperation. In this talk, I will first summarize my early work in independent learning, before discussing my group's recent advances in convention, communication, and context-based learning. I will discuss these techniques within a wide variety of robotic applications, such as multi-agent path finding, autonomous exploration/search, task allocation, and legged locomotion. Finally, I will also touch on our recent incursion into the next frontier for multi-robot systems: cooperation learning for heterogeneous multi-robot teams. Throughout this journey, I will highlight the key challenges surrounding learning representations, policy space exploration, and scalability of the learned policies, and outline some of the open avenues for research in this exciting area of robotics.
Guillaume Sartoretti joined the Mechanical Engineering Department at the National University of Singapore (NUS) as an Assistant Professor in 2019, where he founded the Multi-Agent Robotic Motion (MARMot) lab. Before that, he was a Postdoctoral Fellow in the Robotics Institute at Carnegie Mellon University (USA), where he worked with Prof. Howie Choset. He received his Ph.D. in robotics from EPFL (Switzerland) in 2016 for his dissertation on "Control of Agent Swarms in Random Environments," under the supervision of Prof. Max-Olivier Hongler. His passion and research lie in understanding and eliciting emergent coordination/cooperation in large multi-agent systems, by identifying what information and mechanisms can help agents reason about their individual role/contribution to each other and to the team. Guillaume was a Manufacturing Futures Initiative (MFI) postdoctoral fellow at CMU in 2018-2019, was awarded an Amazon Research Award in 2022, and received an Outstanding Early Career Award from NUS' College of Design and Engineering in 2023.

Data Driven Deep Learning in the Presence of Hard Constraints

Daniel Boley, University of Minnesota
The machine learning community has borrowed methods from the optimization community, but there are still many ways the two communities could benefit each other. For example, data-driven deep learning models are formulated as unconstrained minimization problems and do not admit hard constraints in a natural way. The most common approach is to craft a regularization term that imposes a growing penalty for violating the constraints, but this does not guarantee that the constraints will be satisfied, which can be problematic in some applications. We explore different ways hard constraints have been incorporated into deep learning models, including some examples where even small constraint violations cannot be accepted.
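The penalty approach mentioned in the abstract can be sketched on a toy problem; the objective, constraint, and schedule of penalty weights below are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Illustrative sketch of the penalty approach (toy problem, not from the talk):
# minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2 subject to the hard constraint
# g(x) = x0 + x1 - 1 = 0, by instead minimizing the unconstrained objective
# f(x) + (mu / 2) * g(x)^2 for a growing penalty weight mu.

def f_grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    return x[0] + x[1] - 1.0

def penalized_grad(x, mu):
    # gradient of f(x) + (mu / 2) * g(x)^2; the gradient of g is (1, 1)
    return f_grad(x) + mu * g(x) * np.ones(2)

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    lr = 1.0 / (2.0 + 2.0 * mu)  # step size from the penalized objective's curvature
    for _ in range(2000):
        x = x - lr * penalized_grad(x, mu)
    print(f"mu={mu:7.1f}  x={x}  constraint violation g(x)={g(x):.5f}")

# The violation g(x) shrinks as mu grows but never reaches exactly zero:
# the penalty only approximates the hard constraint, as the abstract notes.
```

The exact constrained minimizer here is (1, 0), but for any finite mu the penalty solution sits at x1 = 1/(1 + mu), leaving a constraint violation of 2/(1 + mu) — a simple instance of why a pure penalty term cannot guarantee that hard constraints are satisfied.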
Daniel Boley received his Ph.D. degree in Computer Science from Stanford University in 1981. Since then, he has been on the faculty of the Department of Computer Science and Engineering at the University of Minnesota, where he is now a full professor. Dr. Boley is known for his past work on numerical linear algebra methods for control problems, parallel algorithms, iterative methods for matrix eigenproblems, and inverse problems in linear algebra, as well as his more recent work on computational methods in statistical machine learning, data mining, and bioinformatics. His current interests include scalable algorithms for convex optimization in machine learning and the analysis of networks and graphs, such as those arising from metabolic biochemical networks and networks of wireless devices. He was an associate editor for the SIAM Journal on Matrix Analysis and Applications for six years and has chaired several technical symposia at major conferences. He is a senior member of the IEEE and a distinguished scientist of the ACM.

Submission Information

Submission URL:

Submission Types

  • Technical Papers: Full-length research papers of up to 8 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
  • Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.

All papers must be submitted in PDF format, using the AAMAS-24 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.

Best Papers

Per the AAMAS Workshop organizers:
There will be a Springer issue for best workshop papers and visionary papers, so each workshop should nominate two papers, one for each special issue. The authors should be aware that if the nominated workshop paper is also an AAMAS paper (or some other conference paper), the version in the Springer books should have additional material (at least 30% more).

For questions about the submission process, contact the workshop chairs.

Program Committee

  • Athina Georgara
  • Daniele Meli
  • Dimitrios Troullinos
  • Gauthier Picard
  • Jayesh Gupta
  • Kate Larson
  • Leen-Kiat Soh
  • Luca Capezzuto
  • Manel Rodriguez-Soto
  • Nicholas Bishop
  • René Mandiau
  • Taoan Huang
  • Terrence W.K. Mak

Workshop Chairs

Hau Chan

University of Nebraska-Lincoln

Jiaoyang Li

Carnegie Mellon University

Filippo Bistaffa


Xinrun Wang

Nanyang Technological University