OptLearnMAS-25
The 16th Workshop on Optimization and Learning in Multiagent Systems
at AAMAS 2025 at Detroit, Michigan, USA
Agents are increasingly deployed to solve complex problems in real-world domains, such as intelligent sensing systems for the Internet of Things (IoT), automated configurators for critical infrastructure networks, and intelligent resource allocation in social domains (e.g., security games for the deployment of security resources, or auctions and procurements for allocating goods and services). In these domains, agents commonly leverage different forms of optimization and/or learning to solve complex problems.
The goal of the workshop is to provide researchers with a venue to discuss models or techniques for tackling a variety of multi-agent optimization problems. We seek contributions in the general area of multi-agent optimization, including distributed optimization, coalition formation, optimization under uncertainty, winner determination algorithms in auctions and procurements, and algorithms to compute Nash and other equilibria in games. Of particular emphasis are contributions at the intersection of optimization and learning. See below for a (non-exhaustive) list of topics.
This workshop invites works from different strands of the multi-agent systems community that pertain to the design of algorithms, models, and techniques to deal with multi-agent optimization and learning problems or problems that can be effectively solved by adopting a multi-agent framework.
The workshop is of interest both to researchers investigating applications of multi-agent systems to optimization problems in large, complex domains, as well as to those examining optimization and learning problems that arise in systems comprised of many autonomous agents. This workshop aims to provide a forum for researchers to discuss common issues that arise in solving optimization and learning problems in different areas, introduce new application domains for multi-agent optimization techniques, and elaborate common benchmarks to test solutions.
Finally, the workshop will welcome papers that describe the release of benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.
The workshop will be a one-day meeting. It will include several technical sessions; a poster session where presenters can discuss their work, with the aim of further fostering collaborations; and multiple invited speakers covering crucial challenges in the field of multiagent optimization and learning.
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
| Time | Event |
|---|---|
| **Morning session** | |
| 8:45 | Introductory remarks |
| 9:00 | Invited Talk: A Linear Theory of Voting by Lirong Xia |
| 10:00 | Coffee break |
| 10:45 | Contributed talk: Decentralized Decomposition-Based Observation Scheduling for a Large-Scale Satellite Constellation |
| 11:10 | Contributed talk: Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation |
| 11:35 | Contributed talk: Nash Equilibria via Stochastic Eigendecomposition |
| 12:00 | Contributed talk: Solving Public Goods Games on Networks as Multi-agent Systems |
| 12:30 | Lunch break |
| **Afternoon session** | |
| 14:00 | Contributed talk: Enhancing Lifelong Multi-Agent Path-finding by Using Artificial Potential Fields |
| 14:30 | Contributed talk: LLM-Mediated Guidance of MARL Systems |
| 15:00 | Invited Talk: Multiagent Paradigms For General Evaluation of AI Agents by Marc Lanctot |
| 15:45 | Coffee break |
| 16:30 | Contributed talk: Multi-Agent Corridor Generating Algorithm |
| 17:00 | Closing remarks |
Abstract: Voting mechanisms form the cornerstone of collective decision-making across numerous domains—from political elections and recommender systems to multi-agent systems, and more recently to Reinforcement Learning from Human Feedback (RLHF). The traditional expert-driven paradigm of voting system design involves formulating desiderata (often called axioms), then designing and evaluating rules accordingly. However, this approach can be inefficient due to the complexity of modern applications with diverse stakeholders, as no single voting rule or axiom applies to all applications. Can AI and ML help? This talk introduces a linear theory for AI/ML-augmented design and analysis of voting systems. We demonstrate that many established voting rules and axioms exhibit linear properties, enabling systematic enhancement through machine learning. Our framework introduces general tools that advance both design and analysis within this linear paradigm. For design purposes, we precisely characterize the sample complexity of learning several popular classes of linear rules and axioms. For analytical purposes, we propose a semi-random model paired with a polyhedral approach that transcends worst-case analysis, offering more nuanced evaluation of axiomatic satisfaction. We hope that this linear theory is a useful advancement toward more effective human-AI collaboration in voting system design—potentially transforming how we develop collective decision mechanisms for increasingly complex applications.
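As a small illustration of the kind of rule the abstract calls linear (not taken from the talk itself): positional scoring rules, such as the Borda count, compute each candidate's score as a linear function of the profile, since every ballot contributes points that depend only on the candidate's position.

```python
from collections import Counter

def borda(ballots):
    """Borda count, a classic positional scoring (linear) voting rule.

    Each ballot is a full ranking of the m candidates, best first.
    A candidate in position i (0-indexed) earns m - 1 - i points, so
    the total score is a linear function of the ballot counts.
    """
    m = len(ballots[0])
    scores = Counter()
    for ballot in ballots:
        for i, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - i
    return scores

# Two ballots over three candidates: a and b tie, c gets no points.
print(borda([["a", "b", "c"], ["b", "a", "c"]]))
```

This is only a sketch of one linear rule; the talk's framework covers broader classes of linear rules and axioms.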
Biography: Lirong Xia is a Professor of Computer Science at Rutgers University-New Brunswick and the Deputy Director of the Center for Discrete Mathematics and Theoretical Computer Science. He was an NSF CI Fellow at the Center for Research on Computation and Society at Harvard University. He received his Ph.D. in Computer Science and M.A. in Economics from Duke University. His research focuses on the intersection of Computer Science and Microeconomics. He has served as co-chair of conferences such as ADT, WINE, and AMMA, and of workshops such as COMSOC, MPREF, and CoopMAS. He is the recipient of an NSF CAREER award and a Simons-Berkeley Research Fellowship, and was named one of "AI's 10 to watch" by IEEE Intelligent Systems.
Abstract: As AI agents become more generally capable, they are evaluated over many separate tasks. For example, a Deep Reinforcement Learning agent may learn to play many different Atari games, or a large language model may be used for translation, coding, solving complex math problems, and so on. A recurring question is the bottom-line performance: which agent or model is “best” across all the tasks of interest? Answering this question well requires aggregating evaluation results across a potentially wide variety of different contexts. In this talk, I will show some different systems that aggregate this information into a single rating or ranking, ranging from classical rating systems such as Elo to more modern systems that adopt principles from multiagent systems, such as game theory and social choice theory. I will put particular emphasis on some important pitfalls that can arise while aggregating the information, what it means for evaluation of general agents, and discuss some potential solutions.
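For readers unfamiliar with the classical rating systems the abstract mentions, here is a minimal sketch of the standard Elo update (illustrative only, not material from the talk): a player's rating moves in proportion to the gap between the actual result and the result predicted from the current rating difference.

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo rating update for player A after a game against player B.

    score_a is the actual result for A: 1.0 win, 0.5 draw, 0.0 loss.
    The expected score is a logistic function of the rating difference,
    scaled so that a 400-point gap means 10:1 expected odds.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    return r_a + k * (score_a - expected_a)

# Two equally rated players: a win moves the winner up by k/2 points.
print(elo_update(1500, 1500, 1.0))  # 1516.0
```

The talk contrasts such pairwise-comparison ratings with aggregation schemes grounded in game theory and social choice.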
Biography: Marc Lanctot is a research scientist at Google DeepMind. His research interests include multiagent reinforcement learning, computational game theory, multiagent systems, and game-tree search. In the past few years, Marc has investigated game-theoretic approaches to multiagent reinforcement learning with applications to fully and partially observable zero-sum games, sequential social dilemmas, and negotiation/communication games. Most recently, Marc has been working on principled evaluation of general agents using social choice theory and game theory. Marc received a Ph.D. degree in artificial intelligence from the Department of Computer Science, University of Alberta in 2013. Before joining DeepMind, Marc completed a Postdoctoral Research Fellowship at the Department of Knowledge Engineering, Maastricht University, in Maastricht, The Netherlands on Monte Carlo tree search methods in games.
https://cmt3.research.microsoft.com/OptLearnMAS2025
All papers must be submitted in PDF format, using the AAMAS author kit. Submissions should include the name(s), affiliations, and email addresses of all authors. Submissions will be refereed based on technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.