Game on! Seminars

Game theory. Control. Intelligent systems.


Participation is open to everyone with no registration required.
The talks are typically held on Tuesdays at 16:00 CET.

Upcoming seminars

07-Apr-2026, 16:00 CET

Prof. Chinmay Maheshwari
Johns Hopkins University

Markov near-potential functions: A new paradigm for design and analysis of multi-agent systems

Learning-enabled services are revolutionizing several engineering domains such as robotics, mobility, energy, and online marketplaces. While significant progress has been made in developing autonomous agents that operate in isolation, deploying them in dynamic, multi-agent environments presents new theoretical, algorithmic, and societal challenges. Towards this goal, I will introduce a novel theoretical framework, Markov near-potential functions (MNPFs), to study multi-agent interactions in dynamic environments. Using this framework, I will provide the first characterization of the long-run outcome of interaction between decentralized (multi-agent) reinforcement learning (RL) agents in shared environments, without imposing constraints on their underlying utility structure. Additionally, I will demonstrate how this framework can be leveraged to design competitive, real-time planning and control strategies for autonomous multi-car racing. Finally, I will present a new multi-agent reinforcement learning pipeline, Near-Potential Policy Optimization, which exploits the structure of MNPFs to compute low-regret approximate Nash strategies in general-sum dynamic games.


21-Apr-2026, 16:00 CET

Prof. Bryce L. Ferguson

Collaboration and Competition in Multi-Agent Systems: The Consequences of Coalitions and Model Mismatch

With the rapid growth of communication and computation technologies, systems that once operated in isolation are now coupled into large-scale multi-agent systems, including robotic fleets and autonomous transportation networks. While this increased connectivity enables new capabilities, it also introduces fundamental challenges: the collective behavior induced by decentralized decision-making is generally inefficient. This talk develops game-theoretic models for distributed control and multi-agent decision-making that quantify and shape this inefficiency. I will present recent results on the price of anarchy under structured cooperation, showing how limited coalition formation (k-strong equilibria) mediates the tradeoff between coordination complexity and system performance. These results characterize when increasing communication among agents improves efficiency, and when it introduces diminishing or even adverse returns. I will also discuss the robustness of these models to misspecification. In many settings, agents act on incorrect models of others’ objectives or behavior. We formalize this discrepancy as a game-to-real gap and show how it degrades equilibrium performance. This perspective highlights the importance of accounting for modeling error in the design and analysis of multi-agent systems.


28-Apr-2026, 16:00 CET

Dr. Forrest Laine
Eiro Inc. (on leave from Vanderbilt University)

Mathematical Program Networks

Abstract TBA


05-May-2026, 16:00 CET

Prof. Negar Mehr
UC Berkeley

Interactive Autonomy: Learning and Control for Multi-agent Interactions

To truly transform our lives, autonomous systems must operate in complex environments shared with other agents. For instance, delivery robots need to move through spaces that are shared with humans, and warehouse robots must coordinate on shared factory floors. Such multi-agent settings demand systematic methods that enable efficient and reliable interactions among agents. In the first part of my talk, I will focus on control challenges in such domains and discuss game-theoretic planning and control for robots. Intelligent interaction requires robots to reason about how their decisions affect and are affected by others. I will present our recent results showing how exploiting structural properties of interactions leads to motion planning algorithms that are both efficient and tractable for real-time deployment. The second part of the talk will focus on learning in interactive domains, including imitation learning and reinforcement learning. While these approaches have advanced significantly in single-agent settings, multi-agent domains present unique learning challenges because decisions are tightly coupled across agents. I will highlight how some of these challenges can be addressed to make learning feasible in interactive multi-agent domains.


12-May-2026, 16:00 CET

Prof. Sergio Grammatico
TU Delft

Title TBA

TBA


26-May-2026, 16:00 CET

Dr. Filippo Fabiani
IMT Lucca

Data-based certificates in stochastic Nash games

Many modern systems in smart grids and smart cities rely on the interaction of multiple decision-makers whose choices affect one another. These interactions can be naturally described using game-theoretic models, but in practice they are often influenced by uncertainty (e.g., fluctuating demand or renewable generation) whose statistical properties are unknown. From a mathematical perspective, this greatly complicates the evaluation of each agent's expected cost. Most existing approaches rely on large amounts of data and guarantee convergence only in the limit of infinite samples, an assumption that is unrealistic in many real-world and safety-critical settings. This talk asks a more practical question: what can be guaranteed when only a finite amount of data is available?

Building on recent advances in stability analysis and stochastic approximation, we will introduce a data-based framework that provides computable certificates measuring how close one can get to a Nash or generalized Nash equilibrium using finite samples. The approach leverages the monotonicity property and variational inequality structure of the stochastic game at hand, together with standard Nash equilibrium seeking schemes based on operator theory, thereby enabling reliable assessment of convergence even when part of the game model is unknown and must be approximated in a data-driven fashion. Our results thus offer finite-sample certificates that bound equilibrium residuals and stability margins directly from available uncertainty realizations, without knowledge of the underlying probability distribution. As such, the proposed framework provides a unifying view of learning dynamics and equilibrium verification in stochastic multi-agent systems, with implications for data-driven control, economic modeling, and large-scale learning in games. Numerical illustrations demonstrate how the proposed certificates track equilibrium quality in practice.


02-Jun-2026, 16:00 CET

Prof. Jeff Shamma
University of Illinois at Urbana-Champaign

Title TBA

TBA


09-Jun-2026, 16:00 CET

Prof. Philip Brown
University of Colorado at Colorado Springs

Title TBA

TBA


16-Jun-2026, 16:00 CET

Dr. Ezzat Elokda
KTH Stockholm

Title TBA

TBA


KTH · WASP · Digital Futures