Bauso, D., & Notarstefano, G. (2012). Distributed n-player approachability via time and space average consensus. IFAC Proceedings Volumes.
Distributed n-player approachability via time and space average consensus
BAUSO, Dario; NOTARSTEFANO, Giuseppe
2012-01-01
Abstract
In this paper we consider repeated coalitional games with transferable utility (TU) over networks. Namely, we consider a set of n players that must distribute among themselves a vector of rewards (one for each player). In our networked version there is no central coordinator allocating the rewards; instead, the agents must agree on a common time-averaged reward vector by updating local estimates of it. The common time-averaged reward vector has to approach a suitable constraint set, called the core of the game, which guarantees that no agent benefits from quitting the grand coalition. We propose a distributed algorithm that performs a double averaging, over time and over space. At every iteration, each agent first computes a weighted average of its own time-averaged estimate and those of its neighbors, and then generates a new reward vector so as to drive the time-averaged estimate towards a pre-assigned set. The main contribution of the paper is to prove that, under certain assumptions, (i) all agents' estimates reach consensus on the true time-averaged reward vector, and (ii) the estimates (and thus the true time-averaged reward vector) approach the pre-assigned set. The conditions for this to happen relate to the connectivity over time of the communication topology and to the approachability principle.
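To make the double-averaging step concrete, here is a minimal numerical sketch of the update described in the abstract. It assumes a fixed ring communication graph with doubly stochastic weights W, a box set standing in for the core (the actual core is a polyhedron defined by coalitional rationality constraints), and Euclidean projection onto that set as the steering rule suggested by the approachability principle; the names W and project_onto_set, the box bounds, and the specific step size are illustrative choices, not taken from the paper.

```python
# Sketch of a doubly (time and space) averaging update. Assumptions:
# - fixed, connected ring graph with doubly stochastic weights W,
# - a box [lo, hi]^n standing in for the core of the game,
# - Euclidean projection onto that set as the steering rule.
# All names and parameter values are illustrative, not from the paper.
import numpy as np

n = 4                                   # number of players/agents
rng = np.random.default_rng(0)

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

lo, hi = 0.0, 1.0                       # box standing in for the core

def project_onto_set(v):
    """Euclidean projection onto the pre-assigned (box) set."""
    return np.clip(v, lo, hi)

# x[i] is agent i's local estimate of the time-averaged reward vector.
x = rng.uniform(-2.0, 2.0, size=(n, n))

for t in range(1, 2001):
    # 1) Space averaging: mix own estimate with the neighbors' estimates.
    y = W @ x
    # 2) Steering: each agent proposes a reward vector that pushes its
    #    time average towards the set (here, the projection itself).
    u = np.array([project_onto_set(y[i]) for i in range(n)])
    # 3) Time averaging: running-average update with step 1/(t+1),
    #    i.e. x_{t+1} = (t/(t+1)) * y + (1/(t+1)) * u.
    x = y + (u - y) / (t + 1)

# Estimates should agree across agents and lie near the pre-assigned set.
print("max disagreement:", np.abs(x - x.mean(axis=0)).max())
print("distance to set: ", np.abs(x - np.clip(x, lo, hi)).max())
```

In this sketch, the mixing step drives the local estimates towards consensus (claim (i)), while the vanishing 1/(t+1) steering step pulls the running average towards the set (claim (ii)); the paper establishes these properties rigorously under connectivity-over-time assumptions on the communication topology.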