SAGT 2010: List of Accepted Papers (ordered by submission numbers, with abstracts)
- Elchanan Mossel and Omer Tamuz.
Truthful Fair Division
Abstract:
We address the problem of fair division, or cake cutting, with the goal of finding truthful mechanisms. In the case of a general measure space ("cake") and non-atomic, additive individual preference measures - or utilities - we show that there exists a truthful "mechanism" which ensures that each of the k players gets at least 1/k of the cake. This mechanism also minimizes risk for truthful players. Furthermore, in the case where there exist at least two different measures we present a different truthful mechanism which ensures that each of the players gets more than 1/k of the cake. We then turn our attention to partitions of indivisible goods with bounded utilities and a large number of goods. Here we provide similar mechanisms, but with slightly weaker guarantees. These guarantees converge to those obtained in the non-atomic case as the number of goods goes to infinity.
Abstract:
Given a set of alternatives and a single player, we introduce the notion of a {\em responsive lottery}. These mechanisms receive as input from the player a reported utility function, specifying a value for each one of the alternatives, and use a lottery to produce as output a probability distribution over the alternatives. Thereafter, exactly one alternative wins (is given to the player) with the respective probability. Assuming that the player is not indifferent to which of the alternatives wins, a lottery rule is called {\em truthful dominant} if reporting his true utility function (up to affine transformations) is the unique report that maximizes the expected payoff for the player. We design truthful dominant responsive lotteries. We also discuss their relations with scoring rules and with VCG mechanisms.
Abstract:
A key solution concept in cooperative game theory is the core. The core of an expense sharing game contains stable allocations of the total cost to the participating players, such that each subset of players pays at most what it would pay if acting on its own. Unfortunately, some expense sharing games have an empty core, meaning that the total cost is too high to be divided in a stable manner. In such cases, an external entity could choose to induce stability using an external subsidy. We call the minimal subsidy required to make the core of a game non-empty the Cost of Stability (CoS), adopting a recently coined term for surplus sharing games. We provide bounds on the CoS for general, subadditive and anonymous games, discuss the special case of Facility Games, and consider the complexity of computing the CoS.
Abstract:
We study "logit dynamics" for strategic games. At every stage of the game a player is selected uniformly at random and she plays according to a "noisy" best-response dynamics where the noise level is tuned by a parameter $\beta$. Such a dynamics defines a family of ergodic Markov chains, indexed by $\beta$, over the set of strategy profiles. Our aim is twofold: On the one hand, we are interested in the expected social welfare when the strategy profiles are random according to the stationary distribution of the Markov chain, because we believe it gives a meaningful description of the long-term behavior of the system. On the other hand, we want to estimate how long it takes, for a system starting at an arbitrary profile and running the logit dynamics, to get close to the stationary distribution; i.e., the mixing time of the chain. In this paper we study the stationary expected social welfare for the 3-player CK-game, for 2-player coordination games (the same class of games studied in~\cite{blumeGEB93}), 2-player anti-coordination games, and for a simple n-player game. For all these games, we give almost-tight upper and lower bounds on the mixing time of logit dynamics.
Abstract:
Fictitious play is a simple learning algorithm for strategic games that proceeds in rounds. In each round, the players play a best response to a mixed strategy that is given by the empirical frequencies of actions played in previous rounds. There is a close relationship between fictitious play and the Nash equilibria of a game: if the empirical frequencies of fictitious play converge to a strategy profile, this strategy profile is a Nash equilibrium. While fictitious play does not converge in general, it is known to do so for certain restricted classes of games, such as constant-sum games, non-degenerate 2 by n games, and potential games. We study the rate of convergence of fictitious play and show that, in all the classes of games mentioned above, fictitious play may require an exponential number of rounds (in the size of the representation of the game) before some equilibrium action is eventually played. In particular, we show the above statement for symmetric constant-sum win-lose-tie games.
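Since the dynamics are defined concretely above, here is a minimal sketch of (simultaneous-move) fictitious play in a bimatrix game; the function and the matching-pennies example are illustrative assumptions, not material from the paper.

```python
import numpy as np

def fictitious_play(A, B, rounds=1000):
    """Fictitious play: each round, every player best-responds to the empirical
    frequencies of the opponent's past actions.  Returns the empirical mixes."""
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] += 1                      # arbitrary first-round actions
    col_counts[0] += 1
    for _ in range(rounds - 1):
        col_mix = col_counts / col_counts.sum()
        row_mix = row_counts / row_counts.sum()
        row_counts[int(np.argmax(A @ col_mix))] += 1   # best response to history
        col_counts[int(np.argmax(row_mix @ B))] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies, a constant-sum game: empirical frequencies approach (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(fictitious_play(A, -A, rounds=10000))
```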
Abstract:
We consider network congestion games in which a finite number of non-cooperative users select paths. The aim is to mitigate the inefficiency caused by the selfish users by introducing taxes on the network edges. A tax vector is strongly (weakly)-optimal if all (at least one of) the equilibria in the resulting game minimize(s) the total latency. The issue of designing optimal tax vectors for selfish routing games has been studied extensively in the literature. We study for the first time taxation for networks with atomic users which have unsplittable traffic demands and are heterogeneous, i.e., have different sensitivity to taxes. On the positive side, we show the existence of weakly-optimal taxes for single-source network games. On the negative side, we show that the cases of homogeneous and heterogeneous users differ sharply as far as the existence of strongly-optimal taxes is concerned: there are parallel-link games with linear latencies and heterogeneous users that do not admit strongly-optimal taxes.
- Chinmay Karande, Gagan Goel and Lei Wang.
Single-Parameter Combinatorial Auctions with Partially Public Valuations
Abstract:
We consider the problem of designing truthful auctions, when the bidders' valuations have a public and a private component. In particular, we consider combinatorial auctions where the valuation of an agent $i$ for a set $S$ of items can be expressed as $v_if(S)$, where $v_i$ is a private single parameter of the agent, and the function $f$ is publicly known. Our motivation behind studying this problem is two-fold: (a) Such valuation functions arise naturally in the case of ad-slots in broadcast media such as Television and Radio. For an ad shown in a set $S$ of ad-slots, $f(S)$ is, say, the number of {\em unique} audience members reached by the ad, and $v_i$ is the valuation per-unique-viewer. (b) From a theoretical point of view, this factorization of the valuation function simplifies the bidding language, and renders the combinatorial auction more amenable to better approximation factors. We present a very general technique, based on maximal-in-range mechanisms, that converts any $\alpha$-approximation non-truthful algorithm ($\alpha \leq 1$)\footnote{We use the convention that an approximation algorithm for a maximization problem is $\alpha$-approximate if the solution has value at least $\alpha$ times the optimal for $\alpha \leq 1$. Hence, all our approximation factors are fractional.} into $\Omega(\frac{\alpha}{\log{n}})$ and $\Omega(\alpha)$-approximate truthful mechanisms which run in polynomial time and quasi-polynomial time, respectively.
Abstract:
Much work has been done on the computation of market equilibria. However due to strategic play by buyers, it is not clear whether these are actually observed in the market. Motivated by the observation that a buyer may derive a better payoff by feigning a different utility function and thereby manipulating the Fisher market equilibrium, we formulate the {\em Fisher market game} in which buyers strategize by posing different utility functions. We show that existence of a {\em conflict-free allocation} is a necessary condition for the Nash equilibria (NE) and also sufficient for the symmetric NE in this game. There are many NE with very different payoffs, and the Fisher equilibrium payoff is captured at a symmetric NE. We provide a complete polyhedral characterization of all the NE for the two-buyer market game. Surprisingly, all the NE of this game turn out to be symmetric and the corresponding payoffs constitute a piecewise linear concave curve. We also study the correlated equilibria of this game and show that third-party mediation does not help to achieve a better payoff than NE payoffs.
- Yonatan Aumann and Yair Dombb.
Pareto Efficiency and Approximate Pareto Efficiency in Routing and Load Balancing Games
Abstract:
We analyze the Pareto efficiency, or inefficiency, of solutions to routing games and load balancing games, focusing on Nash equilibria and greedy solutions to these games. For some settings, we show that the solutions are necessarily Pareto optimal. When this is not the case, we provide a measure to quantify the distance of the solution from Pareto efficiency. Using this measure, we provide upper and lower bounds on the “Pareto inefficiency” of the different solutions. The settings we consider include load balancing games on identical, uniformly-related, and unrelated machines, using both pure and mixed strategies, and nonatomic routing in general and some specific networks.
Abstract:
We consider the computational complexity of coalitional solution concepts in scenarios related to load balancing such as anonymous and congestion games. In congestion games, Pareto-optimal Nash and strong equilibria, which are resilient to coalitional deviations, have recently been shown to yield significantly smaller inefficiency. Unfortunately, we show that several problems regarding existence, recognition, and computation of these concepts are hard, even in seemingly special classes of games. In anonymous games with a constant number of strategies, we can efficiently recognize a state as a Pareto-optimal Nash or strong equilibrium, but deciding existence for a game remains hard. In the case of player-specific singleton congestion games, we show that recognition and computation of both concepts can be done efficiently. In addition, in these games there are always short sequences of coalitional improvement moves to Pareto-optimal Nash and strong equilibria that can be computed efficiently.
Abstract:
We investigate the computational aspects of {\em safe manipulation}, a new model of coalitional manipulation that was recently put forward by Slinko and White. In this model, a potential manipulator $v$ announces how he intends to vote, and some of the other voters whose preferences coincide with those of $v$ may follow suit. Depending on the number of followers, the outcome could be better or worse for $v$ than the outcome of truthful voting. A manipulative vote is called safe if for some number of followers it improves the outcome from $v$'s perspective, and can never lead to a worse outcome. In this paper, we study the complexity of finding a safe manipulative vote for a number of common voting rules, including Plurality, Borda, $k$-approval, and Bucklin, providing algorithms and hardness results for both weighted and unweighted voters. We also propose two ways to extend the notion of safe manipulation to the setting where the followers' preferences may differ from those of the leader, and study the computational properties of the resulting extensions.
Abstract:
Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or nonconvergence properties of such dynamics may inform our understanding of the applicability of Nash equilibria as a plausible solution concept in some settings. A second reason for asking this question is in the hope of being able to prove an impossibility result, not dependent on complexity assumptions, for computing Nash equilibria via a restricted class of reasonable algorithms. In this work, we begin to answer this question by considering the dynamics of the standard multiplicative weights update learning algorithms (which are known to converge to a Nash equilibrium for zero-sum games). We revisit a 3 × 3 game defined by Shapley [10] in the 1950s in order to establish that fictitious play does not converge in general games. For this simple game, we show via a potential function argument that in a variety of settings the multiplicative updates algorithm impressively fails to find the unique Nash equilibrium, in that the cumulative distributions of players produced by learning dynamics actually drift away from the equilibrium.
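For concreteness, here is a minimal sketch of a multiplicative-weights dynamic in a generic bimatrix game; the step size, payoff scaling, and update variant are our choices, and the 3 x 3 Shapley game payoffs from the paper are not reproduced here.

```python
import numpy as np

def mwu_dynamics(A, B, eps=0.1, rounds=1000):
    """Both players maintain mixed strategies and update each action's weight
    multiplicatively in its expected payoff against the opponent's current mix."""
    m, n = A.shape
    x = np.ones(m) / m                      # row player's mixed strategy
    y = np.ones(n) / n                      # column player's mixed strategy
    for _ in range(rounds):
        px = A @ y                          # expected payoff of each row action
        py = x @ B                          # expected payoff of each column action
        x = x * (1.0 + eps) ** px           # multiplicative-weights update
        y = y * (1.0 + eps) ** py
        x, y = x / x.sum(), y / y.sum()
    return x, y
```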
- Pranjal Awasthi, Maria-Florina Balcan, Avrim Blum, Or Sheffet and Santosh Vempala.
On Nash-Equilibria of Approximation-Stable Games
Abstract:
One of the main reasons for wanting to compute a Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or epsilon-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. In this paper, we define the notion of games that are {\em approximation stable}, meaning that all epsilon-approximate equilibria are contained inside a small ball of radius Delta around a true equilibrium, and investigate a number of their properties. We show there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Omega(log n). On the other hand, we show all (epsilon,Delta)-approximation-stable games must have an epsilon-equilibrium of support O((Delta^{2-o(1)}/epsilon^2)log n), yielding an immediate n^{O((Delta^{2-o(1)}/epsilon^2)log n)}-time algorithm, improving over the bound of [LMM03] for games satisfying this condition. We in addition give improved bounds for the case that Delta and epsilon are sufficiently close together. We also consider an inverse property, namely that all {\em non}-approximate equilibria are {\em far} from some true equilibrium, and give an efficient algorithm for games satisfying that condition.
- Uriel Feige and Inbal Talgam.
A Direct Reduction from k-Player to 2-Player Approximate Nash
Abstract:
We present a direct reduction from k-player games to 2-player games that preserves approximate Nash equilibrium. Previously, the computational equivalence of computing approximate Nash equilibrium in k-player and 2-player games was established via an indirect reduction. This included a sequence of works defining the complexity class PPAD, identifying complete problems for this class, showing that computing approximate Nash equilibrium for k-player games is in PPAD, and reducing a PPAD-complete problem to computing approximate Nash equilibrium for 2-player games. Our direct reduction makes no use of the concept of PPAD, thus eliminating some of the difficulties involved in following the known indirect reduction.
Abstract:
We study the computational complexity of finding stable outcomes in additively-separable symmetric hedonic games. These coalition formation games are specified by an undirected edge-weighted graph: nodes are players, an outcome of the game is a partition of the nodes into coalitions, and the utility of a node is the sum of incident edge weights in the same coalition. We consider several natural stability requirements defined in the economics literature. For all of them the existence of a stable outcome is guaranteed by a potential function argument, and local improvements will converge to a stable outcome. The different stability requirements correspond to different local search neighbourhoods. For different neighbourhood structures, our findings comprise positive results in the form of polynomial-time algorithms, and negative (PLS-completeness) results.
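A small sketch of the utility definition used above; the graph encoding and the three-player example are hypothetical.

```python
def hedonic_utility(player, partition, weights):
    """Additively-separable symmetric hedonic game: a player's utility is the
    sum of edge weights to the other members of her own coalition."""
    coalition = next(c for c in partition if player in c)
    return sum(weights.get(frozenset({player, other}), 0)
               for other in coalition if other != player)

# Three players: 1 and 2 like each other, both dislike 3 (missing edges count as 0).
w = {frozenset({1, 2}): 5, frozenset({1, 3}): -2, frozenset({2, 3}): -2}
outcome = [frozenset({1, 2}), frozenset({3})]
print([hedonic_utility(p, outcome, w) for p in (1, 2, 3)])   # [5, 5, 0]
```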
- Uri Nadav and Georgios Piliouras.
No Regret Learning in Oligopolies: Cournot vs Bertrand
Abstract:
Cournot and Bertrand oligopolies constitute the two most prevalent models of firm competition. The analysis of Nash equilibria in each model reveals a unique prediction about the stable state of the system. Quite alarmingly, despite the similarities of the two models, their projections expose a stark dichotomy. Under the Cournot model, where firms compete by strategically managing their output quantity, firms enjoy positive profits as the resulting market prices exceed the marginal costs. On the contrary, the Bertrand model, in which firms compete on price, predicts that a duopoly is enough to push prices down to the marginal cost level. This suggestion that duopoly will result in perfect competition is commonly referred to in the economics literature as the ``Bertrand paradox''. In this paper, we move away from the safe haven of Nash equilibria as we analyze these models in disequilibrium under minimal behavioral hypotheses. Specifically, we assume that firms adapt their strategies over time, so that in hindsight their average payoffs are not exceeded by any single deviating strategy. Given this no-regret guarantee, we show that in the case of Cournot oligopolies, the unique Nash equilibrium fully captures the emergent behavior. Notably, we prove that under natural assumptions the daily market characteristics converge to the unique Nash equilibrium. In contrast, in the case of Bertrand oligopolies, a wide range of positive average payoff profiles can be sustained. Hence, under the assumption that firms have no regret, the Bertrand paradox is resolved and both models arrive at the same conclusion: increased competition is necessary in order to achieve perfect pricing.
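The dichotomy can be seen in the textbook linear duopoly; the linear inverse demand and common constant marginal cost below are our simplifying assumptions, not the paper's model.

```latex
% Inverse demand p = a - b(q_1 + q_2), constant marginal cost c < a.
\text{Cournot: } q_i^{*} = \frac{a-c}{3b}, \quad
p^{*} = \frac{a+2c}{3} > c, \quad
\pi_i^{*} = \frac{(a-c)^2}{9b} > 0;
\qquad
\text{Bertrand: } p^{*} = c, \quad \pi_i^{*} = 0.
```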
Abstract:
The king of refinements of Nash equilibrium is trembling hand perfection. We show that it is NP-hard and SQRT-SUM-hard to decide if a given pure strategy Nash equilibrium of a given three-player game in strategic form with integer payoffs is trembling hand perfect. Analogous results are shown for a number of other solution concepts, including proper equilibrium, (the strategy part of) sequential equilibrium, quasi-perfect equilibrium and CURB. The proofs all use a reduction from the problem of comparing the minmax value of a three-player game in strategic form to a given rational number. This problem was previously shown to be NP-hard by Borgs et al., while a SQRT-SUM hardness result is given in this paper. The latter proof yields bounds on the algebraic degree of the minmax value of a three-player game that may be of independent interest.
Abstract:
We propose a new convex optimization formulation for the Fisher market problem with linear utilities. Like the Eisenberg-Gale formulation, the set of feasible points is a polyhedral convex set while the cost function is non-linear; however, unlike that, the optimum is always attained at a vertex of this polytope. The convex cost function depends only on the initial endowments of the buyers. This formulation yields an easy simplex-like pivoting algorithm which is provably strongly polynomial for many special cases.
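For reference, the classical Eisenberg-Gale convex program that the abstract contrasts with (buyers $i$ with budgets $e_i$, goods $j$ of unit supply, linear utilities $u_{ij}$) is:

```latex
\max \sum_i e_i \log u_i
\quad \text{s.t.} \quad
u_i = \sum_j u_{ij} x_{ij} \;\; \forall i, \qquad
\sum_i x_{ij} \le 1 \;\; \forall j, \qquad
x_{ij} \ge 0.
```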
- Matus Mihalak and Jan Christoph Schlegel.
The Price of Anarchy in Network Creation Games is (Mostly) Constant
Abstract:
We study the price of anarchy and the structure of equilibria in network creation games. A network creation game (first defined and studied by Fabrikant et al. [3]) is played by n players {1,2,...,n}, each identified with a vertex of a graph (or network), where the strategy of player i, i = 1,...,n, is to build some edges adjacent to i. The cost of building an edge is α > 0, a fixed parameter of the game. The goal of every player is to minimize its creation cost plus its usage cost. The creation cost of player i is α times the number of built edges. In the SumGame (the original variant of Fabrikant et al. [3]) the usage cost of player i is the sum of distances from i to every node of the resulting graph. In the MaxGame (variant defined and studied by Demaine et al. [2]) the usage cost is the eccentricity of i in the resulting graph of the game. In this paper we improve previously known bounds on the price of anarchy of the game (of both variants) for various ranges of α, and give new insights into the structure of equilibria for various values of α. The two main results of the paper show that for α > 249n all equilibria in SumGame are trees and thus the price of anarchy is constant, and that for α > 129 all equilibria in MaxGame are trees and the price of anarchy is constant. For SumGame this (almost) answers one of the arguably most prominent open problems in the field – is the price of anarchy of the network creation game constant for all values of α? – in an affirmative way, up to a tiny range of α.
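A minimal sketch of a player's cost in the two variants described above; the adjacency-dict encoding and helper names are ours.

```python
from collections import deque

def player_cost(i, bought_edges, adjacency, alpha, variant="sum"):
    """Network creation game cost of player i: alpha per edge bought by i plus
    either the sum of distances to all other vertices (SumGame) or the
    eccentricity of i (MaxGame) in the resulting graph."""
    dist = {i: 0}                             # BFS distances from i
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    others = [dist.get(v, float("inf")) for v in adjacency if v != i]
    usage = sum(others) if variant == "sum" else max(others)
    return alpha * len(bought_edges) + usage
```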
- Vijay Vazirani.
2-Player Nash and Nonsymmetric Bargaining Games: Algorithms and Structural Properties
Abstract:
The solution to a Nash or a nonsymmetric bargaining game is obtained by maximizing a concave function over a convex set, i.e., it is the solution to a convex program. We show that each 2-player game whose convex program has linear constraints, admits a rational solution and such a solution can be found in polynomial time using only an LP solver. If in addition, the game is succinct, i.e., the coefficients in its concave program are ``small'', then its solution can be found in strongly polynomial time. We also give a non-succinct linear game whose solution can be found in strongly polynomial time.
- Volodymyr Kuleshov and Adrian Vetta.
On the Efficiency of Markets with Two-Sided Proportional Allocation Mechanisms
Abstract:
We study the performance of resource allocation mechanisms in which both consumers and suppliers compete. Specifically, we examine a natural extension of Johari and Tsitsiklis' one-sided mechanisms for demand-competitive and supply-competitive markets. Assuming suppliers have convex marginal costs, we show that the mechanism exhibits a price of anarchy of about $0.5887$, regardless of the number of goods in the market. This welfare ratio is achieved when the demand-side is fully competitive and the supply-side is a duopoly. When marginal costs are not concave, the price of anarchy remains bounded and drops smoothly to zero as the marginal costs converge to a constant function. We complement these guarantees by identifying a broad class of mechanisms among which ours achieves the best possible price of anarchy. We present all our results in the context of sharing bandwidth on a network, and show how our mechanism can be used to coordinate the supply and consumption of network capacity.
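For context, in the standard one-sided proportional allocation mechanism of Johari and Tsitsiklis that the abstract extends, a consumer who submits a bid $w_i$ for a link of capacity $C$ receives

```latex
x_i = \frac{w_i}{\sum_j w_j}\, C \qquad \text{and pays } w_i,
```

so all consumers face the common per-unit price $\mu = \sum_j w_j / C$; the two-sided mechanism studied in the paper additionally lets suppliers compete.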
Abstract:
We consider the existence of Partition Equilibrium in Resource Selection Games. Super-strong equilibrium, where no subset of players has an incentive to change their strategies collectively, does not always exist in such games. We show, however, that partition equilibrium (introduced by Feldman and Tennenholtz to model coalitions arising in a social context) always exists in general resource selection games, and we show how to compute it efficiently. In a partition equilibrium, the set of players has a fixed partition into coalitions, and the only deviations considered are by coalitions that are sets in this partition. Our algorithm to compute a partition equilibrium in any resource selection game (i.e., load balancing game) settles the previously open question about existence of partition equilibrium in general resource selection games. Moreover, we show how to always find a partition equilibrium which is also a Nash equilibrium. This implies that in resource selection games, we do not need to sacrifice the stability of individual players when forming solutions stable against coalitional deviations. In addition, while super-strong equilibrium may not exist in resource selection games, we show that its existence can be decided efficiently, and how to find one if it exists.
- Martin Macko, Kate Larson and Ľuboš Steskal.
Braess's Paradox for Flows Over Time
Abstract:
We study the properties of Braess's paradox in the context of the model of congestion games with flow over time introduced by Koch and Skutella. We compare them to the well known properties of Braess's paradox for Wardrop's model of games with static flows. We show that there are networks which do not admit Braess's paradox in Wardrop's model, but which admit it in the model with flow over time. Moreover, there is a topology that admits a significantly more severe Braess's ratio for this model. Further, despite its symmetry for games with static flow, we show that Braess's paradox is not symmetric for flows over time. We illustrate that there are network topologies which exhibit Braess's paradox, but for which the transpose does not. Finally, we conjecture a necessary and sufficient condition on existence of Braess's paradox in a network, and prove the condition on existence of the paradox either in the network or in its transpose.
Abstract:
Recent results showing PPAD-completeness of the problem of computing an equilibrium for Fisher's market model under additively separable, piecewise-linear, concave utilities dealt a serious blow to the program of obtaining efficient algorithms for computing market equilibria and have prompted a search for alternative models that are realistic as well as amenable to efficient computation. In this paper we show that introducing perfect price discrimination into the model stated above renders its equilibrium polynomial time computable. Moreover, its set of equilibria is captured by a convex program that generalizes the classical Eisenberg-Gale program. Next, we introduce production into our model; our attempt here is to carve out as big a piece of the general production model as possible while still maintaining the attractive property that a single (rational) convex program captures its equilibria, i.e., the program optimizes individually for each buyer and each firm. We show an application of our price discrimination market model to the Internet advertisement marketplaces that have recently emerged.
Abstract:
The class of \emph{weakly acyclic games}, which includes potential games and dominance-solvable games, captures many practical application domains. Informally, a weakly acyclic game is one where natural distributed dynamics, such as better-response dynamics, cannot enter \emph{inescapable oscillations}. We establish a novel link between such games and the existence of pure Nash equilibria in subgames. Specifically, we show that the existence of a \emph{unique} pure Nash equilibrium in every \emph{subgame} implies the weak acyclicity of a game. In contrast, we show that the existence of (potentially) \emph{multiple} pure Nash equilibria in every subgame is \emph{insufficient} for weak acyclicity.
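The better-response dynamics mentioned above can be sketched as follows; the game encoding and the random scheduler are assumptions made for illustration.

```python
import random

def better_response_dynamics(profile, strategy_sets, utility, max_steps=10000, rng=random):
    """Repeatedly let some player switch to a strictly improving strategy.
    In a weakly acyclic game, some such improvement path ends at a pure Nash
    equilibrium; a random scheduler is one natural distributed choice."""
    for _ in range(max_steps):
        improving = []
        for i in range(len(profile)):
            for s in strategy_sets[i]:
                candidate = profile[:i] + [s] + profile[i + 1:]
                if utility(i, candidate) > utility(i, profile):
                    improving.append((i, s))
        if not improving:
            return profile                    # pure Nash equilibrium reached
        i, s = rng.choice(improving)          # one better response, chosen at random
        profile = profile[:i] + [s] + profile[i + 1:]
    return None                               # step budget exhausted
```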
Abstract:
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly $H_n$, the $n$-th harmonic number, for games with $n$ players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well while the best known lower bound is $42/23\approx 1.826$. For more restricted but interesting variants of such games such as broadcast and multicast games, sublogarithmic upper bounds are known while the best known lower bound is $12/7\approx 1.714$. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of $2$ by showing that the price of stability of undirected network design games is at least $348/155\approx 2.245$. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of $20/11\approx 1.818$ and $1.862$, respectively.
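For context, the exact $H_n$ price of stability for directed networks comes from the classical example of Anshelevich et al.; a brief sketch under fair (Shapley) cost sharing:

```latex
% Player i must connect s_i to t, either via a private edge of cost 1/i or via a
% shared edge of cost 1+\varepsilon reachable from every s_i at cost 0.
% Sharing is optimal (cost 1+\varepsilon), but player n prefers 1/n to
% (1+\varepsilon)/n and drops out, then player n-1 prefers 1/(n-1) to
% (1+\varepsilon)/(n-1), and so on; the unique Nash equilibrium costs
% \sum_{i=1}^{n} 1/i = H_n.
```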
Abstract:
We study the inefficiency of equilibrium outcomes in \emph{bottleneck congestion games}. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines with or without restricted assignments.
- Rajgopal Kannan and Costas Busch.
Bottleneck Congestion Games with Logarithmic Price of Anarchy
Abstract:
We study {\em bottleneck congestion games} where the social cost is determined by the worst congestion on any resource. In the literature, bottleneck games assume user utility costs determined by the worst congested resource in their strategy. However, the Nash equilibria of such games are inefficient since the price of anarchy can be very high and proportional to the number of resources. In order to obtain a smaller price of anarchy we introduce {\em exponential bottleneck games}, where the utility costs of the players are exponential functions of their congestions. In particular, the delay function for any resource $r$ is $M^{C_r}$, where $C_r$ denotes the number of players that use $r$, and $M$ is an integer constant. We find that exponential bottleneck games are very efficient and give the following bound on the price of anarchy: $O(\log |R|)$, where $R$ is the set of resources. This price of anarchy is tight, since we demonstrate a game with price of anarchy $\Omega(\log |R|)$. We obtain our tight bounds by using two novel proof techniques: {\em transformation}, which we use to convert arbitrary games to simpler games, and {\em expansion}, which we use to bound the price of anarchy in a simpler game.