

COMSOC2008 Program
Notes:
 This program starts with the tutorials on Sept 2nd.
 The business meeting and conference dinner will be at The
Athenaeum, Liverpool, on the Thursday evening (i.e. Sept 4th).
We will have a (fairly short) business meeting combined with a
reception, in the "News Room" of the Athenaeum (6:30 to 8:00), then
the conference dinner will begin at 8:00, in the Dining Room.
The News Room (which has tables, chairs, cash bar and coffee) will be available
for informal discussions afterwards, until about midnight.
Tuesday 2nd September 2008

9:00–10:00 MBC talk
(invited talk by Sarit Kraus)
10:00–10:30 Coffee
10:30–11:15 Tutorial:
Overview
of the COMSOC Research Area
This tutorial will give a brief introduction to Computational Social
Choice and highlight some of the current research trends in the field.
We will review some of the fundamental concepts from Social Choice
Theory, such as preference handling, aggregation problems, voting rules,
fair division procedures, and mechanism design, and point out recent
work that has applied tools from Computer Science to these problem
domains, such as complexity-theoretic studies, algorithms, and logical
modelling.

11:30–13:00 Tutorial:
Computational
Complexity for Social Choice Theorists
This tutorial introduces the field of Complexity Theory with a
particular focus on problems from Computational Social Choice. For
example, a famous result from Social Choice Theory (known as the
Gibbard–Satterthwaite Theorem) says that essentially every
nondictatorial voting system can be manipulated by strategic voting.
Computational complexity, however, can provide protection against
manipulation and other attempts of influencing the outcome of
elections, such as bribery, control, and lobbying. Since voting is a
useful mechanism for preference aggregation and collective
decision-making, these results have applications not only in the
context of political elections within human societies, but also for
various fields of computer science, including multiagent systems
(within distributed Artificial Intelligence), webpage ranking
algorithms for metasearch engines, and the design of recommender
systems. Other problems from Social Choice Theory and Mechanism
Design that have been analyzed complexity-theoretically are related to
multiagent resource allocation, power indices, weighted voting games,
etc., and we will encounter some of them while we wander through the
landscape of complexity classes.
An accessible survey of this area is:
A Richer Understanding of the Complexity of Election Systems.
P. Faliszewski, E. Hemaspaandra, L. Hemaspaandra, and J. Rothe.
In Fundamental Problems in Computing: Essays in Honor of Professor
Daniel J. Rosenkrantz, S. Ravi and S. Shukla, editors.
Springer-Verlag, to appear.
Available at arxiv.org/abs/cs.GT/0609112

13:00–14:30 Lunch
14:30–16:00 Tutorial:
Judgment Aggregation
for Logicians and Computer Scientists
A growing interdisciplinary literature discusses the problem of
judgment aggregation: How can a group of individuals make consistent
collective judgments (in the form of acceptance/rejection or true/false
assignments) on some propositions based on the group members'
individual judgments on these propositions? The recent interest in
this problem was sparked by the observation that majority voting,
perhaps the most common democratic decision method, fails to guarantee
consistent collective judgments whenever there are nontrivial logical
connections between propositions. For instance, if one individual
accepts “p”, “if p then q”, and “q”,
a second accepts “p”, but “not if p then
q”, and “not q”, and a third accepts “not p”,
but “if p then q” as well as
“not q”, then there are majorities for “p”,
for “if p then q”, and for “not
q”, a logically inconsistent set of propositions, despite the
consistency of individual judgments. The theory of judgment
aggregation asks whether, and to what extent, problems of this kind
generalize to other aggregation methods and other decision problems,
and offers an axiomatic analysis of possible solutions to them. The
aim of this tutorial is to give an accessible overview of some of the
key concepts, ideas and results of this theory.
Here is a nontechnical survey paper.
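The doctrinal-paradox example above can be checked mechanically. The following Python sketch (illustrative only, not part of the tutorial) encodes the three judgment sets and verifies that each individual is consistent while the propositionwise majority is not:

```python
# Illustration of the doctrinal paradox from the abstract: three
# individually consistent judgment sets on p, "if p then q", and q
# whose propositionwise majority is logically inconsistent.

from itertools import product

# True = accept the proposition.
judges = [
    {"p": True,  "p->q": True,  "q": True},   # individual 1
    {"p": True,  "p->q": False, "q": False},  # individual 2
    {"p": False, "p->q": True,  "q": False},  # individual 3
]

def consistent(j):
    """A judgment set is consistent iff some truth assignment to p, q
    realises it (accepting 'p->q' means the implication holds)."""
    for p, q in product([True, False], repeat=2):
        if j["p"] == p and j["q"] == q and j["p->q"] == ((not p) or q):
            return True
    return False

majority = {prop: sum(j[prop] for j in judges) >= 2
            for prop in ["p", "p->q", "q"]}

assert all(consistent(j) for j in judges)  # each individual is consistent
print(majority)             # {'p': True, 'p->q': True, 'q': False}
print(consistent(majority)) # False: the majority set is inconsistent
```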

16:00–16:30 Coffee
16:30–17:30 MBC talk
(invited talk by Amy Greenwald) 
Wednesday 3rd September 2008

8:55–9:00 Opening remarks (Paul Goldberg)
9:00–10:00 Invited Talk
(session chair: Paul Goldberg) 
Ranking,
Trust, and Recommendation Systems: An Axiomatic Approach
In the classical theory of social choice, a theory developed by
game theorists and theoretical economists, we consider a set of agents
(voters) and a set of alternatives. Each agent ranks the alternatives,
and the major aim is to find a good way to aggregate the individual
preferences into a social preference. The major tool offered in this
theory is the axiomatic approach: study properties (termed axioms)
that characterize particular aggregation rules, and analyze whether
particular desired properties can be simultaneously satisfied. In a
ranking system [1] the set of voters and the set of
alternatives coincide, e.g. they are both the pages in the web; in
this case the links among pages are interpreted as votes: pages that
page p links to are preferred by page p to pages it does not link
to; the problem of preference aggregation becomes the problem of page
ranking. Trust systems are personalized ranking systems [3]
where the ranking is done for (and from the
perspective of) each individual agent. Here the idea is to see how to
rank agents from the perspective of a particular agent/user, based on
the trust network generated by the votes. In a trust-based
recommendation system the agents also express opinions about external
topics, and a user who has not expressed an opinion should be
recommended one based on the opinions of others and the trust network
[6]. Hence, we get a sequence of very interesting settings,
extending upon classical social choice, where the axiomatic approach
can be used.
On the practical side, ranking, reputation, recommendation, and trust
systems have become essential ingredients of web-based multiagent
systems (e.g. [9,13,7,14,8]).
These systems aggregate agents' reviews of products and services, and
of each other, into valuable information. Notable commercial examples
include Amazon's and eBay's recommendation and reputation systems (e.g.
[12]), Google's page ranking system [11],
and the Epinions web of trust/reputation system
(e.g. [10]). Our work shows that an extremely
powerful way for the study and design of such systems is the axiomatic
approach, extending upon the classical theory of social choice. In
this talk we discuss some representative results of our work
[14,1,2,4,5,3,6].

10:00–10:40 Social Choice and Graphs
On the Complexity
of Rationalizing Behavior
We study the complexity of rationalizing choice behavior. We do so by
analyzing two polar cases, and a number of intermediate ones. In our
most structured case, that is where choice behavior is defined in
universal choice domains and satisfies the weak axiom of revealed
preference, finding the complete preorder rationalizing choice
behavior is a simple matter. In the polar case, where no restriction
whatsoever is imposed, either on choice behavior or on choice domain,
finding the complete preorders that rationalize behavior turns out to
be intractable. We show that the task of finding the rationalizing
complete preorders is equivalent to a graph problem. This allows the
search for existing algorithms in the graph theory literature, for the
rationalization of choice.

Computing Spanning Trees in a Social Choice Context
This paper combines social choice theory with discrete
optimization. We assume that individuals have preferences over edges
of a graph that need to be aggregated. The goal is to find the
socially best spanning tree. As ranking all spanning trees becomes
infeasible even for small numbers of vertices and/or edges of a graph,
our interest lies in finding algorithms that determine a socially
“best” spanning tree in a simple manner. This problem is closely
related to the minimum (or maximum) spanning tree problem. The main
result in this paper shows that for the various underlying set ranking
rules discussed in this paper, using a greedy algorithm to determine
the maximal spanning tree based on such a group ranking will always
provide the “best” spanning tree.
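The greedy idea described above can be illustrated with a small Python sketch. The Borda-style aggregation of individual edge rankings used here is only one possible set-ranking rule (the paper considers several), and the graph and rankings are invented for illustration:

```python
# Hypothetical sketch: aggregate individual rankings of edges into
# group scores (Borda-style), then run Kruskal's greedy algorithm on
# those scores to build a "socially best" spanning tree.

def greedy_spanning_tree(vertices, edge_rankings):
    """edge_rankings: one list of edges per individual, best edge first."""
    edges = edge_rankings[0]
    m = len(edges)
    # Borda-style group score: an edge gains (m - 1 - position) per voter.
    score = {e: sum(m - 1 - r.index(e) for r in edge_rankings) for e in edges}
    # Kruskal: take edges best-first, skipping any that would close a cycle.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    tree = []
    for e in sorted(edges, key=score.get, reverse=True):
        u, v = e
        ru, rv = find(u), find(v)
        if ru != rv:          # e connects two components: keep it
            parent[ru] = rv
            tree.append(e)
    return tree

vertices = ["x", "y", "z"]
rankings = [[("x", "y"), ("y", "z"), ("x", "z")],
            [("y", "z"), ("x", "y"), ("x", "z")]]
print(greedy_spanning_tree(vertices, rankings))  # [('x', 'y'), ('y', 'z')]
```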

10:40–11:10 Coffee
11:10–12:30 Ranking
(session chair: Peter McBurney) 
Approval-rating systems that never reward insincerity
Electronic communities are relying increasingly on the results of
continuous polls to rate the effectiveness of their members and
offerings. Sites such as eBay and Amazon solicit feedback about
merchants and products, with prior feedback and results-to-date
available to participants before they register their approval ratings.
In such a setting, participants are understandably prone to exaggerate
their approval or disapproval, so as to move the average rating in a
favored direction.
We explore several protocols that solicit approval ratings and report
a consensus outcome without rewarding insincerity. One such system is
rationally optimal while still reporting an outcome based on the
usual notion of averaging. That system allows all participants to
manipulate the outcome in turn. Although multiple equilibria exist
for that system, they all report the same average approval rating as
their outcome. We generalize our results to obtain a range of
declared-strategy voting systems suitable for approval-rating polls.

Aggregating Referee Scores: an Algebraic Approach
This paper presents a quantitative solution to the problem of
aggregating referee scores for documents submitted to peerreviewed
conferences or scientific journals. The proposed approach is a
particular application of the Dempster–Shafer theory to a very
restricted setting, from which an interesting algebraic framework
results. The paper investigates the algebraic properties of this
framework and shows how to apply it to the score aggregation and
document ranking problems.

Computing Kemeny Rankings, Parameterized by the Average KT-Distance
The computation of Kemeny scores is central to many applications in
the context of rank aggregation. Unfortunately, the problem is
NP-hard. Extending our previous work (AAIM-08), we show that the Kemeny
score of an election can be computed efficiently whenever the average
pairwise distance between two input votes is not too large. In other
words, Kemeny Score is fixed-parameter tractable with respect to the
parameter “average pairwise Kendall–Tau distance d_{a}”. We describe a
fixed-parameter algorithm with running time O^{*}(4^{d_a}).
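As a concrete illustration of the quantities involved (not the paper's fixed-parameter algorithm, which is far more refined), the following Python sketch computes pairwise Kendall–Tau distances, the brute-force Kemeny score, and the average pairwise distance d_a for a toy three-vote election:

```python
# Toy illustration of Kemeny scores and the Kendall-Tau distance
# parameter. Votes are linear orders given as tuples, best first.

from itertools import combinations, permutations

def kt_distance(r1, r2):
    """Number of candidate pairs on which two rankings disagree."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] < pos1[b]) != (pos2[a] < pos2[b]))

def kemeny_score(votes):
    """Minimum total KT-distance of any ranking to the votes
    (exhaustive search -- feasible only for few candidates)."""
    candidates = votes[0]
    return min(sum(kt_distance(r, v) for v in votes)
               for r in permutations(candidates))

votes = [("a", "b", "c"), ("a", "b", "c"), ("b", "c", "a")]
d_avg = (sum(kt_distance(u, v) for u, v in combinations(votes, 2))
         / (len(votes) * (len(votes) - 1) / 2))
print(kemeny_score(votes))  # 2, achieved by the ranking (a, b, c)
print(d_avg)                # average pairwise KT-distance: 4/3
```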

Preference Functions That Score Rankings and Maximum Likelihood Estimation
A social welfare function (SWF) takes a set of votes (linear orders
over a set of alternatives) as input, and produces one or more
rankings (also linear orders over the alternatives) as output. Such
functions have many applications, for example, aggregating the
preferences of multiple agents, or merging rankings (of, say,
webpages) into a single ranking. The key issue is choosing an SWF to
use. One natural and previously studied approach is to assume that
there is an unobserved “correct” ranking, and the votes are noisy
estimates of this. Then, we can use the SWF that always chooses the
maximum likelihood estimate (MLE) of the correct ranking. In this
paper, we define simple ranking scoring functions (SRSFs) and show
that the class of neutral SRSFs is exactly the class of neutral SWFs
that are MLEs for some noise model. We also define extended ranking
scoring functions (ERSFs) and show a condition under which these
coincide with SRSFs. We study key properties such as consistency and
continuity, and consider some example SWFs. In particular, we study
Single Transferable Vote (STV), a commonly used SWF, showing that it
is an ERSF but not an SRSF, thereby clarifying the extent to which it
is an MLE function. This also gives a new perspective on how ties
should be broken under STV. We leave some open questions.

12:30–14:00 Lunch break
14:00–15:00 Invited Talk:
(session chair: Jérôme Lang) 
Lorenz rankings
of rules for the adjudication of conflicting claims
For the problem of adjudicating conflicting claims (O'Neill, 1982;
for a survey, see Thomson, 2003), we offer simple criteria to
compare rules on the basis of the Lorenz order. They exploit
several recently developed techniques to structure the space of
rules.
The first results concern the family of “ICI rules” (Thomson,
2008, forthcoming). This family contains the constrained equal awards (Maimonides,
12th Century), constrained equal losses (Maimonides, 12th Century), Talmud (Aumann
and Maschler, 1985), and minimal overlap (Ibn Ezra, 12th Century;
O'Neill, 1982) rules. We obtain a condition relating the
parameters associated with two rules in the family guaranteeing
that one Lorenz dominates the other. We prove parallel results for
a second family (CIC family, Thomson, 2008, forthcoming), which is
obtained from the first one by exchanging, for each problem, how
well agents with relatively larger claims are treated as compared
to agents with relatively smaller claims. This second family also
contains the constrained equal awards and constrained equal losses rules.
The next results concern the family of “consistent” rules (Young,
1987). A rule is consistent if the recommendation it makes for
each problem is never contradicted by the recommendation it makes
for each reduced problem obtained by imagining some claimants
leaving the scene with their awards and reassessing the situation
at that point. The main result here is the identification of
circumstances under which the Lorenz order is “lifted by
consistency” from the two-claimant case to arbitrarily many
claimants. (The concept of lifting is adapted from one developed
for properties of rules by Hokari and Thomson, 2008, forthcoming.)
This means that if two rules are consistent and one of them Lorenz
dominates the other in the two-claimant case, this domination
extends to arbitrarily many claimants.
Finally, we exploit the notion of an operator on the space of
rules (Thomson and Yeh, 2008, forthcoming). An operator is a
mapping that associates with each rule another one. The operators
we consider are the duality operator, the claims truncation
operator, and the attribution of minimal right operator. We also
consider the operator that associates with each rule and each list
of nonnegative weights for them adding up to one, their weighted
average. An operator “preserves an order” if whenever a rule
Lorenz dominates another one, this domination extends to the two
rules obtained by subjecting them to the operator. We identify
circumstances under which certain operators preserve the Lorenz
order (or reverse it), and circumstances under which a rule can
be Lorenz compared to the rule obtained by subjecting it to the
operator.

15:00–15:40 Belief merging
Equilibria in Social Belief Removal
In studies of multiagent interaction, especially in game theory, the
notion of equilibrium often plays a prominent role. A typical
scenario for the belief merging problem is one in which several
agents pool their beliefs together to form a consistent “group”
picture of the world. The aim of this paper is to define and study new
notions of equilibria in belief merging. To do so, we assume the
agents arrive at consistency via the use of a social belief
removal function, in which each agent, using his own individual
removal function, removes some belief from his stock of beliefs. We
examine several notions of equilibria in this setting, assuming a
general framework for individual belief removal due to Booth et al. We
look at their interrelations as well as prove their existence or
otherwise. We also show how our equilibria can be seen as a
generalisation of the idea of taking maximal consistent subsets of
agents.

Confluence Operators: Negotiation as Pointwise Merging
Sébastien Konieczny and Ramón Pino Pérez
In the logic based framework of knowledge representation and reasoning
many operators have been defined in order to capture different kinds
of change: revision, update, merging and many others. There are close
links between revision, update, and merging. Merging operators can be
considered as extensions of revision operators to multiple belief
bases. And update operators can be considered as pointwise revision,
looking at each model of the base, instead of taking the base as a
whole. Thus, a natural question is the following one: Are there
natural operators that are pointwise merging, just as update is
pointwise revision? The goal of this work is to give a positive
answer to this question. In order to do that, we introduce a new
class of operators: the confluence operators. These new operators can
be useful for modelling negotiation processes.

15:40–16:10 Coffee
16:10–17:30 Computational obstacles to election manipulation
(session chair: Edith Elkind) 
Llull and Copeland Voting Computationally Resist Bribery and Control
Piotr Faliszewski,
Edith Hemaspaandra,
Lane A. Hemaspaandra and
Jörg Rothe
Control and bribery are settings in which an external agent seeks to
influence the outcome of an election. Constructive control of
elections refers to attempts by an agent to, via such actions as
addition/deletion/partition of candidates or voters, ensure that a
given candidate wins. Destructive control refers to attempts by an
agent to, via the same actions, preclude a given candidate's victory.
An election system in which an agent can affect the result and in
which recognizing the inputs on which the agent can succeed is NP-hard
(respectively, polynomial-time solvable) is said to be resistant
(respectively, vulnerable) to the given type of control. Aside from
election systems with an NP-hard winner problem, the only systems
previously known to be resistant to all the standard control types are
highly artificial election systems created by hybridization.
We study a parameterized version of Copeland voting, denoted by
Copeland^{α}, where the parameter α is a rational number
between 0 and 1 that specifies how ties are valued in the pairwise
comparisons of candidates. In every previously studied constructive
or destructive control scenario, we determine which of resistance or
vulnerability holds for Copeland^{α} for each rational α, 0
≤ α ≤ 1. In particular, we prove that Copeland^{0.5}, the
system commonly referred to as “Copeland voting”, provides full
resistance to constructive control. Among the systems with a
polynomial-time winner problem, this is the first natural election
system proven to have full resistance to constructive control. In
addition, we prove that both Copeland^{0} and Copeland^{1} (interestingly,
the latter is an election system developed by the thirteenth-century
mystic Ramon Llull) are resistant to all the standard types of
constructive control other than one variant of addition of
candidates. Moreover, we show that for each rational α, 0 ≤
α ≤ 1, Copeland^{α} voting is fully resistant to bribery
attacks, and we establish fixed-parameter tractability of bounded-case
control for Copeland^{α}.
We also study Copeland^{α}
elections under more flexible models such as microbribery and extended
control, we integrate the potential irrationality of voter preferences
into many of our results, and we prove our results in both the
unique-winner and the nonunique-winner model. Our vulnerability
results for microbribery are proven via a technique involving min-cost
network flow.
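To make the role of the tie parameter α concrete, here is a small illustrative Python sketch of Copeland^α scoring; the four candidates and the profile of votes are invented for the example:

```python
# Illustrative Copeland^alpha scoring as described in the abstract:
# each candidate earns 1 point per pairwise victory and alpha points
# per pairwise tie. Candidates and votes here are made up.

from fractions import Fraction
from itertools import combinations

def copeland_scores(votes, alpha):
    """votes: list of rankings (tuples, best first). Returns a score
    per candidate; alpha is the value of a pairwise tie."""
    candidates = votes[0]
    score = {c: Fraction(0) for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for v in votes if v.index(a) < v.index(b))
        b_over_a = len(votes) - a_over_b
        if a_over_b > b_over_a:
            score[a] += 1
        elif b_over_a > a_over_b:
            score[b] += 1
        else:                 # pairwise tie: both candidates get alpha
            score[a] += alpha
            score[b] += alpha
    return score

votes = [("a", "b", "c", "d"), ("b", "a", "d", "c"),
         ("c", "a", "b", "d"), ("a", "d", "b", "c")]
# With alpha = 1/2 (ordinary Copeland voting), c and d tie pairwise
# and each receives 1/2 for that comparison.
print(copeland_scores(votes, Fraction(1, 2)))
```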

Sincere-Strategy Preference-Based Approval Voting Fully Resists Constructive
Control and Broadly Resists Destructive Control
We study sincere-strategy preference-based approval voting (SP-AV), a
system proposed by Brams and Sanver [Electoral Studies, 25(2):287–305,
2006], with respect to procedural control. In such control scenarios,
an external agent seeks to change the outcome of an election via
actions such as adding/deleting/partitioning either candidates or
voters. SP-AV combines the voters' preference rankings with their
approvals of candidates, and we adapt it here so as to keep its useful
features with respect to approval strategies even in the presence of
control actions. We prove that this system is computationally
resistant (i.e., the corresponding control problems are NP-hard) to 19
out of 22 types of constructive and destructive control. Thus, SP-AV
has more resistances to control, by three, than is currently known for
any other natural voting system with a polynomialtime winner
problem. In particular, SP-AV is (after Copeland voting, see
Faliszewski et al. [AAIM 2008, Springer LNCS 5034, pp. 165–176, 2008])
the second natural voting system with an easy winnerdetermination
procedure that is known to have full resistance to constructive
control, and unlike Copeland voting it in addition displays broad
resistance to destructive control.

Approximability of Manipulating Elections
In this paper, we set up a framework to study approximation of
manipulation, control, and bribery in elections. We show existence of
approximation algorithms (even fully polynomial time approximation
schemes) as well as obtain inapproximability results. In particular,
we show that a large subclass of scoring protocols admits
fully polynomial-time approximation schemes for the coalitional
weighted manipulation problem and that if certain families of scoring
protocols (e.g., veto) admitted such approximation schemes then P =
NP. We also show that bribery for Borda count is NP-complete and that
there is no approximation algorithm that achieves even a polynomial
approximation ratio for bribery in Borda count for the case where
voters have prices.

Complexity of Unweighted Coalitional Manipulation Under Some Common Voting Rules
In this paper, we study the computational complexity of the unweighted
coalitional manipulation (UCM) problem under some common voting
rules. We show that the UCM problem under maximin is NP-complete. We
also show that the UCM problem under ranked pairs is NP-complete, even
if there is only one manipulator. Finally, we present a
polynomialtime algorithm for the UCM problem under Bucklin.

Thursday 4th September 2008

9:00–10:00 Invited Talk:
(session chair: Vincent Conitzer) 
Expressiveness
in mechanisms and its relation to efficiency: Our experience from $40 billion
of combinatorial multiattribute auctions and recent theory
A recent trend (especially in electronic commerce) is
higher levels of expressiveness in the mechanisms that mediate interactions such
as auctions, exchanges, catalog offers, voting systems, matching of peers, and so on.
Participants can express their preferences in drastically greater detail than ever before.
In many cases this trend is fueled by modern algorithms for winner determination that
can handle the richer inputs.
But is more expressiveness always a good thing? What forms of expressiveness should be offered?
In this talk I will first report on our experience from over $40 billion of
combinatorial multiattribute sourcing auctions. Then, I will present recent
theory that ties the expressiveness of a mechanism to an upper bound on efficiency
in a domainindependent way in privateinformation settings. Time permitting,
I will also discuss theory and experiments on applying expressiveness to ad auctions,
such as sponsored search and real-time banner ad auctions with temporal span and
complex preferences.

10:00–10:40 Mechanism design
A Qualitative Vickrey Auction
The negative conclusions of the Gibbard–Satterthwaite theorem—that
only dictatorial social choice functions are nonmanipulable—can be
overcome by restricting the class of admissible preference profiles. A
common approach is to assume that the preferences of the agents can be
represented by quasilinear utility functions. This restriction
allows for the positive results of the Vickrey auction and the
VickreyClarkeGroves mechanism. Quasilinear preferences, however,
involve the controversial assumption that there is some commonly
desired commodity or numeraire—money, shells, beads, etcetera—the
utility of which is commensurable with the utility of the other
alternatives in question. We propose a generalization of the Vickrey
auction, which does not assume the agents' preferences being
quasilinear but still has some of its desirable properties. In this
auction a bid can be any alternative, rather than just a monetary
bid. Such an auction is also applicable to situations where no
numeraire is available, when there is a fixed budget, or when money is
no issue. In the presence of quasilinear preferences, however, the
traditional Vickrey auction turns out to be a special case. In order to
sidestep the Gibbard–Satterthwaite theorem, we restrict the
preferences of the bidders. We show that this qualitative Vickrey
auction always has a dominant strategy equilibrium, which moreover
invariably yields a weakly Pareto efficient outcome.

The Cost and Windfall of Manipulability
A mechanism is manipulable if it is in agents’ best interest to
misrepresent their private information (lie) to the center. We provide
the first formal treatment of the windfall of manipulability, the
seemingly paradoxical quality by which the failure of any agent to
play their best manipulation yields a strictly better result than an
optimal truthful mechanism. We dub such mechanisms manipulation
optimal. We prove that any manipulationoptimal mechanism can have at
most one manipulable type per agent. We show the existence of
manipulationoptimal multiagent mechanisms with the goal of social
welfare maximization, but not in dominant strategies when agents are
anonymous and the mechanism is symmetric, the most common setting. For
this setting, we show the existence of manipulationoptimal mechanisms
when the goal is affine welfare maximization.

10:40–11:10 Coffee
11:10–12:30 Tournaments
(session chair: Sébastien Konieczny) 
On the Agenda Control Problem for Knockout Tournaments
Thuc Vu, Alon Altman and Yoav Shoham
Knockout tournaments are very common in practice for various settings
such as sport events and sequential pairwise elimination elections. In
this paper, we investigate the computational aspect of tournament
agenda control, i.e., finding the agenda that maximizes the chances of
a target player winning the tournament. We consider several models
of the problem based on different constraints that can be placed on
the structure of the tournament or the model of the players. In each
setting, we analyze the complexity of finding the solution and show
how it varies depending on the model. In general,
constraints on the tournament structure make the problem
harder.
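For intuition, a brute-force version of the agenda-control question for a four-player balanced knockout bracket can be sketched in a few lines of Python; the win-probability matrix below is invented for the example:

```python
# Toy agenda control: search all seedings of a 4-player balanced
# knockout bracket for one maximizing target "t"'s win probability.

from itertools import permutations

# p[a][b]: invented probability that player a beats player b.
p = {
    "t": {"t": 0, "a": 0.9, "b": 0.4, "c": 0.4},
    "a": {"t": 0.1, "a": 0, "b": 0.8, "c": 0.8},
    "b": {"t": 0.6, "a": 0.2, "b": 0, "c": 0.5},
    "c": {"t": 0.6, "a": 0.2, "b": 0.5, "c": 0},
}

def win_prob(target, seeding):
    """Probability that `target` wins the bracket ((s1 vs s2) final
    against (s3 vs s4)) by summing over the possible finalists."""
    s1, s2, s3, s4 = seeding
    prob = 0.0
    for f1 in (s1, s2):            # winner of semifinal 1
        pr1 = p[f1][s2 if f1 == s1 else s1]
        for f2 in (s3, s4):        # winner of semifinal 2
            pr2 = p[f2][s4 if f2 == s3 else s3]
            if f1 == target:
                prob += pr1 * pr2 * p[f1][f2]
            elif f2 == target:
                prob += pr1 * pr2 * p[f2][f1]
    return prob

best = max(permutations("tabc"), key=lambda s: win_prob("t", s))
# Pairing t with a (whom t almost surely beats) is t's best agenda here.
print(best, win_prob("t", best))
```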

How to Rig Elections and Competitions
We investigate the extent to which it is possible to rig the agenda of
an election or competition so as to favor a particular candidate in
the presence of imperfect information about the preferences of the
electorate. We assume that what is known about an electorate is the
probability that any given candidate will beat another. As well as
presenting some analytical results relating to the complexity of
finding and verifying agendas, we develop heuristics for agenda
rigging, and investigate the performance of these heuristics for both
randomly generated data and realworld data from tennis and basketball
competitions.

On Voting Caterpillars: Approximating Maximum Degree in a Tournament by Binary Trees
Voting trees describe an iterative procedure for selecting a single
vertex from a tournament. It has long been known that there is no
voting tree that always singles out a vertex with maximum degree. In
this paper, we study the power of voting trees in approximating the
maximum degree. We give upper and lower bounds on the worst-case ratio
between the degree of the vertex chosen by a tree and the maximum
degree, both for the deterministic model concerned with a single fixed
tree, and for randomizations over arbitrary sets of trees. Our main
positive result is a randomization over surjective trees of polynomial
size that provides an approximation ratio of at least 1/2. The proof
is based on a connection between a randomization over caterpillar
trees and a rapidly mixing Markov chain.

A Computational Analysis of the Tournament Equilibrium Set
A recurring theme in the mathematical social sciences is how to select
the "most desirable" elements given a binary dominance relation on a
set of alternatives. Schwartz's tournament equilibrium set (TEQ) ranks
among the most intriguing, but also among the most enigmatic,
tournament solutions proposed so far in this context. Due to its
unwieldy recursive definition, little is known about TEQ. In
particular, its monotonicity remains an open problem to date. Yet, if
TEQ were to satisfy monotonicity, it would be a very attractive
solution concept refining both the Banks set and Dutta's minimal
covering set. We show that the problem of deciding whether a given
alternative is contained in TEQ is NP-hard. Furthermore, we propose a
heuristic that significantly outperforms the naive algorithm for
computing TEQ. Early experimental results support the conjecture that
TEQ is indeed monotonic.

12:30–14:00 Lunch break
14:00–15:00 Invited Talk
(session chair: Christian List) 
Individual and group
strategy proofness of voting rules. The case for restricted domains.
All nontrivial voting rules are manipulable at some preference
profiles when defined on a universal domain. This manipulation is
possible even by single individuals, and “a fortiori”, by coalitions
of voters. This negative result may be reversed if the functions are
defined in appropriately restricted domains of preferences, and only
preferences in that same domain can be used to manipulate.
In the first part of the talk I will exemplify the type of results
regarding nonmanipulability by individuals (or strategyproofness)
that one can obtain under domain restrictions. I will pay special
attention to the domain of separable preferences on grids. I will
present a characterization of all strategy proof rules in that
context, consider the complications that arise in the presence of
feasibility constraints and discuss the relevance of that paradigmatic
case. I will also briefly touch upon the case of voting rules whose
outcomes are lotteries.
In the second part of the talk I will consider the added difficulties
that arise if one wants to guarantee that a rule is nonmanipulable by
coalitions of voters, in addition to being individually
strategyproof. I will also consider intermediate cases, where only
“large enough” groups may manipulate. I will also show that, in spite
of these difficulties, some domains of preferences have the nice
property that all strategy proof rules that one can define on them are
also necessarily group strategy-proof (or at least strategy proof with
respect to coalitions that are not “too large”). I will provide a
characterization of those nice domains of preferences for which both
types of requirements become equivalent.

15:00–15:40 Manipulability
Nondictatorial Social Choice Rules are Safely Manipulable
Arkadii Slinko and Shaun White
We extend the Gibbard–Satterthwaite theorem in the following way. We
prove that an onto, nondictatorial social choice rule which is
employed to choose one of at least three alternatives is safely
manipulable by a single voter. This means that on occasion a voter
will have an incentive to make a strategic vote and know that he will
not be worse off regardless of how other voters with similar
preferences would vote, sincerely or not.

Computing the Degree of Manipulability in the Case of Multiple Choice
Fuad Aleskerov, Daniel Karabekyan, Remzi Sanver and Vyacheslav Yakuba
The problem of the manipulability of known social choice rules in the
case of multiple choice is considered. Several concepts of expanded
preferences (preferences over sets of alternatives) are
elaborated. As a result of this analysis, ordinal and non-ordinal
methods of expanding preferences are defined. The notions of the
degree of manipulability are extended to the case under study. Using
the results of this theoretical investigation, 22 known social choice
rules are studied via computational experiments to reveal their degree
of manipulability.

15:40–16:10 Coffee 
16:10–17:30 Judgment aggregation 
(session chair: Ulle Endriss) 
Majority voting on restricted domains: a summary
In judgment aggregation, unlike preference aggregation, not much is
known about domain restrictions that guarantee consistent majority
outcomes. We introduce several conditions on individual judgments
sufficient for consistent majority judgments. Some are based on global
orders of propositions or individuals, others on local orders, still
others not on orders at all. Some generalize classic
social-choice-theoretic domain conditions, others have no counterpart.
Our most general condition generalizes Sen's triplewise value-restriction,
itself the most general classic condition. We also give a new
characterization theorem: for a large class of domains, if there
exists any aggregation function satisfying some democratic conditions,
then majority voting is the unique such function. Taken together, our
results provide new support for the robustness of majority rule.
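
The inconsistency that such domain restrictions are designed to rule
out is easy to reproduce. Below is a minimal sketch, with an invented
encoding of the agenda {p, q, p∧q} as Boolean triples, of
propositionwise majority voting on the classic discursive-dilemma
profile:

```python
# Minimal sketch of propositionwise majority voting on judgment sets.
# The agenda and its encoding as (p, q, p_and_q) triples are invented
# for illustration; consistency here just means p_and_q == (p and q).

def majority_outcome(profile):
    """Propositionwise majority judgment for an odd number of judges."""
    n = len(profile)
    return tuple(sum(judgment[i] for judgment in profile) * 2 > n
                 for i in range(3))

def consistent(judgment):
    p, q, p_and_q = judgment
    return p_and_q == (p and q)

# The classic discursive dilemma: three individually consistent judges...
profile = [(True, True, True),     # judge 1: p, q, hence p AND q
           (True, False, False),   # judge 2: p but not q
           (False, True, False)]   # judge 3: q but not p
assert all(consistent(j) for j in profile)

# ...yet the propositionwise majority outcome is inconsistent:
outcome = majority_outcome(profile)
print(outcome, consistent(outcome))   # (True, True, False) False
```

The conditions surveyed in the abstract identify profiles on which
this failure provably cannot occur.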

A geometric approach to judgment aggregation
Don Saari developed a geometric approach to the analysis of voting
paradoxes such as the Condorcet paradox or Arrow's general possibility
theorem. In this paper we want to extend this approach to judgment
aggregation. In particular we will use Saari's representation cubes to
provide a geometric representation of profiles and majority rule
outcomes. We will then show how profile decompositions can be used to
derive restrictions on the set of profiles that guarantee consistent
majority outcomes. Finally, the geometric approach is used to
determine the likelihood of inconsistencies.

From Preferences to Judgments and Back
Davide Grossi
The paper studies the interrelationships of the Preference Aggregation
and Judgment Aggregation problems from the point of view of logical
semantics. The contribution of the paper is twofold. On the one hand, the
Preference Aggregation problem is viewed as a special case of the
Judgment Aggregation one. On the other hand, the Judgment Aggregation
problem is viewed as a special case of the Preference Aggregation
one. As an example, it is shown how to import an impossibility result
from each framework to the other. This shows how the aggregation of
preferences and judgments can be viewed as two faces of the same coin.

Judgment Aggregation as Maximization of Epistemic and Social Utility
Szymon Klarman
In the paper we restate the discursive dilemma and relate it to
another well-known problem regarding the rational acceptance of
logically connected propositions — the lottery paradox. We recall
how the latter has been tackled by I. Levi in the decision-theoretic
framework and consider the possibility of applying a similar approach
to the judgment aggregation problem. Finally, we propose a method of
aggregation built on the concepts of epistemic and social utility of
accepting a collective judgment, which accounts for such parameters
— arguably of importance for aggregation tasks — as the factual
truth of the propositions, reliability of agents, information content
(completeness) of possible collective judgments and the level of
agreement between the agents. We claim that the expected utility of
accepting a collective judgment depends on the degree to which all
those objectives are satisfied and that groups of rational agents aim
at maximizing it when solving a judgment aggregation problem.

Friday 5th September 2008

9:00–10:00 Invited Talk: 
(session chair: Mike Wooldridge) 
Talk, Cheap Talk, and
States of Knowledge
Applications of Epistemic Logic have by now become a major industry,
and an area once dominated by philosophers has attracted large
followings among both AI researchers and economists. We will discuss
some of our own work in this area,
including applications to various social issues, like consensus, common
knowledge, elections, the sorts of things which candidates running for
office are apt to say, and why.

10:00–10:40 Logic and argumentation 
A Deontic Logic for Socially Optimal Norms
The paper discusses the interaction between the preferences and
choices of coalitions in a strategic interaction. A language is
presented to talk about the conflict between coalitionally optimal and
socially optimal choices. Norms are seen as social constructions that
make it possible to enforce socially desirable outcomes.

Welfare properties of argumentationbased semantics
Since its introduction in the mid-nineties, Dung's theory of abstract
argumentation frameworks has been influential in artificial
intelligence. Dung viewed arguments as abstract entities with a binary
defeat relation among them. This enabled extensive analysis of
different (semantic) argument acceptance criteria. However, little
attention has been given to comparing such criteria in relation to the
preferences of self-interested agents who may have conflicting
preferences over the final status of arguments. In this paper, we
define a number of agent preference relations over argumentation
outcomes. We then analyse different argument evaluation rules, taking
into account the preferences of individual agents. Our framework and
results help the mediator (e.g. a judge) decide which argument
evaluation rule (i.e. semantics) to use given the type of agent
population involved.

10:40–11:10 Coffee 
11:10–12:30 Computing social choice functions 
(session chair: Ariel Procaccia) 
Dodgson's Rule: Approximations and Absurdity
With the Dodgson rule, cloning the electorate can change the winner,
which Young (1977) considers an “absurdity”. We show that removing
this absurdity results in a new rule (DC) for which we can compute the
winner in polynomial time, unlike the traditional Dodgson rule. We
introduce two related rules (DR and D&). Dodgson did not explicitly
propose the “Dodgson rule” (Tideman, 1987); we argue that DC and DR
are better realizations of the principle behind the Dodgson rule than
the traditional rule itself. These rules, especially D&, are also
effective approximations to the traditional Dodgson rule. We show
that, unlike the rules we have considered previously, the DC, DR and
D& scores differ from the Dodgson score by no more than a fixed
amount, and thus these new rules converge to Dodgson under any
reasonable assumption on voter behaviour.

Informational requirements of social choice rules
Shin Sato
The amount of information needed to make a social choice determines
the cost of information processing, a practically important feature
of social choice rules. We introduce informational aspects into the
analysis of social choice rules and prove that (i) if an anonymous,
neutral, and monotonic social choice rule operates on minimal
informational requirements, then it is a supercorrespondence of either
the plurality rule or the anti-plurality rule, and (ii) if the social
choice rule is furthermore Pareto efficient, then it is a
supercorrespondence of the plurality rule.

Compiling the votes of a sub-electorate
In many practical situations where agents vote to make a common
decision, the votes do not all come in at the same time (for
instance, when voting on a date for a meeting, it often happens
that one or two participants express their preferences later than
others). In such situations, we might want to preprocess the
information given by the sub-electorate (consisting of those voters
who have expressed their votes) so as to “compile” the known votes
for the time when the latecomers will have expressed theirs. We study
the amount of space necessary for such a compilation, as a function of
the voting rule used, the number of candidates, the number of voters
who have already expressed their votes, and the number of remaining
voters. We position our results with respect to existing work,
notably on vote elicitation and communication complexity.
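
For one concrete rule the idea is easy to illustrate: under
plurality, the sub-electorate's ballots can be compiled into a score
vector, after which the individual ballots can be discarded. A
minimal Python sketch (the candidate names and the alphabetical
tie-breaking are invented for illustration):

```python
# Hypothetical sketch: compiling a sub-electorate's plurality ballots
# into a per-candidate score vector, which is all that is needed to
# finish the election once the latecomers vote.

from collections import Counter

def compile_plurality(known_votes):
    """Compress the known ballots into per-candidate counts; the
    ballots themselves can then be discarded."""
    return Counter(known_votes)

def winner(compiled, late_votes):
    """Combine the compiled scores with the latecomers' ballots and
    pick the winner (ties broken alphabetically)."""
    scores = compiled + Counter(late_votes)
    return min(scores, key=lambda c: (-scores[c], c))

compiled = compile_plurality(["a", "b", "a", "c"])  # keeps only 3 counts
print(winner(compiled, ["b", "b"]))                 # prints "b"
```

For other rules (e.g. those needing pairwise majority margins) the
compiled object is larger, which is what the space analysis in the
abstract quantifies.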

Alternatives to Truthfulness are Hard to Recognize
The central question in mechanism design is how to implement a given
social choice function. One of the most studied concepts is that of
truthful implementations in which truth-telling is always the
best response of the players. The Revelation Principle says that one
can focus on truthful implementations without loss of generality (if
there is no truthful implementation then there is no implementation at
all). Green and Laffont (1986) showed that, in the scenario in which
players' responses can be partially verified, the Revelation
Principle holds only in some particular cases.
When the Revelation Principle does not hold, non-truthful
implementations become interesting since they might be the only way to
implement a social choice function of interest. In this work we show
that, although non-truthful implementations may exist, they are hard
to find. Namely, it is NP-hard to decide if a given social choice
function can be implemented in a non-truthful manner, or even if it
can be implemented at all. This is in contrast to the fact that
truthful implementability can be recognized efficiently, even when
partial verification of the agents is allowed. Our results also show
that there is no “simple” characterization of those social choice
functions for which it is worth looking for non-truthful
implementations.

12:30–14:00 Lunch break 
14:00–15:20 Coalitional voting games 
(session chair: Arkadii Slinko) 
Complexity of comparison of influence of players in simple games
Coalitional voting games appear in different forms in multiagent
systems, social choice and threshold logic. In this paper, the
complexity of comparing the influence of players in coalitional
voting games is characterized. The representations of simple games
considered are simple games represented by winning coalitions, by
minimal winning coalitions, by a weighted voting game, or by a
multiple weighted voting game. The influence of players is gauged from
the viewpoint of basic player types, desirability relations and
classical power indices such as the Shapley–Shubik index, Banzhaf
index, Holler index, Deegan–Packel index and Chow parameters. Among
other results, it is shown that for a simple game represented by
minimal winning coalitions, although it is easy to verify whether a
player has zero or one voting power, computing the Banzhaf value of
the player is #P-complete. Moreover, it is proved that multiple
weighted voting games are the only representations for which it is
NP-hard to verify whether the game is linear or not. For a simple game
with a set W^m of minimal winning coalitions and n players, an
O(n·|W^m| + n^2·log(n)) algorithm is presented which returns “no” if
the game is non-linear and returns the strict desirability ordering
otherwise. The complexity of transforming simple games into compact
representations is also examined.
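
To make the quantities concrete, here is a hedged sketch of the raw
Banzhaf count for a weighted voting game, computed by brute-force
enumeration of coalitions (exponential in the number of players, in
line with the hardness results above); the weights and quota below
are invented:

```python
# Hypothetical sketch: the raw Banzhaf count of a player in a weighted
# voting game [quota; w_0, ..., w_{n-1}], computed by enumerating all
# coalitions of the other players. Exponential-time; illustrative only.

from itertools import combinations

def banzhaf(weights, quota, player):
    """Count coalitions in which `player` is a swing (its joining turns
    a losing coalition into a winning one)."""
    others = [i for i in range(len(weights)) if i != player]
    swings = 0
    for r in range(len(others) + 1):
        for coalition in combinations(others, r):
            w = sum(weights[i] for i in coalition)
            if w < quota <= w + weights[player]:
                swings += 1
    return swings

# Game [4; 3, 2, 1]: player 0 swings far more often than the others.
print([banzhaf([3, 2, 1], 4, p) for p in range(3)])   # [3, 1, 1]
```

Note that players 1 and 2 have equal counts despite unequal weights,
illustrating why influence is compared via indices rather than raw
weights.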

Coalition Structures in Weighted Voting Games
Georgios Chalkiadakis, Edith Elkind and Nick Jennings
Weighted voting games are a popular model of collaboration in
multiagent systems. In such games, each agent has a weight
(intuitively corresponding to resources he can contribute), and a
coalition of agents wins if its total weight meets or exceeds a given
threshold. Even though coalitional stability in such games is
important, existing research has only considered the stability of the
grand coalition. In this paper, we introduce a model
for weighted voting games with coalition structures. This is a natural
extension in the context of multiagent systems, as several groups of
agents may be simultaneously at work, each serving a different task.
We then proceed to study stability in this context. First, we define
the CS-core, a notion of the core for such settings, discuss its
non-emptiness, and relate it to the traditional notion of the core in
weighted voting games. We then investigate its computational
properties. We show that, in contrast with the traditional setting,
it is computationally hard to decide whether a game has a non-empty
CS-core, or whether a given outcome is in the CS-core. However, we
then provide an efficient algorithm that verifies whether an outcome
is in the CS-core if all weights are small (polynomially bounded).
Finally, we also suggest heuristic algorithms for checking the
non-emptiness of the CS-core.

Divide and Conquer: False-Name Manipulations in Weighted Voting Games
Weighted voting is a well-known model of cooperation among agents in
decision-making domains. In such games, each of the players has a
weight, and a coalition of players wins the game if its total weight
meets or exceeds a given quota. A player's ability to
influence the outcome of the game is related to its weight, but not
always directly proportional to it. Usually, the agents' power is
measured using a power index, such as the Shapley–Shubik
index. In this paper, we study how an agent can manipulate its
voting power (as measured by the Shapley–Shubik power index) by
distributing its weight among several false identities. We call this
behavior a false-name manipulation. We show that such
manipulations can indeed increase an agent's power and provide upper
and lower bounds on the effects of such manipulations. We then study
this issue from the computational perspective, and show that checking
whether a beneficial split exists is NP-hard. We also discuss
efficient algorithms for restricted cases of this problem, as well as
randomized algorithms for the general case.
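
As a toy illustration of such a split (not the paper's construction),
the Shapley–Shubik index can be computed by enumerating orderings; in
the invented game below, splitting one player's weight across two
false identities raises their combined index:

```python
# Hypothetical sketch: the Shapley-Shubik index by brute-force
# enumeration of all orderings, plus a toy false-name split. Weights
# and quota are invented; whether a split helps depends on the game.

from fractions import Fraction
from itertools import permutations

def shapley_shubik(weights, quota):
    """For each player, the fraction of orderings in which it is the
    pivot (its arrival first pushes the running total to the quota)."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for i in order:
            total += weights[i]
            if total >= quota:
                pivots[i] += 1
                break
    return [Fraction(p, sum(pivots)) for p in pivots]

# Game [4; 2, 2]: two symmetric players, index 1/2 each.
print(shapley_shubik([2, 2], 4))     # [Fraction(1, 2), Fraction(1, 2)]
# Player 0 splits its weight into two false identities of weight 1:
# each identity gets 1/3, so 2/3 combined -- more than the honest 1/2.
print(shapley_shubik([1, 1, 2], 4))  # three players, 1/3 each
```

The NP-hardness result in the abstract concerns deciding whether any
such beneficial split exists in a given game.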

Computing the Nucleolus of Weighted Voting Games
Weighted voting games (WVGs) are coalitional games in which an
agent's contribution to a coalition is given by his weight, and a
coalition wins if its total weight meets or exceeds a given quota.
These games model decision-making in political bodies as well as
collaboration and surplus division in multiagent domains. The
computational complexity of various solution concepts for weighted
voting games has received a lot of attention in recent years. In
particular, Elkind et al. [5] studied the complexity of
stability-related solution concepts in WVGs, namely, of the core, the
least core, and the nucleolus. While they completely
characterized the algorithmic complexity of the core and the least
core, for the nucleolus they only provided an NP-hardness result.
In this paper, we solve an open problem posed in [5] by
showing that the nucleolus can be computed in pseudo-polynomial time,
i.e., there exists an algorithm that correctly computes the nucleolus
and runs in time polynomial in the number of players n and the
maximum weight W.

15:20–15:50 Coffee 
15:50–16:30 Coalition formation 
(session chair: Wiebe van der Hoek) 
A fair payoff distribution for myopic rational agents
We consider the case of selfinterested agents that are willing to
form coalitions for increasing their individual rewards. We assume
that each agent gets an individual payoff which depends on the
coalition structure (CS) formed. We consider a CS to be stable if no
agent has an incentive to change coalitions from this CS. Stability is
a desirable property of a CS: if agents form a stable CS, they do not
spend further time and effort in selecting or changing CSs. When no
stable CSs exist, rational agents will be changing coalitions forever
unless some agents accept suboptimal results. When stable CSs exist,
they may not be unique, and choosing one over the other will give an
unfair advantage to some agents. In addition, it may not be possible
to reach a stable CS from any CS using a sequence of myopic rational
actions. We provide a payoff distribution scheme that is based on the
expected utility of a rational myopic agent (an agent that changes
coalitions to maximize immediate reward) given a probability
distribution over the initial CS. Agents share the utility from a
social-welfare-maximizing CS proportionally to the expected utility of
the agents, which guarantees that each agent receives at least as much
as its expected utility from myopic behavior. This ensures sufficient
incentives for the agents to use our protocol.

Three-sided stable matchings with cyclic preferences and the kidney exchange problem
Knuth (1976) asked whether the stable matching problem can be
generalised to three dimensions, i.e., to families containing a man,
a woman and a dog. Subsequently, several authors considered the
three-sided stable matching problem with cyclic preferences, where men
care only about women, women only about dogs, and dogs only about
men. In this paper we prove that if the preference lists may be
incomplete, then the problem of deciding whether a stable matching
exists, given an instance of the three-sided stable matching problem
with cyclic preferences, is NP-complete. Considering an alternative
stability criterion, strong stability, we show that the problem is
NP-complete even for complete lists. These problems can be regarded as
special types of stable exchange problems; therefore, these results
are relevant to real applications such as kidney exchange programs.
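
While finding a stable matching is hard in the incomplete-list case,
checking whether a given matching is stable is straightforward. A
hedged sketch of that check under the cyclic-preference model (all
names and preference lists below are invented for illustration):

```python
# Hypothetical sketch: verifying stability of a three-sided matching
# with cyclic preferences -- men rank women, women rank dogs, dogs
# rank men. A triple (m, w, d) blocks a matching if each member
# strictly prefers it to their current situation (being unmatched, or
# paired with an unacceptable partner, counts as worst).

from itertools import product

def _prefers(ranking, new, current):
    """True if `new` is acceptable and strictly better than `current`."""
    if new not in ranking:
        return False
    if current is None or current not in ranking:
        return True
    return ranking.index(new) < ranking.index(current)

def is_stable(matching, men, women, dogs, pref_m, pref_w, pref_d):
    """Check that no triple (m, w, d) blocks the given set of trios."""
    trio_of = {agent: trio for trio in matching for agent in trio}
    def partner(agent, pos):
        trio = trio_of.get(agent)
        return trio[pos] if trio else None
    for m, w, d in product(men, women, dogs):
        if (_prefers(pref_m[m], w, partner(m, 1))        # m wants w
                and _prefers(pref_w[w], d, partner(w, 2))   # w wants d
                and _prefers(pref_d[d], m, partner(d, 0))): # d wants m
            return False
    return True

# Toy instance: everyone shares the same rankings.
pref_m = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
pref_w = {'w1': ['d1', 'd2'], 'w2': ['d1', 'd2']}
pref_d = {'d1': ['m1', 'm2'], 'd2': ['m1', 'm2']}
men, women, dogs = ['m1', 'm2'], ['w1', 'w2'], ['d1', 'd2']

print(is_stable([('m1', 'w1', 'd1'), ('m2', 'w2', 'd2')],
                men, women, dogs, pref_m, pref_w, pref_d))  # True
print(is_stable([('m1', 'w1', 'd1')],    # m2, w2, d2 left unmatched
                men, women, dogs, pref_m, pref_w, pref_d))  # False
```

The NP-completeness results concern deciding whether any matching
passing this check exists, not the check itself.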

