Introduction to PPAD

The following is a summary of a talk I gave at Dagstuhl on the 22nd November 2007. I mainly used the blackboard; this is a "cleaned-up" summary of the talk, in which I correct various errors I made during the talk, as pointed out by members of the audience, and I also insert additional clarificatory remarks. Further comments and corrections would be gratefully received!

PPAD is a class of total search problems, that is, problems for which solutions are guaranteed to exist, and the challenge is to actually exhibit a specific solution. The kind of total search problems we are interested in are ones where it is, for some reason or other, possible to prove that a solution exists without actually constructing one. A good starting-point for understanding these kinds of total search problems is the following. We are given the following set of 10 numbers and we want to find two distinct subsets of those numbers that add up to the same total.
42, 5, 90, 98, 99, 100, 64, 70, 78, 51

Before trying to find a solution, notice that there are 2^10 = 1024 distinct subsets of these numbers, and none of those subsets can add up to more than 1000, so by the pigeonhole principle at least two distinct subsets must add up to the same total; indeed there must be more than one solution. This observation is an example of the sort of proof that there really exists a solution to this problem. We have convinced ourselves that two distinct subsets have the same total, without actually exhibiting two subsets having the same total. (I thank Anuj Dawar for spotting the solution 42+5+51=98. When I constructed the problem, I was hoping that the simplest solution was 42+78+100=51+70+99.)

More generally, let us define the problem EQUAL SUBSETS as follows.
EQUAL SUBSETS
INPUT: a_1, a_2, ..., a_n; 1 ≤ a_i ≤ (2^n − 1)/n
FIND: Two distinct subsets of the a_i that add up to the same total.
(This problem is mentioned in Papadimitriou's textbook; his definition, which should presumably be the official one, says that an instance has to satisfy Σ_i a_i < 2^n − 1. Thus, his example "47,59,91,100,88,111,23,133,157,205" is a valid instance, and perhaps a bit harder to solve than the one I used above.)

Notice that EQUAL SUBSETS belongs to NP, or to be more precise, it belongs to TFNP, which is the class of all total search problems solvable in non-deterministic polynomial time. A total search problem is one where any instance has a solution, so the decision problem is trivial, and the interesting challenge is to actually find a solution. The obvious non-deterministic algorithm is:

Guess two distinct subsets
Test whether they add up to the same total.
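
For concreteness, here is a minimal (deterministic, exponential-time) sketch of the search problem in Python. The function name and the representation of subsets as tuples of indices are just for illustration; the point is that the pigeonhole argument guarantees the collision that the loop is looking for.

    from itertools import combinations

    def equal_subsets(a):
        """Brute-force search for two distinct subsets of a with the same total.
        A collision is guaranteed for valid instances, but the enumeration may
        take exponential time."""
        seen = {}                       # total -> tuple of indices that achieved it
        n = len(a)
        for size in range(n + 1):
            for subset in combinations(range(n), size):
                total = sum(a[i] for i in subset)
                if total in seen:
                    return seen[total], subset      # two distinct index sets, same total
                seen[total] = subset
        return None                     # unreachable for valid instances

    # e.g. equal_subsets([42, 5, 90, 98, 99, 100, 64, 70, 78, 51])
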
In trying to establish the computational complexity of EQUAL SUBSETS, we start by asking, is it in P? And so far, that question remains open. So we next ask, is it NP-complete? We are asking whether, if you have an algorithm that solves EQUAL SUBSETS in polynomial time, you could use it to build an algorithm that solves SAT in polynomial time.

So, should we expect EQUAL SUBSETS to be NP-hard?
A result of Megiddo (1988) says that if a total search problem (such as EQUAL SUBSETS) is NP-complete, then it follows that NP=co-NP, which is generally believed not to be the case. (His proof was IIRC done for linear complementarity problems.) The proof (applied to EQUAL SUBSETS) goes as follows. Suppose it is NP-complete, i.e. that SAT ≤p EQUAL SUBSETS. Then there is an algorithm for SAT that runs in polynomial time, provided that it has access to a poly-time algorithm for EQUAL SUBSETS. Now suppose that this algorithm is given a non-satisfiable formula φ. Presumably it calls the EQUAL SUBSETS algorithm some number of times, and receives a sequence of solutions to various instances of EQUAL SUBSETS, and eventually the algorithm returns the answer "no, φ is not satisfiable". Now suppose that we replace this hypothetical poly-time algorithm for EQUAL SUBSETS with the natural non-deterministic algorithm above, which gives us a non-deterministic polynomial-time algorithm for SAT. Notice that when φ is given to this new algorithm, the "guess and test" subroutine for EQUAL SUBSETS can produce the same sequence of solutions to the instances it is passed, and as a result, the entire algorithm can recognise this non-satisfiable formula φ as before. Thus we have an NP algorithm that recognises unsatisfiable formulae, which gives the consequence NP=co-NP.
...hence, we don't think it is NP-hard.

An unsuccessful attempt to capture the complexity of EQUAL SUBSETS

We observe that solutions to EQUAL SUBSETS owe their existence to the pigeon-hole principle. Let us define a general class of problems for which the existence of solutions is promised by the pigeon-hole principle, and see if that class "captures the complexity" of EQUAL SUBSETS.

Define the problem PIGEONHOLE CIRCUIT as follows. An input to PIGEONHOLE CIRCUIT consists of a circuit having n inputs and n outputs, as shown in the diagram. (This problem is called EQUAL OUTPUTS in Papadimitriou's textbook.)

INPUT: Boolean circuit C, having the same number of inputs as outputs.

FIND: Either x such that C(x)=0^n (i.e. an input that gets mapped to the all-zeroes vector), or alternatively, distinct x and x' such that C(x)=C(x').

If there are no inputs that map to the all-zeroes vector, then by the pigeonhole principle, there must be two inputs that map to the same output. So there must always exist a solution.

The challenge is to find a solution in time polynomial in the size of C — there is clearly a brute-force exponential-time algorithm.
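
As a sketch of that brute-force algorithm (with the circuit modelled, purely for illustration, as a Python function on n-bit tuples rather than as actual Boolean gates):

    def pigeonhole_circuit(C, n):
        """Brute-force search: find x with C(x) = 0^n, or distinct x, x' with
        C(x) = C(x').  Takes time exponential in n."""
        seen = {}                       # output vector -> the input that produced it
        zero = (0,) * n
        for i in range(2 ** n):
            x = tuple((i >> j) & 1 for j in range(n))
            y = C(x)
            if y == zero:
                return x                # a preimage of the all-zeroes vector
            if y in seen:
                return seen[y], x       # a collision
            seen[y] = x
        # by the pigeonhole principle, one of the two cases above must occur
        raise AssertionError("unreachable for a genuine function on n-bit vectors")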

(In my talk, I said that the size of C should be polynomially bounded in n, and then we seek a solution in time polynomial in n. In fact, it looks like a nicer definition is to say that we just seek an algorithm that is polynomial in the size of C (having the same number of inputs and outputs) without requiring the size of C to be polynomially related to the number n of inputs and outputs. I thank Nimrod Megiddo for pointing that out.)

This looks somewhat like a very "generic" kind of problem for which solutions owe their existence to the pigeonhole principle. Let us
Define the complexity class PPP

(short for "polynomial pigeonhole principle") to be the class of all problems reducible in polynomial time to PIGEONHOLE CIRCUIT. Furthermore we will say that a problem X is complete for PPP provided that X is in PPP and PIGEONHOLE CIRCUIT is poly-time reducible to X.

The good news...

We can verify that EQUAL SUBSETS is a member of PPP.

Given an instance of EQUAL SUBSETS, we construct a corresponding instance of PIGEONHOLE CIRCUIT as shown in the diagram.

The circuit just computes the sum of a subset of the a_i values, whose characteristic function is the input vector x. Each output bit y_i is the i-th bit of this numeric total. We added 1 to the total to ensure that it is always positive (so no input maps to the all-zeroes vector), so we are looking for pairs of distinct input vectors that map to the same output, and these correspond to distinct subsets of the a_i values having the same total.
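
As a sketch of this reduction (with the circuit again modelled as a Python function, and the helper name subset_sum_circuit purely illustrative): given an EQUAL SUBSETS instance a_1, ..., a_n, the corresponding PIGEONHOLE CIRCUIT instance computes the following function.

    def subset_sum_circuit(a):
        """Return the function computed by the PIGEONHOLE CIRCUIT instance that
        the reduction associates with the EQUAL SUBSETS instance a."""
        n = len(a)
        def C(x):
            # x is the characteristic vector of a subset of the a_i; the output is
            # the n-bit binary encoding of (subset total + 1).  Adding 1 means no
            # input can map to the all-zeroes vector, and the bound sum(a) < 2^n - 1
            # from the definition means total + 1 still fits in n bits.
            total = sum(ai for ai, xi in zip(a, x) if xi) + 1
            return tuple((total >> j) & 1 for j in range(n))
        return C

    # e.g. pigeonhole_circuit(subset_sum_circuit([42, 5, 90, 98, 99, 100, 64, 70, 78, 51]), 10)

Any solution of the resulting PIGEONHOLE CIRCUIT instance must therefore be a collision, and a collision is exactly a pair of distinct subsets with equal totals.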

The bad news...

It is not known if we can reduce in the opposite direction, from PIGEONHOLE CIRCUIT to EQUAL SUBSETS. Indeed, no problems in PPP are known to be complete for PPP, apart from PIGEONHOLE CIRCUIT and variants thereof. And that is unfortunate for the class PPP, since it has not succeeded in capturing the computational complexity of any problem, other than the one involved in its definition.

PPAD: A subclass of PPP

We define the class of problems PPAD in the same sort of way that we defined PPP. That is, in terms of a specific problem that has something to do with circuits, and is guaranteed to have a solution, but for which the solution is seemingly hard-to-find in general, despite the fact we know one must exist.

Define the problem END-OF-LINE as follows. We are given two circuits having the same number n of inputs and outputs, as shown in the diagram. Treat S as the "successor" circuit and P as the "predecessor" circuit. S and P define a directed graph on the set of length-n bit vectors as follows: there is an edge from x to y provided that S(x)=y and also P(y)=x, EXCEPT that the all-zeroes vector is deemed to have no predecessor (any edge that would otherwise point into it is ignored).

Input to END-OF-LINE.

The existence of a solution to END OF LINE is guaranteed by the observation that

Every directed graph in which every vertex has in-degree and out-degree at most 1, and which has a source, also has a sink.
We will see that that observation is a special case of the pigeonhole principle.

What the END OF LINE graph looks like. Notice that since every node has degree at most 2, the graph is a collection of paths and cycles. Also, since the zeroes-vector 0 (shown at the top-left of the diagram) is at one end of a path, there must be some other end to that path, which is a valid solution to END OF LINE.

Valid solutions are indicated with the red arrows. Note that we are allowing other sources as solutions (e.g. bottom left red arrow). If we disallow other sources as solutions, there must still exist solutions, but we have a possibly harder search problem that corresponds to the class PPADS.

Note that the problem is not to find the end of the line that begins at the zeroes vector. That problem is PSPACE-complete! You would essentially be trying to identify the final configuration of a polynomially space-bounded Turing machine, albeit one that is "reversible" (in that you can trace the computation backwards from any configuration). I thank Peter Bro Miltersen for assuring us that TM computations can be made reversible. Also, note that a solution is allowed to be a source other than the all-zeroes vector; it does not actually have to be a sink (even though there is of course no guarantee that another source exists). I thank Nimrod Megiddo for pointing that out. (In the DGP paper this problem gets called "END OF THE LINE"; in the CDT paper it is called END OF LINE. I think it was Mike Paterson who suggested that "END OF A LINE" would be a better name than "END OF THE LINE".)
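
To make the definitions concrete, here is a sketch (with the circuits S and P once more modelled as Python functions on n-bit tuples) of the edge relation, together with the naive algorithm that simply walks forward from the all-zeroes vector until it reaches a sink. The walk does return a valid solution, but it may take exponentially many steps, which is why the problem still looks hard.

    def has_edge(S, P, x, y, n):
        """There is an edge from x to y iff S(x) = y and P(y) = x, except that
        the all-zeroes vector has no predecessor."""
        return y != (0,) * n and S(x) == y and P(y) == x

    def follow_line(S, P, n):
        """Naive END OF LINE solver: walk forward from 0^n until reaching a
        vertex with no outgoing edge.  Always terminates with a valid solution
        (a sink), but may take of the order of 2^n steps."""
        x = (0,) * n
        while True:
            y = S(x)
            if not has_edge(S, P, x, y, n):
                return x                # no outgoing edge: x is a sink
            x = y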

Reducing END OF LINE to PIGEONHOLE CIRCUIT.

We can reduce END OF LINE to PIGEONHOLE CIRCUIT, which consequently shows that PPAD is a subclass of PPP, and shows that the principle highlighted above, that "every graph with in/out-degree at most 1 and a source has a sink", is a special case of the pigeonhole principle.

The reduction:

Given the circuits S and P that describe the graph, we build a larger pigeon-hole circuit, that takes as input an n-bit vector x (a vertex of the graph) and in general, outputs S(x), the vertex one edge downstream from x. If x is a sink, having no vertex downstream, then x itself is output.

Notice that whenever a pair of distinct vertices map to the same output, those two vertices consist of a sink and its predecessor, giving a solution to END OF LINE. Alternatively, if the all-zeroes vector is output, then (since 0 has no predecessor, it can only be output on input 0 itself) 0 is an isolated vertex, thus a sink, also a valid solution to END OF LINE.

The diagram hints at the construction of the pigeonhole-circuit from P and S; any input gets its adjacent vertices checked, and the appropriate output is chosen. Strictly speaking, we don't need to check the predecessor of x as indicated by the diagram; we could make do with checking just the successor.
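
A sketch of the constructed circuit, in the same illustrative style (has_edge is the helper from the earlier sketch):

    def end_of_line_to_pigeonhole(S, P, n):
        """Return the function computed by the PIGEONHOLE CIRCUIT instance built
        from the END OF LINE instance (S, P)."""
        def C(x):
            y = S(x)
            if has_edge(S, P, x, y, n):
                return y                # x has a successor: output the downstream vertex
            return x                    # x is a sink: output x itself
        return C

A collision C(x)=C(x') with x distinct from x' must consist of a sink together with its predecessor, and the only input that can be mapped to the all-zeroes vector is 0 itself, in the case where it is isolated; either way we can read off a solution to END OF LINE.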

Finding an Approximate Nash equilibrium is in PPAD.

The fact that it's actually complete for PPAD requires us to establish two things: that it's a member of PPAD (you can reduce it to END OF LINE) and that you can reduce END OF LINE to the problem of finding an approximate Nash equilibrium. I focus on the first of these two. Notice that the proof of membership of PPAD necessarily embodies a proof that Nash equilibria do in fact exist, since we have already seen that PPAD is a class of total search problems.

We reduce NASH to SPERNER, and from there to END OF LINE. So, what is SPERNER? It's a total search problem based on Sperner's lemma. To find out about Sperner's lemma, consider the following sequence of diagrams.

Given a game for which we seek a Nash equilibrium, pretend that the domain of its mixed-strategy profiles is triangular. OK, so it's not really a triangle, but topologically it's a simplex, and our discussion with respect to triangles can be made to extend to simplices. We can divide it up into tiny triangles as shown in the picture. Then, define a "preference function" that takes any mixed-strategy profile, and maps it to a new one, constructed by letting players move slightly towards strategies that they would prefer over their current ones. Note that such a function has a fixpoint at a Nash equilibrium, but not elsewhere. If we are not at a Nash equilibrium, draw an arrow in the direction that the function takes the mixed-strategy profile.

We can colour the points where the function has been computed, according to the direction of the arrows. If we give the 3 outermost (extremal) points of the triangle the 3 colours red, green and blue, then each point with an arrow gets coloured with the colour of an extreme point that it is moving away from. Note that according to this rule, the points on the edge between the outermost red and green vertices will be either red or green. Similarly for points on edges between other vertices.

The icon at the top right-hand side of the picture indicates how the colours are chosen — a downward arrow is red, an up-left arrow is green and an up-right arrow is blue.

Intuitively, we would expect fixpoints of the preference function to lie in the vicinity of small triangles whose 3 vertices get 3 different colours. This is because the points are being dragged in 3 different directions, and by continuity, if we triangulate at a sufficiently fine resolution, we should converge to a fixpoint. In this diagram there are 3 small triangles with vertices of all colours, marked with black spots.
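
One standard way to make this colouring rule precise is the following sketch, under the assumption that a point of the triangle is represented by its three barycentric coordinates (non-negative, summing to 1) and that f is the preference function: give the point a colour whose coordinate strictly decreases under f. Such a coordinate exists whenever the point is not a fixpoint, and on the edge where a coordinate is zero that coordinate cannot decrease, which is what forces the boundary behaviour described above. The names RED, GREEN, BLUE and sperner_colour are illustrative only.

    RED, GREEN, BLUE = 0, 1, 2          # one colour per extreme point of the triangle

    def sperner_colour(x, f, eps=1e-12):
        """Colour a point x = (x0, x1, x2) of the simplex according to a coordinate
        that strictly decreases under the preference function f; a decreasing
        coordinate i means moving away from extreme point i."""
        y = f(x)
        for colour in (RED, GREEN, BLUE):
            if y[colour] < x[colour] - eps:
                return colour
        raise ValueError("x is (approximately) a fixpoint of f")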

Let's try doing the triangulation on a finer scale. Here's a picture of what you might end up with when you compute the preference function at all points...

Notice that once again there are small triangles with vertices of all different colours! Call these the panchromatic triangles. Sperner's Lemma predicts that such triangles always exist.

Let's try again at a finer scale...

Again, we find panchromatic triangles — nine of them here!

Trying once more at an even finer scale, there still exist panchromatic triangles. You can count 17 in the picture if you look carefully.

Sperner's Lemma

Suppose a simplex has each of its extremal vertices coloured a different colour. Suppose that we subdivide it into smaller simplices and colour the vertices of the subsimplices subject to the following rule: if a vertex of a subsimplex lies in the convex hull of a subset of the extreme points, then its colour must be one of the colours of that subset. Then there must exist a panchromatic subsimplex, that is, one whose vertices all receive different colours.
The proof of Sperner's Lemma is central to one's understanding of why Nash equilibria exist, and in particular, why approximate Nash equilibria owe their existence to the principle that every graph in which every vertex has in-degree and out-degree at most 1, and which has a source, has a degree-1 vertex other than the source. So, what is the relationship between such graphs, and Brouwer functions? Read on...

Proof of Sperner's Lemma

Given a convex domain that has been triangulated as described in the statement, construct a graph as follows. Each triangle (or in higher dimension, each subsimplex) is a vertex of the graph. If two of these subsimplices are adjacent and share a facet that itself has all differently coloured vertices, then there is an edge between the corresponding vertices of the graph. The colouring can give the facet an orientation and this determines the direction of the edge. But, don't worry too much about the orientation.

Now, let's just stick to 2 dimensions here, and claim without proof that the idea generalises.

A valid Sperner coloring.

Extend the triangulation by joining some extremal vertex to all vertices along one of its incident edges.

Treat each red-green edge as having a gateway through it, so that someone standing outside the triangulation can enter it through the single gateway that has been left after extending the triangulation.

Notice that as we pass from triangle to triangle, there is only ever one way to go, and we eventually stop at a panchromatic triangle!

We can, just for fun, try the same trick with the red-blue edge.

Apply the same rule, treating a red-blue edge as a gateway...

Keep going as far as possible through red-blue gateways, until we eventually stop at a panchromatic triangle.

We have followed edges in a directed graph, in which vertices are the tiny triangles, and edges are pairs of adjacent triangles connected by a gateway.

In the diagram, the top right-hand triangle should have the red and blue vertices swapped to be consistent with the direction of the edge. (thanks to Troels Sørensen for pointing that out)

There is only one way into the triangulation, so we must stop eventually somewhere inside it, and the only kind of triangle that allows us to stop, is a panchromatic one.
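
The walk just described can be phrased as a small path-following routine. In the sketch below, which reuses the colour names from the earlier sketch, a triangulation is represented (purely for illustration) as a list of frozensets of three vertex ids, a colouring as a dictionary from vertex ids to colours, and the single boundary gateway as the frozenset of its two endpoints; it is assumed that the triangulation has been extended, as above, so that exactly one red-green edge lies on the boundary.

    def red_green_doors(tri, colour):
        """The edges ('gateways') of triangle tri whose endpoints are coloured
        red and green."""
        a, b, c = sorted(tri)
        edges = [frozenset((a, b)), frozenset((a, c)), frozenset((b, c))]
        return [e for e in edges if {colour[v] for v in e} == {RED, GREEN}]

    def find_panchromatic(triangles, colour, entry_door):
        """Follow red-green gateways from the unique boundary gateway until
        reaching a triangle whose vertices carry all three colours."""
        owners = {}                                   # gateway -> triangles containing it
        for tri in triangles:
            for door in red_green_doors(tri, colour):
                owners.setdefault(door, []).append(tri)
        current, came_in_by = owners[entry_door][0], entry_door
        while True:
            if {colour[v] for v in current} == {RED, GREEN, BLUE}:
                return current                        # panchromatic: only one gateway, so we stop here
            # a non-panchromatic triangle with a gateway has exactly two gateways:
            # leave through the one we did not enter by
            out = next(d for d in red_green_doors(current, colour) if d != came_in_by)
            current = next(t for t in owners[out] if t != current)
            came_in_by = out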

As it happens, this proof is intimately related to Scarf's 1967 algorithm for computing approximate Nash equilibria; this algorithm and variants thereof have often been applied in practice. Scarf's algorithm works by following the same kind of paths through the domain that are promised in the above way by Sperner's lemma. It was motivated by the desire for computational efficiency, and while we now know it is exponential-time in the worst case, in practice it is usually much better than brute-force search.

And Finally... Why do we believe that PPAD-complete problems are hard?

  • Well, at the very least the PPAD-completeness results show that lots of different versions of the problem of finding a Nash equilibrium are equally difficult, where previously it was thought that 2-player games should be easier to solve than games with more players, or compactly-represented games with many players, such as graphical games.
  • There's something rather general-looking about these circuit problems; they are not quite CIRCUIT SAT, but they certainly have something of the same feel.
  • There also exist oracle separations between PPAD and related complexity classes, although most people aren't too convinced by that kind of result.
  • I thank Bernhard von Stengel for pointing out that the PPAD-completeness results show that all the algorithms used in practice for finding a Nash equilibrium must be exponential-time. (But Rahul Savani asked me if Lemke-Howson in particular is actually black-box in this sense. I'll have to think about that.) This is because they treat the Brouwer function as a black-box circuit, and such algorithms are known to take exponential time in general for problems like END OF LINE.

Last modified by Paul Goldberg, 2.1.12.