We consider the complexity of the problem of counting all witnesses to instances of the binary constraint satisfaction problem. We show that such problems are likely to be #P-complete. We consider further whether approximation versions of such counting problems are also hard. Here, easy instances seem to rest on the rapid convergence of certain Markov chains on the witness set. This in turn seems to be closely related to the existence or non-existence of a phase transition in a corresponding statistical physics setting. We will consider this phase transition phenomenon in relation to the complexity of the approximate counting problem.
It is one of the fundamental observations of the theory of random graphs that monotone graph properties are usually associated with a threshold function: they appear quite suddenly as the parameter controlling the edge probability grows from 0 to 1. In this talk we will see how general this phenomenon is: it holds for EVERY monotone graph property.
Furthermore we will present simple conditions that enable us to decide for which such properties the threshold is SHARP (this will be defined precisely in the talk, but roughly it means that the transition is swift even when viewed on the smaller scale corresponding to the critical probability at which the transition occurs).
This technique also enables us to settle the k-sat problem: the question of the sharpness of the threshold for satisfiability of a random k-CNF Boolean formula.
Phase transitions have been seen as important in Artificial Intelligence (AI) for about 10 years. I will survey some applications in AI, in particular in benchmarking algorithms and in the design of new search algorithms. In this research, AI has borrowed extensively from statistical mechanics, and I will show that we are able to give at least something back in return.
Informally, an "unlabelled combinatorial structure" is an object such as an unlabelled graph (in which the vertices are indistinguishable) or a structural isomer in chemistry (in which different atoms of the same type are indistinguishable). We consider the problem of randomly sampling such structures. One can obtain nearly uniform samples by simulating a simple Markov chain which we call the ``Burnside Process''. (This Markov chain is essentially an implementation of Burnside's Lemma.) For the approach to be feasible, the Markov chain ought to be ``rapidly mixing'', i.e., converge rapidly to equilibrium. The Burnside Process was known to be rapidly mixing for some special groups, and it has even been implemented in some computational group theory algorithms. As we have shown in this work, it is not rapidly mixing in general. The slow mixing of the Burnside Process can be explained in terms of the relationship between the process and a particular dynamics for the Potts model in statistical physics. In particular, the slow mixing of the Burnside Process is connected to the slow mixing of the ``Swendsen-Wang dynamics'', which Gore and Jerrum have shown to be connected to a first-order phase transition in the number of ``ordered'' Potts configurations. These connections will be explained in the talk.
(Joint work with Mark Jerrum)
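As a concrete illustration (our own sketch, not taken from the talk), here is the Burnside Process in one easy special case: sampling necklaces, i.e. colourings of n beads up to rotation. The chain alternates between picking a uniform symmetry fixing the current colouring and a uniform colouring fixed by that symmetry; its stationary distribution makes every orbit (necklace) equally likely. All function names are hypothetical.

```python
import random
from math import gcd

def fixes(j, x, n):
    """True iff rotation by j positions fixes colouring x."""
    return all(x[i] == x[(i + j) % n] for i in range(n))

def burnside_step(x, n, k):
    """One step of the Burnside Process on k-colourings of n beads
    under the cyclic group of rotations."""
    # Step 1: pick a rotation uniformly among those fixing x (j = 0 always does).
    j = random.choice([j for j in range(n) if fixes(j, x, n)])
    # Step 2: pick a colouring uniformly among those fixed by rotation j.
    # Rotation by j has gcd(n, j) cycles on the bead positions (gcd(n, 0) == n),
    # namely the residue classes mod gcd(n, j); colour each cycle independently.
    d = gcd(n, j)
    colours = [random.randrange(k) for _ in range(d)]
    return tuple(colours[i % d] for i in range(n))

def orbit_rep(x):
    """Lexicographically least rotation of x: a canonical name for its orbit."""
    n = len(x)
    return min(tuple(x[(i + s) % n] for i in range(n)) for s in range(n))

if __name__ == "__main__":
    random.seed(0)
    n, k = 4, 2
    x = tuple(random.randrange(k) for _ in range(n))
    counts = {}
    for _ in range(20000):
        x = burnside_step(x, n, k)
        r = orbit_rep(x)
        counts[r] = counts.get(r, 0) + 1
    # There are 6 binary necklaces of length 4; counts should be roughly equal,
    # since the chain's stationary law makes orbits equiprobable.
    print(counts)
```

For the cyclic group both steps are trivial to implement; in general, sampling an object fixed by a given group element can itself be costly, and the talk's point is that even when each step is feasible the chain need not mix rapidly.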
Consider a process in which information is transmitted from a given root node on a noisy tree network T. We start with an unbiased random bit R at the root of the tree, and send it down the edges of T. On every edge the bit can be reversed with probability $\epsilon$, and these errors occur independently. The goal is to reconstruct R from the values which arrive at the nth level of the tree. This model has been studied in information theory, genetics and statistical mechanics. We bound the reconstruction probability from above using the maximum flow on T viewed as a capacitated network, and from below using the electrical conductance of T. For general infinite trees, we establish a sharp threshold: the probability of correct reconstruction tends to 1/2 as $n \to \infty$ if $(1-2\epsilon)^2 < p_c(T)$, but the reconstruction probability stays bounded away from 1/2 if the opposite inequality holds. Here $p_c(T)$ is the critical probability for percolation on T; in particular $p_c(T)=1/b$ for the $(b+1)$-regular tree. The asymptotic reconstruction problem is equivalent to purity of the ``free boundary'' Gibbs state for the Ising model on a tree. The special case of regular trees was solved in 1995 by Bleher, Ruiz and Zagrebnov; our extension to general trees depends on a coupling argument, and on a reconstruction algorithm that weights the input bits by the electrical current flow from the root to the leaves. (Joint work with William Evans, Claire Kenyon, and Leonard J. Schulman)
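The broadcast model is easy to simulate. The sketch below (our own illustration; the names are hypothetical) sends a bit down a b-ary tree, flips it independently with probability eps on each edge, and reconstructs by majority vote over the leaves. A tree in which every vertex has b children corresponds to the $(b+1)$-regular tree with $p_c = 1/b$, so reconstruction should succeed when $b(1-2\epsilon)^2 > 1$ and degrade to a coin flip when the inequality is reversed.

```python
import random

def broadcast(bit, depth, b, eps):
    """Send `bit` down a b-ary tree of the given depth; each edge flips
    the bit independently with probability eps. Returns the leaf bits."""
    if depth == 0:
        return [bit]
    leaves = []
    for _ in range(b):
        child = bit if random.random() > eps else 1 - bit
        leaves.extend(broadcast(child, depth - 1, b, eps))
    return leaves

def reconstruct(leaves):
    """Guess the root bit by majority vote over the leaves."""
    return 1 if 2 * sum(leaves) >= len(leaves) else 0

def success_rate(depth, b, eps, trials=400):
    """Fraction of trials on which majority vote recovers the root bit."""
    ok = 0
    for _ in range(trials):
        r = random.randrange(2)
        ok += reconstruct(broadcast(r, depth, b, eps)) == r
    return ok / trials
```

With b = 2 and eps = 0.05 we have $b(1-2\epsilon)^2 = 1.62 > 1$, and majority vote succeeds well above chance; with eps = 0.45 the product is 0.02 and the success rate hovers near 1/2. (Majority vote is a cruder estimator than the current-flow weighting mentioned above, but it suffices to see the threshold.)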
A constraint satisfaction problem (CSP) consists of a set of variables, each with a finite set of possible values (its domain), and a set of constraints each of which restricts the possible assignments of values to subsets of the variables. A solution to a CSP is an assignment of a value to every variable such that all the constraints are satisfied.
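For small instances, solutions to a CSP as just defined can be counted by brute force. The following sketch (our own illustration; the representation is an assumption, not a standard API) enumerates all assignments and counts those satisfying every constraint, using proper 3-colouring of a triangle as the example.

```python
from itertools import product

def count_solutions(domains, constraints):
    """Count assignments satisfying every constraint by exhaustive search.
    domains: one list of allowed values per variable.
    constraints: (scope, predicate) pairs; scope is a tuple of variable
    indices, and predicate takes the corresponding values."""
    total = 0
    for assignment in product(*domains):
        if all(pred(*(assignment[i] for i in scope))
               for scope, pred in constraints):
            total += 1
    return total

# Example: proper 3-colourings of a triangle, as a binary CSP.
domains = [[0, 1, 2]] * 3
neq = lambda u, v: u != v
constraints = [((0, 1), neq), ((1, 2), neq), ((0, 2), neq)]
print(count_solutions(domains, constraints))   # 6 colourings: 3 * 2 * 1
```

Exhaustive enumeration is exponential in the number of variables, which is consistent with the #P-completeness of the general counting problem discussed in the first abstract.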
CSPs are commonly solved by some form of backtracking search algorithm. When large samples of CSPs are solved using such algorithms, and the constraint tightness is varied, there is a phase transition from a region where the constraints are loose and problems are easy to solve, to a region where the constraints are very tight and it is easy to show that problems have no solutions. The peak in average difficulty occurs in the intervening transition region. I shall discuss this phenomenon and show how phase transition behaviour can be used in comparing the performance of different algorithms and heuristics.
I shall also discuss the occurrence of `exceptionally hard problems', i.e. rare problems which are very difficult to solve and occur in the region where most problems are very easy.
We review here recent results and current research on the existence and estimation of thresholds for properties in various models of random structures.
The emphasis is on the techniques and the analysis. We also refer to current research efforts and open problems related to the above.
A phase transition in a decision problem can be defined very informally as follows. The instances of the problem are parameterised (for example, if the problem is the 3-Satisfiability Problem, then for a collection of m clauses involving n Boolean variables this parameter might be the ratio m/n) and there is a threshold value t such that when an instance's parameter lies on one side of t, the instance is almost surely a yes-instance, and when it lies on the other side, it is almost surely a no-instance (for 3-Satisfiability, formulae with m/n below the threshold are almost surely satisfiable, and those above it almost surely unsatisfiable). It is often the case that phase transitions can be `empirically observed', yet no proof exists that the observed transition is indeed a phase transition.
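The 3-Satisfiability example can be explored empirically. The sketch below (our own, brute-force, and only practical for small n) generates random 3-CNF formulae and estimates the probability of satisfiability as the ratio m/n varies; experimentally the transition is observed near m/n ≈ 4.27, though its sharpness for fixed k = 3 is exactly the kind of question discussed above.

```python
import random
from itertools import product

def random_3cnf(n, m):
    """m random 3-clauses over variables 1..n; a negative literal -v
    stands for the negation of variable v."""
    return [[v if random.random() < 0.5 else -v
             for v in random.sample(range(1, n + 1), 3)]
            for _ in range(m)]

def satisfiable(n, formula):
    """Brute-force satisfiability test; only sensible for small n."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in formula):
            return True
    return False

def sat_fraction(n, ratio, trials=40):
    """Empirical probability that a random formula with m = ratio * n
    clauses is satisfiable."""
    m = round(ratio * n)
    return sum(satisfiable(n, random_3cnf(n, m)) for _ in range(trials)) / trials
```

Sweeping `ratio` from 1 to 10 for, say, n = 12 shows the satisfiable fraction dropping from near 1 to near 0 through the transition region.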
Finite model theory is the study of the model theory of finite structures; that is, it is the study of the properties of finite structures which can and cannot be defined in various logics. The notion of `almost surely' arises in finite model theory in the following context. Consider a sentence $\phi$ of first-order logic, for example. It is the case that, as the size n of a finite structure increases, the proportion of structures of size n satisfying $\phi$ either tends to 1 or tends to 0; that is, a finite structure `almost surely' satisfies or does not satisfy $\phi$. In particular, we say that first-order logic has a zero-one law.
Might finite model theory provide some tools and techniques to prove that problems (definable in some logic) necessarily have phase transitions? This talk will give an introduction to zero-one laws for anyone who might like to pursue this question.
The theory of phase-transitions goes back to Laplace and van der Waals, who showed that surface tension and evaporation-condensation could be accounted for by the apparently crude assumptions of an intermolecular attraction independent of distance, the so-called "effective field" assumption, also used by Weiss to account for ferromagnetism, by Bragg and Williams for order-disorder in alloys, and by Fowler for ferro-electrics.
Nearly always we can obtain insight into a phase-transition by an "effective field" theory, the days of which are far from over. (For example, I have shown that a van der Waals type theory can account qualitatively for the metastable properties of a liquid). A great deal of this work was recently re-discovered, re-labelled "Catastrophe Theory" and applied to a large number of non-physical problems.
In physics, it remains true that an "effective field" type of theory gives preliminary insight into many processes of phase-transition type. Examination of the consequences of more realistic models is virtually confined to "lattice" or "cell type" models with interactions only between neighbouring sites or cells. Onsager in 1944 solved a two-dimensional ferromagnet with nearest neighbour interactions which can also describe, e.g. a two-dimensional adsorbed layer. This quickly changed the whole approach to phase-transitions, because it showed that approximate treatments could be quite unreliable and virtually useless for calculating "critical exponents" describing the analytic behaviour of the partition function in the neighbourhood of a transition. The subsequent theory has developed in several directions.
A key move in all this work was the introduction in 1941 of the row-transfer operator for lattice models. Set up a matrix whose elements correspond to the configurations of two rows of sites and their interactions; calculation of the partition function is then equivalent to calculating the maximum eigenvalue of this matrix. Thus we have a direct connection between phase-transitions and operator theory, braid and knot theory, the calculus of variations and percolation theory.
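The transfer-matrix identity is easy to verify in the simplest case, the one-dimensional Ising ring (which has no phase transition, but illustrates the machinery). A minimal sketch, with K standing for the nearest-neighbour coupling in units of kT, checks Z = Tr(T^N) against a direct sum over all spin configurations:

```python
from itertools import product
from math import exp

def transfer_Z(K, N):
    """Partition function of the 1-D Ising ring via the transfer matrix:
    Z = Tr(T^N) with T[s, t] = exp(K * s * t) for spins s, t in {+1, -1}."""
    T = [[exp(K * s * t) for t in (1, -1)] for s in (1, -1)]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    P = [[1.0, 0.0], [0.0, 1.0]]      # identity
    for _ in range(N):
        P = matmul(P, T)
    return P[0][0] + P[1][1]          # trace of T^N

def brute_Z(K, N):
    """Direct sum over all 2^N spin configurations (periodic boundary)."""
    Z = 0.0
    for spins in product((1, -1), repeat=N):
        energy = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        Z += exp(K * energy)
    return Z
```

Here the eigenvalues of T are 2 cosh K and 2 sinh K, so Z = (2 cosh K)^N + (2 sinh K)^N, and the largest eigenvalue alone determines the free energy per site as N grows.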
Some of these developments will be examined, and some "exact results" described.