Better late than never… here’s the belated second part of the update on the Clay-Fields conference. We were very lucky to have Tim Gowers give the Distinguished Lecture Series during the conference, and in this post I will try to give a brief account of his lectures.
There were of course many other interesting talks – in fact this may have been the first time in my life that I attended every single lecture at a conference. I wish I had the time to write about them all. That won’t be happening now, but many of the conference topics are close to my research interests and it’s likely that they will come up in future posts.
This was also the first time that I was the main organizer of a conference this size. It’s been a lot of work, but it has also been very rewarding and totally worth it. I’ve had a lot of help from Andrew Granville, Bryna Kra and Trevor Wooley, and the Fields staff did a fantastic job running everything so smoothly. We’d had new audiovisual equipment installed in the conference room just days before the conference… and everything worked perfectly from day one. Isn’t that something? We even had a screening of two short experimental movies, courtesy of Andrew Granville.
Thanks again to all participants for coming!
The rest of this post will be about Tim’s lectures. It’s a little bit long, so I will put it under the cut. The subject is “quadratic Fourier analysis”.
In his first lecture, addressed to a general audience, Tim gave an introduction to uniformity and inverse theorems. Suppose that we want to prove Szemerédi’s theorem: if a set $A$ contains a positive proportion of the integers, then it must contain a $k$-term arithmetic progression for each $k$. The general strategy is well known by now. If $A$ is randomly distributed, in the sense that it does not exhibit any noticeable “special patterns”, then there are many $k$-term progressions in $A$. If on the other hand $A$ is not random, then we look for its special patterns: these are structured subsets of integers on which $A$ has higher density. Iterating the argument, we eventually prove the theorem.
Exactly what it means for a set to be random, or to exhibit a special pattern, depends in a crucial way on the length of the progression that we are looking for. It also depends, to a lesser extent, on the choice of approach to Szemerédi’s theorem: graph-theoretic, ergodic, or harmonic-analytic. Here we will focus on the harmonic-analytic proof, due to Roth for $k=3$ and to Gowers for general $k$. In this setting, uniformity is defined in terms of the so-called Gowers $U^{k-1}$ norms: if a set is uniform in the $U^{k-1}$ norm, it contains the “expected number”, for a set of its size, of $k$-term arithmetic progressions.
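For readers who have not seen the Gowers uniformity norms, here is the standard definition (the usual normalization, not anything specific to the lectures):

```latex
\|f\|_{U^2}^{4} \;=\; \mathbb{E}_{x,h_1,h_2}\, f(x)\,\overline{f(x+h_1)}\,\overline{f(x+h_2)}\,f(x+h_1+h_2),
\qquad
\|f\|_{U^k}^{2^k} \;=\; \mathbb{E}_{x,h_1,\dots,h_k} \prod_{\omega\in\{0,1\}^k} \mathcal{C}^{|\omega|} f\big(x+\omega_1 h_1+\cdots+\omega_k h_k\big),
```

where $\mathcal{C}$ denotes complex conjugation and $|\omega|=\omega_1+\cdots+\omega_k$, so that the $U^k$ norm averages $f$ over $k$-dimensional parallelepipeds.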
$U^2$ uniformity, also known as linear uniformity, has a Fourier-analytic interpretation. If $A$ is not linearly uniform, its characteristic function has a large Fourier coefficient; in other words, it correlates with a phase function $e(\phi(x))$, where $\phi$ is linear. (This is where the terminology comes from: a set is linearly uniform if and only if it exhibits no linear patterns.)
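To make the Fourier-analytic interpretation concrete: on $\mathbb{Z}_N$ one has the identity $\|f\|_{U^2}^4=\sum_\xi|\hat f(\xi)|^4$, so a function with large $U^2$ norm must have at least one large Fourier coefficient. Here is a quick numerical sanity check of that identity (my own illustration, not from the lecture):

```python
import numpy as np

# Check the identity ||f||_{U^2}^4 = sum_xi |fhat(xi)|^4 on Z_N for a random f.
N = 16
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Left side: E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2),
# computed directly from the definition (O(N^3), fine for small N).
u2_fourth = 0.0
for x in range(N):
    for h1 in range(N):
        for h2 in range(N):
            u2_fourth += (f[x]
                          * np.conj(f[(x + h1) % N])
                          * np.conj(f[(x + h2) % N])
                          * f[(x + h1 + h2) % N])
u2_fourth /= N ** 3

# Right side: with the normalization fhat(xi) = E_x f(x) e(-2*pi*i*x*xi/N).
fhat = np.fft.fft(f) / N
fourier_side = np.sum(np.abs(fhat) ** 4)

assert np.isclose(u2_fourth.real, fourier_side)
```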
What about $U^3$ uniformity? It has been known for some time (due perhaps to Furstenberg and Weiss) that a set of integers can have a statistically disproportionate number of 4-term arithmetic progressions if it exhibits “quadratic patterns”, in the sense that its characteristic function correlates with $e(\phi(x))$, where $\phi$ is a “quadratic homomorphism”. Quadratic homomorphisms are somewhat more general than quadratic polynomials in $x$: they also include linear projections of quadratic polynomials in several variables. Similarly, higher degree polynomial patterns contribute to nonuniformity in the $U^k$ norms for higher $k$.
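A concrete example of the quadratic obstruction (my reconstruction of a standard one, not necessarily the one used in the lecture): on $\mathbb{Z}_N$ with $N$ prime, the quadratic phase

```latex
f(x) = e\!\left(\alpha x^2/N\right), \quad \alpha \not\equiv 0 \pmod N,
\qquad\text{satisfies}\qquad
|\hat f(\xi)| = N^{-1/2} \ \text{for every } \xi,
\quad\text{yet}\quad \|f\|_{U^3} = 1,
```

since each Fourier coefficient is a Gauss sum, while the third multiplicative derivative $\Delta_{h_1}\Delta_{h_2}\Delta_{h_3} f$ is identically $1$. So $f$ is as linearly uniform as a unimodular function can possibly be, but maximally non-uniform in the $U^3$ sense.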
What’s really needed in applications, though, are inverse theorems: if a set is not $U^k$-uniform, it must exhibit a polynomial pattern. This turns out to be very difficult to get. The first such result was proved by Gowers in the course of his proof of Szemerédi’s theorem for $k$-term progressions. This is sometimes known as a “weak inverse theorem”, in the sense that the polynomial correlations occur only on very small parts of the set. Other problems require stronger inverse theorems: in particular, Tim mentioned the Green-Tao inverse theorem for the $U^3$ norm and its application to counting solutions to systems of linear equations in the primes. That’s where the first lecture ended.
In the second lecture (based on Gowers’s joint work with Julia Wolf) we were introduced to decomposition theorems. A decomposition theorem for the $U^3$ norm can be stated as follows: if $f$ is a function (on either $\mathbb{Z}_N$ or $\mathbb{F}_p^n$) with $\|f\|_\infty\leq 1$, there is a decomposition $f=\sum_i \lambda_i q_i + g + h$, where the $q_i$ are “generalized quadratic phase functions”, $\sum_i|\lambda_i|$ is bounded, and $g, h$ are error terms with $\|g\|_{U^3}$ and $\|h\|_1$ small. This can be deduced from the inverse theorem of Green-Tao; in fact a similar statement was already implicit in their work, based on the energy increment argument. Tim presented a different approach to deducing decomposition theorems from inverse theorems, based on functional-analytic arguments involving the geometry of normed spaces (specifically, a variant of the Hahn-Banach theorem).
This can be applied to the question of counting solutions to systems of linear equations in sets. Let’s say that we are interested in finding sensible conditions under which a set will have the “statistically correct” number of solutions to a system of linear equations. For instance, if it is 4-term arithmetic progressions that we are concerned with, then $U^3$ uniformity is sufficient (and, in general, necessary). Green and Tao prove a more general result of this type: they define the complexity of a system of linear forms, and prove that systems of complexity $s$ are controlled by $U^{s+1}$ norms.
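If I remember the Green-Tao definition correctly (this is my paraphrase, not verbatim from the lecture): a system of linear forms $L_1,\dots,L_t$ has complexity at most $s$ if, for each $i$, the remaining forms can be partitioned into $s+1$ classes, none of whose linear spans contains $L_i$. For 4-term progressions this gives complexity 2:

```latex
(L_1, L_2, L_3, L_4) \;=\; (x,\; x+y,\; x+2y,\; x+3y).
```

Any two of these forms already span both $x$ and $y$, and hence contain every other form in their span; so the three forms other than $L_i$ must go into three singleton classes, which means $s+1=3$, i.e. complexity exactly $2$.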
Gowers and Wolf, however, do not stop there. Suppose that, instead of 4-term progressions, we are interested in some other configuration, given by a system of linear forms whose complexity in the sense of Green-Tao is also 2; then a set uniform in the $U^3$ norm will contain the “right” number of such configurations. For some such systems, though, Gowers and Wolf can prove that $U^2$ uniformity already guarantees the same conclusion! The difference between the two examples? The squares of the linear forms defining a 4-term progression are linearly dependent, whereas the squares of the forms in the second system are not. Gowers and Wolf prove that such “square independence” is in fact both sufficient and necessary for a system of complexity 2 to be controlled by the $U^2$ norm. The proof is based on the decomposition theorem described earlier.
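The dependence among the squares in the 4-term progression case is easy to write down explicitly:

```latex
x^2 \;-\; 3(x+y)^2 \;+\; 3(x+2y)^2 \;-\; (x+3y)^2 \;=\; 0.
```

(In fact any four squares of linear forms in two variables must be linearly dependent, since they all lie in the 3-dimensional space spanned by $x^2$, $xy$, $y^2$; square-independent systems therefore require more variables.) It is this identity that ties the count of 4-term progressions to quadratic, rather than merely linear, behaviour.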
The last lecture was focused specifically on the functional-analytic arguments of the Hahn-Banach variety. Tim started the lecture by explaining some of the details of the proof of the decomposition theorem from the second lecture. He then went on to present two other applications of the Hahn-Banach approach. One is the following transference principle:
Let $\mu, \nu$ be nonnegative functions with expectation 1, and assume that $\|\mu-\nu\|$ is small in a suitably chosen norm. Let $f$ be a function with $0\leq f\leq\mu$. Then there is a function $g$ such that $0\leq g\leq\nu$ and $\|f-g\|$ is small.
This looks a lot like the Green-Tao transference principle from the progressions-in-primes paper; the proofs use many of the same ingredients (Weierstrass approximation, the basic anti-uniform functions of Green-Tao, etc.), but Gowers’s proof uses functional-analytic arguments instead of the more involved energy-increment iteration. According to Tim, this approach can replace the transference results in the Green-Tao paper just mentioned, or in the Tao-Ziegler paper on polynomial patterns in the primes, yielding significant simplifications of (this part of) the proofs. (A small but important comment: we are <em>not</em> relying on inverse theorems here.)
The second application is a refinement of Tao’s “structure theorem” from his quantitative version of the ergodic-theoretic proof of Szemerédi’s theorem. This is similar to the decomposition theorem from the second lecture, except that the latter can produce a decomposition whose main term is unbounded even if the original function was bounded, and here we can’t afford that. The problem is dealt with by further functional-analytic methods combined with the new transference principle just described.