Buffon’s needles and other creatures

It took several false starts, complete changes of direction and various other mishaps, but “Buffon’s needle estimates for rational product Cantor sets” (aka Project Lamprey), joint with Matthew Bond and Alexander Volberg, has been completed and posted on the arXiv. I will add the link as soon as it goes live; in the meantime, you can also download the paper from my web page.

(Updated: the arXiv link is here, and here is the revised version with minor corrections and clarifications.)

A question about terminology

There’s a project that I and collaborators have been working on for a fairly long time now. It is almost finished, at least the first stage of it, and I will have more to say about it once we have posted the paper on the arXiv. In the meantime, though, there is a very important question that we need to consider.

Would it be well received in the community if we referred to a certain class of sets appearing in the paper as “non-parasitic lampreys”?

For all we know, the community does not currently harbour any particular feelings towards such sets. They have come up in a couple of places over the years, but their possible parasitic behaviour has not really been investigated until now. We can prove that certain particular lampreys of interest are indeed non-parasitic, which is good for us. By way of contrast, “eels” are somewhat more straightforward than lampreys. That makes them easy to manage when they’re small, but otherwise they’re still troublemakers.

This would be a radical departure from the naming conventions established in, say, physics or algebraic geometry. While those of course abound in colourful vocabulary, much of it refers to various forms of enchantment, awe, amazement, pleasure and wonder, not necessarily the feelings that lampreys tend to inspire. But… our unofficial terminology fits so nicely, I’d be quite reluctant to part with it.

What do you think?

An update on differentiation theorems

Malabika Pramanik and I have just uploaded to the arXiv the revised version of our paper on differentiation theorems. The new version is also available from my web page.

Here’s what happened. In the first version, we proved our restricted maximal estimates (with the dilation parameter restricted to a single scale) for all p>1; unfortunately, our scaling analysis worked only for p\geq 2, so our unrestricted maximal estimates and differentiation theorems were only valid in that range. However, just a few days after we posted the paper, Andreas Seeger sent us a “bootstrapping” scaling argument that works for p between 1 and 2. With Andreas’s kind permission, this is now included in the new version. The updated maximal theorem is as follows.

Theorem 1. There is a decreasing sequence of sets S_k \subseteq [1,2] with the following properties:

  • each S_k is a disjoint union of finitely many intervals,
  • |S_k| \searrow 0 as k \rightarrow \infty,
  • the densities \phi_k=\mathbf 1_{S_k}/|S_k| converge to a weak limit \mu,
  • the maximal operators

    {\mathcal M} f(x):=\sup_{t>0, k\geq 1} \frac{1}{|S_k|} \int_{S_k} |f(x+ty)|dy


    {\mathfrak M} f(x) =  \sup_{t > 0} \int \left| f(x + ty) \right| d\mu(y)

    are bounded on L^p({\mathbb R}) for p >1.

Our differentiation theorem has been adjusted accordingly.

Theorem 2. Let S_n and \mu be given by Theorem 1. Then the family {\cal S} =\{ rS_n:\ r>0, n=1,2,\dots \} differentiates L^p( {\mathbb R}) for all p>1, in the sense that for every f \in L^p we have

\lim_{r\to 0} \sup_{n} \frac{ 1 }{ r|S_n| } \int_{ x+rS_n } f(y)dy = f(x) for a.e. x\in {\mathbb R},

and

\lim_{r\to 0} \int f(x+ry) d \mu (y)  =f(x) for a.e. x\in {\mathbb R}.

What about p=1? I had the good luck of meeting David Preiss in Barcelona right after Malabika and I had finished the first version of the preprint. I explained our work; we also spent some time speculating on whether such results could be true in L^1. Next day, David sent me a short proof that our Theorem 2 cannot hold with p=1 for any singular measure \mu supported away from 0. (The same goes for sequences of sets S_k as above, by a slight modification of his argument.) We are grateful to David for letting us include his proof in the new version of our paper.

We have also polished up the exposition, fixed up the typos and minor errors, etc. One other thing we have added (to the arXiv preprint – we are not including this in the version we are submitting for publication) is a short section on how to modify our construction of S_k so that the limiting set S would also be a Salem set. The argument is very similar to the construction in our earlier paper on arithmetic progressions, so we only sketch it very briefly.

I’ll be on vacation throughout the rest of July. I’ll continue to show up here on this blog – I might actually write here more often – and I’ll finish up a couple of minor commitments, but you should not expect any more serious mathematics from me in the next few weeks.

Maximal estimates and differentiation theorems for sparse sets

Malabika Pramanik and I have just uploaded to the arXiv our paper Maximal operators and differentiation theorems for sparse sets. You can also download the PDF file from my web page.

The main result is as follows.

Theorem 1. There is a decreasing sequence of sets S_k \subseteq [1,2] with the following properties:

  • each S_k is a disjoint union of finitely many intervals,
  • |S_k| \searrow 0 as k \rightarrow \infty,
  • the densities \phi_k=\mathbf 1_{S_k}/|S_k| converge to a weak limit \mu,
  • the maximal operators

    {\mathcal M} f(x):=\sup_{t>0, k\geq 1} \frac{1}{|S_k|} \int_{S_k} |f(x+ty)|dy


    {\mathfrak M} f(x) =  \sup_{t > 0} \int \left| f(x + ty) \right| d\mu(y)

    are bounded on L^p({\mathbb R}) for p\geq 2.

It turns out that the set S=\bigcup_{k=1}^\infty S_k does not even have to have Hausdorff dimension 1 – our current methods allow us to construct S_k so that S can have any dimension greater than 2/3. We also have L^p\to L^q estimates, as well as improvements in the range of exponents for the “restricted” maximal operators with 1<t<2. See the preprint for details.

Theorem 1 allows us to prove a differentiation theorem for sparse sets, conjectured by Aversa and Preiss in the 1990s (see this post for a longer discussion).

Theorem 2. There is a sequence [1,2]\supset S_1\supset S_2\supset\dots of compact sets of positive measure with |S_n| \to 0 such that {\cal S} =\{ rS_n:\ r>0, n=1,2,\dots \} differentiates L^2( {\mathbb R}). More explicitly, for every f \in L^2 we have

\lim_{r\to 0} \sup_{n} \frac{ 1 }{ r|S_n| } \int_{ x+rS_n } f(y)dy = f(x) for a.e. x\in {\mathbb R}.

Continue reading “Maximal estimates and differentiation theorems for sparse sets”

Bourgain’s circular maximal theorem: an exposition

The following spherical maximal theorem was proved by E.M. Stein in the 1970s in dimensions 3 and higher, and by Bourgain in the 1980s in dimension 2.

Theorem 1. Define the spherical maximal operator in {\mathbb R}^d by

M f(x)=\sup_{t>0}\int_{S^{d-1}}|f(x+ty)|d\sigma(y),

where \sigma is the normalized Lebesgue measure on the unit sphere S^{d-1}. Then

\| M f(x) \|_{p} \leq C\| f \|_{p} for all p > \frac{d}{d-1}.

The purpose of this post is to explain some of the main ideas behind Bourgain’s proof. It’s a beautiful geometric argument that deserves to be well known; I will also have to refer to it when I get around to describing my recent joint work with Malabika Pramanik on density theorems. Among other things, I will try to explain why the d=2 case of Theorem 1 is in fact the hardest.

Note that Theorem 1 is trivial for p=\infty; the challenge is to prove it for some finite p. It is known that the range of p in the theorem is the best possible, but we will not worry about optimizing it in this exposition. (Not much, anyway.)

Let’s first try to get a general idea of what kind of geometric considerations might be relevant. Fix d=2 and 1<p<\infty. For the sake of the argument, let’s pretend that we are looking for a counterexample to Theorem 1, i.e. a function f with \| f \|_p small but \| Mf \|_p large. Let’s also restrict our attention for the moment to characteristic functions of sets, so that f = {\bf 1}_\Theta for some set \Theta \subset {\mathbb R}^2. Then \| f \|_p = | \Theta |^{1/p}. On the other hand, let \Omega be the set of all x for which there exists a circle C_x centered at x such that a fixed proportion (say, 1/10-th) of C_x is contained in \Theta. Then

Mf(x) \geq .1 for all x \in \Omega,

and in particular \| Mf \|_p \geq .1 | \Omega |^{1/p}. If we could construct examples of such sets with |\Omega | fixed, but |\Theta | arbitrarily small, this would contradict Theorem 1. In particular, if we could construct a set \Theta of measure 0 such that for every x \in [0,1]^2 (or some other set of positive measure) there is a circle C_x centered at x and contained in \Theta, Theorem 1 would fail spectacularly. Thus one of the consequences of Bourgain’s circular maximal theorem is that such sets \Theta can’t exist. (This was also proved independently by Marstrand.)
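To see the quantities in this heuristic concretely, here is a toy discretization in Python (the function names, the sampling rate and the choice of \Theta are all mine, purely for illustration): \Theta is a thin annulus of small measure, yet the discretized maximal function equals 1 at the origin, because the entire unit circle lies inside \Theta.

```python
import math

def circular_average(f, x, t, samples=360):
    """Discretized average of f over the circle of radius t centered at x."""
    total = 0.0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        total += f(x[0] + t * math.cos(theta), x[1] + t * math.sin(theta))
    return total / samples

def maximal_fn(f, x, radii):
    """Discretized M f(x): sup of circular averages over a finite set of radii."""
    return max(circular_average(f, x, t) for t in radii)

# Theta = annulus 0.9 <= |y| <= 1.1: its measure is small (0.4*pi),
# but the whole circle of radius 1 centered at the origin lies inside it.
indicator = lambda u, v: 1.0 if 0.81 <= u * u + v * v <= 1.21 else 0.0
```

With radii = [0.5, 1.0, 1.5], maximal_fn(indicator, (0.0, 0.0), radii) is 1.0 while the averages at radii 0.5 and 1.5 vanish; shrinking the annulus shrinks \| f \|_p without decreasing the maximal function at the center, which is exactly the tension that the maximal theorem says cannot blow up.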

Let’s now see if we can use this type of argument to prove the theorem.

Continue reading “Bourgain’s circular maximal theorem: an exposition”

Density and differentiation theorems for sparse sets

Over the next couple of weeks, I will be posting short expositions of various parts of an upcoming paper by Malabika Pramanik and myself on maximal estimates associated with sparse sets in {\mathbb R}. I’ll start by explaining some of the questions that motivated us to do this work. We first learned about them from Nir Lev. We are grateful to him for the many conversations we had at the Fields Institute and for pointing us to references that would otherwise be very hard to find.

The following question was raised and investigated by Vincenzo Aversa and David Preiss in the 1980s and 90s: to what extent can the Lebesgue density theorem be viewed as “canonical” in {\mathbb R}, in the sense that any other density theorem that takes into account the affine structure of the reals must follow from the Lebesgue density theorem?

Let’s make this more precise. For the purpose of this post, we will say that a family {\cal S} of measurable subsets of {\mathbb R} has the density property if for every measurable set E \subset {\mathbb R} we have

\lim_{S \in {\cal S}, diam ( S \cup \{ 0  \} ) \to 0 } \frac{ |(x+S) \cap E  | }{ |S| } = 1 for a.e. x\in E.

This is slightly different from standard terminology, but there should be no danger of confusion, as we will not use any other density properties here. We write x+S= \{ x+y:\  y\in S \}.
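For readers who like to experiment, the density quotient in this definition is easy to approximate numerically. A throwaway sketch (the function names and sample counts are my own):

```python
def density_quotient(in_E, x, S, samples=10000):
    """Approximate |(x+S) ∩ E| / |S| for an interval S = (a, b) by midpoint sampling."""
    a, b = S
    hits = sum(in_E(x + a + (b - a) * (k + 0.5) / samples) for k in range(samples))
    return hits / samples

in_E = lambda y: 0.0 <= y <= 1.0  # E = [0, 1]
```

At the interior point x = 0.5 the quotient along the intervals (-r, r) is 1 for every small r, while at the endpoint x = 0 it hovers at 1/2 – endpoints belong to the measure-zero exceptional set that the density property allows.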

The Lebesgue density theorem states that the collection of intervals \{ (-r,r): \ r>0 \} has this property. It also implies that collections such as \{(0,r):\ r>0\} or \{(\frac{r}{2},r):\ r>0\} have it, simply because the intervals in question occupy a proportion of (-r,r) that is bounded from below.

But that does not exhaust all examples. For instance, consider the family \{ I_n \}_{n=1}^\infty, where I_n=( \frac{ n }{ (n+1)! } , \frac{ 1 }{ n! } ). We have |I_n|=\frac{1}{(n+1)!} and diam ( I_n \cup \{ 0 \} )= \frac{ 1 }{ n! }, hence the Lebesgue argument no longer works. Nonetheless, this collection does have the density property, by the hearts density theorem of Preiss and Aversa-Preiss.
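The lengths here can be checked in exact rational arithmetic (a two-line verification, with names of my choosing):

```python
from fractions import Fraction
from math import factorial

def I_length(n):
    """|I_n| for I_n = (n/(n+1)!, 1/n!); works out to 1/(n+1)!."""
    return Fraction(1, factorial(n)) - Fraction(n, factorial(n + 1))

def proportion(n):
    """|I_n| / diam(I_n ∪ {0}) = (1/(n+1)!) / (1/n!) = 1/(n+1)."""
    return I_length(n) / Fraction(1, factorial(n))
```

The proportion 1/(n+1) tends to 0, which is precisely why the comparison with the Lebesgue theorem fails and a genuinely new argument (the hearts density theorem) is needed.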

Note, however, that the collection in the last example is not invariant under the scalings x \to rx, r>0. Aversa and Preiss have in fact proved that if a family of intervals is invariant under such scalings and has the density property, then its density property must follow from the Lebesgue theorem in the manner described above.

On the other hand, if we consider more general sets than intervals, then it turns out that there are indeed scaling-invariant density theorems that are independent of the Lebesgue theorem. This was announced by Aversa and Preiss in 1987; the proof (via a probabilistic construction) was published in a 1995 preprint.

Theorem 1 (Aversa-Preiss): There is a sequence \{ S_n \} of compact sets of positive measure such that |S_n|\to 0 and:

  • 0 is a Lebesgue density point for {\mathbb R } \setminus \bigcup S_n, and in particular we have \lim_{ n\to\infty } \frac{ |S_n| }{ diam (S_n \cup \{ 0 \} ) }=0;

  • the family \{rS_n:\ r>0, n\in {\mathbb N} \} has the density property.

The analogous question for L^p differentiation theorems turned out to be much more difficult.

We will say that {\cal S} differentiates L^p_{loc} ( {\mathbb R} ) for some 1\leq p\leq\infty if for every f\in L^p_{loc} ( {\mathbb R} ) we have

\lim_{ S\in {\cal S}, diam (S\cup \{ 0 \} )\to 0 } \frac{ 1 }{ |S| } \int_{x+S} f( y ) dy = f(x) for a.e. x\in {\mathbb R}.

For example, the Lebesgue differentiation theorem states that the collection \{ (-r,r): r>0\} differentiates L^1_{loc}( {\mathbb R }).

The differentiation property is formally stronger than the density property, by letting f range over characteristic functions of measurable sets. However, there is no automatic implication in the other direction.
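Indeed, taking f={\bf 1}_E turns the average into the density quotient:

\frac{ 1 }{ |S| } \int_{x+S} {\bf 1}_E (y) dy = \frac{ |(x+S)\cap E| }{ |S| },

and the limit being {\bf 1}_E(x)=1 at a.e. x\in E is exactly the density property.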

The following theorem was conjectured by Aversa and Preiss in 1995, and proved very recently by Malabika Pramanik and myself (paper in preparation).

Theorem 2. There is a sequence [1,2]\supset S_1\supset S_2\supset\dots of compact sets of positive measure with |S_n| \to 0 such that {\cal S} =\{ rS_n:\ r>0, n=1,2,\dots \} differentiates L^2( {\mathbb R}). More explicitly, for every f \in L^2 we have

\lim_{r\to 0} \sup_{n} \frac{ 1 }{ r|S_n| } \int_{ x+rS_n } f(y)dy = f(x) for a.e. x\in {\mathbb R}.

Our construction of S_n, like that of Aversa and Preiss, is probabilistic. We prove that the sequence S_n can be chosen so that the maximal operator associated with it is bounded on appropriate L^p spaces. This in particular implies the differentiation theorem.

The exact statement of the maximal estimate, and some of the ideas from the proof, will follow in the next installment.

NSERC Discovery Grants: cut back again

The results of this year’s NSERC Discovery Grants competition have just been announced to the applicants and their institutions. They are not publicly available yet, and I’m writing this post based on very limited information, but what we know so far is quite discouraging. Although the earlier NSERC press release said that the Discovery Grants program would not get cut, the devil is as always in the details. The grand total for the two Mathematics GSCs for this year is 2.28M, down from 2.48M last year and 2.62M two years ago. (This is according to a letter from the mathematics “liaison committee” that was circulated last week to the math community. The figures are not yet available on the NSERC web site, as far as I know.) The success rate is also significantly lower: 64%, down from 77% last year.

This is a major disappointment. The budget was already stretched way too thin – there was absolutely no room left for further cuts. That sucking sound you hear? That’s Canada’s “brain gain” of the last decade going down the drain.

More on that once the details become clear. In the meantime, there’s something else that I’d like to point out.

The NSERC grants to the three mathematics institutes – Fields, CRM, and PIMS – were increased in 2007 to 1.2M, 1.2M, and 1.1M per year, respectively, and BIRS is getting an additional 0.57M per year. That adds up to over 4M per year. And that’s just the federal funding. The institutes receive very substantial additional support from the provincial governments. The Fields Institute did particularly well in the last competition: its Ontario grant has been doubled, from 1M to 2M per year, and deservedly so.

I want to make it very clear: I’m absolutely not suggesting that institutes should get cut. I’m sure that they each made their cases for the level of funding that they are getting. Instead, my point is that this gives some perspective on how woefully inadequate our Discovery Grants have become, compared to the increases in funding for those disciplines and units that had reasonable lobbying power and political clout. The Discovery Grant budget in mathematics has been more or less the same for many years. Meanwhile, there are many more research-active mathematicians in Canada now than, say, 10 or 15 years ago, a good number of them at the top of their discipline. Operating costs do increase over time. The salaries of our postdocs and graduate students should be adjusted for the increased cost of housing and living. The program is overdue for a big raise, not a cut.

I also want to make it clear that the institute funding does not compensate for insufficient Discovery Grant funding. Continue reading “NSERC Discovery Grants: cut back again”

The Piatetski-Shapiro theorem

I have just learned that Ilya Piatetski-Shapiro died on February 21, 2009, just a month short of his 80th birthday. Most of his research was in algebraic number theory and representation theory. I’m not a number theorist, and I know even less about representation theory, so I can’t tell you much about his work in those areas. However, I would like to tell you about an early result of his on the uniqueness problem for trigonometric series, known as the Piatetski-Shapiro theorem in harmonic analysis.

Suppose that c_k,\ k\in{\mathbb Z}, is a sequence with the property that \sum_{k=-\infty}^\infty c_ke^{2\pi i kx}=0 almost everywhere on [0,1]. Does it follow that c_k=0 for all k? It turns out (due to Menshov) that the answer is negative. Hence the following definition.

A set E\subset [0,1] is called a set of uniqueness if the only sequence c_k such that \sum_{k=-\infty}^\infty c_ke^{2\pi i kx}=0 for all x\in [0,1]\setminus E is c_k=0 for all k. Otherwise, E is called a set of multiplicity.

If E is closed, it is known that E is a set of multiplicity if and only if it supports a distribution whose Fourier coefficients tend to 0 at infinity.

It was thought for a while that the word “distribution” in the last sentence can be replaced by “measure”. This is what Piatetski-Shapiro disproved.

Theorem (Piatetski-Shapiro). There is a closed set E\subset[0,1] such that E is a set of multiplicity, but does not support any measure \mu with \widehat{\mu}(k)\to 0 as |k|\to\infty.

In other words, E supports a distribution whose Fourier coefficients vanish at infinity, but does not support a measure with the same property!
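The classical example behind this distinction is the middle-thirds Cantor set with its natural (Cantor-Lebesgue) measure: the self-similarity of the measure gives |\widehat{\mu}(3k)| = |\widehat{\mu}(k)|, so the coefficients cannot vanish along the powers of 3. (The Cantor set is in fact a set of uniqueness, so it supports no suitable distribution either – which is why Piatetski-Shapiro needed a different set.) This is easy to see numerically; here is a toy computation of my own, with an arbitrary truncation depth:

```python
import math

def cantor_fourier_mag(k, depth=60):
    """|mu-hat(k)| for the Cantor-Lebesgue measure on the middle-thirds set:
    the infinite product prod_{j>=1} |cos(2*pi*k / 3**j)|, truncated at `depth`."""
    prod = 1.0
    for j in range(1, depth + 1):
        prod *= abs(math.cos(2 * math.pi * k / 3 ** j))
    return prod

# |mu-hat(3^m)| is the same for every m: no decay along this sequence.
vals = [cantor_fourier_mag(3 ** m) for m in range(6)]
```

All six values agree (each is about 0.37), independently of m.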

Piatetski-Shapiro proved that one can take E to be the set of all numbers in [0,1] whose dyadic expansion \sum_{j=1}^\infty r_j2^{-j} obeys n^{-1}\sum_{j=1}^n r_j\leq r, where r is a fixed number in (0,1/2).
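One can play with the defining condition directly. A quick membership test of my own, reading the condition as holding for every n (floating-point dyadic digits are only trustworthy for the first fifty or so):

```python
def dyadic_digits(x, N):
    """First N binary digits r_1, r_2, ... of x in [0, 1)."""
    digits = []
    for _ in range(N):
        x *= 2
        d = int(x)
        digits.append(d)
        x -= d
    return digits

def satisfies_condition(x, r, N=40):
    """Check n^{-1} * (r_1 + ... + r_n) <= r for all n up to N."""
    partial = 0
    for n, d in enumerate(dyadic_digits(x, N), start=1):
        partial += d
        if partial > r * n:
            return False
    return True
```

For r = 1/4, the point 0 passes trivially, 3/4 = 0.11_2 fails at the very first digit, and 1/8 = 0.001_2 fails at n = 3 (average 1/3 > 1/4) – the set keeps only points whose binary digits are sparse on every initial segment.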

Alternative proofs of the Piatetski-Shapiro theorem were given by Kaufman, Körner and others. The following brief sketch of the Kaufman-Körner argument is based on an exposition by Nir Lev. See the introduction to his thesis for the full length version.

Continue reading “The Piatetski-Shapiro theorem”

Growth, expanders, and the sum-product problem

It is a great pleasure to introduce Harald Helfgott as the first guest author on this blog. Many of the readers here will be familiar with the sum-product problem of Erdős–Szemerédi: see here for general background, or here for an exposition of Solymosi’s recent breakthrough. The subject of this post is the connection between the sum-product theorem in finite fields, due to Bourgain-Katz-Tao and Konyagin, and recent papers on sieving and expanders, including Bourgain and Gamburd’s papers on SL_2 and on SL_3 and Bourgain, Gamburd and Sarnak’s paper on the affine sieve. Since this connection was made through Harald’s paper on growth in SL_2 (he has since proved a similar result in SL_3), I asked Harald if he would be willing to write about it here. His post is below.


Let us first look at the statements of the main results. In the following, |S| means “the number of elements of a set S”. Given a subset A of a ring, we write A+A for \{x+y:\ x,y \in A\} and A\cdot A for \{x\cdot y:\ x,y \in A\}.

Sum-product theorem for {\Bbb Z}/p{\Bbb Z} (Bourgain, Katz, Tao, Konyagin). Let p be a prime. Let A be a subset of {\Bbb Z}/p{\Bbb Z}. Suppose that |A|\leq p^{1-\epsilon},\ \epsilon>0. Then either |A+A| \geq |A|^{1+\delta} or |A\cdot A|\geq  |A|^{1+\delta}, where \delta>0 depends only on \epsilon.

In other words: a subset of {\Bbb Z}/p{\Bbb Z} grows either by addition or by multiplication (provided it has any room to grow at all). One of the nicest proofs of the sum-product theorem can be found in this paper by Glibichuk and Konyagin.
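The dichotomy is easy to observe in small examples (a toy computation, with names of my choosing): a geometric progression has a small product set but a spread-out sum set, and an arithmetic progression behaves the other way around.

```python
def sumset(A, p):
    return {(x + y) % p for x in A for y in A}

def prodset(A, p):
    return {(x * y) % p for x in A for y in A}

p = 101
G = {1, 2, 4, 8, 16}   # geometric progression: G*G is again a geometric progression
A = {1, 2, 3, 4, 5}    # arithmetic progression: A+A is again an arithmetic progression
```

Here |G\cdot G| = 9 while |G+G| = 15, and |A+A| = 9 while |A\cdot A| = 14; the theorem says that no set (of size at most p^{1-\epsilon}) can stay small on both counts.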

Now here is what I proved on SL_2. Here A\cdot A\cdot A means simply \{x\cdot y\cdot z:\ x,y,z\in A\}.

Theorem (H). Let p be a prime. Let G = SL_2({\Bbb Z}/p{\Bbb Z}). Let A be a subset of G that generates G. Suppose that |A|\leq |G|^{1-\epsilon},\ \epsilon>0. Then |A\cdot A\cdot A| \geq |A|^{1+ \delta}, where \delta >0 depends only on \epsilon.
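For concreteness, here is the tripling set in a tiny case (a toy computation of mine, using the two standard unipotent generators; nothing here reflects the actual proof):

```python
def mat_mul(a, b, p):
    """Multiply two 2x2 matrices, stored as tuples (a0, a1, a2, a3), modulo p."""
    return ((a[0] * b[0] + a[1] * b[2]) % p, (a[0] * b[1] + a[1] * b[3]) % p,
            (a[2] * b[0] + a[3] * b[2]) % p, (a[2] * b[1] + a[3] * b[3]) % p)

def triple_product(A, p):
    """A . A . A = {x y z : x, y, z in A}."""
    return {mat_mul(mat_mul(x, y, p), z, p) for x in A for y in A for z in A}

p = 7
A = {(1, 1, 0, 1), (1, 0, 1, 1)}   # generators of SL_2(Z/7Z)
```

All 8 triple products are distinct here, so a two-element generating set already triples to 8 elements; iterating A \mapsto A\cdot A\cdot A is also a quick way to watch A fill out the whole group, which has |SL_2({\Bbb Z}/7{\Bbb Z})| = 336 elements.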

Continue reading “Growth, expanders, and the sum-product problem”

Broken or not, fix it anyway?

The latest issue of the AMS Notices starts with an op-ed by J.-P. Bourguignon:

Lately, in many countries, the financing of research has been following a very common trend, according to which, to be financially viable, a project should have a pre-defined critical size as well as cluster a number of activities. There are undoubtedly disciplines for which this is all well and good, but except under very special circumstances this is not what fits mathematicians’ needs. […]

Obvious questions include: what forms should infrastructures have in order to help mathematicians develop their research in the best possible conditions?

To spell it out more clearly: the “common trend” refers to investing more research money in flashy big programs and enterprises, while at the same time neglecting our daily bread and butter programs, especially individual grants. Bourguignon talks about Europe and in particular the EU, but Canadian science is not immune to this, either.

This is not at all surprising from the political point of view. Administrative units such as institutes, brandishing significant political clout and a capacity to lobby and advocate for themselves at all levels of government, deal mostly in collaborative modules that support a large group of scientists for a limited period of time. It’s obviously in their interest to promote this model of funding. Individuals, regardless of their preferences, aren’t able to exert the same type of influence. Meanwhile, it looks good on a politician’s résumé to have reorganized a funding mechanism, proposed new strategies, developed innovative solutions. Maintaining a long-established program does not carry the same bragging rights.

But from the scientists’ point of view, there’s no funding mechanism that’s more vital to us than our individual research grants. No amount of funding for institutes and other large initiatives can replace that.

Because, for the most part, our research is a sustained long-term individual effort. Continue reading “Broken or not, fix it anyway?”