Malabika Pramanik and I have just uploaded to the arXiv our paper Maximal operators and differentiation theorems for sparse sets. You can also download the PDF file from my web page.

The main result is as follows.

Theorem 1. There is a decreasing sequence of sets $S_1 \supseteq S_2 \supseteq \cdots \subseteq [0,1]$ with the following properties:

- each $S_k$ is a disjoint union of finitely many intervals,
- $|S_k| \to 0$ as $k \to \infty$,
- the densities $\phi_k = |S_k|^{-1}\mathbf{1}_{S_k}$ converge to a weak limit $\mu$,
- the maximal operators

$\mathcal{M} f(x) = \sup_{t>0} \int |f(x+ty)|\, d\mu(y)$

and

$\widetilde{\mathcal{M}} f(x) = \sup_{t>0} \sup_{k} \frac{1}{t|S_k|} \int_{x+tS_k} |f(y)|\, dy$

are bounded on $L^p(\mathbb{R})$ for $p > 1$.

It turns out that the set $S = \bigcap_k S_k$ (the support of $\mu$) does not even have to have Hausdorff dimension 1 – our current methods allow us to construct $S_k$ so that $S$ can have any dimension greater than 2/3. We also have $L^p\to L^q$ estimates, as well as improvements in the range of exponents, for the “restricted” maximal operators with $1 \leq t \leq 2$. See the preprint for details.

Theorem 1 allows us to prove a differentiation theorem for sparse sets, conjectured by Aversa and Preiss in the 1990s (see this post for a longer discussion).

Theorem 2. There is a sequence of compact sets $E_k$ of positive measure with $\mathrm{diam}(E_k) \to 0$ such that the family $\{E_k\}$ differentiates $L^p(\mathbb{R})$ for $p > 1$. More explicitly, for every $f \in L^p(\mathbb{R})$ we have

$\lim_{k\to\infty} \frac{1}{|E_k|} \int_{E_k} f(x+y)\, dy = f(x)$ for a.e. $x \in \mathbb{R}$.

Note that Aversa and Preiss did prove a *density* theorem for sparse sets. While density theorems (such as the Aversa-Preiss theorem just mentioned) can often be proved using geometrical methods alone, differentiation theorems tend to require additional analytic input such as maximal theorems. Once the “right” maximal estimate is available, the differentiation theorem follows from it quite easily – see for example the classic deduction of the Lebesgue differentiation theorem from the Hardy-Littlewood maximal theorem. Theorem 2 follows from Theorem 1 along the same lines.
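For readers unfamiliar with that deduction, here is the classical scheme in outline. This is a sketch of the standard argument, not the paper's proof; $A_r$ and $M$ are generic stand-ins for the averaging operators and the associated maximal operator.

```latex
% Standard scheme: maximal bound + dense subclass => a.e. convergence.
% Assume \|M f\|_p \le C \|f\|_p, where M f = \sup_r |A_r f|, and that
% A_r g \to g everywhere for g in a dense class (e.g. continuous g).
%
% Given f \in L^p, split f = g + h with g continuous and \|h\|_p < \epsilon.
% Since A_r g \to g pointwise,
\[
\limsup_{r \to 0} |A_r f(x) - f(x)|
  \le \limsup_{r \to 0} |A_r h(x)| + |h(x)|
  \le M h(x) + |h(x)| .
\]
% Hence, by Chebyshev's inequality and the maximal bound, for every \lambda > 0
\[
\Bigl|\Bigl\{ x : \limsup_{r \to 0} |A_r f(x) - f(x)| > \lambda \Bigr\}\Bigr|
  \le \bigl|\{ M h > \tfrac{\lambda}{2} \}\bigr|
    + \bigl|\{ |h| > \tfrac{\lambda}{2} \}\bigr|
  \le \frac{(2C)^p + 2^p}{\lambda^p}\, \|h\|_p^p .
\]
% Letting \epsilon \to 0 shows the exceptional set is null for each \lambda,
% so A_r f \to f almost everywhere.
```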

The idea is to make the sets $S_k$ randomly distributed, in a very strong sense, throughout the interval $[0,1]$. The actual proof is quite complicated, so here I’ll just explain some of the parallels to Bourgain’s proof of the circular maximal theorem. The key to Bourgain’s argument was a careful analysis of the intersections of thin annuli of fixed thickness, in terms of their size and frequency. We will do the same here.

The goal is to construct $S_k$ via a randomized Cantor iteration so that the double intersections $S_k \cap (a + tS_k)$ are small for generic translation and dilation parameters $(a,t)$. This indeed turns out to be possible. The analysis is quite complicated, though, due to the interplay between the different scales.

To see where the problem is, consider the following simplified setting. We construct $S_1$ by subdividing $[0,1]$ into $N$ congruent intervals, then letting each one be in $S_1$ with probability $P$. (For a set of dimension $\alpha$, take $P = N^{\alpha - 1}$.) What is the expected size of $S_1 \cap (S_1 + a)$ for a generic translation $a$? Since each subinterval was chosen independently, a generic intersection should contain about $P^2 N$ subintervals. Thus we have a gain of a factor of $P$ compared to the size of $S_1$, which consists of about $PN$ subintervals.
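The one-level heuristic can be sanity-checked numerically. The following is my own illustration; the parameters $N$, $\alpha$ and the shift are arbitrary choices, not taken from the paper.

```python
import random

# Illustrative parameters (arbitrary): subdivide [0,1] into N congruent
# intervals, keep each with probability P = N**(alpha - 1), which
# heuristically yields a set of dimension alpha in the limit.
N = 10_000
alpha = 0.8
P = N ** (alpha - 1)

random.seed(0)
# S1 is the set of indices of the kept subintervals.
S1 = {i for i in range(N) if random.random() < P}

# Translate by a generic integer shift a and intersect.  Membership of a
# subinterval and of its shifted copy are independent events, so the
# intersection should contain about P**2 * N subintervals -- a gain of a
# factor P over the roughly P * N subintervals of S1 itself.
a = 137
inter = S1 & {i + a for i in S1}

print(len(S1), round(P * N))        # observed vs expected size of S1
print(len(inter), round(P**2 * N))  # observed vs expected intersection
```

The observed counts fluctuate around the expected values with relative error on the order of the inverse square root of the counts, so the gain of the factor $P$ is clearly visible already at this scale.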

Suppose now that $S_1$ has already been constructed. Subdivide each of the intervals of $S_1$ into $N$ subintervals of equal length, and let each one be in $S_2$ with probability $P$. Now let’s ask the same question: what is the size of $S_2 \cap (S_2 + a)$ for generic $a$?

Consider first an intersection of the form $I \cap (I' + a)$, where $I$ and $I'$ are two different intervals of $S_1$. Then the level 2 subintervals of $I$ and $I' + a$ were chosen independently, hence the intersection is expected to consist of about $P^2 N$ subintervals. Assuming that the first-level intersection contains about $P^2 N$ first-level intervals, the second-level intersection $S_2 \cap (S_2 + a)$ may well be expected to contain about $(P^2 N)^2 = P^4 N^2$ second-level subintervals, hence is significantly smaller than $S_2$, which contains about $P^2 N^2$ of them.
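A two-level simulation (again with arbitrary parameters of my own choosing) illustrates the transverse count. Shifting by a whole number of first-level intervals guarantees that every intersection pairs distinct intervals of $S_1$.

```python
import random

# Two-level version of the same heuristic.  Level 1: N intervals, each kept
# with probability P; level 2: each kept interval is subdivided into N
# subintervals, each kept with probability P.
N = 1000
alpha = 0.8
P = N ** (alpha - 1)

random.seed(0)
S1 = {i for i in range(N) if random.random() < P}
# Encode a level-2 subinterval as (i, j): subinterval j of level-1 interval i.
S2 = {(i, j) for i in S1 for j in range(N) if random.random() < P}

# Shift by a1 whole level-1 intervals, so every intersection I ∩ (I' + a) is
# transverse: I and I' are distinct intervals of S1, and their level-2
# subintervals were chosen independently.
a1 = 7
inter = S2 & {(i + a1, j) for (i, j) in S2}

print(len(S2), round(P**2 * N**2))    # |S2| vs expected P^2 N^2
print(len(inter), round(P**4 * N**2)) # intersection vs expected P^4 N^2
```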

But we also have to consider the case when there are many first-level intersections with $I = I'$, i.e. when intervals of $S_1$ meet their own translates. Clearly, the above argument does not apply – the level 2 subintervals on the two sides come from the same random choices – so what are we going to do about it? Consider what happens at the *next* stage of the iteration. If $a$ is not too small, the intervals of $S_3$ and $S_3 + a$ that meet within $I \cap (I + a)$ will mostly be distinct third-level intervals of $S_3$, whose subintervals were chosen independently. Thus the above argument does work when we go down to the next level, or even further down if necessary.
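The loss coming from same-interval overlaps can also be seen numerically. In this sketch (same arbitrary parameters as before), the shift is a whole number of *second*-level subintervals, much smaller than a first-level interval, so each interval of $S_1$ mostly meets its own translate and the level-1 membership probability is paid only once.

```python
import random

# "Internal tangency" illustration: shift S2 by a2 level-2 subinterval
# lengths, with a2 far smaller than a level-1 interval.
N = 1000
alpha = 0.8
P = N ** (alpha - 1)

random.seed(0)
S1 = {i for i in range(N) if random.random() < P}
# Encode level-2 subintervals by absolute position p = i*N + j.
S2 = {i * N + j for i in S1 for j in range(N) if random.random() < P}

a2 = 3
inter = S2 & {p + a2 for p in S2}

# A position survives if its level-1 interval is in S1 (probability P, paid
# once, since both copies use the SAME interval) and both level-2 choices
# succeed (P**2).  So the intersection is ~ P^3 N^2: larger by a factor 1/P
# than the ~ P^4 N^2 of the fully transverse case.
print(len(inter), round(P**3 * N**2), round(P**4 * N**2))
```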

Following Bourgain’s terminology, we will refer to the first type of intersections (different intervals) as *transverse intersections*, and to the second type (same-interval) as *internal tangencies*. For each fixed dilation $t$, a typical intersection of two affine copies of $S_k$ will consist of both transverse intersections and internal tangencies. If there are few internal tangencies, the above argument applies. If on the other hand there are many internal tangencies, then both the difference of the translation parameters and the difference of the dilation parameters must be small, which essentially means that such cases are few and far between. In other words, the domain of translation parameters splits naturally into two parts: one involves few internal tangencies and is therefore subject to good bounds, while the other covers most of the internal tangencies and enjoys an improved $L^1$ estimate. Following the lines of Bourgain’s proof, we combine these observations to prove a restricted weak type estimate for the maximal operator, then use it to deduce our maximal bound.

This argument does *not* prove the maximal estimate at the endpoint of the range of exponents. To get there (and beyond, in the case of the restricted maximal operator), we need to consider $n$-fold intersections $S_k \cap (a_1 + t_1 S_k) \cap \cdots \cap (a_{n-1} + t_{n-1} S_k)$ for general $n$.

Hi, I don't know whether this is the right place to write a comment about your paper. I think there is a mistake when you pass to the adjoint operator $\Pi_k^*$ in Section 3.3, as it should act on functions supported on $[-4,0]$ and not on $[0,1]$. But since you use the fact that the sets are subsets of $[0,1]$ (so $|\Omega|^n\leq |\Omega|^{n-1}$ in Lemma 3.4), I suppose the original restriction of the support to $[0,1]$ (in Lemma 3.1) should be modified accordingly.

Regards