# Maximal estimates and differentiation theorems for sparse sets

Malabika Pramanik and I have just uploaded to the arXiv our paper Maximal operators and differentiation theorems for sparse sets. You can also download the PDF file from my web page.

The main result is as follows.

Theorem 1. There is a decreasing sequence of sets $S_k \subseteq [1,2]$ with the following properties:

• each $S_k$ is a disjoint union of finitely many intervals,
• $|S_k| \searrow 0$ as $k \rightarrow \infty$,
• the densities $\phi_k=\mathbf 1_{S_k}/|S_k|$ converge to a weak limit $\mu$,
• the maximal operators ${\mathcal M} f(x):=\sup_{t>0,\, k\geq 1} \frac{1}{|S_k|} \int_{S_k} |f(x+ty)|\,dy$ and ${\mathfrak M} f(x) := \sup_{t > 0} \int \left| f(x + ty) \right| d\mu(y)$ are bounded on $L^p({\mathbb R})$ for $p\geq 2$.
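To get a concrete feel for the averaging inside $\mathcal M$, here is a crude Riemann-sum discretization (purely illustrative: the interval lists below stand in for the $S_k$, which in the paper come from a delicate random construction, and the sampling parameters are arbitrary):

```python
import numpy as np

def maximal_average(f, x, intervals_by_k, ts, m=200):
    """Crude Riemann-sum sketch of
       M f(x) = sup_{t,k} |S_k|^{-1} * integral over S_k of |f(x + t y)| dy,
    where each S_k is given as a list of (a, b) intervals in [1, 2].
    Illustrative toy code only."""
    best = 0.0
    for intervals in intervals_by_k:
        measure = sum(b - a for a, b in intervals)
        # sample points and Riemann weights on each interval of S_k
        ys = np.concatenate([np.linspace(a, b, m) for a, b in intervals])
        wts = np.concatenate([np.full(m, (b - a) / m) for a, b in intervals])
        for t in ts:
            avg = np.sum(np.abs(f(x + t * ys)) * wts) / measure
            best = max(best, avg)
    return best

# With f the indicator of [0, 1] and S_1 = [1, 2], small dilations t average
# f near x, so the sup is 1 at interior points of [0, 1].
f = lambda u: ((0 <= u) & (u <= 1)).astype(float)
val = maximal_average(f, 0.5, [[(1.0, 2.0)]], ts=[0.01, 0.1, 1.0])
```

Since $S_k \subset [1,2]$, the set $x + tS_k$ sits in $[x+t, x+2t]$, which is why small $t$ recovers the value of a continuous function at $x$.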

It turns out that the set $S=\bigcup_{k=1}^\infty S_k$ does not even have to have Hausdorff dimension 1 – our current methods allow us to construct $S_k$ so that $S$ has any prescribed dimension greater than 2/3. We also have $L^p\to L^q$ estimates, as well as improvements in the range of exponents for the “restricted” maximal operators in which the supremum is taken only over dilations $1\leq t\leq 2$. See the preprint for details.

Theorem 1 allows us to prove a differentiation theorem for sparse sets, conjectured by Aversa and Preiss in the 1990s (see this post for a longer discussion).

Theorem 2. There is a sequence $[1,2]\supset S_1\supset S_2\supset\dots$ of compact sets of positive measure with $|S_n| \to 0$ such that the family ${\cal S} =\{ rS_n:\ r>0,\ n=1,2,\dots \}$ differentiates $L^2( {\mathbb R})$. More explicitly, for every $f \in L^2$ we have $\lim_{r\to 0} \sup_{n} \left| \frac{ 1 }{ r|S_n| } \int_{ x+rS_n } f(y)dy - f(x)\right| = 0$ for a.e. $x\in {\mathbb R}.$

Note that Aversa and Preiss did prove a density theorem for sparse sets. While density theorems (such as the Aversa-Preiss theorem just mentioned) can often be proved using geometrical methods alone, differentiation theorems tend to require additional analytic input such as maximal theorems. Once the “right” maximal estimate is available, the differentiation theorem follows from it quite easily – see for example the classic deduction of the Lebesgue differentiation theorem from the Hardy-Littlewood maximal theorem. Theorem 2 follows from Theorem 1 along the same lines.
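For readers who have not seen that deduction, here is a sketch of the standard argument, written in my own notation $A_{n,r}f(x) = \frac{1}{r|S_n|}\int_{x+rS_n} f(y)\,dy$ (a sketch under the assumption that the maximal operator is bounded on $L^2$, as in Theorem 1; the details in the paper may differ):

```latex
% Split f = g + h with g continuous and compactly supported, \|h\|_2 < \varepsilon.
% Since S_n \subset [1,2], we have x + rS_n \subset [x+r, x+2r], so the averages
% of g converge to g(x) uniformly in n as r \to 0.  Hence
\limsup_{r\to 0}\, \sup_n |A_{n,r}f(x) - f(x)|
  \;\le\; \underbrace{\limsup_{r\to 0}\, \sup_n |A_{n,r}g(x) - g(x)|}_{=\,0}
  \;+\; \sup_{n,r} |A_{n,r}h(x)| \;+\; |h(x)|.
% By Chebyshev's inequality and the maximal estimate \|\sup_{n,r}|A_{n,r}h|\|_2 \le C\|h\|_2,
\bigl|\{x :\ \limsup_{r\to 0}\, \sup_n |A_{n,r}f(x) - f(x)| > \lambda\}\bigr|
  \;\le\; \Bigl(\tfrac{2C\varepsilon}{\lambda}\Bigr)^{2}
       + \Bigl(\tfrac{2\varepsilon}{\lambda}\Bigr)^{2}
  \;\xrightarrow[\;\varepsilon\to 0\;]{}\; 0.
```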

The idea is to make the sets $S_k$ randomly distributed, in a very strong sense, throughout the interval $[1,2]$. The actual proof is quite complicated, so here I’ll just explain some of the parallels to Bourgain’s proof of the circular maximal theorem. The key to Bourgain’s argument was a careful analysis of the intersections of thin annuli of fixed thickness, in terms of their size and frequency. We will do the same here.

The goal is to construct $S_k$ via a randomized Cantor iteration so that the double intersections $(x+rS_k)\cap (y+sS_k)$ are small for generic translation and dilation parameters $x,y,r,s$. This indeed turns out to be possible. The analysis is quite complicated, though, due to the interplay between the different scales.

To see where the problem is, consider the following simplified setting. We construct $S_1$ by subdividing $[1,2]$ into $N$ congruent intervals, then letting each one be in $S_1$ with probability $p$. (For a set of dimension $1-\epsilon$, take $p=N^{-\epsilon}$.) What is the expected size of $(x+rS_1)\cap (y+sS_1)$? Since each subinterval was chosen independently, a generic intersection should contain about $p^2 N$ subintervals. Thus we have a gain compared to the size of $S_1$, which consists of about $pN$ subintervals.
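This heuristic is easy to test numerically. In the toy model below (my simplification: the generic translated and dilated copy is replaced by a second independent random set, which is exactly what generic alignment is meant to achieve), the intersection contains about $p^2N$ subintervals while each set contains about $pN$:

```python
import random

def random_set(N, p, rng):
    """Keep each of the N subintervals independently with probability p."""
    return {i for i in range(N) if rng.random() < p}

def average_sizes(N=1000, p=0.3, trials=200, seed=0):
    """Average |S_1| and |S_1 ∩ S_1'| over many random draws."""
    rng = random.Random(seed)
    size_sum = inter_sum = 0
    for _ in range(trials):
        A = random_set(N, p, rng)
        B = random_set(N, p, rng)  # stand-in for a generic affine copy
        size_sum += len(A)
        inter_sum += len(A & B)
    return size_sum / trials, inter_sum / trials

avg_size, avg_inter = average_sizes()
# avg_size is near p*N = 300; avg_inter is near p^2*N = 90.
```

The gain is the factor $p$: the intersection is about $p$ times smaller than the set itself.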

Suppose now that $S_1$ has already been constructed. Subdivide each of the intervals of $S_1$ into $N$ subintervals of equal length, and let each one be in $S_2$ with probability $p$. Now let’s ask the same question: what is the size of $(x+rS_2)\cap (y+sS_2)$ for generic $x,y,r,s$?

Consider first an intersection of the form $(x+r(I\cap S_{2}))\cap (y+s(J\cap S_{2}))$, where $I$ and $J$ are two different intervals of $S_{1}$. Then the level 2 subintervals of $I$ and $J$ were chosen independently, hence the intersection is expected to consist of about $p^2N$ subintervals. Assuming that the first-level intersection contains about $p^2N$ first-level intervals, the second-level intersection may well be expected to contain about $p^4N^2$ second-level subintervals, hence is significantly smaller than $S_2$ which contains about $p^2N^2$ of them.
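The two-level count can be checked in the same toy model (again with an independent copy standing in for a generic affine image, and with arbitrary parameters):

```python
import random

def two_level_set(N, p, rng):
    """Random two-level Cantor-type set: keep each first-level interval with
    probability p, then each of its N children with probability p.
    Elements are pairs (first-level index, second-level index)."""
    return {(i, j) for i in range(N) if rng.random() < p
                   for j in range(N) if rng.random() < p}

def average_counts(N=60, p=0.4, trials=200, seed=1):
    """Average |S_2| and |S_2 ∩ S_2'| over many random draws."""
    rng = random.Random(seed)
    size_sum = inter_sum = 0
    for _ in range(trials):
        A = two_level_set(N, p, rng)
        B = two_level_set(N, p, rng)  # independent copy
        size_sum += len(A)
        inter_sum += len(A & B)
    return size_sum / trials, inter_sum / trials

avg_size, avg_inter = average_counts()
# avg_size is near p^2 * N^2 = 576; avg_inter is near p^4 * N^2 = 92.16.
```

Each second-level cell lies in both copies with probability $p^2 \cdot p^2 = p^4$, over $N^2$ cells, which reproduces the $p^4N^2$ count against $|S_2| \approx p^2N^2$.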

But we also have to consider the case when many of the first-level intersections have $I=J$. Clearly, the above independence argument does not apply to them – so what are we going to do about it? Consider what happens at the next stage of the iteration. If $x$ and $y$ are not too close, $x+r(I\cap S_{2})$ and $y+s(I\cap S_{2})$ will consist mostly of intersections of distinct second-level intervals of $S_{2}$. Thus the above argument does work when we go down to the next level, or even further down if necessary.

Following Bourgain’s terminology, we will refer to the first type of intersections (between different intervals) as transverse intersections, and to the second type (same interval) as internal tangencies. For each fixed $k$, a typical intersection of two affine copies of $S_k$ will contain both transverse intersections and internal tangencies. If there are few internal tangencies, the above argument applies. If, on the other hand, there are many internal tangencies, then both $|x-y|$ and $|r-s|$ must be small, which essentially means that such cases are few and far between. In other words, the domain of translation and dilation parameters splits naturally into two parts: one involves few internal tangencies and is therefore subject to good $L^2$ bounds, while the other covers most of the internal tangencies and enjoys an improved $L^1$ estimate. Following the lines of Bourgain’s proof, we combine these observations to prove a restricted weak type estimate for $p>2$, then use it to deduce our maximal bound.
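The assertion that many internal tangencies force $|x-y|$ and $|r-s|$ to be small can be seen from a back-of-the-envelope computation (my sketch, not the estimate actually used in the paper):

```latex
% Let I \subset [1,2] be a first-level interval with center c and length \delta,
% with dilations r, s in a bounded range.  An internal tangency requires the
% two affine images of I to overlap:
(x + rI) \cap (y + sI) \neq \emptyset
  \;\Longrightarrow\;
  \bigl| (x - y) + (r - s)\,c \bigr| \;\lesssim\; \delta .
% If this holds for two intervals whose centers c_1, c_2 \in [1,2] are
% separated by \gtrsim 1, subtracting the two conditions gives
|r - s| \;\lesssim\; \delta \qquad\text{and then}\qquad |x - y| \;\lesssim\; \delta,
% so the parameter pairs producing many internal tangencies occupy a set of
% measure O(\delta^2): such cases are indeed few and far between.
```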

This argument does not prove the maximal estimate at the endpoint $p=2$. To get there (and beyond, in the case of the restricted maximal operator), we need to consider $n$-fold intersections for general $n$.


### One response to “Maximal estimates and differentiation theorems for sparse sets”

1. An

Hi, I don’t know whether this is the right place to write a comment about your paper. I think there is a mistake when you pass to the adjoint operator $\Pi_k^*$ in section 3.3, as it should act on functions supported on $[-4,0]$ and not on $[0,1]$. But since you use the fact that the subsets are of $[0,1]$ (so $|\Omega|^n\leq |\Omega|^{n-1}$ in Lemma 3.4), I suppose the original restriction on the support to $[0,1]$ (in Lemma 3.1) should be modified accordingly.

Regards