We discuss the computation of $(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}$ where $j\colon\mathbb{A}^n-\{0\}\hookrightarrow\mathbb{A}^n$ and $\overline{0}\to\mathbb{A}^n$ are the obvious maps.
A curious case of reverse engineering
Inspired mostly by the computation performed here, I wanted to see what $R^qj_*\mathbb{Z}/\ell$ looked like when embedding punctured $n$-space into $n$-space, i.e. for the open inclusion $j\colon\mathbb{A}^n-\{0\}\hookrightarrow\mathbb{A}^n$. What I found interesting about this computation is that there are, ostensibly, three different ways to attack it and, as far as my abilities are concerned, only one is viable. Specifically, Method 1 and Method 2 lead to other, equivalent (but independently interesting) computations which, if not for Method 3, would still be undoable to me.
Why this is interesting to me is that the computations that Method 1 and Method 2 result in are (again, only for me) undoable without recourse to Method 3, which requires retracing these computations back to their source. In other words, the only way that I could solve the independently interesting computations afforded to me by Methods 1 and 2 would be to reverse engineer them back to the original problem of computing $(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}$ and apply Method 3. This shows the sort of capriciousness of mathematical problems: sometimes, in practice, one really needs a total reinterpretation to solve them.
Of course, we can still guess at what the answer should be based on the topological picture. Namely, if we consider the obvious inclusions $j\colon\mathbb{C}^n-\{0\}\hookrightarrow\mathbb{C}^n$ and $\{0\}\hookrightarrow\mathbb{C}^n$ then we can understand the stalk $(R^qj_*\mathbb{Z}/\ell)_0$ as a direct limit of cohomology groups:

$$(R^qj_*\mathbb{Z}/\ell)_0\cong\varinjlim_U H^q(U-\{0\},\mathbb{Z}/\ell)$$

where $U$ travels over the neighborhoods of $0$ in $\mathbb{C}^n$. As in the one-dimensional case, cofinal in this system of $U$ are the open balls around $0$, in which case $U-\{0\}$ is a punctured ball.
Now, if $B$ is a ball around $0$, so that $B-\{0\}$ is the punctured ball, then one can easily compute its cohomology. Namely:

$$H^q(B-\{0\},\mathbb{Z}/\ell)\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,2n-1\\ 0 & \text{else}\end{cases}$$
One way of seeing this directly is to see (obviously) that $B-\{0\}$ is homotopy equivalent to $S^{2n-1}$. Moreover, since for two balls $B_1$ and $B_2$ around $0$, say $B_1\subseteq B_2$, the natural map $B_1-\{0\}\hookrightarrow B_2-\{0\}$ is a homotopy equivalence, we have that the colimit of cohomology groups just gives $H^q(S^{2n-1},\mathbb{Z}/\ell)$
as one might expect.
Too many indices…
From this topological computation one would expect that

$$(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,2n-1\\ 0 & \text{else}\end{cases}$$

and that is what we endeavor to prove in this post (as well as the interesting tidbits we get by looking at the other methods). That said, purely for notational convenience, we focus on the case $n=2$, which is precisely indicative of the general case (just with fewer indices!).
Remark: Again, as a matter of notational convenience, we shall denote by $0$ the origin in any $\mathbb{A}^n_k$, where $k$ is a separably closed field of characteristic $p\neq\ell$. We shall conflate it, as per usual, with the obvious closed embedding $\{0\}\hookrightarrow\mathbb{A}^n_k$. Also, let us denote by $j$ the open embedding $\mathbb{A}^n_k-\{0\}\hookrightarrow\mathbb{A}^n_k$.
Let us attack this computation precisely as we did in the previous case. Namely, we know that

$$(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}\cong\varinjlim_{(U,\overline{u})}H^q_{\text{etale}}(U\times_{\mathbb{A}^2}(\mathbb{A}^2-\{0\}),\mathbb{Z}/\ell)$$

where the colimit travels over the etale neighborhoods $(U,\overline{u})$ of $\overline{0}$. Now, as one can easily see, we have that

$$\varinjlim_{(U,\overline{u})}H^q_{\text{etale}}(U\times_{\mathbb{A}^2}(\mathbb{A}^2-\{0\}),\mathbb{Z}/\ell)\cong H^q_{\text{etale}}\left(\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\},\mathbb{Z}/\ell\right)$$

where $\mathfrak{m}$ is the maximal ideal of the strictly local ring $\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}$. Thus, in essence, we see that we're trying to compute the etale cohomology of the space $\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\}$ (an example of a punctured spectrum).
So, now our aim is clear: we need to compute the etale cohomology of $\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\}$ with coefficients in $\mathbb{Z}/\ell$. And now that our way is illuminated, things should be easy, right? Well, no. The issue is that, in practice, it is usually only easy to compute $\pi_1$ or, more to the point, it's easy to compute $H^1$ (which, for a connected scheme, is just $\operatorname{Hom}(\pi_1,\mathbb{Z}/\ell)$). This case is no different:
Theorem (Purity of the branch locus): Let $X$ be a regular locally Noetherian scheme and $U\subseteq X$ an open subset such that $\operatorname{codim}_X(X-U)\geqslant 2$. Then, the inclusion $U\hookrightarrow X$ induces an isomorphism $\pi_1(U,\overline{u})\xrightarrow{\approx}\pi_1(X,\overline{u})$ for any geometric point $\overline{u}$ of $U$.
This theorem (usually attributed to Zariski and Nagata) is certainly difficult technically but is not so surprising intuitively. Namely, its intuitive content is that if $Y\to X$ is a cover such that the pullback $Y\times_X U\to U$ is etale, then $Y\to X$ must also be etale. That said, if $Y\to X$ is thought to be of relative dimension $0$ (and perhaps flat), then the locus where $Y\to X$ should fail to be etale should be something like $V(\det J)$, where $J$ is the Jacobian matrix for $Y\to X$; in particular, being the vanishing locus of a single equation, it should be a codimension $1$ subset of $X$. Since $X$ and $U$ differ by a subset of codimension at least $2$, we see that $V(\det J)$, if not empty, would intersect $U$. Thus, if we know that $Y\times_X U\to U$ is etale, then so should be $Y\to X$.
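To make the Jacobian heuristic concrete, here is a toy symbolic check; this is purely my illustration (assuming `sympy` is available), not part of the argument. For the squaring cover of the affine line in characteristic $0$, the non-etale locus is the vanishing of the derivative: a single point, i.e. a hypersurface in $\mathbb{A}^1$.

```python
import sympy as sp

t = sp.symbols('t')

# The double cover A^1 -> A^1 given by t |-> t^2 (char 0 for simplicity).
f = t**2

# The "Jacobian" here is just the derivative; the cover fails to be
# etale exactly where it vanishes.
jac = sp.diff(f, t)

# The non-etale (branch) locus is V(2t) = {0}: a single point, which is
# a codimension-1 subset of the 1-dimensional line, i.e. a hypersurface.
branch_locus = sp.solve(sp.Eq(jac, 0), t)
print(branch_locus)  # [0]
```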
Now, since $\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}$ is the strict Henselization of the Noetherian regular local ring $\mathcal{O}_{\mathbb{A}^2,0}$ (and the fact that regular, local, and Noetherian are preserved under strict Henselization), we know that $\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}$ is a Noetherian regular local ring with maximal ideal $\mathfrak{m}$. Moreover, since

$$\dim\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}=2$$

we know that $\operatorname{codim}(\{\mathfrak{m}\})=2$ and thus, by the purity of the branch locus, we may conclude that the inclusion $\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\}\hookrightarrow\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})$ induces an isomorphism

$$\pi_1\left(\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\}\right)\xrightarrow{\approx}\pi_1\left(\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})\right)$$

but the latter is obviously trivial (the spectrum of a strictly Henselian local ring has no non-trivial connected finite etale covers).
Thus, from this, we may conclude that

$$(R^1j_*\mathbb{Z}/\ell)_{\overline{0}}\cong H^1_{\text{etale}}\left(\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\},\mathbb{Z}/\ell\right)\cong\operatorname{Hom}\left(\pi_1\left(\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\}\right),\mathbb{Z}/\ell\right)=0$$

but, as mentioned above, this is essentially where this line of inquiry into the problem ends. Namely, this sort of method is ill-equipped to deal with higher cohomology computations without further input.
For example, one might hope, naively, that $X=\operatorname{Spec}(\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}})-\{\mathfrak{m}\}$ is an 'algebro-geometric $K(\pi,1)$', so that one can compute $H^q(X,\mathscr{F})$, for any LCC sheaf $\mathscr{F}$ on $X$, as the group cohomology $H^q(\pi_1(X),\mathscr{F}_{\overline{x}})$. Unfortunately, this is not the case. Moreover, one wouldn't expect it to be the case. This would intuitively say that $B-\{0\}$ is a $K(\pi,1)$ for $B$ a small ball around $0$ in $\mathbb{C}^2$. But, this is obviously false since its universal cover (which, being simply connected, is $B-\{0\}$ itself) is not contractible: it is homotopy equivalent to $S^3$.
Thus, this is the end of the line for Method 1.
Now that we see that a direct attack on this problem, using the same methods as in the one-dimensional case, is somewhat difficult, we instead try to reduce to the one-dimensional case. This is, essentially, a Mayer-Vietoris argument.
Namely, let us denote by $U$ the space $\mathbb{G}_m\times\mathbb{A}^1$ and similarly let $V$ denote $\mathbb{A}^1\times\mathbb{G}_m$. Note then that $U\cup V=\mathbb{A}^2-\{0\}$. Thus, we see that we might try and understand $(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}$ in terms of the Mayer-Vietoris sequence for this cover.
To wit, we have the following long exact sequence of sheaves on the etale site of $\mathbb{A}^2$:

$$\cdots\to R^qj_*\mathbb{Z}/\ell\to R^qa_*\mathbb{Z}/\ell\oplus R^qb_*\mathbb{Z}/\ell\to R^qc_*\mathbb{Z}/\ell\to R^{q+1}j_*\mathbb{Z}/\ell\to\cdots$$

where $a\colon U\hookrightarrow\mathbb{A}^2$ and $b\colon V\hookrightarrow\mathbb{A}^2$ are the obvious inclusions and $c$ is the inclusion $U\cap V\hookrightarrow\mathbb{A}^2$, with $U\cap V=\mathbb{G}_m\times\mathbb{G}_m$. By passing to stalks we see that we've reduced ourselves to computing $(R^qd_*\mathbb{Z}/\ell)_{\overline{0}}$ for $d\in\{a,b,c\}$ and $q\geqslant 0$.
So, what now? Well, now that we are dealing with different pushforwards, let us, again, try the trick of reducing stalks of pushforwards to cohomologies of strict Henselizations. Namely, writing $R=\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}$, we're now trying to compute the following three quantities:

$$H^q_{\text{etale}}\left(\operatorname{Spec}(R)-V(x),\mathbb{Z}/\ell\right),\qquad H^q_{\text{etale}}\left(\operatorname{Spec}(R)-V(y),\mathbb{Z}/\ell\right),\qquad H^q_{\text{etale}}\left(\operatorname{Spec}(R)-V(xy),\mathbb{Z}/\ell\right)$$

(these being the stalks at $\overline{0}$ of the higher pushforwards along $a$, $b$, and $c$ respectively, since $U$, $V$, and $U\cap V$ are the loci $x\neq 0$, $y\neq 0$, and $xy\neq 0$). Unfortunately, we're back in the same sort of situation we were in above, but worse. Namely, we need to compute these higher cohomology groups and, as far as I know, the only thing which is easy to directly compute is the first cohomology group (again, because we can get a handle on the fundamental group).
Here we need to use something even more sophisticated than the purity of the branch locus (in fact purity is used in the proof of this result):
Theorem (Generalized Abhyankar's lemma): Let $R$ be a strictly Henselian regular local ring of dimension $n$. Let $f_1,\ldots,f_r$, for $r\leqslant n$, be part of a system of regular local parameters for $R$. Then,

$$\pi_1\left(\operatorname{Spec}(R)-V(f_1\cdots f_r)\right)^{(p')}\cong\widehat{\mathbb{Z}}^{(p')}(1)^{\oplus r}$$

where, here, $p$ is the characteristic of the residue field of $R$, the superscript $(p')$ means the maximal prime-to-$p$ quotient, and $\widehat{\mathbb{Z}}^{(p')}$ means $\varprojlim_{(m,p)=1}\mathbb{Z}/m$. Thus, non-canonically, we have that

$$\pi_1\left(\operatorname{Spec}(R)-V(f_1\cdots f_r)\right)^{(p')}\cong\left(\widehat{\mathbb{Z}}^{(p')}\right)^{\oplus r}$$

where, as per usual, the twist $(1)$ denotes $\varprojlim_{(m,p)=1}\mu_m$.
Applying this with $R=\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}$, $n=2$, and the regular parameters $x$ and $y$ shows that

$$\pi_1\left(\operatorname{Spec}(R)-V(x)\right)^{(p')}\cong\pi_1\left(\operatorname{Spec}(R)-V(y)\right)^{(p')}\cong\widehat{\mathbb{Z}}^{(p')},\qquad\pi_1\left(\operatorname{Spec}(R)-V(xy)\right)^{(p')}\cong\left(\widehat{\mathbb{Z}}^{(p')}\right)^{\oplus 2}$$

from where it easily follows that

$$\dim_{\mathbb{Z}/\ell}H^1_{\text{etale}}\left(\operatorname{Spec}(R)-V(x),\mathbb{Z}/\ell\right)=\dim_{\mathbb{Z}/\ell}H^1_{\text{etale}}\left(\operatorname{Spec}(R)-V(y),\mathbb{Z}/\ell\right)=1,\qquad\dim_{\mathbb{Z}/\ell}H^1_{\text{etale}}\left(\operatorname{Spec}(R)-V(xy),\mathbb{Z}/\ell\right)=2$$
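As a toy sanity check on this passage from fundamental groups to $H^1$: a continuous homomorphism $\left(\widehat{\mathbb{Z}}^{(p')}\right)^{\oplus r}\to\mathbb{Z}/\ell$ is determined by the (arbitrary, since $\ell\neq p$) images of the $r$ topological generators, so there are $\ell^r$ of them and $H^1$ has dimension $r$. The brute-force count below is my own sketch, with the discrete group $\mathbb{Z}^{\oplus r}$ standing in for its profinite completion:

```python
from itertools import product

def hom_count(r: int, l: int) -> int:
    """Count homomorphisms Z^r -> Z/l; each is pinned down by the images
    of the r generators, which can be chosen freely in Z/l."""
    return sum(1 for _ in product(range(l), repeat=r))

# dim Hom(Z^r, Z/l) = r, i.e. there are l**r homomorphisms:
assert hom_count(1, 7) == 7       # rank 1: H^1 is 1-dimensional
assert hom_count(2, 7) == 7 ** 2  # rank 2: H^1 is 2-dimensional
print(hom_count(1, 7), hom_count(2, 7))  # 7 49
```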
But, once again, we're stuck. Where do we go from here? It's not even obvious if we can use these computations to compute $(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}$, since we're still lacking the other terms in the Mayer-Vietoris sequence! So, we must resort to a third, different method.
This method starts as the previous one, by attempting to find $(R^qj_*\mathbb{Z}/\ell)_{\overline{0}}$ using the Mayer-Vietoris sequence. But, what it does differently is to compute $(R^qd_*\mathbb{Z}/\ell)_{\overline{0}}$ (with $d\in\{a,b,c\}$) by realizing an extra structure present in these pushforwards. Namely, the maps $a$, $b$, and $c$ are all products of maps between one-dimensional objects (in a way soon to be made precise) and thus are amenable to attack via a Kunneth type formula.
So, let us note that we can decompose our three maps as follows:

$$a=h\times\mathrm{id},\qquad b=\mathrm{id}\times h,\qquad c=h\times h$$

where $\mathrm{id}$ denotes the identity map on $\mathbb{A}^1$ and $h$ is the inclusion $\mathbb{G}_m\hookrightarrow\mathbb{A}^1$. Thus, we see that, in all cases, we're trying to compute something of the form $R^q(f\times g)_*\mathbb{Z}/\ell$, and this can be done fruitfully using the Kunneth formula:
Theorem (Kunneth formula): Let $k$ be a separably closed field, $X$, $X'$, $Y$, and $Y'$ finite type $k$-schemes, and $f\colon X\to X'$ and $g\colon Y\to Y'$ morphisms over $k$. Let $\mathrm{pr}_1\colon X'\times_k Y'\to X'$ and $\mathrm{pr}_2\colon X'\times_k Y'\to Y'$ be the obvious projections. Then, for any constructible sheaves $\mathscr{F}$ on $X$ and $\mathscr{G}$ on $Y$ of $\mathbb{Z}/m$-modules, with $m$ invertible in $k$, we have that

$$R(f\times g)_*\left(\mathscr{F}\boxtimes\mathscr{G}\right)\cong\mathrm{pr}_1^*Rf_*\mathscr{F}\otimes^{\mathbf{L}}_{\mathbb{Z}/m}\mathrm{pr}_2^*Rg_*\mathscr{G}$$

Thus, in the case $\mathscr{F}=\mathscr{G}=\mathbb{Z}/\ell$, this reads

$$R(f\times g)_*\mathbb{Z}/\ell\cong\mathrm{pr}_1^*Rf_*\mathbb{Z}/\ell\otimes^{\mathbf{L}}_{\mathbb{Z}/\ell}\mathrm{pr}_2^*Rg_*\mathbb{Z}/\ell$$
Passing to stalks in the Kunneth formula (and noting that we don't have to worry about any $\operatorname{Tor}$-terms since we're dealing with $\mathbb{Z}/\ell$-vector spaces) we see that

$$\left(R^n(f\times g)_*\mathbb{Z}/\ell\right)_{(\overline{x},\overline{y})}\cong\bigoplus_{i+j=n}\left(R^if_*\mathbb{Z}/\ell\right)_{\overline{x}}\otimes_{\mathbb{Z}/\ell}\left(R^jg_*\mathbb{Z}/\ell\right)_{\overline{y}}$$

Thus, to compute the stalks of the higher pushforwards we're interested in, it suffices to understand the following two more basic computations:

$$\left(R^qh_*\mathbb{Z}/\ell\right)_{\overline{0}}\qquad\text{and}\qquad\left(R^q\mathrm{id}_*\mathbb{Z}/\ell\right)_{\overline{0}}$$
The first of these is literally the content of the one-dimensional case, where we found that $(R^qh_*\mathbb{Z}/\ell)_{\overline{0}}$ is $\mathbb{Z}/\ell$ for $q=0,1$ and vanishes otherwise; the second is a triviality: $R^q\mathrm{id}_*\mathbb{Z}/\ell$ is $\mathbb{Z}/\ell$ for $q=0$ and vanishes for $q>0$.
From these, and the Kunneth formula, we easily deduce

$$\left(R^qa_*\mathbb{Z}/\ell\right)_{\overline{0}}\cong\left(R^qb_*\mathbb{Z}/\ell\right)_{\overline{0}}\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,1\\ 0 & \text{else}\end{cases}\qquad\text{and}\qquad\left(R^qc_*\mathbb{Z}/\ell\right)_{\overline{0}}\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0\\ (\mathbb{Z}/\ell)^2 & \text{if }q=1\\ \mathbb{Z}/\ell & \text{if }q=2\\ 0 & \text{else}\end{cases}$$
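The Kunneth bookkeeping behind this deduction is just a convolution of stalk-dimension vectors. Here is a minimal sketch (the function name is mine; the inputs $(1,1)$ and $(1)$ are the stalk dimensions for $h\colon\mathbb{G}_m\hookrightarrow\mathbb{A}^1$ and for the identity of $\mathbb{A}^1$, indexed by $q$):

```python
def kunneth_dims(dims_f, dims_g):
    """Stalk dimensions of R^n(f x g)_* from those of R^q f_* and R^q g_*,
    via the Kunneth formula (Tor-free, as we work over the field Z/l):
    dim_n = sum over i + j = n of dim_i(f) * dim_j(g)."""
    out = [0] * (len(dims_f) + len(dims_g) - 1)
    for i, di in enumerate(dims_f):
        for j, dj in enumerate(dims_g):
            out[i + j] += di * dj
    return out

h_dims  = [1, 1]  # stalks of R^q h_*  (h: G_m -> A^1), q = 0, 1
id_dims = [1]     # stalks of R^q id_* (identity of A^1), q = 0

print(kunneth_dims(h_dims, id_dims))  # a (and b): [1, 1]
print(kunneth_dims(h_dims, h_dims))   # c: [1, 2, 1]
```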
Thus, from the Mayer-Vietoris sequence we deduce that

$$\left(R^qj_*\mathbb{Z}/\ell\right)_{\overline{0}}\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,3\\ 0 & \text{else}\end{cases}$$

as claimed (note there is a tiny hiccup in the Mayer-Vietoris sequence: one really only gets the claimed equalities immediately for $q\neq 1,2$. One really only sees that the dimensions of the $q=1$ and $q=2$ terms are equal, but we already resolved in Method 1 that the $q=1$ term is trivial).
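The dimension chase here can be double-checked mechanically: in an exact sequence of finite-dimensional $\mathbb{Z}/\ell$-vector spaces beginning and ending in $0$, the alternating sum of dimensions vanishes. A small sketch of the two exact segments of the stalkwise Mayer-Vietoris sequence used above (with $a$, $b$, $c$ the inclusions of $\mathbb{G}_m\times\mathbb{A}^1$, $\mathbb{A}^1\times\mathbb{G}_m$, and $\mathbb{G}_m\times\mathbb{G}_m$ into $\mathbb{A}^2$, and $\dim(R^1j_*)_{\overline{0}}=0$ known from purity):

```python
def alternating_sum(dims):
    """Alternating sum of dimensions; it vanishes for an exact sequence
    of finite-dimensional vector spaces of the shape 0 -> ... -> 0."""
    return sum((-1) ** i * d for i, d in enumerate(dims))

# Exact segment 0 -> R^1j -> R^1a + R^1b -> R^1c -> R^2j -> 0 at stalks,
# with middle dimensions 2 and 2:
j1 = 0               # the q = 1 term, known to vanish by purity
j2 = j1 - 2 + 2      # forced by alternating_sum == 0
assert alternating_sum([j1, 2, 2, j2]) == 0
assert j2 == 0       # so the q = 2 term also vanishes

# Exact segment 0 -> R^2c -> R^3j -> 0 (its neighbors vanish),
# with dim of the R^2c stalk equal to 1:
j3 = 1               # forced: the R^3j stalk is isomorphic to the R^2c stalk
assert alternating_sum([1, j3]) == 0
print(j2, j3)  # 0 1
```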
The unintended computations
Of course, by looking at Method 1 and Method 2, we see that the above also proves the following computations of etale cohomology groups (where $R=\mathcal{O}^{\mathrm{sh}}_{\mathbb{A}^2,\overline{0}}$):

$$H^q_{\text{etale}}\left(\operatorname{Spec}(R)-\{\mathfrak{m}\},\mathbb{Z}/\ell\right)\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,3\\ 0 & \text{else}\end{cases}$$

$$H^q_{\text{etale}}\left(\operatorname{Spec}(R)-V(x),\mathbb{Z}/\ell\right)\cong H^q_{\text{etale}}\left(\operatorname{Spec}(R)-V(y),\mathbb{Z}/\ell\right)\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,1\\ 0 & \text{else}\end{cases}$$

$$H^q_{\text{etale}}\left(\operatorname{Spec}(R)-V(xy),\mathbb{Z}/\ell\right)\cong\begin{cases}\mathbb{Z}/\ell & \text{if }q=0,2\\ (\mathbb{Z}/\ell)^2 & \text{if }q=1\\ 0 & \text{else}\end{cases}$$

but these computations would have been highly non-trivial to do out of context. Namely, one sees that, in all cases, we used 'global techniques': we realized these cohomologies as being stalks of global maps which actually decomposed. So, for example, one might try and imagine that

$$\operatorname{Spec}(R)-V(x)$$

is something like 'a contractible piece times a topologically interesting piece', something like "it's a combination of $\mathbb{A}^1$ and $\mathbb{G}_m$, and the former is 'contractible', so we should only really need to consider the latter". Unfortunately, I don't know how to make this rigorous beyond our observation that on the level of global maps this really happens (i.e. it's basically a stalk along $a=h\times\mathrm{id}$, and $\mathrm{id}\colon\mathbb{A}^1\to\mathbb{A}^1$ has trivial higher cohomology relative to itself).
I would be interested if one could compute these cohomology groups without reinterpreting/re-engineering them as stalks of global objects which actually decompose.
Odds and ends
Here are just two comments about questions that one might have if one read the one-dimensional case.
First, in the one-dimensional case we didn't just compute the dimensions of the stalks, but also computed them as Galois representations. But, this is simple. Namely, as the above case showed, the non-trivial cohomology group comes from, essentially, an $n$-fold tensor product of the $1$-dimensional case. Since in the $1$-dimensional case the interesting stalk was $\mathbb{Z}/\ell(-1)$, one can easily deduce that in the $n=2$ case we get

$$\left(R^3j_*\mathbb{Z}/\ell\right)_{\overline{0}}\cong\mathbb{Z}/\ell(-2)$$

and this extends in the obvious way to higher dimensions.
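In terms of the earlier notation ($c\colon\mathbb{G}_m\times\mathbb{G}_m\hookrightarrow\mathbb{A}^2$ and $h\colon\mathbb{G}_m\hookrightarrow\mathbb{A}^1$ the obvious inclusions), the twist bookkeeping is a sketch along the following lines, assuming (as in the one-dimensional case) that the $q=1$ stalk there is $\mathbb{Z}/\ell(-1)$:

```latex
\begin{aligned}
(R^3 j_* \mathbb{Z}/\ell)_{\overline{0}}
  &\cong (R^2 c_* \mathbb{Z}/\ell)_{\overline{0}}
    &&\text{(Mayer-Vietoris, as the neighboring terms vanish)}\\
  &\cong (R^1 h_* \mathbb{Z}/\ell)_{\overline{0}}
     \otimes_{\mathbb{Z}/\ell} (R^1 h_* \mathbb{Z}/\ell)_{\overline{0}}
    &&\text{(Kunneth, passing to stalks)}\\
  &\cong \mathbb{Z}/\ell(-1) \otimes_{\mathbb{Z}/\ell} \mathbb{Z}/\ell(-1)
   \cong \mathbb{Z}/\ell(-2).
\end{aligned}
```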
The other thing we talked about in the one-dimensional case was how one could relate this to the cohomology of a formal disk (which made me happier, because it's easier for me to think of a formal disk as a disk than these punctured strictly local rings). So, does something similar happen here? I think the answer is yes, but I'm not positive. Namely, a local elder pointed me to Prop. 5.4.53 of Gabber-Ramero, and this looks like it gives precisely what we want, but I haven't checked all the details.