# Non-archimedean Real Estate

Making our way up some buildings

## p-adic Orlik-Solomon algebras

June 21, 2011

In this blog post I’m going to finish up discussing the classical Orlik-Solomon algebra and then move on to discuss section 2.1 of de Shalit. Note that I’ve made the title of this post up on a whim, and I have no idea if it’s at all close to standard terminology.

**1. The Orlik-Solomon algebra**

Let $V$ be a finite dimensional complex vector space. Let $\mathcal{A} = \{H_1, \dots, H_m\}$ denote a hyperplane arrangement in $V$, and let $\alpha_i$ denote an element of the dual space $V^*$ which cuts out $H_i$, for each $i$. Let $X = V \setminus \bigcup_i H_i$ denote the complement in $V$ of the arrangement $\mathcal{A}$; we wish to describe the de Rham cohomology ring of $X$. We’ll work over $\mathbb{C}$, but one can in fact work integrally. (It is a theorem that the Betti cohomology of $X$ is torsion free, so one need not worry about torsion.)

To begin our study of the de Rham cohomology, we identify some special $1$-cocycles. For each $i$ we write $X_i = V \setminus H_i$. Note that $X_i$ is homotopy equivalent to $\mathbb{C}^\times$ via projection onto the line in $V$ orthogonal to $H_i$. Write $e_i$ in $H^1(X)$ for the pullback via the inclusion $X \hookrightarrow X_i$ of the class in $H^1(X_i)$ which corresponds to the standard generator of $H^1(\mathbb{C}^\times)$. With $\alpha_i$ defined as above, one has
$$e_i = \left[\frac{1}{2\pi i}\,\frac{d\alpha_i}{\alpha_i}\right].$$

These classes generate $H^1(X)$, and hence the whole ring $H^\bullet(X)$, and it is possible to describe the relations between them quite explicitly. For this we introduce the Orlik-Solomon algebra.

Let $E_1$ denote the complex vector space with a basis element $e_i$ for each hyperplane $H_i$ in the arrangement. Then let $E$ denote the exterior algebra of $E_1$. Let $S$ be a subset of the indices $1$ through $m$. Then one says that $S$ is **dependent** if the intersection $\bigcap_{i \in S} H_i$ has codimension less than the size of $S$. One can show that $S$ is dependent if and only if the linear forms $\alpha_i$ for $i \in S$ are linearly dependent over $\mathbb{C}$, which explains the terminology.

Let $e_S$ denote the wedge product of all the $e_i$ for $i$ in $S$. Finally let $I$ denote the homogeneous ideal of $E$ generated by all of the $\partial e_S$ as $S$ ranges over the dependent subsets, where $\partial(e_{i_1} \wedge \cdots \wedge e_{i_k}) = \sum_j (-1)^{j-1} e_{i_1} \wedge \cdots \wedge \widehat{e_{i_j}} \wedge \cdots \wedge e_{i_k}$. It is not hard to show that $I$ is stable under the differential $\partial$ of $E$, so that $E/I$ inherits the structure of a differential graded algebra. Orlik and Solomon proved that the ideal $I$ gives all the relations between the $e_i$: more precisely, the map defined by sending $e_i$ to the class of $\frac{1}{2\pi i}\frac{d\alpha_i}{\alpha_i}$ induces an isomorphism of differential graded algebras
$$E/I \xrightarrow{\ \sim\ } H^\bullet(X, \mathbb{C}).$$
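The dependence condition above is easy to test by hand on small examples. Here is a quick Python sanity check (a toy arrangement of my own choosing, not one from the post): the three lines $x = 0$, $y = 0$, $x = y$ in a two dimensional space. A subset $S$ is dependent exactly when the matrix of the corresponding linear forms has rank less than $|S|$.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Row-reduce a list of rational vectors and return the rank."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Linear forms cutting out the hyperplanes x = 0, y = 0, x = y.
forms = [(1, 0), (0, 1), (1, -1)]

# S is dependent iff the forms indexed by S are linearly dependent,
# i.e. iff their rank is strictly less than |S|.
dependent = [S for k in range(1, len(forms) + 1)
               for S in combinations(range(len(forms)), k)
               if rank([forms[i] for i in S]) < k]
print(dependent)  # [(0, 1, 2)]: only the full triple is dependent
```

So for this arrangement the Orlik-Solomon ideal is generated by the single element $\partial(e_0 \wedge e_1 \wedge e_2)$.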

**2. A $p$-adic variant**

Now we’d like to define something similar for a vector space over a $p$-adic field, except that we’d like to allow the hyperplane arrangement to be infinite (since we’d like to describe the cohomology of the Drinfeld symmetric spaces). Since we’ll be following de Shalit from here on out, I’m going to switch over to his conventions, and we’ll stick to them from now on.

Rather than work with hyperplane arrangements, we’ll work with arrangements of lines in the dual space; that is, with a subset of the projective space $\mathbb{P}(V^*)$. For each nonzero vector $u$ in $V^*$, let $[u]$ denote the line spanned by $u$. Following de Shalit we write $u \sim v$ to mean that $[u] = [v]$.

Let $E$ denote the free exterior algebra over $K$, where $K$ is our fixed $p$-adic field, generated in degree one by symbols $e_u$ for the $u$ in our arrangement. Let $d$ denote the differential on $E$ which maps each $e_u \mapsto 1$ and is otherwise defined like a Čech differential on the higher graded pieces. Then put $E^0 = \ker(d)$; since $d^2 = 0$, it follows that also $E^0 = \mathrm{im}(d)$ and that $E^0$ is the subalgebra generated by the elements $e_u - e_v$. Thus $d$ yields a split exact sequence of graded modules

$$0 \longrightarrow E^0 \longrightarrow E \stackrel{d}{\longrightarrow} E^0 \longrightarrow 0,$$

where a splitting is given by $x \mapsto e_u \wedge x$ for any $u$ in the arrangement. (~~Remark: formula (2.4) in de Shalit is wrong.~~ Edit: Formula 2.4 in de Shalit is fine! I misread it when I was writing this, so my bad. :))

Before jumping into the next definition, I’d like to provide some explanation for what we’re about to do: if $\sigma$ is an (oriented) simplex of the building, say represented by the lattices

$$L_0 \supsetneq L_1 \supsetneq \cdots \supsetneq L_k \supsetneq \pi L_0,$$

then one can intersect the lines in our arrangement with $L_0$. Reducing mod the uniformizer $\pi$ for $K$ gives an honest finite arrangement of lines in the finite dimensional vector space $L_0/\pi L_0$ over the residue field. Note that $\sigma$ endows this quotient with a filtration, namely by the images of the $L_i$. Using this filtration we will define, for each such $\sigma$, relations in $E$ such that the quotient of $E$ by these relations describes the Monsky-Washnitzer cohomology of the arrangement in $L_0/\pi L_0$. Today we’re just going to define the relations, though, and we’ll get to the cohomology later.

Let $\sigma$ be an oriented simplex represented by lattices as above, with the convention that $L_{k+1} = \pi L_0$. Then for $v$ in $L_0 \setminus \pi L_0$, we define the **index** of $v$ relative to $\sigma$ to be the unique integer $i$ such that $v \in L_i \setminus L_{i+1}$. For general nonzero $v$ in $V$, one can multiply $v$ by a suitable (unique!) power $\pi^j$ of the uniformizer so that $\pi^j v \in L_0 \setminus \pi L_0$. Then define the index of $v$ to be the index of $\pi^j v$.

**Example.** Consider the case where $V$ is two dimensional and $K = \mathbb{Q}_p$ (just so that I can write $p$ instead of $\pi$), so the building is the Bruhat-Tits tree. Let $e$ be an oriented edge. There are two possible indices relative to $e$, namely $0$ and $1$. To describe things more concretely, we take for $e$ the edge corresponding to the following lattices: $L_0$ is the standard lattice $\mathbb{Z}_p^2$, while $L_1$ is the lattice spanned by the vectors $(1,0)$ and $(0,p)$ (to save writing a bunch of transposes, I’m going to write row vectors in this example rather than column vectors). Let $v = (a, b)$ be an arbitrary vector in $V$. To check its index we must rescale $a$ and $b$ so that both are $p$-adic integers, but such that at least one is a $p$-adic unit. Replace $v$ by this rescaled version. If $b$ is a $p$-adic unit, then $v \in L_0$ but it is not in $L_1$, and the index of $v$ is $0$. If $b$ is not a $p$-adic unit, then $a$ is a $p$-adic unit and $v \in L_1 \setminus pL_0$, so the index of $v$ is $1$. Somewhat more generally, if one considers the $p+1$ edges adjacent to $[L_0]$ in the tree, which correspond to the lines in $L_0/pL_0$, then $v$ has index $1$ relative to such an edge if and only if it reduces to the line in $L_0/pL_0$ corresponding to the given edge.
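The recipe in the example is easy to mechanize. Here is a small Python sketch (my own illustration, using rational numbers as stand-ins for genuine $p$-adic numbers) that computes the index of a nonzero vector relative to the standard edge $L_0 \supset L_1$ above:

```python
from fractions import Fraction

def val(x, p):
    """p-adic valuation of a rational number (infinity for 0)."""
    x = Fraction(x)
    if x == 0:
        return float('inf')
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def index(a, b, p):
    """Index of v = (a, b) relative to the standard edge
    L0 = Zp^2  >  L1 = Zp*(1,0) + Zp*(0,p).  Assumes (a, b) != (0, 0)."""
    # Rescale v so both coordinates are p-adic integers and at least
    # one is a p-adic unit.
    m = min(val(a, p), val(b, p))
    b = Fraction(b) / Fraction(p) ** m
    # v lies in L1 exactly when its second coordinate is divisible by p.
    return 0 if val(b, p) == 0 else 1

print(index(1, 1, 2))               # 0: second coordinate is a unit
print(index(2, 4, 2))               # 1: rescales to (1, 2)
print(index(Fraction(1, 2), 3, 2))  # 1: rescales to (1, 6)
```

Note that rescaling by powers of $p$ never changes the answer, matching the fact that the index only depends on the class of $v$ up to multiplication by powers of the uniformizer.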

We can now define our relations relative to $\sigma$: let $I_\sigma$ be the ideal in $E$ generated by the elements $d(e_{u_0} \wedge \cdots \wedge e_{u_r})$ for any elements $u_0, \dots, u_r$ of the arrangement which are linearly dependent modulo the filtration determined by $\sigma$. Then as in the complex case we set $A_\sigma$ to be the quotient $E/I_\sigma$ and set $A^0_\sigma$ to be the quotient $E^0/(E^0 \cap I_\sigma)$. The previous exact sequence induces another split exact sequence

$$0 \longrightarrow A^0_\sigma \longrightarrow A_\sigma \stackrel{d}{\longrightarrow} A^0_\sigma \longrightarrow 0.$$

We will spend the next few posts discussing these algebras. Note that Proposition 2.1 in de Shalit shows that (i) $A_\sigma$ is supported in degrees up to $\dim V - 1$, while $A^0_\sigma$ is supported in degrees one lower; (ii) both algebras are generated in degree one; (iii) both algebras are finite dimensional.

Next time I’ll cover subsections 2.2 and 2.3, and then there will be one more post finishing up the last bits of section 2.


## Hyperplane arrangements

June 16, 2011

Let $K$ be a finite extension of $\mathbb{Q}_p$. We are on our way towards understanding the cohomology of Drinfeld’s $p$-adic symmetric domain of dimension $n$ over $K$, and its connection with the building of $\mathrm{PGL}_{n+1}(K)$. Recall that Drinfeld’s domain is $\mathbb{P}^n$ with all $K$-rational hyperplanes removed. When $n > 1$ these spaces are a little mysterious to me, so a more modest goal for these posts is simply to get a concrete feeling for what these higher-dimensional domains look like. When $n = 1$ the answer is nice and easy to picture: the domain, in this case the $p$-adic upper half plane, is a tubular neighbourhood of the Bruhat-Tits tree.

Marc has asked me to cover the next section of de Shalit’s paper *Residues on buildings, etc*. It defines an algebra whose genesis lies in the study of finite complex hyperplane arrangements. More precisely, the material in section 2 of de Shalit is motivated by the work of Orlik-Solomon on the cohomology of the complement in a finite dimensional vector space of a number of hyperplanes. So in this post I’m going to start off by going over this previous work, before jumping ahead into de Shalit. I’ll be following this wonderful expository paper of Lionel Levine, who is a postdoc at MIT.

Not all that’s presented below is necessary for our study of Drinfeld’s domain, but it’s all cool!

**Hyperplane arrangements**

In this post we restrict to what Levine (and maybe everybody who discusses hyperplane arrangements…) calls *central* arrangements. This amounts to considering only hyperplanes which pass through the origin. So to save myself from having to write “codimension one subspace through the origin” all over the place, in this post hyperplane always refers to a hyperplane through the origin of a vector space.

We’re going to change the field of definition a few times, so we’ll let $V$ be a vector space over an arbitrary field $k$. Then a *hyperplane arrangement* in $V$ is simply a finite collection of hyperplanes in $V$. Ultimately we’re interested in the case $k = \mathbb{C}$, and in the computation of the de Rham cohomology of the complement of an arrangement. However, we’ll get used to riding around on training wheels first by computing points over a finite field, and then by computing connected components when $k = \mathbb{R}$.

**Mobius function of a lattice**

Let $H_1, \dots, H_m$ denote hyperplanes defining an arrangement $\mathcal{A}$ in $V$. Let $L(\mathcal{A})$ denote the collection of all *nonempty* subsets of $V$ which can be expressed as an intersection of some of the $H_i$, possibly an empty intersection. Since $0$ is contained in every $H_i$, the total intersection is nonempty. Hence if we endow $L(\mathcal{A})$ with the partial ordering given by set theoretic inclusion, $L(\mathcal{A})$ has both a least element (the total intersection) and a largest element $V$ (corresponding to the empty intersection). In fact, $L(\mathcal{A})$ is a *lattice*, that is, a poset in which every pair of elements has a unique supremum and infimum. To see this, consider $x$ and $y$ in $L(\mathcal{A})$. The infimum is given by the intersection $x \cap y$, while the supremum is given by the intersection of all the $H_i$ containing both $x$ and $y$. The Mobius function of the lattice $L(\mathcal{A})$ plays an important role in what follows, so we recall some generalities.

Let $L$ be a lattice. For $x$ and $y$ in $L$ let $[x, y]$ be the *interval* between $x$ and $y$, containing all $z$ such that $x \le z \le y$. A lattice is said to be *locally finite* if every interval is a finite set. Such lattices possess a Mobius function $\mu$. It is defined concretely as follows: $\mu$ is a function of pairs, defined inductively by setting $\mu(x, x) = 1$,

$$\sum_{z \in [x, y]} \mu(x, z) = 0$$ if $x < y$, and $\mu(x, y) = 0$ otherwise.

For example, if one considers the positive integers endowed with the divisibility relation, then $\mu(x, y) = \mu(y/x)$, where the right-hand $\mu$ is the classical Mobius function of number theory (extended so that $\mu(y/x) = 0$ if $x$ does not divide $y$).

For an arbitrary locally finite lattice (and I probably also have to assume that there is a unique least element for the formulae below to make sense), one has a Mobius inversion formula of the following form: if $f$ and $g$ are two functions on $L$, where $g$ is defined in terms of $f$ by the formula

$$g(x) = \sum_{y \le x} f(y),$$

then Mobius inversion says that $f$ can be expressed in terms of $g$ via the formula:

$$f(x) = \sum_{y \le x} \mu(y, x)\, g(y).$$

For more on Mobius functions, one can start with the Wikipedia page on the incidence algebra of a lattice. If you’ve got access to Springerlink, then you can also check out this classic paper of Gian-Carlo Rota, *On the Foundations of Combinatorial Theory I: Theory of Mobius Functions*.

**Counting points of an arrangement over a finite field**

In this section we suppose that $k = \mathbb{F}_q$ is a finite field with $q$ elements. Set $\mu(x) = \mu(x, V)$ for any $x$ in $L(\mathcal{A})$, where $\mu$ is the Mobius function of the lattice $L(\mathcal{A})$ associated to the arrangement $\mathcal{A}$ in $V$. Let $N$ denote the number of points of $V(\mathbb{F}_q)$ in the complement of the arrangement $\mathcal{A}$.

**Lemma.** One has
$$N = \sum_{x \in L(\mathcal{A})} \mu(x)\, q^{\dim x}.$$

We can prove this by a simple application of Mobius inversion. Let $f$ be defined as follows: $f(x)$ is the number of points contained in $x$ but not contained in any $y \in L(\mathcal{A})$ with $y < x$ (so $N = f(V)$). One shows that if $g(x) = q^{\dim x}$, then $g(x) = \sum_{y \le x} f(y)$. The lemma then follows immediately by evaluating the Mobius inversion formula at $x = V$.
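As a sanity check (my own toy example, not one from the post), one can verify the lemma by brute force for the arrangement $x = 0$, $y = 0$, $x = y$ in $\mathbb{F}_q^2$. Its intersection lattice is $V$ above the three lines above the origin, and a short computation with the inductive definition gives $\mu(V) = 1$, $\mu(\text{line}) = -1$, and $\mu(\{0\}) = 2$.

```python
q = 5
F = range(q)

# The central arrangement x = 0, y = 0, x = y in F_q^2,
# given by membership tests for the three hyperplanes.
hyperplanes = [lambda x, y: x == 0,
               lambda x, y: y == 0,
               lambda x, y: x == y]

# Brute-force count of points avoiding every hyperplane.
N = sum(1 for x in F for y in F
        if not any(H(x, y) for H in hyperplanes))

# The lemma: N = sum over the intersection lattice of mu(x) q^dim(x),
# with mu(V) = 1, mu(line) = -1 (three lines), mu({0}) = 2.
formula = 1 * q**2 + 3 * (-1) * q + 2
print(N, formula)  # 12 12
```

Indeed $q^2 - 3q + 2 = (q-1)(q-2)$, which is obvious directly: pick any nonzero $x$, then any $y$ different from both $0$ and $x$.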

See Levine’s paper for applications of this to computing points in the complement of the braid arrangement, as well as to computing an identity for Stirling numbers.

**The number of components of a real arrangement**

Somewhat amazingly, the previous computation can be used to compute the Betti numbers of the complement of a complex arrangement which is defined over $\mathbb{Q}$. Before getting to this, we state a result (without proof) which describes the number of connected components in the complement of a real arrangement defined over $\mathbb{Q}$:

**Lemma.** Let $V$ be a real finite dimensional vector space and let $\mathcal{A}$ be a finite arrangement in $V$ defined over $\mathbb{Q}$. Then the number of connected components in the complement of $\mathcal{A}$ is given by the formula
$$\#\pi_0(V \setminus \mathcal{A}) = \sum_{x \in L(\mathcal{A})} |\mu(x)|.$$

Levine’s proof considers the polynomial $\chi(q) = \sum_x \mu(x)\, q^{\dim x}$ of the previous section. A change of variables in this polynomial yields the Poincare polynomial of the corresponding complex arrangement, whose coefficients encode the Betti numbers. (*Remark*: I think that one has to choose $q$ (which is a finite prime, actually – I think Levine avoids the letter $p$ because he wants to save it for labelling points) such that reducing the arrangement mod $q$ does not change the lattice.)

It’s surprising that these Betti numbers can be computed combinatorially from the lattice alone. Rybnikov has constructed arrangements whose complements have nonisomorphic fundamental groups, but such that the corresponding lattices are isomorphic (and hence the complements have the same Betti numbers).

Originally I’d intended to discuss the Orlik-Solomon algebra tonight, but since we discuss it all throughout section 2 of de Shalit, I will save this for next time! I’ll try to get to it over the weekend.

## An announcement

June 15, 2011

Dear reader(s):

We **proudly announce** the first release of our code for computing with arithmetic quotients of the Bruhat-Tits tree of $\mathrm{GL}_2(\mathbb{Q}_p)$.

We hope that it will make it into Sage some day, but for now it is available on a private space in Assembla. If you want to try it out, here is what you should do:

- Download the source of Sage from their webpage, and compile it (it will take a couple of hours).
- Get our code from Assembla: `hg clone https://hg.assembla.com/btquotients`
- Put the folder `btquotients` inside `SAGE_ROOT/devel/sage/sage/modular/`
- Add a line in `SAGE_ROOT/devel/sage/setup.py` with `'sage.modular.btquotients',` (don’t forget the comma!) next to the similar lines that start with `sage.modular`.
- Run `sage -br` and enjoy!

If you don’t care about the source and just want to use it, you can also get a patch.

We will prepare a post or a worksheet with detailed instructions on using this software. Also comments are welcome, and needed! For now, you can start with:

```
sage: X=ShimuraCurve(13*23)
sage: Y=X[13]
sage: Y.plot()
```

PS @Cameron: yes, it worked. I got Darmon’s point! But not his period…which makes me suspect that there is some typo somewhere. It doesn’t matter anymore, though! 🙂

## The Bruhat-Tits building of PGL(n+1) (III)

June 14, 2011

It is now time to introduce the action of $G = \mathrm{PGL}(V)$ on our building. Of course, $G$ acts on homothety classes of flags on the left, by acting on the vector space $V$ (remember that this is the definition of the building). Here is where **types** start to become relevant: the action of $G$ can’t be transitive in general: it will not change the dimensions of the subspaces that make up each flag, and therefore if two flags have different sequences of dimensions, they won’t lie in the same orbit.

Here is the precise definition of **type**:

Given a pointed $k$-cell $(\sigma, [L_0])$, represented by lattices $L_0 \supsetneq L_1 \supsetneq \cdots \supsetneq L_k \supsetneq \pi L_0$, the

**type of $(\sigma, [L_0])$** is the sequence $(n_1, \dots, n_{k+1})$ defined by $n_i = \dim_{\mathbb{F}_q}(L_{i-1}/L_i)$. By convention, $L_{k+1} = \pi L_0$. Note that each of the $n_i$ is positive (at least $1$) and the sum of all of them is $n+1$.

There are $\binom{n}{k}$ types of pointed $k$-cells: in particular, there is only one type for the extreme-dimensional ones ($k = 0$ and $k = n$). This is why in the case of $n = 1$ (the tree) we don’t see them!

One can convince oneself easily that $G$ acts transitively on the set of pointed $k$-cells of a given type (you probably learned this in your first course in linear algebra).

From now on, fix coordinates on $V$ so that we can talk about $G = \mathrm{PGL}_{n+1}(K)$. Let $v_0$ be the vertex corresponding to this basis. Its stabilizer is $\mathrm{PGL}_{n+1}(\mathcal{O}_K)$. The question is: what is the stabilizer of a pointed $k$-cell $(\sigma, v_0)$? Well, it would have to fix $v_0$, so it lies in $\mathrm{PGL}_{n+1}(\mathcal{O}_K)$. Then it has to leave $\sigma$ invariant, so this will mean that there will be blocks in the corresponding matrix. To make it simpler, let’s suppose that the basis we chose happens to be adapted to $\sigma$. That is, that the flag looks like:

$$L_i = \pi \mathcal{O}_K e_1 \oplus \cdots \oplus \pi \mathcal{O}_K e_{m_i} \oplus \mathcal{O}_K e_{m_i + 1} \oplus \cdots \oplus \mathcal{O}_K e_{n+1},$$

where $0 = m_0 < m_1 < \cdots < m_k \le n$. This would be called the standard pointed $k$-cell of type $(n_1, \dots, n_{k+1})$, with $n_i = m_i - m_{i-1}$.

In this case the stabilizer consists of matrices with square blocks on the diagonal, of sizes $n_1, \dots, n_{k+1}$ and with entries in $\mathcal{O}_K$, having arbitrary ($\mathcal{O}_K$-)entries above these blocks, and entries divisible by $\pi$ below them. This is called the **standard parahoric subgroup of type $(n_1, \dots, n_{k+1})$**. In general, the stabilizer of a pointed $k$-cell will be a conjugate of such a group.
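In the smallest case $n = 1$ with type $(1, 1)$ (the Iwahori subgroup of $\mathrm{GL}_2$), the block condition is simply “lower-left entry divisible by $\pi$”. Here is a minimal Python sketch of the membership test (my own illustration, working with integer matrices as elements of $\mathrm{GL}_2(\mathbb{Z}_p)$ and $K = \mathbb{Q}_p$):

```python
def in_standard_iwahori(M, p):
    """Check whether the 2x2 integer matrix M lies in the standard
    Iwahori subgroup (parahoric of type (1,1)) of GL_2(Q_p):
    entries in Z_p, lower-left entry divisible by p, and determinant
    a p-adic unit (so that M is invertible over Z_p)."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return c % p == 0 and det % p != 0

print(in_standard_iwahori([[1, 3], [5, 2]], 5))  # True:  5 | 5, det = -13 is a 5-adic unit
print(in_standard_iwahori([[1, 3], [5, 2]], 3))  # False: 3 does not divide 5
```

The same pattern generalizes: for a general type one checks that each below-diagonal block is divisible by $p$ and the diagonal blocks are invertible mod $p$.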

There are more intrinsic groups that we can get from looking at stabilizers: fix a vertex $v$, and consider the ball of radius $m$ centered at $v$: the set of vertices at distance **at most** $m$ from $v$. Its point-wise stabilizer is called the **principal congruence subgroup of level $m$** of the stabilizer of $v$. Letting $m$ vary we get a sequence of normal pro-$p$ subgroups of the stabilizer of $v$.

Maximal tori of $G$ can also be recovered: for each basis of $V$, its corresponding maximal torus is the stabilizer of the apartment $A$ determined by that basis. Given a wall $W$ in $A$, there is a unique involution which normalizes the torus and induces on $A$ a reflection with respect to the wall $W$.

This post should finish with a little bit of topology, as promised. Actually, we will introduce a new metric $d$. Let $|\mathcal{B}|$ be the topological simplicial complex (the geometric realization) associated to the building. This takes vertices to points, edges to (open) segments, $2$-cells to (open) triangles and so on. This turns out to be a contractible topological space.

Pick a vertex $v$. The **star** of $v$, written $\mathrm{St}(v)$, is the subspace obtained as the union of the open simplices containing $v$ (look at the picture for $n = 2$ to see why it is called a star). Note that its closure is compact. This notion is extended to any cell by taking the intersection of the stars of the vertices in that cell.

Let $\{e_1, \dots, e_{n+1}\}$ be a basis for $V$ and let $A$ be its corresponding apartment. We can identify

$$|A| \cong \mathbb{R}^{n+1}/\mathbb{R}\cdot(1, \dots, 1),$$

and we get a Euclidean metric on $|A|$. Now, it is a fact that any two points in $|\mathcal{B}|$ belong to a common apartment, and therefore we can measure the distance between them inside that apartment. That this is well-defined follows from looking at the intersections of two apartments: if $A_1$ and $A_2$ contain the points $x$ and $y$, then there is an isomorphism $A_1 \to A_2$ which fixes (pointwise) both $x$ and $y$. Also, given two vertices $x, y$, there is a geodesic connecting them: the straight line in any of the apartments containing both of them.

One word of caution to end this post: the metric $d$ is different from the combinatorial distance that we introduced before. I just noticed that the same notation has two meanings in this post, but if you have trouble differentiating them by the context you should probably be reading something else anyway…

Next goal: draw some pictures!

## The Bruhat-Tits tree

June 9, 2011

In this post I’ll specialize Marc’s discussion to the building of $\mathrm{PGL}_2(K)$. It turns out that the building is an infinite tree in this case.

**1. Definition**

Recall that the vertices of the tree $\mathcal{T}$ are lattices in $V = K^2$ taken up to rescaling. If $L$ is a lattice, then we’ll write $[L]$ for the homothety class of $K^\times$-multiples of $L$. Two vertices bound a $1$-simplex of $\mathcal{T}$ if and only if there are representative lattices $L$, $L'$ for the corresponding homothety classes such that

$$\pi L \subsetneq L' \subsetneq L.$$

Note that this is actually a symmetric relation: for $[\pi L] = [L]$ by definition of the square-brackets notation, and one deduces from the lined formula above that also

$$\pi L' \subsetneq \pi L \subsetneq L'.$$

Since $L$ is a lattice in $K^2$, it is isomorphic with $\mathcal{O}_K^2$, and thus

$$L/\pi L \cong (\mathcal{O}_K/\pi \mathcal{O}_K)^2,$$

a two dimensional vector space over the residue field of size $q$. The lattice $L'$ above corresponds to a line in $L/\pi L$, and there are $q + 1$ such lines. This computation also shows that there are no higher dimensional simplices in $\mathcal{T}$, for such a simplex would correspond to a sequence of lattices

$$\pi L \subsetneq L'' \subsetneq L' \subsetneq L.$$

But additive subgroups sandwiched between $\pi L$ and $L$ as above correspond one to one with vector subspaces of $L/\pi L$, and this plane is too small to admit such a chain of proper inclusions. To summarize, we have shown that $\mathcal{T}$ is a graph such that each vertex is adjacent to exactly $q + 1$ other vertices.
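One can watch the count $q + 1$ come out of a computation: the neighbours of $[L]$ correspond to the lines in $L/\pi L$, so over $\mathbb{Q}_p$ there should be $p + 1$ of them. A quick Python check (my own illustration):

```python
p = 5

# Neighbours of the standard vertex [Z_p^2] in the tree correspond to
# lines in the 2-dimensional F_p-vector space L/pL.  We enumerate the
# lines as sets of nonzero scalar multiples of nonzero points.
points = [(x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)]
lines = {frozenset(((k * x) % p, (k * y) % p) for k in range(1, p))
         for (x, y) in points}
print(len(lines))  # 6, i.e. p + 1
```

Each line has $p - 1$ nonzero points and there are $p^2 - 1$ nonzero points in total, so the count is $(p^2 - 1)/(p - 1) = p + 1$, as expected.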

It does not take much more work to show that the graph is a tree: given two vertices $[L]$ and $[L']$, one can use the elementary divisors theorem to find representative lattices $L$ and $L'$ such that there is a basis $e_1, e_2$ for $L$ with the property that $e_1, \pi^n e_2$ is a basis for $L'$, for some integer $n \ge 0$. If we let $L_i$ denote the $\mathcal{O}_K$-span of $e_1$ and $\pi^i e_2$, then the inclusions

$$L = L_0 \supsetneq L_1 \supsetneq \cdots \supsetneq L_n = L'$$

describe the unique nonbacktracking path from $[L]$ to $[L']$. Hence every pair of vertices in $\mathcal{T}$ is joined by a unique nonbacktracking path, so that $\mathcal{T}$ is connected and acyclic, and hence a tree.

**2. Distances and contiguity**

Note that the integer $n$ above describes the distance between $[L]$ and $[L']$ as defined in Marc’s previous post; that is, the distance is simply the number of edges between the two vertices. The distance between two edges is the maximum distance between any two of the endpoints of the edges. So for example, if two distinct edges share a vertex, then their distance from one another is $2$. The distance from an edge to itself is $1$, as Marc remarked last time, which seems a little pathological. The distance from a vertex to an edge is the maximum distance from the vertex to the endpoints of the edge.

Recall that two simplices are said to be *contiguous* if and only if they are at distance at most $1$ from one another. Hence distinct vertices are contiguous if and only if they are joined by an edge. A vertex is contiguous with an edge if and only if it is an endpoint of the edge. Finally, the previous paragraph shows that an edge is contiguous with another if and only if they’re equal. This is one instance of things being simpler in the case of the Bruhat-Tits tree: contiguity is not very exciting.

**3. Apartments**

In this low-dimensional case, apartments are also very simple: given a basis $e_1, e_2$ for $V$, the corresponding apartment is described by the vertices which correspond to the lattices $\mathcal{O}_K e_1 \oplus \pi^n \mathcal{O}_K e_2$ for $n$ an integer, either positive or negative. Hence apartments are nothing but paths in the tree which are infinite in both directions. They can be given a natural Euclidean topology which makes them homeomorphic with the real line.

## The Bruhat-Tits building of PGL(n+1) (II)

June 8, 2011

As promised in the previous post, we will start this one with distances on the BT building. First, if $v_1 = [L_1]$ and $v_2 = [L_2]$ are two vertices, then one can find a basis $e_1, \dots, e_{n+1}$ of $V$ adapted to them: that is, so that $v_1$ is represented by the standard $\mathcal{O}_K$-lattice $L_1 = \mathcal{O}_K e_1 \oplus \cdots \oplus \mathcal{O}_K e_{n+1}$, and $v_2$ is represented by some sublattice of the form

$$L_2 = \pi^{a_1} \mathcal{O}_K e_1 \oplus \cdots \oplus \pi^{a_{n+1}} \mathcal{O}_K e_{n+1}, \qquad a_1 \le a_2 \le \cdots \le a_{n+1}.$$

This is just an application of the elementary divisors theorem. The *distance* between $v_1$ and $v_2$ is:

$$\mathrm{dist}(v_1, v_2) = a_{n+1} - a_1.$$

One can check that this is well defined. One extends this function to all simplices: if $\sigma = \{v_1, \dots, v_r\}$ and $\tau = \{w_1, \dots, w_s\}$, then:

$$\mathrm{dist}(\sigma, \tau) = \max_{i, j} \mathrm{dist}(v_i, w_j),$$

where the $v_i$ and $w_j$ are vertices. One then checks that this satisfies the properties of a distance. There are more distances that one can define on the building, but for now let’s avoid confusion.
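For $n = 1$ the elementary-divisors recipe is easy to implement: the elementary divisors of a $2 \times 2$ integer matrix are $d_1 = \gcd$ of the entries and $d_2 = |\det|/d_1$, and the distance between the two lattice classes is $v_p(d_2) - v_p(d_1)$. Here is a Python sketch (my own illustration; lattices are given by integer column spans inside $\mathbb{Z}_p^2$, and the determinant is assumed nonzero):

```python
from math import gcd

def valp(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p; v += 1
    return v

def tree_distance(M, p):
    """Distance in the Bruhat-Tits tree between the class of the
    standard lattice Z_p^2 and the class of the lattice spanned by the
    columns of the nonsingular integer matrix M (the n = 1 case).

    For a 2x2 integer matrix the elementary divisors are d1 = gcd of
    the entries and d2 = |det| / d1; the distance is v_p(d2) - v_p(d1)."""
    (a, b), (c, d) = M
    det = abs(a * d - b * c)
    d1 = gcd(gcd(a, b), gcd(c, d))
    d2 = det // d1
    return valp(d2, p) - valp(d1, p)

# [Z_2^2] and the lattice spanned by (4, 0) and (0, 1): distance 2.
print(tree_distance([[4, 0], [0, 1]], 2))  # 2
# Homothety invariance: rescaling the lattice by 2 gives the same class.
print(tree_distance([[8, 0], [0, 2]], 2))  # 2
```

The second call illustrates why the recipe is well defined on homothety classes: rescaling multiplies both elementary divisors by the same power of $p$, which cancels in the difference of valuations.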

**Contiguity**

We want a notion of when two cells are “contiguous”, and this notion should come from the distance that we have just defined. If $v$ and $w$ are adjacent vertices then $v$ should be contiguous to $w$. This only happens when $\mathrm{dist}(v, w) = 1$, but okay. We would like also that any cell is contiguous to itself, even if it is not a vertex. But if $\sigma$ is not a vertex, then $\mathrm{dist}(\sigma, \sigma) = 1$. So we can try with:

Two cells $\sigma, \tau$ are

**contiguous** if $\mathrm{dist}(\sigma, \tau) \le 1$.

Now, is this notion useful? Well, to start with, it would allow us to reconstruct the simplicial structure of the building: we would take as $k$-cells any set of $k+1$ vertices such that any two distinct ones are contiguous.

We would hope that something like the following would be true: two cells are contiguous if and only if they contain a common sub-cell. Well, this is not only false, but neither direction is actually true: for example, on the BT tree (the case $n = 1$), the two endpoints of any edge are contiguous (but they do not share sub-cells). On the other hand, there are no two distinct contiguous edges at all. Even when they share a vertex. None. So contiguity looks pretty bad, but this is life. To compensate, here is a proposition listing equivalent ways of defining contiguity:

Let $\sigma$ and $\tau$ be two cells in the building. Then the following are equivalent:

- $\sigma$ and $\tau$ are contiguous.
- The union of their vertex sets is contained in a cell.
- There are lattice flags representing $\sigma$ and $\tau$ respectively, such that they can be interlaced: that is, their union is also a lattice flag.

If you are a combinatorialist you might enjoy proving this. The rest of us are happy believing it and leaving the messy subindex book-keeping arguments alone.

If $\sigma$ and $\tau$ are contiguous, then one can define a *type* of the pair, which encodes how the corresponding lattice flags interleave, and what the dimensions of the successive quotients are. We won’t be too precise here, and might come back to it as we need it.

**Living combinatorially: walls and apartments**

The (vector) space $V$ is a nice place to live in, but not everyone gets along with everyone else, as it happens. There are maximally-compatible sets of elements of $V$ that people have always called bases. And they want a place to live. So we give them apartments, one for each basis. Of course, if you give an apartment to a family one day and the next day they come back and one of them has changed shirts, you don’t want to give them another apartment. So you consider two bases “the same family” if, after reordering, one is obtained from the other by rescaling each member independently (yes, one could change his/her shirt, another get a haircut, and so on).

Now, let’s get to business: if $\{e_1, \dots, e_{n+1}\}$ is a basis of $V$, the *apartment* that it determines is defined to be the simplicial subcomplex of the building supported on the vertices of the form

$$[\pi^{a_1} \mathcal{O}_K e_1 \oplus \cdots \oplus \pi^{a_{n+1}} \mathcal{O}_K e_{n+1}]$$

for varying $(a_1, \dots, a_{n+1}) \in \mathbb{Z}^{n+1}$. Every apartment is a triangulation of a copy of $n$-dimensional Euclidean space. For example, for $n = 1$ an apartment is a doubly-infinite sequence of consecutive edges, which is a “triangulation” of the real line. For $n = 2$, an apartment is a triangulation (a tesselation) of the Euclidean plane, and so on.

Finally, if we fix $i \ne j$ and an integer $c$, and consider the vertices as above which satisfy $a_i - a_j = c$, the subcomplex that they span is called a *wall* of the apartment. Cameron will give examples of all this, and I will try to draw some pictures (and possibly fail).

In the next post of this series, we will see how our group $G$ acts on such a building. This will give a way to understand many of the very beloved subgroups of $G$, such as parabolics, parahorics, maximal tori, and all these animals.

## The Bruhat-Tits building of PGL(n+1)

June 5, 2011

We will be following [dS] for a while. Let’s fix some notation: take a finite extension $K$ of $\mathbb{Q}_p$, with a choice of uniformizer $\pi$. Let $q$ denote the size of the residue field and normalize the norm on $K$ (and on $\mathbb{C}_p$, for that matter) so that $|\pi| = q^{-1}$. Fix also a vector space $V$ over $K$ of dimension $n+1$, and denote by $G$ the group $\mathrm{PGL}(V)$.

The goal of this post is to define the Bruhat-Tits (BT) building of $G$. For now we will just define it as a combinatorial object, namely a simplicial complex.

First we define its set of vertices: they are just homothety (dilation) classes of lattices of $V$. Here, by lattice we mean $\mathcal{O}_K$-lattice, and homothety is given by scaling by elements of $K^\times$.

The set of $k$-cells is the set of *(lattice) flags in $V$*: these are tuples $([L_0], \dots, [L_k])$ of vertices satisfying

$$\pi L_0 \subsetneq L_k \subsetneq \cdots \subsetneq L_1 \subsetneq L_0.$$

There is a natural cyclic ordering of the vertices in a given $k$-cell. We say that $\tau \le \sigma$ whenever $\tau$ is a *face* of $\sigma$.

Some notions will depend on the choice of a distinguished vertex on a $k$-cell $\sigma$. A pair $(\sigma, v)$ of a $k$-cell together with a distinguished vertex is called a *pointed $k$-cell*. One can define notions such as the *type* of a pointed $k$-cell, and to do computations it might be useful to pick a basis of $V$ that is *adapted* to a particular pointed $k$-cell. We will talk about these notions when we need them.

In the next post we will talk about distances, walls and apartments (and of course, a chamber will be a piece of the apartment and limited by walls!).

A **question** for the readers: why does one say the building of $\mathrm{PGL}_{n+1}$ and not the building of $\mathrm{GL}_{n+1}$ or of $\mathrm{SL}_{n+1}$? One possible **answer** is that $\mathrm{PGL}_{n+1}$ is the automorphism group of these buildings. Cameron suggests also that in this way we get nicer stabilizers of vertices and edges, although these two answers are very much related, I think…

## The name of the game

June 5, 2011

The first goal of this blog is to understand the paper “Residues on buildings, and de Rham cohomology of $p$-adic symmetric domains” of Ehud de Shalit. C and I have been playing around with trees for a while now, and decided that if a computer can deal with trees, then it should also be able to deal with buildings in general.

It is hard to decide at which level of generality we want to work. The least pretentious of the possibilities is to start with $\mathrm{PGL}_3$. This would already show some new features and give us some ~~headaches~~ challenges. At the other side of the spectrum would be to try to understand the building of any classical group, as P. Garrett does in “Buildings and Classical Groups”. Some middle ground for which we already have a reference is to do all of the groups $\mathrm{PGL}_{n+1}$, and if C agrees we’ll stick to this for now. I know, you want to see us doing more general groups and all that. So let’s keep it in mind and emphasize what is particular to $\mathrm{PGL}_{n+1}$ and what is more general.

Before finishing I would like to show at least a partial view of the whole picture of buildings, which should serve us as a guide. There are three main types of buildings:

- Spherical (finite apartments) (analogue to compact symmetric spaces)
- Affine (apartments look like real affine space) (analogue to noncompact symmetric spaces)
- Hyperbolic (the rest)

We will concentrate on **affine buildings** for now. These are combinatorial gadgets, some of which are associated to semisimple matrix groups. We will focus on those, since we care about these groups more than others, at least for now. Actually, all buildings of high enough dimension that don’t contain a low-dimensional (meaning 1 or 2) sub-building as a factor are indeed attached to some semisimple matrix group.