Non-archimedean Real Estate

Making our way up some buildings

p-adic Orlik-Solomon algebras

In this blog post I’m going to finish up discussing the classical Orlik-Solomon algebra and then move on to discuss section 2.1 of de Shalit. Note that I’ve made the title of this post up on a whim, and I have no idea if it’s at all close to standard terminology.

1. The Orlik-Solomon algebra

Let {V} be a finite dimensional complex vector space. Let {\mathcal{A}} denote a hyperplane arrangement {(H_{1},\ldots , H_{n})} in {V}, and let {\alpha _{i}} denote an element of the dual space of {V} which cuts out {H_{i}}, for each {i}. Let {M} denote the complement in {V} of {\bigcup _{i} H_{i}}; we wish to describe the de Rham cohomology ring {H^{\bullet }(M)} of {M}. We’ll work over {\mathbf{C}}, but one can in fact work integrally. (It is a theorem that the Betti cohomology of {M} is torsion free, so one need not worry about torsion.)

To begin our study of the de Rham cohomology, we identify some special {1}-cocycles. For each {i} we write {M_{i} = V - H_{i}}. Note that this is homotopy equivalent to {\mathbf{C}^{\times }} via projection onto a line in {V} complementary to {H_{i}}. Write {\omega _{i}} in {H^{1}(M)} for the pullback via the inclusion {M \to M_{i}} of the class in {H^{1}(M_{i})} which corresponds to {(1/2\pi i)dz/z} in {H^{1}(\mathbf{C}^{\times })}. With {\alpha _{i}} defined as above, one has

{\displaystyle \omega _{i} = \frac{1}{2\pi i} \frac{d\alpha _ i}{\alpha _ i}.}

These classes generate {H^{1}(M)}, and hence {H^{\bullet }(M)} as an algebra, and it is possible to describe the relations between them quite explicitly. For this we introduce the Orlik-Solomon algebra.

Let {E_{1}} denote the complex vector space with basis {e_{H}} for each hyperplane {H} in the arrangement, and let {E} denote the exterior algebra of {E_{1}}. Let {S} be a subset of the indices {1} through {n}. Then one says that {S} is dependent if the intersection {\bigcap _{i \in S} H_{i}} has codimension less than the size of {S}. One can show that {S} is dependent if and only if the linear forms {\alpha _{i}} for {i \in S} are linearly dependent over {\mathbf{C}}, which explains the terminology.

Let {e_{S}} denote the wedge product of the {e_{i}} for {i} in {S}, and let {\partial } denote the differential on {E} determined by {\partial e_{i} = 1} and the graded Leibniz rule. Finally, let {I} denote the homogeneous ideal generated by the elements {\partial e_{S}} as {S} ranges over the dependent subsets. It is not hard to show that {I} is stable under {\partial } and that it contains {e_{S}} for every dependent {S}, so that {E/I} inherits the structure of a differential graded algebra. Orlik and Solomon proved that the ideal {I} gives all the relations between the {\omega _{i}}: more precisely, the map {E \to H^{\bullet }(M)} defined by mapping {e_{i}} to {\omega _{i}} induces an isomorphism of graded algebras

{\displaystyle E/I \cong H^{\bullet }(M).}
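
As a concrete check of the notion of dependence, here is a small sketch (my own, not taken from Levine or de Shalit) that finds the dependent subsets of a toy arrangement by testing whether the coefficient vectors of the {\alpha _{i}} indexed by {S} have rank less than {|S|}; it uses sympy for exact rank computations.

from itertools import combinations
from sympy import Matrix

# Toy arrangement in C^2: the three lines x = 0, y = 0, x = y, recorded by the
# coefficient vectors of the defining linear forms alpha_i.
alphas = [(1, 0), (0, 1), (1, -1)]

def dependent_subsets(alphas):
    """Subsets S of indices for which the alpha_i, i in S, are linearly dependent."""
    deps = []
    for k in range(1, len(alphas) + 1):
        for S in combinations(range(len(alphas)), k):
            if Matrix([alphas[i] for i in S]).rank() < len(S):
                deps.append(S)
    return deps

print(dependent_subsets(alphas))   # [(0, 1, 2)]: only the full triple is dependent

For this arrangement the only dependent subset is the full triple, so {I} is generated by the single element {e_{2}\wedge e_{3} - e_{1}\wedge e_{3} + e_{1}\wedge e_{2}}.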

2. A {p}-adic variant

Now we’d like to define something similar for a {p}-adic vector space {V} of dimension {d+1}, except that we’d like to allow the hyperplane arrangement to be infinite (since we’d like to describe the cohomology of the Drinfeld symmetric spaces). Since we’ll be following de Shalit from here on out, I’m going to switch over to his conventions and stick to them from now on.

Rather than work with hyperplane arrangements, we’ll work with arrangements of lines in {V}. So let {\mathcal{A}} denote a subset of the projective space {\mathbf{P}(V)}. For each nonzero vector {a} in {V}, let {e_{a}} denote the line it spans. Following de Shalit we write {a \in \mathcal{A}} to mean that {e_{a} \in \mathcal{A}}.

Let {\widetilde{E}} denote the free exterior {K}-algebra on {\mathcal{A}}, where {K} is our fixed {p}-adic field, generated in degree {1} by the {e_{a}}. Let {\delta } denote the differential on {\widetilde{E}} which sends each {e_{a}} to {1} and is defined like a Cech differential on the higher graded pieces. Then put {E = \ker (\delta )}; since {\delta (e_{a}\wedge x) = x - e_{a}\wedge \delta x }, it follows that also {E = \text{im}(\delta )} and that {E} is the subalgebra generated by the elements {e_{a} - e_{b}}. Thus {\delta } yields a split exact sequence of graded modules

{\displaystyle 0 \to E \to \widetilde{E}\to E[1] \to 0},

where a splitting is given by {x \mapsto e_{a}\wedge x} for any {a} in {\mathcal{A}}. Remark: formula (2.4) in de Shalit is wrong; the correct formula is

{\displaystyle \delta(e_{0}\wedge\cdots \wedge e_{k}) = \delta(e_{0} \wedge \cdots \wedge e_{m}) \wedge \delta(e_{0} \wedge e_{m+1} \wedge \cdots \wedge e_{k})}

Edit: formula (2.4) in de Shalit is fine! I misread it when I was writing this, so my bad. :)
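
To get a feel for {\delta }, here is a quick standalone sketch (my own; the helper names are invented and the generators are just symbols) that models a few degrees of {\widetilde{E}} and checks both {\delta ^{2} = 0} and the identity {\delta (e_{a}\wedge x) = x - e_{a}\wedge \delta x} used above.

# Elements of the exterior algebra are dicts sending sorted tuples of
# generators to coefficients; the empty tuple is the degree-0 piece.
def e(*gens):
    return {tuple(gens): 1}

def add(x, y):
    z = dict(x)
    for t, c in y.items():
        z[t] = z.get(t, 0) + c
        if z[t] == 0:
            del z[t]
    return z

def scale(c, x):
    return {t: c * v for t, v in x.items()} if c else {}

def wedge(x, y):
    z = {}
    for s, c in x.items():
        for t, d in y.items():
            if set(s) & set(t):
                continue                      # repeated generator: the term is zero
            merged = s + t
            sign = 1                          # sign of the permutation sorting merged
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            z[key] = z.get(key, 0) + sign * c * d
            if z[key] == 0:
                del z[key]
    return z

def delta(x):
    # delta(e_{a_0} ^ ... ^ e_{a_k}) = sum_i (-1)^i (omit the i-th factor)
    z = {}
    for t, c in x.items():
        for i in range(len(t)):
            key = t[:i] + t[i + 1:]
            z[key] = z.get(key, 0) + ((-1) ** i) * c
            if z[key] == 0:
                del z[key]
    return z

x = add(e('b', 'c'), scale(3, e('a', 'd')))          # a degree-2 element
assert delta(delta(x)) == {}                          # delta squared is zero
lhs = delta(wedge(e('a'), x))
rhs = add(x, scale(-1, wedge(e('a'), delta(x))))
assert lhs == rhs                                     # delta(e_a ^ x) = x - e_a ^ delta(x)
print("identities verified")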

Before jumping into the next definition, I’d like to provide some explanation for what we’re about to do: if {\tau } is an (oriented) {r}-simplex of the building {\mathcal{T}_{d}}, say represented by the lattices

{\displaystyle M_{0} \supset M_{1} \supset \cdots \supset M_{r} \supset \pi M_{0} = M_{r+1}},

then one can intersect the lines in our arrangement with {M_{0}}. Reducing modulo the uniformizer {\pi } of {K} then gives an honest finite arrangement of lines in the finite dimensional vector space {M_{0} /\pi M_{0}}. Note that {\tau } endows this quotient with a filtration. Using this filtration we will define, for each such {\tau }, relations in {E} such that the quotient of {E} by these relations describes the Monsky-Washnitzer cohomology of the arrangement in {M_{0}/\pi M_{0}}. Today we’re just going to define the relations, though, and we’ll get to the cohomology later.

Let {\tau } be an oriented {r}-simplex represented by lattices {M_{i}} as above. Then for {a} in {\mathcal{A}\cap M_{0} - \pi M_{0}}, we define the index {\iota _{\tau }(a)} of {a} relative to {\tau } to be the unique integer {0 \leq i \leq r} such that {a \in M_{i} - M_{i+1}}. For general {a} in {\mathcal{A}}, one can multiply {a} by a suitable (unique!) power of the uniformizer {\pi } so that {\pi ^{n}a \in M_{0} - \pi M_{0}}, and we then define the index of {a} to be the index of {\pi ^{n}a}.

Example. Consider the case {d = 1} and {K = {\mathbf{Q}_ p}} (just so that I can write {{\mathbf{Z}_ p}} instead of {\mathcal{O}_{K}}). Then {V = \mathbf{Q}_ p^{2}} is two dimensional and {\mathcal{T}_{1}} is the Bruhat-Tits tree. Let {\tau } be an oriented edge. There are two possible indices relative to {\tau }, namely {0} and {1}. To describe things more concretely, we take for {\tau } the edge corresponding to the following lattices: {M_{0}} is the standard lattice {\mathbf{Z}_ p^{2}}, while {M_{1}} is the lattice spanned by the vectors {(1,0)} and {(0,p)} (to save writing a bunch of transposes, I’m going to write row vectors in this example rather than column vectors). Let {a = (x,y)} be an arbitrary vector in {\mathcal{A}}. To compute its index we must rescale {a} so that both {x} and {y} are {p}-adic integers and at least one of them is a {p}-adic unit; replace {a} by this rescaled version. If {y} is a {p}-adic unit, then {a \in M_{0}} but {a \notin M_{1}}, and the index of {a} is {0}. If {y} is not a {p}-adic unit, then {x} is a {p}-adic unit and {a \in M_{1} - pM_{0}}, so the index of {a} is {1}. Somewhat more generally, if one considers the {p+1} edges incident to the vertex {[M_{0}]} in the tree, which correspond {1-1} to the lines in {M_{0}/pM_{0}}, then {a} has index {1} relative to a given such edge if and only if it reduces to the line in {M_{0}/pM_{0}} corresponding to that edge.
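
Here is a tiny sketch (my own; the function names are made up) that computes the index from this example for {K = \mathbf{Q}_ p}, {M_{0} = \mathbf{Z}_ p^{2}} and {M_{1}} spanned by {(1,0)} and {(0,p)}, by rescaling a vector with rational entries and looking at the {p}-adic valuation of its second coordinate.

from fractions import Fraction

def vp(r, p):
    """p-adic valuation of a rational number r (infinity for r = 0)."""
    if r == 0:
        return float("inf")
    r = Fraction(r)
    num, den, v = r.numerator, r.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def index(a, p):
    """Index of a = (x, y) relative to the edge (M_0, M_1) of the example."""
    x, y = a
    m = min(vp(x, p), vp(y, p))              # rescaling by p^(-m) makes both entries integral
    return 0 if vp(y, p) - m == 0 else 1     # index 0 iff the rescaled y is a unit

p = 5
print(index((Fraction(7, 5), 2), p))    # rescales to (7, 10): index 1
print(index((25, Fraction(3, 5)), p))   # rescales to (125, 3): index 0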

We can now define our relations relative to {\tau }: let {I(\tau )} be the ideal in {\widetilde{E}} generated by the elements {\delta (e_{a_{0}}\wedge \cdots \wedge e_{a_{m}})} for elements {a_{i}} of {\mathcal{A}\cap M_{j} - M_{j+1}} (for a fixed {j}) which are linearly dependent modulo {M_{j+1}}. Then as in the complex case we set {\widetilde{A}(\tau )} to be the quotient {\widetilde{E}/I(\tau )} and set {A(\tau )} to be the quotient {E/(E\cap I(\tau ))}. The previous exact sequence induces another split exact sequence

{\displaystyle 0 \to A(\tau ) \to \widetilde{A}(\tau ) \to A(\tau )[1] \to 0}.

We will spend the next few posts discussing these algebras. Note that Proposition 2.1 in de Shalit shows that (i) {\widetilde{A}(\tau )} is supported in degree {\leq d+1}, while {A(\tau )} is supported in degree {\leq d}; (ii) both algebras are generated in degree {1}; (iii) both algebras are finite dimensional.

Next time I’ll cover subsections 2.2 and 2.3, and then there will be one more post finishing up the last bits of section 2.

Hyperplane arrangements

Let {K} be a finite extension of {{\mathbf{Q}_ p}}. We are on our way towards understanding the cohomology of Drinfeld’s {p}-adic symmetric domain of dimension {d} over {K}, and its connection with the building of {\mathbf{PGL}_{d+1}}. Recall that Drinfeld’s domain is {\mathbf{P}_{d}({\mathbf{C}_ p})} with all {K}-rational hyperplanes removed. When {d > 1} these spaces are a little mysterious to me, so a more modest goal for these posts is simply to get a concrete feeling for what these higher domains look like. When {d = 1} the answer is nice and easy to picture: the domain, in this case the {p}-adic upper half plane, is a tubular neighbourhood of the Bruhat-Tits tree.

Marc has asked me to cover the next section of de Shalit’s paper Residues on buildings, etc. It defines an algebra whose genesis lies in the study of finite complex hyperplane arrangements. More precisely, the material in section 2 of de Shalit is motivated by the work of Orlik-Solomon on the cohomology of the complement in a finite dimensional vector space of a number of hyperplanes. So in this post I’m going to start off by going over this previous work, before jumping ahead into de Shalit. I’ll be following this wonderful expository paper of Lionel Levine, who is a postdoc at MIT.

Not all that’s presented below is necessary for our study of Drinfeld’s domain, but it’s all cool!

Hyperplane arrangements

In this post we restrict to what Levine (and maybe everybody who discusses hyperplane arrangements…) calls central arrangements. This amounts to considering only hyperplanes which pass through the origin. So to save myself from having to write codimension {1} subspace all over the place, in this post hyperplane always refers to a hyperplane through the origin of a vector space.

We’re going to change the field of definition a few times, so we’ll let {V} be a vector space over an arbitrary field {k}. Then a hyperplane arrangement in {V} is simply a finite collection of hyperplanes in {V}. Ultimately we’re interested in the case {k = \mathbf{C}}, and in the computation of the de Rham cohomology of the complement of an arrangement. However, we’ll get used to riding around on training wheels first by computing points over a finite field, and then by computing connected components when {k = \mathbf{R}}.

Mobius function of a lattice

Let {H_{1},\ldots , H_{n}} denote hyperplanes defining an arrangement {A} in {V}. Let {L(A)} denote the collection of all nonempty subsets of {V} which can be expressed as an intersection of some of the {H_{i}}, possibly an empty intersection. Since {0} is contained in every {H_{i}}, the total intersection is nonempty. Hence if we endow {L(A)} with the partial ordering given by set theoretic inclusion, {L(A)} has both a least element {\cap _{i} H_{i}} and a largest element {V} (corresponding to the empty intersection). In fact, {L(A)} is a lattice, that is, a poset in which every pair of elements has a unique supremum and infimum. To see this, consider two elements {X = \cap _{i \in S} H_{i}} and {Y = \cap _{i \in T} H_{i}}. Their infimum is the intersection {X \cap Y = \cap _{i \in S \cup T} H_{i}}, while their supremum is the intersection of all the hyperplanes {H_{i}} which contain both {X} and {Y}. The Mobius function of the lattice {L(A)} plays an important role in what follows, so we recall some generalities.

Let {(L, \leq )} be a lattice. For {a} and {b} in {L} let {[a,b]} be the interval between {a} and {b} containing all {x \in L} such that {a \leq x \leq b}. A lattice is said to be locally finite if every interval is a finite set. Such lattices possess a Mobius function. It is defined concretely as follows: {\mu } is a function {L \times L \to \mathbf{Z}} defined inductively by setting {\mu (x,x) = 1},

{\displaystyle \mu (x,y) = -\sum _{x \leq z < y} \mu (x,z)}

if {x < y} and {\mu (x,y) = 0} otherwise.

For example, if one considers the positive integers endowed with the divisibility relation, then {\mu (a,b) = \mu (b/a)} where {\mu } is the classical Mobius function of number theory (extended so that {\mu (b/a) = 0} if {a} does not divide {b}).

For an arbitrary locally finite lattice {L} (and I probably also have to assume that there is a unique least element for the formulae below to make sense), one has a Mobius inversion formula of the following form: if {f} and {g} are two functions {L \to \mathbf{Z}}, where {g} is defined in terms of {f} by the formula

{\displaystyle g(y) = \sum _{x \leq y} f(x),}

then Mobius inversion says that {f} can be expressed in terms of {g} via the formula:

{\displaystyle f(y) = \sum _{x \leq y} g(x)\mu (x,y).}
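
As a sanity check, here is a minimal sketch (mine, not from Levine) implementing the inductive definition of {\mu } for a finite poset; applied to the divisors of {12} ordered by divisibility, it recovers the classical Mobius function, and one can test the inversion formula numerically in the same way.

from functools import lru_cache

def mobius(elements, leq):
    """Mobius function mu(x, y) of a finite poset given by the relation leq."""
    @lru_cache(maxsize=None)
    def mu(x, y):
        if x == y:
            return 1
        if not leq(x, y):
            return 0
        # mu(x, y) = - sum of mu(x, z) over all z with x <= z < y
        return -sum(mu(x, z) for z in elements if leq(x, z) and leq(z, y) and z != y)
    return mu

divisors = (1, 2, 3, 4, 6, 12)
mu = mobius(divisors, lambda a, b: b % a == 0)
print([mu(1, d) for d in divisors])   # [1, -1, -1, 0, 1, 0], the classical values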

For more on Mobius functions, one can start with the Wikipedia page on the incidence algebra of a lattice. If you’ve got access to Springerlink, then you can also check out this classic paper of Gian-Carlo Rota, On the Foundations of Combinatorial Theory I: Theory of Mobius Functions.

Counting points of an arrangement over a finite field

In this section we suppose that {k} is a finite field with {q} elements. Set {\mu (X) = \mu (X,V)} for any {X \in L(A)}, where {\mu } is the Mobius function of the lattice {L(A)} associated to the arrangement {A} in {V}. Let {\chi (A,q)} denote the number of points of {V} in the complement of the arrangement {A}.

Lemma. One has

{\displaystyle \chi (A,q) = \sum _{X \in L(A)} \mu (X)q^{\dim X}.}

We can prove this by a simple application of Mobius inversion. Let {f \colon L(A) \to \mathbf{Z}} be defined as follows: {f(Y)} is the number of points contained in {Y} but not contained in any {X \subsetneq Y} with {X \in L(A)} (so {f(V) = \chi (A,q)}). One checks that if {g(Y) = \sum _{X \subseteq Y} f(X)}, the sum running over {X \in L(A)}, then {g(Y) = q^{\dim Y}}: each point of {Y} is counted exactly once, namely in the smallest element of {L(A)} containing it, and {|Y| = q^{\dim Y}} since {Y} is a linear subspace. The lemma then follows immediately by evaluating the Mobius inversion formula at {V}.
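
Here is a brute-force check of the lemma (my own sketch, not code from Levine’s paper) for the arrangement {x = 0}, {y = 0}, {x = y} in {\mathbf{F}_ q^{2}}: it counts the complement directly and compares the answer with the Mobius-function formula over the intersection lattice.

from itertools import product

q = 5
hyperplanes = [lambda v: v[0] % q == 0,            # x = 0
               lambda v: v[1] % q == 0,            # y = 0
               lambda v: (v[0] - v[1]) % q == 0]   # x = y

points = list(product(range(q), repeat=2))

# Direct count of the points lying on none of the hyperplanes.
direct = sum(1 for v in points if not any(h(v) for h in hyperplanes))

# Build the intersection lattice L(A) as a set of point-sets; the empty
# intersection gives V itself.
subspaces = set()
for mask in range(2 ** len(hyperplanes)):
    chosen = [h for i, h in enumerate(hyperplanes) if mask >> i & 1]
    subspaces.add(frozenset(v for v in points if all(h(v) for h in chosen)))

V = frozenset(points)

def mu(X, Y):
    """Mobius function of L(A), ordered by inclusion, as defined above."""
    if X == Y:
        return 1
    return -sum(mu(X, Z) for Z in subspaces if X <= Z and Z < Y)

# Since each X in L(A) is a linear subspace, |X| = q^{dim X}.
formula = sum(mu(X, V) * len(X) for X in subspaces)

print(direct, formula)   # both equal (q-1)*(q-2) = 12 for q = 5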

See Levine’s paper for applications of this to computing points in the complement of the braid arrangement, as well as to computing an identity for Stirling numbers.

The number of components of a real arrangement

Somewhat amazingly, the previous computation can be used to compute the Betti numbers of the complement of a complex arrangement which is defined over {\mathbf{Z}}. Before getting to this, we state a result (without proof) which describes the number of connected components in the complement of a real arrangement defined over {\mathbf{Z}}:

Lemma. Let {V} be a real finite dimensional vector space and let {A} be a finite arrangement in {V} defined over {\mathbf{Z}}. Then the number of connected components {\gamma } in the complement of {A} is given by the formula

{\displaystyle \gamma = (-1)^{\dim V}\sum _{X \in L(A)} (-1)^{\dim X}\mu (X).}

Levine’s proof considers the polynomial {\chi (A,q)} of the previous section. A change of variables turns this polynomial into the Poincare polynomial of the corresponding complex arrangement, whose coefficients encode the Betti numbers. (Remark: I think that one has to choose {q} (actually a prime; I think Levine avoids the letter {p} because he wants to save it for labelling points) such that reducing the arrangement modulo {q} does not change the lattice.)

It’s surprising that these Betti numbers can be computed combinatorially from the lattice alone. Rybnikov has constructed arrangements whose complements have nonisomorphic fundamental groups, but such that the corresponding lattices are isomorphic (and hence the complements have the same Betti numbers).

Originally I’d intended to discuss the Orlik-Solomon algebra tonight, but since we discuss it all throughout section 2 of de Shalit, I will save this for next time! I’ll try to get to it over the weekend.

An announcement

Dear reader(s):

We proudly announce the first release of our code for computing with arithmetic quotients of the Bruhat-Tits tree of \text{PGL}_2(\mathbb{Q}_p).

We hope that it will make it into Sage some day, but for now it is available on a private space in Assembla. If you want to try it out, here is what you should do:

  1. Download the source of Sage from their webpage, and compile it (it will take a couple hours).
  2. Get our code from assembla: hg clone https://hg.assembla.com/btquotients
  3. Put the folder btquotients inside SAGE_ROOT/devel/sage/sage/modular/
  4. Add a line containing ‘sage.modular.btquotients’, (don’t forget the comma!) to SAGE_ROOT/devel/sage/setup.py, next to the similar lines that start with sage.modular.
  5. Run sage -br and enjoy!

If you don’t care about the source and just want to use it, you can also get a patch.

We will prepare a post or a worksheet with detailed instructions on using this software. Also comments are welcome, and needed! For now, you can start with:

sage: X=ShimuraCurve(13*23)

sage: Y=X[13]

sage: Y.plot()

PS @Cameron: yes, it worked. I got Darmon’s point! But not his period…which makes me suspect that there is some typo somewhere. It doesn’t matter anymore, though! 🙂

The Bruhat-Tits building of PGL(n+1) (III)

It is now time to introduce the action of {G} on our building. Of course, {{\text {PGL}}_{d+1}} acts on homothety classes of flags on the left, by acting on the vector space {V_{K}} (remember that this is the definition of {{\text {PGL}}}). Here is where types start to become relevant: the action of {{\text {PGL}}} can’t be transitive in general: it will not change the dimensions of the subspaces that make up each flag, and therefore flags with different sequences of dimensions won’t lie in the same orbit.

Here is the precise definition of type:

Given a pointed {k}-cell {\sigma }, the type of {\sigma } is the sequence {t(\sigma )=(e_{0},e_{1},\ldots ,e_{k})} defined by {e_{i}=\dim L_{i}/L_{i+1}}. By convention, {L_{k+1}=\pi L_{0}}. Note that each of the {e_{i}} is positive (at least {1}) and the sum of all of them is {d+1}.

There are {d \choose k} types of pointed {k}-cells: in particular, there is only one type in the extreme dimensions {k=0} and {k=d}. This is why in the case of {d=1} (the tree) we don’t see them!
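
A quick computational check of this count (my own sketch): types of pointed {k}-cells are sequences of {k+1} positive integers summing to {d+1}, and choosing the partial sums gives a bijection with {k}-element subsets of a set of size {d}.

from itertools import combinations
from math import comb

def types(d, k):
    """All sequences (e_0, ..., e_k) of positive integers summing to d + 1."""
    result = []
    for cuts in combinations(range(1, d + 1), k):
        cuts = (0,) + cuts + (d + 1,)
        result.append(tuple(cuts[i + 1] - cuts[i] for i in range(k + 1)))
    return result

d, k = 3, 1
print(types(d, k))                      # [(1, 3), (2, 2), (3, 1)]
print(len(types(d, k)) == comb(d, k))   # True: there are d choose k of them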

One can convince oneself easily that {G} acts transitively on the set of pointed {k}-cells of a given type (you probably learned this in your first course in linear algebra).

From now on, fix coordinates on {V_{K}} so that we can talk about {{\text {PGL}}_{d+1}(K)}. Let {v_{0}} be the vertex corresponding to this basis. Its stabilizer is {{\text {PGL}}_{d+1}({\mathcal{O}}_{K})}. The question is: what is the stabilizer {B_{\sigma }} of a pointed {k}-cell {\sigma =(v_{0},v_{1},\ldots ,v_{k})}? Well, it would have to fix {v_{0}}, so it lies in {{\text {PGL}}_{d+1}({\mathcal{O}}_{K})}. Then it has to leave {v_{1}} invariant, and so on, which means that there will be blocks in the corresponding matrix. To make it simpler, let’s suppose that the basis we chose happens to be adapted to {\sigma }; that is, that the flag looks like:

\displaystyle  (e_{0},\ldots ,e_{d})\supset (e_{r_{1}},\ldots ,e_{d})\supset \cdots \supset (e_{r_{k}},\ldots ,e_{d}),

where {r_{1}<r_{2}<\cdots <r_{k}}. This would be called the standard pointed {k}-cell of type {(r_{1},r_{2}-r_{1},\ldots ,d+1-r_{k})}.

In this case the stabilizer consists of the matrices with {k+1} blocks on the diagonal, of sizes {r_{1},r_{2}-r_{1},\ldots }, with entries in {{\mathcal{O}}_{K}}, arbitrary entries above these blocks, and entries divisible by {\pi } below them. This is called the standard parahoric subgroup of type {t(\sigma )}. In general, the stabilizer of a pointed {k}-cell will be a conjugate of such a group.
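
To make the block condition concrete, here is a small membership test (my own sketch, with invented names; it only checks the congruence condition, not invertibility) for the standard parahoric determined by given block sizes, with {K = \mathbf{Q}_ p} and integer entries so that divisibility by {p} plays the role of divisibility by {\pi }.

def in_standard_parahoric(g, block_sizes, p):
    """Entries below the diagonal blocks must be divisible by p; the rest is free."""
    block_of = []
    for b, size in enumerate(block_sizes):
        block_of += [b] * size
    n = len(block_of)
    for i in range(n):
        for j in range(n):
            if block_of[i] > block_of[j] and g[i][j] % p != 0:
                return False
    return True

p = 5
g_good = [[2, 7, 1],
          [5, 1, 3],
          [10, 5, 4]]    # block sizes (1, 2): the entries below the first block
g_bad = [[2, 7, 1],      #   (rows 2 and 3 of column 1) are divisible by 5
         [1, 1, 3],
         [10, 5, 4]]
print(in_standard_parahoric(g_good, (1, 2), p))   # True
print(in_standard_parahoric(g_bad, (1, 2), p))    # False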

There are more intrinsic groups that we can get from looking at stabilizers: fix a vertex {v}, and consider the ball of radius {n} centered at {v}: the set of vertices at distance at most {n} from {v}. Its point-wise stabilizer is called the principal congruence subgroup of level {n} of the stabilizer {B_{v}} of {v}. Letting {n} vary we get a sequence of normal pro-{p} subgroups of {B_{v}}.

Maximal tori of {G} can also be recovered: for each basis {\alpha }, the corresponding maximal torus {T_{\alpha }} is the pointwise stabilizer of the apartment {A_{\alpha }} (the stabilizer of {A_{\alpha }} as a set is the normalizer of {T_{\alpha }}). Given a wall {W} in {A_{\alpha }}, there is a unique involution {s_{W}\in G} which normalizes {T_{\alpha }} and in {A_{\alpha }} induces a reflection with respect to the wall {W}.

This post should finish with a little bit of topology, as promised. Actually, we will introduce a new metric {d}. Let {|{\mathcal{T}}|} be the geometric realization of {{\mathcal{T}}}: it takes vertices to points, edges to (open) segments, {2}-cells to (open) triangles, and so on. This turns out to be a contractible topological space.

Pick a vertex {v_{0}}. The star of {v_{0}}, written {{\text {St}}(v_{0})}, is the subspace obtained as the union of the open simplices containing {v_{0}} (look at the picture for {d=1} to see why it is called a star). Note that its closure is compact. This notion is extended to any cell by taking the intersection of the stars of the vertices in that cell.

Let {\alpha } be a basis for {V_{K}} and let {A_{\alpha }} be its corresponding apartment. We can identify

\displaystyle  |A_{\alpha }|\cong {\mathbb {R}}^{d+1}/{\mathbb {R}}(1,\ldots ,1),

and we get a Euclidean metric {d_{\alpha }} on {|A_{\alpha }|}. Now, it is a fact that any two points in {|{\mathcal{T}}|} belong to a common apartment {A_{\alpha }}, and therefore we can measure the distance between them using {d_{\alpha }}. That this is well-defined follows from looking at the intersections of two apartments: if {A_{\alpha }} and {A_{\beta }} contain the points {x} and {y}, then there is an isomorphism {A_{\alpha }\to A_{\beta }} which fixes (pointwise) both {x} and {y}. Also, given two vertices {u,v}, there is a geodesic connecting them: the straight line in any of the apartments containing both of them.

One word of caution to end this post: the metric {d} is different from the metric {\rho } that we have introduced before. I just noticed that {d} has two meanings in this post, but if you have trouble differentiating them by the context you should probably be reading something else anyway…

Next goal: draw some pictures!

The Bruhat-Tits tree

In this post I’ll specialize Marc’s discussion to the building of {\mathbf{PGL}_2(\mathbf{Q}_p)}. It turns out that the building {\mathcal{T}} is an infinite tree in this case.

1. Definition

Recall that the vertices of {\mathcal{T}} are lattices in {\mathbf{Q}_p^2} taken up to rescaling. If {L \subseteq \mathbf{Q}_p^2} is a lattice, then we’ll write {[L]} for the homothety class of {\mathbf{Q}_p^{\times }}-multiples of {L}. Two vertices bound a {1}-simplex of {\mathcal{T}} if and only if there are representative lattices {L}, {L'} for the corresponding homothety classes such that

\displaystyle L \supsetneq L' \supsetneq pL.

Note that this is actually a symmetric relation: indeed {[L'] = [pL']} by definition of the square-brackets notation, and one deduces from the displayed formula above that also

\displaystyle L' \supsetneq pL \supsetneq pL'.

Since {L} is a lattice in {\mathbf{Q}_p^2}, it is isomorphic with {\mathbf{Z}_p^2}, and thus

\displaystyle L/pL \cong \mathbf{Z}_p^2/p\mathbf{Z}_p^2 \cong (\mathbf{Z}_p/p\mathbf{Z}_p)^2 \cong (\mathbf{Z}/p\mathbf{Z})^2.

The lattice {L'} above corresponds to a line in {(\mathbf{Z}/p\mathbf{Z})^2}, and there are {p+1} such lines. This computation also shows that there are no higher dimensional simplices in {\mathcal{T}}, for such a simplex would correspond to a sequence of lattices

\displaystyle L \supsetneq L' \supsetneq L'' \supsetneq pL.

But additive subgroups sandwiched between {L} and {pL} as above correspond one-to-one with vector subspaces of {(\mathbf{Z}/p\mathbf{Z})^2}, and this plane is too small to admit such a chain of proper inclusions. To summarize, we have shown that {\mathcal{T}} is a graph in which each vertex is adjacent to exactly {p+1} other vertices.
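
Here is a tiny sketch (mine, not from the post) that writes down basis matrices for the {p+1} lattices {L'} with {L \supsetneq L' \supsetneq pL} when {L = \mathbf{Z}_p^2}: one for each line in {(\mathbf{Z}/p\mathbf{Z})^2}, obtained by lifting a spanning vector of the line and adjoining {p} times a complementary basis vector.

def neighbours(p):
    """Column-bases for the p + 1 sublattices strictly between Z_p^2 and p*Z_p^2."""
    bases = []
    for t in range(p):
        # the line spanned by (1, t) mod p: sublattice with basis (1, t), (0, p)
        bases.append([[1, 0], [t, p]])
    # the line spanned by (0, 1) mod p: sublattice with basis (0, 1), (p, 0)
    bases.append([[0, p], [1, 0]])
    return bases

p = 5
for b in neighbours(p):
    print(b)
print(len(neighbours(p)), "neighbours")   # p + 1 = 6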

It does not take much more work to show that the graph {\mathcal{T}} is a tree: given two distinct vertices {u} and {v}, one can use the elementary divisors theorem to find representative lattices {L} and {L'} such that there is a basis {(e_1,e_2)} for {L} with the property that {(p^ne_1, e_2)} is a basis for {L'}, for some integer {n \geq 1}. If we let {L_i} denote the {\mathbf{Z}_p}-span of {p^ie_1} and {e_2}, then the inclusions

\displaystyle L_0 \supsetneq L_1 \supsetneq \cdots \supsetneq L_n

describe the unique nonbacktracking path from {u} to {v}. Hence every pair of vertices in {\mathcal{T}} is joined by a unique nonbacktracking path, so that {\mathcal{T}} is connected and acyclic, and hence a tree.

2. Distances and contiguity

Note that the integer {n} above describes the distance between {u} and {v} as defined in Marc’s previous post; that is, the distance is simply the number of edges between the two vertices. The distance between two edges is the maximum distance between any two of the endpoints of the edges. So for example, if two distinct edges share a vertex, then their distance from one another is {2}. The distance from an edge to itself is {1}, as Marc remarked last time, which seems a little pathological. The distance from a vertex to an edge is the maximum distance from the vertex to the endpoints of the edge.

Recall that two simplices are said to be contiguous if and only if they are at distance at most {1} from one another. Hence two distinct vertices are contiguous if and only if they are joined by an edge. A vertex is contiguous with an edge if and only if it is an endpoint of the edge. Finally, the previous paragraph shows that an edge is contiguous with another if and only if they’re equal. This is one instance of things being simpler in the case of the Bruhat-Tits tree: contiguity is not very exciting.

3. Apartments

In this low-dimensional case, apartments are also very simple: given a basis {(e_1,e_2)} for {\mathbf{Q}_p^2}, the corresponding apartment consists of the vertices corresponding to the lattices spanned by {p^ne_1} and {e_2}, for {n} ranging over the integers, positive or negative. Hence apartments are nothing but bi-infinite paths in the tree {\mathcal{T}}. They can be given a natural Euclidean topology which makes them homeomorphic with the real line.

The Bruhat-Tits building of PGL(n+1) (II)

As promised in the previous post, we will start this one with distances on the BT building {{\mathcal{T}}}. First, if {u,v} are two vertices, then one can find bases of {V_{K}} adapted to them: that is, so that {u=[L]} is represented by the standard {{\mathcal{O}}_{K}}-lattice {L}, and {v=[M]} is represented by some sublattice of the form

\displaystyle \pi ^{m_{0}}{\mathcal{O}}_{K}\oplus \pi ^{m_{1}}{\mathcal{O}}_{K}\oplus \cdots \oplus \pi ^{m_{d}}{\mathcal{O}}_{K},\quad 0\leq m_{0}\leq \cdots \leq m_{d}.

This is just an application of the elementary divisors theorem. The distance between {u} and {v} is:

\displaystyle \rho (u,v)=m_{d}-m_{0}.

One can check that this is well defined. One extends this function to all simplices: if {\sigma \in {\mathcal{T}}_{k}} and {\tau \in {\mathcal{T}}_{r}}, then:

\displaystyle \rho (\tau ,\sigma )=\max _{u\in \tau ,v\in \sigma } \rho (u,v),

where {u,v} are vertices. One then checks that this satisfies the properties of a distance. There are more distances that one can define on {{\mathcal{T}}}, but for now let’s avoid confusion.
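
To illustrate, here is a small sketch (mine; not from de Shalit) computing {\rho (u,v)} when {u} is the class of the standard lattice {{\mathcal{O}}_{K}^{d+1}} and {v} is the class of the lattice spanned by the columns of an integer matrix, with {K = \mathbf{Q}_ p}: the exponents {m_{i}} are the {p}-adic valuations of the elementary divisors, which are read off here from gcds of minors rather than from a full Smith normal form.

from itertools import combinations
from math import gcd

def det(m):
    """Determinant by Laplace expansion (fine for small integer matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def vp(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def distance(g, p):
    """rho([Z_p^(d+1)], [lattice spanned by the columns of g]) = m_max - m_min."""
    n = len(g)
    d_prev, vals = 1, []
    for k in range(1, n + 1):
        # d_k = gcd of all k x k minors; the k-th elementary divisor is d_k / d_{k-1}
        d_k = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                d_k = gcd(d_k, abs(det([[g[r][c] for c in cols] for r in rows])))
        vals.append(vp(d_k // d_prev, p))
        d_prev = d_k
    return max(vals) - min(vals)

p = 3
print(distance([[1, 0], [0, 9]], p))                     # m = (0, 2): distance 2
print(distance([[3, 0], [0, 3]], p))                     # homothetic to the standard lattice: 0
print(distance([[1, 0, 0], [0, 3, 0], [0, 0, 9]], p))    # m = (0, 1, 2): distance 2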

Contiguity

We want a notion of when two cells are “contiguous”, and this notion should come from the distance {\rho } that we have just defined. If {\rho (\sigma ,\tau )=0} then {\sigma } should be contiguous to {\tau }. This only happens when {\sigma =\tau \in {\mathcal{T}}_{0}}, but okay. We would like also that any cell is contiguous to itself, even if it is not a vertex. But if {\sigma } is not a vertex, then {\rho (\sigma ,\sigma )=1}. So we can try with:

Two cells {\sigma ,\tau \in {\mathcal{T}}} are contiguous if {\rho (\sigma ,\tau )\leq 1}.

Now, is this notion useful? Well, to start with, it would allow us to reconstruct the simplicial structure of {{\mathcal{T}}}: we would take as {k}-cells any set of {k+1} vertices such that any two distinct vertices among them are contiguous.

We would hope that something like the following would be true: two cells are contiguous if and only if they contain a common sub-cell. Well, this is not only false, but neither direction is actually true: for example, on the BT tree (the case of {d=1}), the two endpoints of any edge are contiguous (but they do not share sub-cells). On the other hand, no two distinct edges are contiguous at all. Even when they share a vertex. None. So it looks pretty bad, but this is life. To compensate, here is a proposition listing equivalent ways of defining contiguity:

Let {\sigma \in {\mathcal{T}}_{k}} and {\tau \in {\mathcal{T}}_{r}} be two cells in {{\mathcal{T}}}. Then the following are equivalent:

  1.  \sigma and \tau are contiguous.
  2. The union \tau \cup \sigma is contained in a cell.
  3. There are lattice flags {\pi L_0\subset L_k\subset \cdots \subset L_1\subset L_0} and {\pi M_0\subset M_r\subset \cdots \subset M_1\subset M_0} representing \sigma and \tau respectively, such that they can be interlaced: that is, their union is also a lattice flag.

If you are a combinatorialist you might enjoy proving this. The rest of us are happy believing it and leaving the messy index book-keeping arguments alone.

If {\sigma } and {\tau } are contiguous, then one can define a type {{\mathfrak {t}}(\sigma ,\tau )}, which encodes how the corresponding lattice flags interleave and what the dimensions of the successive quotients are. We won’t be too precise here, and might come back to it as we need it.

Living combinatorially: walls and apartments

The (vector) space {V_{K}} is a nice place to live in, but not everyone gets along with everyone else, as it happens. There are maximally-compatible sets of elements of {V_{K}} that people have always called bases. And they want a place to live. So we give them apartments, one for each basis. Of course, if you give an apartment to a family one day and the next day they come back with one of them having changed their shirt, you don’t want to give them another apartment. So you consider two bases “the same family” if, after reordering, one is obtained from the other by rescaling each member independently (yes, one could change his/her shirt, another could get a haircut, and so on).

Now, let’s get to business: if {\alpha =(\alpha _{0},\ldots ,\alpha _{d})} is a basis of {V_{K}}, the apartment {A_{\alpha }} that it determines is defined to be the simplicial subcomplex of {{\mathcal{T}}} supported on the vertices of the form

\displaystyle v_{\alpha }(m_{0},\ldots ,m_{d})=\pi ^{m_{0}}\alpha _{0}{\mathcal{O}}_{K}\oplus \cdots \oplus \pi ^{m_{d}}\alpha _{d}{\mathcal{O}}_{K},

for varying {(m_{0},\ldots ,m_{d})\in {\mathbb {Z}}^{d+1}}. Every apartment is a triangulation of a copy of {d}-dimensional Euclidean space. For example, for {d=1} an apartment is a doubly-infinite sequence of consecutive edges, which is a “triangulation” of the real line. For {d=2}, an apartment is a triangulation (a tessellation) of the Euclidean plane, and so on.

Finally, if we fix {m\in {\mathbb {Z}}} and {0\leq i<j\leq d}, and consider the vertices {v_{\alpha }(m_{0},\ldots ,m_{d})} which satisfy {m_{i}-m_{j}=m}, the subcomplex that they span is called a wall of the apartment {A_{\alpha }}. Cameron will give examples of all this, and I will try to draw some pictures (and possibly fail).

In the next post of this series, we will see how our group {G={\text {PGL}}(V_{K})} acts on such a building. This will give a way to understand many of the very beloved subgroups of {G}, such as parabolics, parahorics, maximal tori, and all these animals.

The Bruhat-Tits building of PGL(n+1)

We will be following [dS] for a while. Let’s fix some notation: take {K} a finite extension of {{\mathbb {Q}}_{p}}, with a choice of uniformizer {\pi }. Let {q} denote the size of {{\mathcal{O}}_{K}/(\pi )} and normalize the norm on {K} (and on {\mathbb {C}_{K}} for that matter) so that {|\pi |=q^{-1}}. Fix also a vector space {V_{K}} over {K} of dimension {d+1}, and denote by {G} the group {G=\text {PGL}(V_{K})}.

The goal of this post is to define {\mathcal{T}}, the Bruhat-Tits (BT) building of {G}. For now we will just define it as a combinatorial object, namely a simplicial complex.

First we define its set of vertices {\mathcal{T}_{0}}: they are just homothety (dilation) classes of lattices {L} in {V_{K}}. Here, by lattice we mean {\mathcal{O}_{K}}-lattice, and homothety is given by scaling by elements of {K^{\times }}.

The set of {k}-cells {{\mathcal{T}}_{k}} consists of (lattice) flags: these are {(k+1)}-tuples of vertices {([L_{0}],\ldots ,[L_{k}])} admitting representative lattices which satisfy

\displaystyle L_{0}\supsetneq L_{1}\supsetneq \cdots \supsetneq L_{k}\supsetneq \pi L_{0}.

There is a natural cyclic ordering of the vertices in a given {k}-cell, coming from the chain of lattices above. We also say that {\tau \leq \sigma } whenever {\tau } is a face of {\sigma }.

Some notions will depend on the choice of a distinguished vertex on a {k}-cell {\sigma }. A pair of a {k}-cell together with a distinguished vertex is called a pointed {k}-cell, and the set of these will be written as {\widehat{{\mathcal{T}}}_{k}}. One can define notions such as type of a pointed {k}-cell, and to do computations it might be useful to pick a basis of {V_{K}} that is adapted to a particular pointed {k}-cell. We will talk about these notions when we need them.

In the next post we will talk about distances, walls and apartments (and of course, a chamber will be a piece of the apartment limited by walls!).

A question for the readers: why does one say the building of \text{PGL}_{n+1} and not the building of \text{GL}_{n+1} or of \text{SL}_{n+1}? One possible answer is that \text{PGL}_{n+1} is the automorphism group of these buildings. Cameron suggests also that in this way we get nicer stabilizers of vertices and edges, although these two answers are very much related, I think…

The name of the game

The first goal of this blog is to understand the paper "Residues on buildings, and de Rham cohomology of p-adic symmetric domains" of Ehud de Shalit. C and I have been playing around with trees for a while now, and we decided that if a computer can deal with trees, then it should also be able to deal with buildings in general.

It is hard to decide at which level of generality we want to work. The least pretentious of the possibilities is to start with \text{GL}_3. This would already show some new features and give us some headaches (ahem, challenges). At the other side of the spectrum would be to try to understand the building of any classical group, as P. Garrett does in "Buildings and Classical Groups". Some middle ground for which we already have a reference is to do all of \text{GL}_n, and if C agrees we’ll stick to this for now. I know, you want to see us doing \text{Sp}_4 and all this. So let’s keep it in mind and emphasize what is particular to \text{GL}_n and what is more general.

Before finishing I would like to give at least a partial view of the whole picture of buildings, which should serve us as a guide. There are three main types of buildings:

  1.  Spherical (finite apartments) (analogue to compact symmetric spaces)
  2. Affine (apartments look like real affine space) (analogue to noncompact symmetric spaces)
  3. Hyperbolic (the rest)

We will concentrate on affine buildings for now. These are combinatorial gadgets, some of which are associated to semisimple matrix groups; we will focus on those, since we care about these groups more than others, at least for now. In fact, all buildings of high enough dimension that do not contain a small-dimensional (meaning 1 or 2) sub-building as a factor are attached to some semisimple matrix group.