Michael McBreen (date: 27th April 2009)
Topics: Representation theory and algebraic geometry.
Examiners: Dmitri Beliaev, Sergio Fenley and Andrei Okounkov (chair).
Length: two hours.
I can’t remember the order or format of the questions, so this is a somewhat
inaccurate dramatization.
O: Want to start with real analysis?
Sure.
No one volunteers.
O: OK.. complex analysis?
F: What’s a harmonic function?
Twice differentiable function with Laplace(f)=0, or equivalently, locally L^1
function satisfying f(x) = average of f(y) over a circle centered at x.
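In symbols (with the standard normalization):

```latex
\Delta f = \sum_i \frac{\partial^2 f}{\partial x_i^2} = 0,
\qquad\text{or equivalently}\qquad
f(x) = \frac{1}{2\pi r}\int_{|y-x|=r} f(y)\, ds(y) \quad \text{for all small } r > 0.
```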
O: How do you integrate an L^1 function on R^2 over a circle?
- Sorry, I guess we need continuous.
O: What if you average over a disc?
You still get f(x).
O: So L^1 makes sense here?
Yes.
F: Why are the definitions equivalent?
I start showing the integral doesn’t depend on the circle radius, by differentiating
it with respect to radius.
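The computation I was starting, spelled out: set

```latex
A(r) = \frac{1}{2\pi}\int_0^{2\pi} f(x + re^{i\theta})\, d\theta,
\qquad
A'(r) = \frac{1}{2\pi}\int_0^{2\pi} \partial_r f(x + re^{i\theta})\, d\theta
      = \frac{1}{2\pi r}\int_{B_r(x)} \Delta f \, dA = 0,
```

using the divergence theorem in the last step. So A(r) is independent of r, and letting r go to 0 gives A(r) = f(x).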
O: Can you do this with representation theory?
Um..
O: What’s a harmonic function on a symmetric space?
I guess: if the symmetric space is G/H for some semisimple group G, then the Lie
algebra of G acts on smooth functions by differentiation. Construct a Casimir
operator from the Lie Algebra and call its kernel the harmonic functions.
O: OK, now do an explicit example.
- I start doing it for R^2, explaining that since R^2 isn’t semisimple, I’m not sure
how my spiel applies.
O: Even for nonsemisimple, you have a bunch of invariant differential operators..
He goes on to explain something related which I can’t recall.
O: So can you show f(x) satisfies the mean value property from representation theory?
You have a group of transformations of the plane..
I mumble something inconclusive about rotations and translations. The Laplacian is
invariant under these transformations, so its eigenspaces are representations. What
could these eigenspaces be? I am about to explain that the Laplacian has a compact
inverse and use that to show it has a basis of eigenvectors… of sorts. But Okounkov
stops me: this is the wrong direction.
O: Write down the formula for the average over the disc.
This takes an absurdly long time due to heckling from the audience about my notation.
drdtheta becomes dvol becomes dx becomes dy, f(x) goes to f(r + x) goes to f(x + y)..
O: What operation are you applying to f(x)?
Convolution with the characteristic function of a ball.
O: OK, write it that way. What can you say?
It’s… projection onto the trivial representation of… our rotation group? No. I am
getting worried and confused.
O: Decompose L^2(R^n) under the action of the isometry group.
I frown.
O: Just take translations.
I want to say that e^(i lambda x) form a basis for the invariant subspaces, but they
aren’t in L^2.. I can do this on a torus, though.
O: OK, do the torus.
L^2(T^n) decomposes as a direct sum of frequency spaces under the torus action.
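That is, each character spans a one-dimensional invariant subspace:

```latex
L^2(T^n) = \widehat{\bigoplus_{\lambda \in \mathbb{Z}^n}} \; \mathbb{C}\, e^{i\langle \lambda, x\rangle}.
```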
O: For R^n instead of a direct sum, you have a ‘direct integral’.
I see where this is going! I write the Fourier transform, mumble a few properties.
O: How does the rotation group act on momentum space?
It rotates the momentum vector.
O: So?
This somehow trails off.
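My guess at where this was headed: averaging over the ball is convolution, and the Fourier transform turns convolution into multiplication,

```latex
\widehat{f * \tfrac{1}{|B_r|}\chi_{B_r}}(\xi) = \hat f(\xi)\, m_r(\xi),
\qquad
m_r(\xi) = \frac{1}{|B_r|}\,\widehat{\chi_{B_r}}(\xi).
```

Rotation invariance of the ball makes m_r radial with m_r(0) = 1, while harmonicity (|xi|^2 f-hat = 0 as distributions) forces f-hat to be supported at the origin; expanding m_r around 0, the correction terms all carry powers of the Laplacian applied to f and so vanish, and the average returns f(x).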
B: Can you show your harmonic function is smooth?
The average of a continuous function is C^1, average of a C^1 function is C^2, etc…
F: Why?
Basically, because the integral of a continuous function is differentiable.
F: Can you do it for this case in more detail?
I start very slowly choosing notation.
O: Do you know a formula that makes the analyticity obvious?
Poisson integral formula: I literally write f(x) = Integral(Poisson(x-y)f(y)dy) on
the board.
B: Why does such a kernel exist?
Well, you can write down an explicit kernel, check that the formula defines a
harmonic function on the disk with given boundary values, and use the maximum
principle to show that it must match your original harmonic function.
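For the unit disk, the explicit kernel referred to here is

```latex
f(re^{i\theta}) = \frac{1}{2\pi}\int_0^{2\pi}
\frac{1 - r^2}{1 - 2r\cos(\theta - \phi) + r^2}\, f(e^{i\phi})\, d\phi,
\qquad 0 \le r < 1.
```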
O: But why can we expect it to exist?
B: Yes, why can we expect it to exist? He hints at the Cauchy integral formula.
We can find a harmonic conjugate for f(x) on our simply-connected disk.
B: How?
I sketch the idea and conclude that the Poisson kernel comes from the Cauchy integral
formula, which I accidentally rename “the Cauchy Riemann integral”. My examiners are
amused: I propose to call it the Cauchy Beliaev formula instead, but they don’t seem
too enthusiastic.
O: In higher dimensions, for an operator like Laplacian or bar(del), why do you
expect an integral formula?
- I start explaining that an elliptic operator has finite dimensional kernel and
cokernel, and can be inverted by a compact operator away from these spaces. We can
hope that the compact operator will be given by an integral formula. (There's more to
say here, but I think I stopped there)
O: Give an example of all this.
- I take the bar(del) operator acting on smooth sections of a holomorphic line bundle
on a manifold, and show that the cokernel is the first Dolbeault cohomology group. I
prove the Dolbeault isomorphism because I can’t remember the statement.
Okounkov seems happy with this.
O: Can you find an explicit variety and line bundle with non-zero Dolbeault
cohomology?
I blank at first, then write down P^1 with O(1), remember it has no higher
cohomology, and switch to the structure sheaf of an elliptic curve.
O: What is H^1 in this case?
Genus.
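More precisely: for a compact Riemann surface X of genus g, Dolbeault gives

```latex
H^1(X, \mathcal{O}_X) \cong H^{0,1}_{\bar\partial}(X),
\qquad
\dim_{\mathbb{C}} H^1(X, \mathcal{O}_X) = g,
```

so for an elliptic curve g = 1 and H^1 is one-dimensional.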
B: Can you find a probabilistic interpretation of the Poisson kernel?
I don’t know.. you spread out the boundary values using a random walk on a very fine
lattice?
O: That’s correct.
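The interpretation can be checked numerically: run a nearest-neighbour random walk on a fine lattice inside the unit disk, stop when it crosses the boundary, and average the boundary values; the result approximates the harmonic extension. A quick sketch (the step size, start point, boundary function, and helper name `walk_average` are my own choices for illustration):

```python
import random

def walk_average(start, boundary_f, h=0.05, n_walks=2000, seed=0):
    """Estimate the harmonic extension of boundary_f at `start` by averaging
    boundary values over random walks on a lattice of spacing h."""
    rng = random.Random(seed)
    steps = [(h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)]
    total = 0.0
    for _ in range(n_walks):
        x, y = start
        while x * x + y * y < 1.0:  # walk until we cross the unit circle
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        total += boundary_f(x, y)
    return total / n_walks

# f(x, y) = x is harmonic, so the estimate at (0.5, 0) should be close to 0.5
# (the x-coordinate of the walk is a martingale, so this holds exactly in
# expectation, up to the overshoot at the boundary).
est = walk_average((0.5, 0.0), lambda x, y: x)
```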
O: Here’s a question! Suppose you’re in a discrete group, and you have a finite
symmetric set of generators. Take a random walk {x_n} . What can you say using
representation theory?
??? (there’s an embarrassing silence – I pace back and forth and wave my hands about
to simulate mental activity).
O: Take a function on the group. We want the expectation value of its evaluation at
the endpoint of your random walk.
We expect it to converge to the mean..?
O: How fast?
More confusion.
O: Write everything down.
After a few decades of searching, I end up with a linear operator T on the space of
functions such that T^n(f)(x) gives E[f(x_n)]. I propose to suppose this
is self-adjoint, diagonalize it and show the eigenvalues are smaller than one. I am
at a loss how to proceed.
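Written out for a toy case (my own example: a lazy symmetric walk on the cyclic group Z/12; `transition_matrix` is a hypothetical helper), T is the Markov averaging operator, and its spectrum can be computed directly:

```python
import numpy as np

def transition_matrix(n, gens):
    """(Tf)(x) = average of f(x + g) over the generating set gens, on Z/n."""
    T = np.zeros((n, n))
    for x in range(n):
        for g in gens:
            T[x, (x + g) % n] += 1.0 / len(gens)
    return T

n = 12
# Generators {+1, -1} plus staying put (the lazy tweak avoids the periodicity
# eigenvalue -1). The set is symmetric, so T is self-adjoint.
T = transition_matrix(n, [0, 1, -1])
eigs = np.sort(np.linalg.eigvalsh(T))
# Top eigenvalue 1 on the constants (trivial representation); all others lie
# strictly below 1, and the gap controls how fast T^k f approaches the mean.
```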
O: What can you say about functions on the group?
Real valued functions decompose into a direct sum of V tensor V* as V runs through
the irreducibles.
O: Are you sure?
?
O: The formula is wrong. What’s the base field?
Oh! I suppose I should have used C.
O: Yes: else you can’t use Schur’s lemma to prove this decomposition. What now?
I can’t see why T preserves the decomposition.
O: It’s obvious!
Of course! And the eigenvalues are at most one because..
O: Obvious!
Manifestly! And now to show they’re strictly smaller than one away from the trivial
representation:
A discussion ensues to which I contribute almost nothing, but which proves that the
expected value goes to the mean (barring certain special cases).
O: Some algebraic geometry?
Sure.
A few minutes of silence.
O: What can you say about varieties with group structure? (still the representation
theory!)
- If they’re compact (or complete) they’re tori.
O: And noncompact?
- Uh.. maybe they’re simply connected? I enumerate a few examples, grasping for
ideas.
O: If they’re affine.
I have no idea.
O: I wanted you to say: they’re semidirect products of linear groups with tori.
He explains why this is true.
O: What’s the cohomology of P^n?
I compute this.
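The computation in question:

```latex
H^k(\mathbb{P}^n; \mathbb{Z}) \cong
\begin{cases}
\mathbb{Z}, & k = 0, 2, 4, \dots, 2n,\\
0, & \text{otherwise},
\end{cases}
```

with ring structure Z[h]/(h^{n+1}), where h is the hyperplane class.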
O: The Todd class?
I write down a nearly correct formula, then compute the Chern polynomial of P^n,
making bounteous mistakes.
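For the record, the correct formulas: the Euler sequence gives the total Chern class of P^n, and the Todd class follows:

```latex
c(T\mathbb{P}^n) = (1 + h)^{n+1},
\qquad
\operatorname{td}(T\mathbb{P}^n) = \left(\frac{h}{1 - e^{-h}}\right)^{n+1},
```

where h is the hyperplane class, with h^{n+1} = 0.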
O: What does Grothendieck-Riemann-Roch say for O(k)?
I write down the formula, and compute the relevant cohomology classes.
O: You don’t need to finish that – you can do it at home, as homework.
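Over a point, Grothendieck-Riemann-Roch reduces to Hirzebruch-Riemann-Roch, and the homework comes out to

```latex
\chi(\mathbb{P}^n, \mathcal{O}(k))
= \int_{\mathbb{P}^n} e^{kh}\left(\frac{h}{1 - e^{-h}}\right)^{n+1}
= \binom{n+k}{n},
```

matching, for k >= 0, the dimension of the space of degree-k homogeneous polynomials in n + 1 variables.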
And it was over! Everyone was friendly and understanding and fine all round.