Committee: Peter Sarnak (chair), John Conway, Nicolas Templier
Special topics: Functional Analysis and Analytic Number Theory
Examinee: Matthew de Courcy-Ireland
Date: May 9, 2013
Duration: 3 hours
COMPLEX ANALYSIS (asked by Sarnak)
What's an entire function?
What's the order of an entire function? What's the order of a bounded
entire function? What more can you say about bounded entire functions?
Prove Liouville's Theorem. How do you prove Schwarz's Lemma? Now take
a harmonic function. What's the relation between harmonic functions
and analytic functions? How do you prove that a bounded harmonic
function is constant? What about subharmonic functions?
How do you solve the Dirichlet problem?
At my suggestion, still asked by Sarnak (who says he never asks
students to prove Picard's Big Theorem): Given an analytic (or
harmonic, or subharmonic) function in a semidisk bounded by 1 on the
interval [-1,1] and by 2 on the upper half of the unit circle, bound
the function inside the semidisk. What's harmonic measure? Prove
Picard's big theorem. I don't think the examiners had ever seen the
proof I sketched of Picard's Theorem. At least, Sarnak said he
hadn't. They seemed impressed, and I'm glad I plunged into it, because
I got off to an atrocious start botching the proof of Liouville's
Theorem.
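The semidisk question is an instance of the two-constants theorem, which harmonic measure makes precise. A sketch of the bound (for analytic f; the subharmonic case is the same with log|f| replaced by u):

```latex
% Let \omega(z) be the harmonic measure of the semicircular arc: the
% harmonic function on the semidisk with boundary values 1 on the arc
% and 0 on [-1, 1]. Since \log|f| is subharmonic, bounded by \log 2 on
% the arc and by \log 1 = 0 on the segment, the maximum principle gives
\log|f(z)| \le \omega(z)\log 2 + (1 - \omega(z))\log 1,
% that is,
|f(z)| \le 2^{\omega(z)} \quad \text{inside the semidisk.}
```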
ALGEBRA
Conway: Tell me about groups of order mp where p is prime and m < p
(more specifically, show that such a group has a normal subgroup of
order p).
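The counting behind this is short enough to record (standard Sylow theory):

```latex
% Let |G| = mp with p prime and m < p, and let n_p be the number of
% Sylow p-subgroups. Sylow's theorems give
n_p \equiv 1 \pmod{p}, \qquad n_p \mid m.
% Every divisor of m is at most m < p, and the only positive integer
% less than p congruent to 1 mod p is 1. So n_p = 1, and the unique
% Sylow p-subgroup, being preserved by conjugation, is normal.
```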
Conway: Classify the groups of order 28, giving a presentation
(generators and relations) for the semidirect product of a cyclic
group of order 4 and a cyclic group of order 7. Along the way, what's
the automorphism group of a cyclic group of prime order?
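For the record, here is how the nontrivial semidirect product comes out (standard facts; the resulting group is the dicyclic group of order 28):

```latex
% Aut(C_7) \cong (\mathbb{Z}/7\mathbb{Z})^\times is cyclic of order 6,
% so a homomorphism C_4 \to Aut(C_7) has image of order dividing
% \gcd(4, 6) = 2. The trivial image gives C_4 \times C_7 \cong C_{28};
% the image of order 2 acts by inversion, giving the presentation
\langle a, b \mid a^7 = b^4 = 1,\; b a b^{-1} = a^{-1} \rangle.
% With Sylow 2-subgroup C_2 \times C_2 instead, one gets
% C_2 \times C_{14} and the dihedral group of order 28, for four
% groups of order 28 in all.
```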
Conway, Sarnak, and Templier: Talk about representation theory. Does a
given finite group always have a faithful matrix representation? What
do you think are the most important theorems in representation theory?
What's an irreducible representation? How can you decompose a
representation into irreducible representations? What's Maschke's
Theorem? What field are you working over? What if you have a finite
field? What kind of decomposition can you get instead of direct sum?
What goes wrong if p divides the order of the group? How does the
regular representation decompose? How many times does each irreducible
representation appear in this decomposition? What numerical relation
does that imply about the dimensions of the irreducible
representations?
Conway and Sarnak: What's the character of a representation? What kind
of function is it [class function]? Does a character determine an
irreducible representation? What's the notion of equivalence for
representations? What are the orthogonality relations? Write down the
character table for S_3, the symmetric group on 3 letters. Could all
its irreducible representations be 1-dimensional? What kind of groups
have only 1-dimensional irreducible representations?
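The character table in question, with a numerical check of the orthogonality relations (a minimal sketch in Python; the class sizes and character values are the standard ones for S_3):

```python
import numpy as np

# Conjugacy classes of S_3: identity, the 3 transpositions, the 2 three-cycles.
class_sizes = np.array([1, 3, 2])
order = class_sizes.sum()  # |S_3| = 6

# Rows: trivial, sign, and 2-dimensional irreducible characters.
table = np.array([
    [1,  1,  1],   # trivial representation
    [1, -1,  1],   # sign representation
    [2,  0, -1],   # standard 2-dimensional representation
])

# First orthogonality relations: <chi_i, chi_j> = delta_ij, where
# <f, g> = (1/|G|) * sum over classes of (class size) * f * g
# (the characters here are real, so no conjugation is needed).
gram = (table * class_sizes) @ table.T / order
assert np.allclose(gram, np.eye(3))

# Sum of the squares of the dimensions equals |G|.
assert (table[:, 0] ** 2).sum() == order
```

The last row is exactly the one the orthogonality relations pin down once the two 1-dimensional rows are written.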
Sarnak: For which matrices B can you solve the equation e^A = B? Prove
it. How can you write the solution? This was over the complex
numbers. Conway asked what happens over the real numbers. I think
Sarnak pointed out that it's possible to define the exponential even
over a finite field.
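The necessary condition in the e^A = B question comes from the identity det(e^A) = e^trace(A), which forces B to be invertible. A small numerical illustration (expm_series is a truncated power series standing in for the matrix exponential, adequate for small matrices):

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential via its power series (fine for small matrices)."""
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        result += term
    return result

# det(e^A) = e^{trace A} is never zero, so e^A is always invertible,
# and invertibility of B is necessary for solving e^A = B.
A = np.array([[1.0, 2.0], [3.0, -1.0]], dtype=complex)
assert np.isclose(np.linalg.det(expm_series(A)), np.exp(np.trace(A)))
```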
Templier: If you have a compact Lie group, when is the exponential map
onto? Why is it onto in the case of a torus? How do you reduce to the
case of a torus? Why can you conjugate any element of the group into a
given maximal torus?
REAL ANALYSIS
Sarnak: What's an integrable function? What's a measurable function?
What's a Borel set? What's a Lebesgue measurable set? Construct a
non-measurable set. Why is that set not measurable? Conway: Why would
that non-measurable set have to have non-zero measure if it were
measurable [my argument that it was not measurable was that, if it
were, its measure would have to be both zero and non-zero]?
Sarnak: What kind of function is the Fourier transform of an
integrable function? Is the Fourier transform a bounded operator from
L^1 to C_0? Does it have a kernel (i.e., is it injective)? Is it onto?
If it were onto, what could you say about the inverse? What's the
theorem you're using? Why is the example you've given not the
transform of any integrable function?
The example I gave of a function that's not the Fourier transform of
any integrable function was 1/log(x) for x bigger than 2, defined for
x less than -2 to make it odd, and linear in between to make it
continuous. Templier asked what would happen if you replaced 1/log
with other powers of log. What's the critical exponent? What happens
if you put factors of loglog(x) in the denominator?
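The obstruction behind the 1/log example can be sketched as follows (the standard argument; different normalizations of the transform only change constants):

```latex
% If f \in L^1(\mathbb{R}) and g = \hat{f} is odd, then averaging g(\xi)
% against -g(-\xi) gives g(\xi) = -i \int f(x) \sin(2\pi x \xi)\, dx, so
\int_1^N \frac{g(\xi)}{\xi}\, d\xi
  = -i \int_{\mathbb{R}} f(x)
      \left( \int_1^N \frac{\sin(2\pi x \xi)}{\xi}\, d\xi \right) dx.
% The inner integral is a piece of \int_0^\infty (\sin t / t)\, dt, so it
% is bounded uniformly in N and x, and the left-hand side stays bounded
% as N \to \infty. But for g(\xi) = 1/\log\xi,
\int_2^N \frac{d\xi}{\xi \log \xi} = \log\log N - \log\log 2 \to \infty,
% so g cannot be the transform of an integrable function. The obstruction
% is precisely the divergence of \int^\infty g(\xi)\, d\xi / \xi, which is
% why (\log\xi)^{-(1+\epsilon)} and
% (\log\xi \cdot (\log\log\xi)^{1+\epsilon})^{-1} behave differently from
% 1/\log\xi and 1/(\log\xi \cdot \log\log\xi).
```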
Aside (I brought it up, and then Sarnak decided to use it in some
examples later): Fourier coefficients of the Cantor staircase measure.
FUNCTIONAL ANALYSIS
Sarnak: What can you say about finitely additive measures on the real
line (or the circle) that are invariant under translation (or
rotation)? Do you think Lebesgue measure is the only one? How can you
construct many? What's the Hahn-Banach theorem? What assumptions are
needed here? Does the subspace have to be closed? Is this only for
Banach spaces? What space are measures dual to? What space are
finitely additive measures dual to?
Sarnak: State the spectral theorem in its most general form. What is a
self-adjoint operator? How is the adjoint defined? What is its domain?
Does A* have to be self-adjoint? I trust you've looked at
examples. What kind of decomposition can you do on the spectral
measure? What's the Radon-Nikodym theorem? What is the spectrum [the
definition, not in terms of the spectral measure]? What kind of subset
of the complex plane is the resolvent set? Consider the example of an
operator on L^2(the circle) given by convolving with a measure,
specifically the Cantor staircase measure. What's the spectrum of
that? What unitary transformation and measure space can you use for
the spectral theorem in this case? Prove the spectral theorem in the
finite-dimensional case.
Sarnak: What's a compact operator?
Sarnak: State the Hodge Theorem on harmonic differential forms. What's
the Laplacian? Why is the kernel of the Laplacian finite-dimensional?
What compactness theorem are you using? Define the Sobolev space
that's relevant here. Prove that this Sobolev space, namely H^1,
embeds compactly into L^2 in the case that the manifold is a
circle. What's the name of the finite-dimensional compactness theorem
you're using here?
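The compact embedding on the circle has a clean Fourier-series proof worth recording:

```latex
% For u on the circle with Fourier coefficients \hat{u}(n),
\|u\|_{H^1}^2 = \sum_n (1 + n^2)\, |\hat{u}(n)|^2 .
% Let P_N project onto frequencies |n| \le N, a finite-rank operator. Then
\|u - P_N u\|_{L^2}^2 = \sum_{|n| > N} |\hat{u}(n)|^2
  \le \frac{1}{N^2} \sum_{|n| > N} n^2 |\hat{u}(n)|^2
  \le \frac{\|u\|_{H^1}^2}{N^2},
% so the inclusion H^1 \hookrightarrow L^2 is a norm limit of finite-rank
% operators, hence compact. The finite-dimensional compactness theorem
% in the background is Bolzano--Weierstrass (equivalently Heine--Borel),
% applied to the truncations P_N u.
```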
ANALYTIC NUMBER THEORY
Conway [while Templier was thinking of a good sieve question to ask]:
Describe the circle method.
Sarnak: What do you do for the minor arcs to prove that every large
odd number is a sum of three primes?
Sarnak: What's Weyl's inequality?
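The statement, in the form Davenport gives it:

```latex
% For f(x) = \alpha x^k + \alpha_{k-1} x^{k-1} + \cdots + \alpha_0 with
% |\alpha - a/q| \le 1/q^2, (a, q) = 1, and any \epsilon > 0,
\sum_{x=1}^{P} e\big(f(x)\big)
  \ll P^{1+\epsilon}
     \left( \frac{1}{q} + \frac{1}{P} + \frac{q}{P^k} \right)^{2^{1-k}} .
% Each application of Cauchy--Schwarz (Weyl differencing) lowers the
% degree of f by one, which is where the exponent 2^{1-k} comes from.
```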
ADVICE AND OTHER COMMENTS
Executive summary: Don't worry, bring drinking water to the exam, and
get some practice writing at the blackboard beforehand (by giving a
talk at the Graduate Student Seminar, for instance -- it's supposed to
be good luck!). When picking your special topics, there are a lot of
ways to go. I suggest that you make your life easier by picking one
topic you already know well. Pick another topic that you want to learn
and worry you might not learn as well if you don't force yourself to
learn it for the exam.
The list of questions above probably doesn't give a very accurate
impression of what the exam was like. First of all, my examiners were
extremely fun, friendly, and forthcoming with hints. I think most
generals committees are at least somewhat friendly and helpful, but I
was really glad I suggested this one. They cracked a good number of
jokes, most of which were at my expense, in retrospect. At any rate,
this made for as relaxed an examination as possible. They all laughed
at my weird, very far from French pronunciation of "Borel" -- it more
or less rhymed with "quarrel". Peter commented on my lovely Gothic
capital letter S for the singular series in the circle method. He said
"That proves you read Davenport!" and maybe that's why he didn't press
for many details. I admitted that I didn't know it stood for singular
series (Peter, surprised: "Really? You didn't catch on to that?") and
that, moreover, I wasn't even sure if it was an S or a G (Nicolas:
"No, no, it's a German S"). I wanted to write out the matrices for the
2-dimensional representation of S_3 and calculate their traces, but
they pointed out that I could fill in that last row of the character
table using the orthogonality relations since I already had the first
two rows from the 1-dimensional representations. I screwed that up
repeatedly. At one point, John asked "How many of the numbers you just
wrote down are correct?" I replied "Zero! This is why I wanted to
write the matrices!!"
The examiners don't (at least mine didn't) have a list of questions to
ask you. There's more of a flow to it. For example, I mentioned that I
might be able to find a specific matrix realization of the group of
order 28 I had given a presentation for. Peter asked "Do you know how
to represent a group by matrices in general?" I replied "In
principle...", Peter inquired most considerately "Do you want to be
asked?", I affirmed, and that's how we segued into representation
theory. These types of things give you a modest amount of influence
over what you are asked if you can smell what might come next. You
also will probably be allowed to choose which core topic (algebra,
real, or complex) to start with. I exerted some more influence by
simply telling them I had expected questions about semidisks and
Picard's Theorems and diving into that. Peter said something like
"Okay, you'll both ask and answer the questions now."
"I don't know" is an acceptable response more often than you might
think. For the question about the Dirichlet problem, I said "I can
think of a bunch of ways to start off [Perron's method where you take
the sup of all subharmonic functions with boundary values less than or
equal to the given boundary data, Schwarz's alternating method, using
Hahn-Banach to extend from the subspace of boundary data with solvable
Dirichlet problem, Brownian motion, getting a weak solution and then
using elliptic regularity,...] but I doubt I'll be able to finish any
of them off" and we left it at that. At one point, I said "I don't
know" and John said something like "Well, I can't claim that's not a
correct answer." Peter initially wanted the spectral theorem for
normal operators. I said "Let me do self-adjoint first" and we never
went back to normal. I forgot exactly what some of the terms in Weyl's
inequality were, but the examiners seemed happy with my
half-remembered upper bound and sketchy "repeatedly square and use
Cauchy-Schwarz" outline of the proof.
For the most part, the examiners seemed to prefer examples and
applications of theorems to their proofs. For example, nobody wanted
to see a proof of Sylow's Theorems. They wanted to see them in
action. For representation theory, they were content to see statements
of facts and a worked example with few to no proofs. That might be
because I had indicated my representation theory was shaky. The
functional analysis section also felt more example-based. Before
getting into the example of the spectral theorem, I asked "Should I be
sketching a proof of the spectral theorem?" and got an unhesitating
"No." in reply. The complex analysis questions were mostly
proof-oriented. There wasn't much in the way of real analysis, maybe
because my special topics were analytical.
For analytic number theory, I suggest that you concentrate on additive
number theory more than multiplicative number theory. In my case, it
might be that there were no questions like "Prove the Prime Number
Theorem" because we were running short on time. We only talked about
analytic number theory for 15 minutes or so. Nevertheless, judging
from other people's generals as well as my own, I think the most
important topic for this part of the exam is the proof of Vinogradov's
3 Primes Theorem. If you really go through the details, you'll be
forced to learn quite a lot of multiplicative number theory anyway.
My worst performances were on elementary questions I hadn't thought
about much since early in my undergraduate degree. When you're
studying, I suggest that you not neglect topics that seem too
basic. It was particularly Liouville's Theorem, diagonalizability of
Hermitian matrices, and matrix exponentials/logarithms that tripped
me up. I embarked on a proof of Liouville's Theorem by rescaling to
get a function from the unit disk to the unit disk and applying
Schwarz's Lemma. I made the unfortunate choice of using a capital
letter R to denote a variable that would later have to tend to 0
instead of infinity, and got stuck as a result. By that point, I
couldn't remember that in the usual proof, you use Cauchy's formula
for the first derivative, not for the function itself. For the
finite-dimensional spectral theorem, I remembered that there is a nice
proof using Lagrange multipliers, but I needed a hint from Peter
(Sarnak) to use the variational characterization of the biggest
eigenvalue (namely, the maximum of <Av, v> for v a unit vector). Then
I got confused and ended up not using Lagrange multipliers but finding
the maximum a different way after many hints from all three of the
examiners. For solving e^A = B, I arrived at the correct answer (you
can solve it if and only if B is invertible) by getting a necessary
condition from the equation det(e^A) = e^trace(A). To prove that
invertibility is sufficient, I used the Jordan form. I started with
the power series for log(B)=log(I-(I-B)), but that only works if B is
close to the identity matrix. Then I tried to work out e^A and guess
what A has to be, but floundered. Finally, with some help from Peter,
I got the point: the Jordan form expresses B as a diagonal matrix plus
a nilpotent matrix. You can easily take exp or log of the diagonal
part regardless of how large the eigenvalues are, and the nilpotent
term goes away after finitely many terms in a power series expansion.
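The diagonal-plus-nilpotent recipe can be checked numerically on a single Jordan block; a minimal sketch (expm_series is again a truncated power series standing in for the matrix exponential):

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via its power series (adequate for small matrices)."""
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        result += term
    return result

def log_jordan_block(lam, size):
    """Logarithm of the Jordan block J = lam*I + N (lam nonzero): write
    J = lam*(I + N/lam) and expand log(I + N/lam) as a power series,
    which terminates because N is nilpotent."""
    N = np.diag(np.ones(size - 1), 1)  # nilpotent shift matrix
    M = N / lam
    series = np.zeros((size, size), dtype=complex)
    power = np.eye(size, dtype=complex)
    for k in range(1, size):  # M**size = 0, so the series terminates
        power = power @ M
        series += (-1) ** (k + 1) * power / k
    return np.log(lam) * np.eye(size) + series

# A 3x3 Jordan block with eigenvalue 3; its log exponentiates back to it.
J = np.diag([3.0, 3.0, 3.0]) + np.diag([1.0, 1.0], 1)
A = log_jordan_block(3.0, 3)
assert np.allclose(expm_series(A), J)
```

The key point from the exam is visible in the code: invertibility is all that is needed, because only log(lam) requires lam to be nonzero, and the nilpotent series is a finite sum no matter how large the eigenvalue is.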
CURIOUS OMISSIONS
I was surprised nobody asked for the statement or proof of the Riemann
Mapping Theorem or, indeed, anything at all about conformal mapping. I
also thought "What's the order of an entire function?" was a sure
lead-in to Jensen's formula, and maybe it would have been if I hadn't
taken so long muddling around with Liouville's Theorem. I thought
Peter would ask about proving the Peter-Weyl Theorem as an application
of functional analysis. I was sure there would be something about
modules over a PID and canonical forms. Maybe there would have been if I
hadn't taken so long on e^A = B.
BOOKS I LIKE
For analytic number theory, read Davenport's books Multiplicative
Number Theory and Analytic Methods for Diophantine Equations and
Inequalities. Those together cover more than enough for the exam, with
one exception (unless I haven't read them carefully enough). You
should also work out a lower bound on the L^1 norm of the exponential
sum that appears in the 3 Primes problem. That's why the method
doesn't prove Goldbach's Conjecture. Talk to your committee beforehand
to find out whether or not they will ask any questions about sieves.
For real analysis, I recommend green Rudin (Real and Complex Analysis)
or Folland's book "Real Analysis: Modern Techniques and Their
Applications". "Analysis" by Lieb and Loss is great, but there are
some standard topics that are not in there and there are many things
in there that you would not need to know for this exam. I learned the
1/log example of a function that isn't in the image of the Fourier
transform on L^1 from Stein and Weiss, "Introduction to Fourier
Analysis on Euclidean Spaces". Katznelson's "Introduction to Harmonic
Analysis" also has a lot of useful material.
For complex analysis, green Rudin is good for the most part but maybe
doesn't say as much as you would like about the geometric aspects of
the subject. To learn more about that, I recommend Zeev Nehari's book
"Conformal Mapping". Ahlfors's book Complex Analysis covers more than
enough for the exam, although I find the proofs can be hard to
remember and he sometimes finesses his way around key issues rather
than confronting them directly. If you can find a copy, Titchmarsh's
"The Theory of Functions" is wonderful.
For algebra, "Abstract Algebra" by Dummit and Foote covers all that
you need and more. You could find better treatments of representation
theory though. I like Serre's book "Linear Representations of Finite
Groups" (at least the first third of the book -- the later parts are
more advanced and probably not necessary for the exam unless you've
chosen representation theory as a special topic). The French version
is probably better than the translation if you read French. If you
also want to know about compact groups, I highly recommend Barry
Simon's book "Representations of Finite and Compact Groups". Adams'
"Lectures on Lie Groups" is very good, but probably unnecessary unless
one of your topics involves representation theory or Lie groups.
Functional analysis doesn't seem to be a very popular subject. A lot
of professors think of it as more of a tool than a topic. If you
convince anyone to let you take this for the exam, talk to them about
what books to study. I read Yosida's treatise. I found the treatment
of the spectral theory hard to follow in that book. Reed and Simon's
"Methods of Modern Mathematical Physics" is better on this point. The proof I
decided to remember is from some notes by Vojkan Jaksic in volume 1880
of Lecture Notes in Mathematics.