I joined the mathematics department at Princeton University as an instructor on February 1, 2016, and became an assistant professor on July 1, 2018. I interact closely with PACM (the Program in Applied and Computational Mathematics).


I am an applied mathematician working broadly on optimization and numerical analysis. Much of what I do relates to nonconvex optimization and optimization on manifolds. For the latter, I develop a toolbox called Manopt. A standard reference in this field is the book Optimization Algorithms on Matrix Manifolds.

Here is a selection of research topics:

  • Nonconvex optimization (computational complexity; when can we find global optima?)
  • Synchronization (estimation from pairwise information)
  • Semidefinite programs and relaxations in low rank form
  • Low-rank optimization
  • Signal estimation through moments and invariant features
  • Curve fitting on manifolds (master's thesis)

(This list is not up to date.)

Manopt: a Matlab toolbox for optimization on manifolds

Manopt, available at manopt.org, is a user-friendly, open-source, documented Matlab toolbox that makes it easy to leverage modern Riemannian optimization algorithms. Manopt won the ORBEL Wolsey Award 2014 for best open-source operational research implementation.


Smooth semidefinite programs

Consider a compact semidefinite program (SDP) of the form

$$\min_X \operatorname{Tr}(CX) \textrm{ s.t. } \mathcal{A}(X) = b, X \succeq 0,$$

where $X$ is a positive semidefinite matrix of size $n$ and $\mathcal{A}$ is a linear operator capturing $m$ equality constraints. For example, if the constraints are $\mathrm{diag}(X) = \mathbf{1}$ (diagonal entries of $X$ all equal 1) and $C$ is the adjacency matrix of a graph, then this is the Max-Cut SDP relaxation for that graph. Parameterizing $X$ as $X = YY^t$ yields a nonconvex optimization problem in $Y$:

$$\min_{Y \in \mathbb{R}^{n \times p}} \operatorname{Tr}(CYY^t) \textrm{ s.t. } \mathcal{A}(YY^t) = b.$$

Afonso, Vlad and I showed that if the set of acceptable $Y$'s is a manifold, and if $Y$ has sufficiently many columns, then, generically in $C$ (the cost matrix), second-order necessary optimality conditions for $Y$ are sufficient for global optimality. In other words: the nonconvex problem in $Y$ can be solved (and solves the SDP).
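As a small illustration (a Python/NumPy sketch of the general idea, not the code from the paper), one can run Riemannian gradient descent on the Burer-Monteiro factorization of the Max-Cut SDP: the constraint $\mathrm{diag}(YY^t) = \mathbf{1}$ means each row of $Y$ lies on a unit sphere, so the feasible set is a smooth manifold and a projection-plus-renormalization scheme stays on it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
A = rng.standard_normal((n, n))
C = (A + A.T) / 2                      # symmetric cost matrix

# Random feasible point: unit-norm rows give diag(Y Y^T) = 1
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

def cost(Y):
    return np.trace(C @ Y @ Y.T)

c0 = cost(Y)
step = 0.1 / np.linalg.norm(C, 2)      # small step relative to the gradient's Lipschitz constant
for _ in range(500):
    G = 2 * C @ Y                                          # Euclidean gradient of Tr(C Y Y^T)
    rgrad = G - np.sum(G * Y, axis=1, keepdims=True) * Y   # project onto the tangent space, row by row
    Y = Y - step * rgrad
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)          # retract back to unit-norm rows
c1 = cost(Y)
```

Each iterate remains feasible by construction, and the cost decreases toward a second-order critical point, which the result above certifies is generically globally optimal for large enough $p$.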



Synchronization: estimating rotations from relative measurements

Synchronization is the problem of estimating elements $g_1, \ldots, g_N$ in a group $G$, given measurements of relative quantities: $h_{ij} \approx g_i g_j^{-1}$. These elements are best visualized on an undirected graph, where each element $g_i$ is a node and there is an edge between nodes $g_i$ and $g_j$ if a measurement $h_{ij}$ is available. I focus on $G = \mathrm{SO}(n)$, the group of rotations:

$$\mathrm{SO}(n) = \{ R \in \mathbb{R}^{n\times n} \colon R^TR = I_n \ \mathrm{ and } \ \operatorname{det}(R) = +1 \}.$$

SynchronizeMLE is a distribution of Matlab code for this project, available under the BSD license. It contains code both to perform the estimation and to compute Cramér-Rao bounds.
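For intuition (a standalone NumPy sketch, unrelated to the SynchronizeMLE code), a basic building block in rotation synchronization is projecting a noisy matrix to the nearest rotation, via the SVD with a sign fix to enforce determinant $+1$:

```python
import numpy as np

def nearest_rotation(M):
    """Project M onto SO(n): the nearest matrix R with R^T R = I and det(R) = +1."""
    U, _, Vt = np.linalg.svd(M)
    D = np.eye(M.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))  # flip one singular direction if det would be -1
    return U @ D @ Vt

rng = np.random.default_rng(0)
R = nearest_rotation(rng.standard_normal((3, 3)))
```

This is how one rounds noisy measurements $h_{ij}$, or intermediate estimates, back onto the group.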


Riemannian Staircase: semidefinite programming with diagonal block constraints

This is an algorithm to compute KKT points for problems of the form $$\min_X f(X)$$ with $X$ a symmetric matrix of size $n\times n$ such that $$X\succeq 0 \textrm{ and } X_{ii} = I_d \ \forall i,$$ meaning that the $d\times d$ diagonal blocks of $X$ are identity matrices. The cost function $f$ must be twice continuously differentiable. If $f$ is convex, KKT points are global optimizers.

The main idea is to attain a solution by tracking intermediate solutions of low rank, increasing the rank as needed. This is in contrast with interior point methods, which work with full-rank matrices to ultimately converge to (often) low-rank solutions.
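A toy version of the idea (a Python/NumPy sketch with $d = 1$, so the blocks are scalars and the constraint reduces to $X_{ii} = 1$; the actual Matlab implementation differs) optimizes over rank-$p$ factors, checks a dual certificate, and climbs to rank $p+1$ only if the check fails:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
C = (A + A.T) / 2                       # linear cost f(X) = Tr(CX)

def descend(Y, iters=2000):
    # Riemannian gradient descent for min Tr(C Y Y^T) with unit-norm rows (the d = 1 case)
    step = 0.1 / np.linalg.norm(C, 2)
    for _ in range(iters):
        G = 2 * C @ Y
        rgrad = G - np.sum(G * Y, axis=1, keepdims=True) * Y
        Y = Y - step * rgrad
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    return Y

p = 2
while p <= n:
    Y = descend(rng.standard_normal((n, p)))
    # Dual certificate for the constraint diag(X) = 1: S = C - ddiag(C Y Y^T)
    S = C - np.diag(np.sum((C @ Y) * Y, axis=1))
    if np.linalg.eigvalsh(S).min() >= -1e-6:
        break       # certified: Y Y^T is a KKT point, hence globally optimal for this convex cost
    p += 1          # climb one step of the staircase with a higher-rank start
X = Y @ Y.T
```

The intermediate iterates all have rank at most $p \ll n$, which is the computational point of the method.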

Here is Matlab code for the Riemannian staircase method. It is readily usable to solve such problems with $f(X) = \operatorname{Tr}(CX)$ and with a pseudo-Huber smoothed version of $f(X) = \sum_{(i,j)\in E} \|C_{ij}Y_j - Y_i\|_F$ (notice the absence of a square). Both functions are concave (the linear cost is also convex), which promotes solutions at extreme points of the feasible set; those extreme points have low rank. See also my slides and the full paper.



Low-rank matrix completion

Let $M \in \mathbb{R}^{m\times n}$ be a matrix with low rank $r \ll \min(m, n)$. Low-rank matrix completion is the task of estimating (or recovering) $M$ from measurements $\hat M_{ij} \approx M_{ij}$ of a few entries $(i, j) \in \Omega$.
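For context (a simple alternating least-squares baseline in Python/NumPy, not RTRMC itself), one can fit factors $U, V$ with $M \approx UV^T$ using only the observed entries: fixing one factor, each row of the other solves a small least-squares problem restricted to its observed columns.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 30, 20, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # ground truth, rank r
mask = rng.random((m, n)) < 0.6                                # observed entries Omega

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
for _ in range(100):
    for i in range(m):          # fix V, fit row i of U on that row's observed entries
        idx = mask[i]
        U[i] = np.linalg.lstsq(V[idx], M[i, idx], rcond=None)[0]
    for j in range(n):          # fix U, fit row j of V on that column's observed entries
        idx = mask[:, j]
        V[j] = np.linalg.lstsq(U[idx], M[idx, j], rcond=None)[0]
Mhat = U @ V.T
```

Each sweep monotonically decreases the squared residual on $\Omega$; with enough observed entries the product $UV^T$ recovers $M$.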

At NIPS 2011, we proposed RTRMC, a Riemannian trust-region method for low-rank matrix completion.



Curve fitting on manifolds: interpolation and regression

In this project, which was the topic of my master's thesis, we are given time-labeled points on a Riemannian manifold $\mathcal{M}$ (for example, a sphere, the group of rotations, or the set of positive-definite matrices): points $p_1, \ldots, p_n$ associated with timestamps $t_1 \leq \ldots \leq t_n$. The goal is to propose a curve (a model) on the manifold, $\gamma \colon [t_1, t_n] \to \mathcal{M}$, such that the curve fits the data ($\gamma(t_i) \approx p_i$, exactly for interpolation, reasonably for regression) and such that $\gamma$ is smooth in some suitable sense. Interpolation and regression are fundamental operations in signal processing: they serve the goals of denoising and resampling acquired data. These tasks are well understood when the data belong to a Euclidean space such as $\mathbb{R}^n$, but much less so when the data belong to a nonlinear manifold.
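The simplest instance (a NumPy sketch) is geodesic interpolation between two points on the sphere, given by the classical slerp formula; the interpolant stays on the manifold by construction:

```python
import numpy as np

def slerp(p, q, t):
    """Geodesic interpolation between unit vectors p and q on the sphere, t in [0, 1]."""
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))  # angle between p and q
    if theta < 1e-12:
        return p.copy()
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
mid = slerp(p, q, 0.5)   # midpoint of the geodesic from p to q
```

Fitting smooth curves through many points on general manifolds generalizes this building block, with geodesics playing the role of straight-line segments.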

Code is available on GitHub for regression of rotation matrices.











Nicolas Boumal
Fine Hall, Department of Mathematics
Washington Road
Princeton, NJ 08540
United States

Office: 607 (6th floor)
E-mail: nboumal@math.princeton.edu

Random stuff

At UCL, my office mate was Romain Hollanders.
In Paris, my office mates were Amit Bermanis, Damien Scieur and Vianney Perchet.

My Erdős number is 3, courtesy of my co-author and PhD advisor Vincent Blondel.

Research will get you places! It has taken me to: Palo Alto, Boston, Princeton, London, Prague, Cannes, Lisbon, Milan, Dagstuhl, Granada, Sierra Nevada, Valencia, Berlin, Les Houches, Costa da Caparica, Paris, Florence, San Diego, Bordeaux, Montréal, Bonn, Pittsburgh, Oxford, Geneva, New York City, Barcelona, Vancouver, Ames, Minneapolis, Washington DC, Stockholm... and various places in Belgium (Louvain-la-Neuve, Leuven, Liège, La Roche, Mons, Knokke, Daverdisse, Spa, Namur, Bruxelles...).

Teaching at Princeton University

  • Linear algebra with applications (MAT202), instructor, Spring 2016, 2017
  • Numerical methods (MAT321), instructor, Fall 2016, 2017, 2018
  • Junior seminar (optimization on manifolds, MAT982), instructor, Spring 2018
  • Junior seminar (math of data science through cryo-electron microscopy, MAT982), instructor, Fall 2018

Teaching at UCLouvain

  • Mathematics 1 (FSAB1101), TA, Fall 2008, Fall 2009
  • Project 1 (FSAB1501), TA, Fall 2010
  • Matrix Theory (INMA2380), TA, Spring 2011, Fall 2013
  • Signals and Systems (LFSAB1106), TA, Fall 2011
  • Numerical Analysis: Approximation, Interpolation, Integration (LINMA2171), TA, Fall 2011 and 2012
  • Mathematics 2 (LFSAB1102), TA, Spring 2012
  • Modeling and Analysis of Dynamical Systems (LINMA2370), TA, Fall 2012
  • Project in Mathematical Engineering (LINMA2360), TA, Spring 2012 and 2013
  • Project in Applied Mathematics (LINMA1375), TA, Spring 2013
  • Nonlinear Dynamical Systems (LINMA2361), TA, Fall 2013