Many applications involve non-Euclidean data, where exploiting Riemannian geometry can deliver algorithms that are computationally superior to standard nonlinear programming approaches. This observation has led to increasing interest in Riemannian methods within the optimization and machine learning communities.
Much of the Riemannian optimization literature has focused on unconstrained problems. However, many of the optimization tasks encountered in machine learning are constrained. Matrix-valued examples include learning determinantal point processes (on the manifold of positive definite matrices, with interval constraints) and k-means clustering (on the Stiefel manifold, with equality and inequality constraints). This creates a need for constrained Riemannian optimization methods.

In this work, we investigate projection-free methods for constrained optimization. We introduce a class of Riemannian Frank-Wolfe (RFW) methods, with algorithms for both deterministic and stochastic settings. Notably, RFW is the first Riemannian optimization algorithm that handles constraints directly: competing approaches handle constraints indirectly, via the computation of often costly projections of the update onto the feasible region. In contrast, each RFW update is guaranteed to remain in the feasible region, circumventing the need to compute projections. We show that RFW recovers the best known guarantees of its Euclidean counterpart and that it performs competitively in numerical experiments, elucidating the benefits of the Riemannian lens.
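To make the projection-free idea concrete, the following is a minimal sketch of the *classical Euclidean* Frank-Wolfe iteration on the probability simplex, not the RFW method itself. The objective, function names, and step-size schedule are illustrative assumptions; the point is that each iterate is a convex combination of feasible points, so feasibility is maintained without ever computing a projection.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, num_iters=2000):
    """Classical Frank-Wolfe over the probability simplex.

    Each update x <- (1 - gamma) * x + gamma * s is a convex
    combination of two feasible points, so the iterate stays in
    the simplex without any projection step.
    """
    x = x0.copy()
    for k in range(num_iters):
        g = grad(x)
        # Linear minimization oracle: argmin_{s in simplex} <g, s>
        # is attained at the vertex e_i with i = argmin_i g_i.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (k + 2.0)  # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative problem: minimize ||x - b||^2 over the simplex,
# where b itself lies in the simplex (so the optimum is b).
b = np.array([0.1, 0.2, 0.7])
grad = lambda x: 2.0 * (x - b)
x = frank_wolfe_simplex(grad, np.ones(3) / 3.0)
```

The Riemannian variant discussed in the text replaces the linear minimization oracle and the convex-combination step with their geodesic counterparts, but the projection-free structure is the same.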