Recently, there has been a surge of interest in representing large-scale, hierarchical data in hyperbolic spaces, which achieve better representation accuracy at lower dimensions. However, beyond representation learning, there are few empirical or theoretical results that establish performance guarantees for downstream machine learning and optimization tasks in hyperbolic spaces. In this talk, we consider the task of learning a robust classifier in hyperbolic space. We start with algorithmic aspects of developing analogues of classical methods, such as the perceptron and support vector machines, in hyperbolic spaces. We also discuss more broadly the challenges of generalizing such methods to non-Euclidean spaces. Furthermore, we analyze the role of geometry in learning robust classifiers by evaluating the trade-off between low embedding dimension and low distortion for both Euclidean and hyperbolic spaces.
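To make the idea of a hyperbolic analogue of the perceptron concrete, the following is a minimal sketch (not the speakers' exact construction) of a mistake-driven perceptron in the hyperboloid (Lorentz) model, where the linear decision rule uses the Minkowski inner product; the function names and the toy lifting step are illustrative assumptions.

```python
import numpy as np

def lift(p):
    """Lift a Euclidean point p in R^d onto the hyperboloid model in R^{d+1}."""
    p = np.asarray(p, dtype=float)
    return np.concatenate([[np.sqrt(1.0 + p @ p)], p])

def minkowski_dot(u, v):
    """Lorentzian inner product: negate the time-like (first) coordinate."""
    return -u[0] * v[0] + u[1:] @ v[1:]

def hyperbolic_perceptron(X, y, max_epochs=100):
    """Perceptron with decision rule sign(<w, x>_L) on hyperboloid points.

    Since <w, x>_L = w . (G x) with G = diag(-1, 1, ..., 1), this is a
    Euclidean perceptron run on the sign-flipped features G x, so the
    classical mistake bound applies whenever the data are separable by
    a Minkowski-linear decision boundary.
    """
    G = np.diag([-1.0] + [1.0] * (X.shape[1] - 1))
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * minkowski_dot(w, xi) <= 0:  # misclassified or on boundary
                w = w + yi * (G @ xi)           # standard perceptron update
                mistakes += 1
        if mistakes == 0:  # converged: all points correctly classified
            break
    return w
```

The reduction to a Euclidean perceptron on transformed features is only one way to realize the update; margin-based analyses in hyperbolic space require more care, which is part of what the talk addresses.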