Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. Model-based DFO methods are widely used in practice but are known to scale poorly with problem dimension. This talk gives a brief overview of recent research that addresses this issue by restricting the search for decrease to randomly sampled low-dimensional subspaces. In particular, we examine the model-accuracy and subspace-quality requirements of these methods and compare their convergence guarantees and complexity bounds. The talk concludes with a discussion of some promising future directions in this area.
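To make the core idea concrete, the sketch below illustrates one simple instance of searching for decrease in randomly sampled subspaces: a direct-search loop that, at each iteration, draws a Gaussian sketching matrix whose columns span a low-dimensional subspace and polls along those directions for sufficient decrease. This is a minimal illustration only, not any specific method from the talk; the function name and all parameter choices (subspace dimension `p`, step size `alpha0`, decrease constant `c`, and so on) are assumptions for demonstration.

```python
import numpy as np

def random_subspace_direct_search(f, x0, p=2, alpha0=1.0, c=1e-4,
                                  gamma_inc=2.0, gamma_dec=0.5,
                                  alpha_min=1e-8, max_iter=500, rng=None):
    """Direct search restricted to randomly drawn p-dimensional subspaces.

    Illustrative sketch: at each iteration a scaled Gaussian matrix P
    (n x p) defines the subspace; we poll along +/- its columns and accept
    any trial point satisfying f(x + alpha d) <= f(x) - c * alpha**2.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    n = x.size
    alpha = alpha0
    fx = f(x)
    for _ in range(max_iter):
        if alpha < alpha_min:
            break
        # Draw a random subspace: columns of P span a p-dim subspace of R^n.
        P = rng.standard_normal((n, p)) / np.sqrt(p)
        improved = False
        for j in range(p):
            for d in (P[:, j], -P[:, j]):
                trial = x + alpha * d
                ft = f(trial)
                if ft <= fx - c * alpha**2:  # sufficient-decrease test
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        # Expand the step on success, contract it on failure.
        alpha = gamma_inc * alpha if improved else gamma_dec * alpha
    return x, fx

# Usage: a 100-dimensional Rosenbrock-type objective, polled in 2-dim subspaces.
if __name__ == "__main__":
    def f(x):
        return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

    x_best, f_best = random_subspace_direct_search(
        f, np.zeros(100), p=2, max_iter=2000, rng=0)
    print(f"best value found: {f_best:.4f}")
```

The point of the sketch is that each iteration touches only `p` directions rather than all `n`, which is the mechanism by which subspace methods reduce per-iteration cost in high dimensions; the model-based variants discussed in the talk replace the polling step with low-dimensional interpolation models.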