Model-based derivative-free optimization (DFO) faces significant challenges in high dimensions due to the cost of constructing accurate interpolation models. Subspace approaches address this by building and optimizing models in low-dimensional affine subspaces. In this talk, we present a unified view of subspace modelling and random subspace trust-region methods. We establish theoretical relationships between full-space and subspace quadratic models and simplex derivatives, showing their consistency on the underlying subspace, and discuss the construction of $Q$-fully quadratic models for accurate subspace approximations. We then describe random subspace trust-region methods for unconstrained and convex-constrained problems, with convergence guarantees based on subspace-restricted model accuracy and probabilistic subspace quality.
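To make the abstract concrete, here is a minimal Python sketch of a *generic* random subspace trust-region iteration for unconstrained DFO, not the authors' specific method: all function names, step rules, and parameters (`p`, `delta`, the finite-difference stencil) are illustrative assumptions. It builds a quadratic model of the reduced function $f(x + Qs)$ by central differences in a random $p$-dimensional subspace and takes a radius-limited step.

```python
import numpy as np

def random_subspace_tr(f, x0, p=2, delta=1.0, max_iter=200, tol=1e-8, seed=0):
    """Sketch of a random subspace trust-region method (illustrative only).

    Each iteration: draw a random p-dimensional subspace, build a quadratic
    model of s -> f(x + Q s) by central finite differences, take a
    trust-region step, and update the radius from the actual/predicted
    reduction ratio.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    for _ in range(max_iter):
        # Orthonormal basis of a random p-dimensional subspace of R^n
        Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
        h = 1e-4
        g = np.empty(p)
        H = np.empty((p, p))
        fp = np.empty(p)
        fm = np.empty(p)
        # Finite-difference gradient and Hessian of the reduced function at s = 0
        for i in range(p):
            fp[i] = f(x + h * Q[:, i])
            fm[i] = f(x - h * Q[:, i])
            g[i] = (fp[i] - fm[i]) / (2 * h)
            H[i, i] = (fp[i] - 2 * fx + fm[i]) / h**2
        for i in range(p):
            for j in range(i + 1, p):
                fij = f(x + h * (Q[:, i] + Q[:, j]))
                H[i, j] = H[j, i] = (fij - fp[i] - fp[j] + fx) / h**2
        # Approximate trust-region subproblem: Newton step if it fits,
        # otherwise a steepest-descent step clipped to the radius
        try:
            s = np.linalg.solve(H, -g)
        except np.linalg.LinAlgError:
            s = -g
        if np.linalg.norm(s) > delta:
            s = -delta * g / max(np.linalg.norm(g), 1e-16)
        pred = -(g @ s + 0.5 * s @ H @ s)   # predicted model decrease
        x_new = x + Q @ s
        f_new = f(x_new)
        rho = (fx - f_new) / max(pred, 1e-16)
        if rho > 0.1:          # sufficient decrease: accept the step
            x, fx = x_new, f_new
            if rho > 0.75:
                delta = min(2.0 * delta, 1e3)
        else:                  # reject and shrink the trust region
            delta *= 0.5
        if delta < tol:
            break
    return x, fx
```

For example, on the convex quadratic $f(x) = \lVert x - \mathbf{1} \rVert^2$ in $\mathbb{R}^5$, two-dimensional random subspaces suffice for convergence because each accepted step removes the component of the error lying in the sampled subspace.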