Random subspace trust-region algorithm for high-dimensional convex-constrained derivative-free optimization

Abstract

Model-based methods are one major class of derivative-free optimization (DFO) methods widely used in practice. However, these methods are known to struggle in high dimensions. Recent research tackles this issue by searching for decrease in low-dimensional subspaces that are randomly sampled at each iteration. This talk proposes a random subspace trust-region algorithm for general convex-constrained DFO problems. We provide a new subspace sampling technique that guarantees, with a fixed probability, that the sampled subspace preserves a fixed fraction of the first-order criticality measure. We also define a new class of models that can be constructed using only feasible points in the subspace. Based on these new theoretical results, we present an almost-sure global convergence result and a worst-case complexity analysis of our algorithm. Numerical experiments demonstrate the strong performance of our algorithm in high dimensions.
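
To illustrate the general idea of iterating in randomly sampled subspaces, the following is a minimal sketch of a random subspace trust-region iteration on an unconstrained toy problem. It is not the algorithm from the talk: the scaled Gaussian sketch matrix, the least-squares quadratic model, the sampling-based subproblem solve, and the acceptance rule are simplified placeholders, and all function names and parameters are hypothetical.

```python
# Minimal illustrative sketch of a random subspace trust-region iteration for
# derivative-free optimization. NOT the algorithm from the talk: the sketch
# matrix, model construction, subproblem solve, and acceptance rule are
# simplified placeholders chosen for readability.
import numpy as np


def sample_subspace(n, p, rng):
    """Scaled Gaussian sketch whose p columns span a random subspace of R^n."""
    return rng.standard_normal((n, p)) / np.sqrt(p)


def build_quadratic_model(f, x, P, delta, rng):
    """Least-squares quadratic model of s -> f(x + P s) inside the trust region."""
    p = P.shape[1]
    n_samples = (p + 1) * (p + 2) // 2          # enough points for a full quadratic
    S = rng.uniform(-delta, delta, size=(n_samples, p))
    rows = [np.concatenate(([1.0], s, np.outer(s, s)[np.triu_indices(p)])) for s in S]
    y = np.array([f(x + P @ s) for s in S])
    coef, *_ = np.linalg.lstsq(np.array(rows), y, rcond=None)
    return coef


def model_value(coef, s):
    """Evaluate the fitted quadratic model at a subspace step s."""
    p = len(s)
    return coef[0] + coef[1:1 + p] @ s + coef[1 + p:] @ np.outer(s, s)[np.triu_indices(p)]


def subspace_tr_step(f, x, delta, p, rng):
    """One simplified random subspace trust-region iteration."""
    P = sample_subspace(len(x), p, rng)
    coef = build_quadratic_model(f, x, P, delta, rng)
    # Crude subproblem solve: best model value over random candidates in the region.
    candidates = rng.uniform(-delta, delta, size=(200, p))
    s_best = min(candidates, key=lambda s: model_value(coef, s))
    predicted = model_value(coef, np.zeros(p)) - model_value(coef, s_best)
    x_trial = x + P @ s_best
    actual = f(x) - f(x_trial)
    if predicted > 0 and actual / predicted > 0.1:   # sufficient decrease: accept, expand
        return x_trial, min(2.0 * delta, 1.0)
    return x, 0.5 * delta                            # otherwise reject and shrink


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: float(np.sum((x - 1.0) ** 2))      # smooth toy objective in R^100
    x, delta = np.zeros(100), 0.5
    for _ in range(300):
        x, delta = subspace_tr_step(f, x, delta, p=5, rng=rng)
    print("final objective value:", f(x))
```

The key point the sketch tries to convey is that every model build and trust-region step happens in a p-dimensional subspace (here p = 5) of the full n-dimensional space, so the per-iteration cost grows with p rather than n.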

Date
Jun 8, 2025
Location
2025 CMS Summer Meeting
Québec, QC
Yiwen Chen
PhD student in Mathematics

My research interests include derivative-free optimization, numerical optimization, and discrete geometry.