CLARSTA: A Random Subspace Trust-region Algorithm for Convex-constrained Derivative-free Optimization (25 mins)

Abstract

This talk proposes a random subspace trust-region algorithm for general convex-constrained derivative-free optimization (DFO) problems. As in prior random subspace DFO methods, convergence of the proposed algorithm relies on sufficient model accuracy and adequate subspace quality. To address model accuracy, we introduce a new class of models that is only required to achieve reasonable accuracy on the projection of the constraint set onto the chosen subspace. We further propose a novel geometric measure that facilitates the analysis, construction, and management of such models. To ensure subspace quality, we leverage concentration-of-measure results on the Grassmann manifold to develop a randomized subspace sampling strategy that preserves a fixed fraction of the first-order criticality measure with a prescribed probability. Building on these theoretical developments, we establish almost-sure global convergence and a worst-case complexity bound for our algorithm. Numerical experiments on problems with dimensions up to 10,000 demonstrate the reliable performance of our algorithm in high dimensions.
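To give a feel for the subspace-quality requirement, here is a minimal numerical sketch of the kind of guarantee the sampling strategy targets: a random low-dimensional sketch preserves a fixed fraction of a direction's norm (a stand-in for the first-order criticality measure) with high probability. This is an illustration using a scaled Gaussian sketch, not the algorithm's actual Grassmann-based sampling strategy; the names `n`, `p`, `alpha`, and `S` are choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 50  # ambient dimension, subspace dimension (illustrative values)
g = rng.standard_normal(n)  # stand-in for a gradient / criticality direction
g /= np.linalg.norm(g)      # normalize so ||g|| = 1

alpha = 0.5   # fraction of the (squared) criticality measure to preserve
trials = 2000
hits = 0
for _ in range(trials):
    # Scaled Gaussian sketch: rows span a random p-dimensional subspace,
    # and E[||S g||^2] = ||g||^2 under this scaling.
    S = rng.standard_normal((p, n)) / np.sqrt(p)
    if np.linalg.norm(S @ g) ** 2 >= alpha:
        hits += 1
prob = hits / trials  # empirical probability of preserving a fraction alpha
```

Concentration of measure makes `prob` close to 1 even though `p` is 5% of `n`, which is the phenomenon that lets subspace methods keep the per-iteration model dimension small.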

Date
Jun 22, 2026
Location
DFOS26
Montréal, QC
Yiwen Chen
PhD student in Mathematics

My research interests center on the theoretical foundations of derivative-free optimization, with a particular emphasis on model accuracy, complexity analysis, and randomized subspace methods for high-dimensional problems. I am also interested in discrete geometry and polytope theory.