News

GML Guest lecture Feb 6th: The Complexity of Constrained Min-Max Optimization

Written on 30.01.2024 17:49 by Tatjana Chavdarova

Dear all, 

In the second half of next week's lecture (starting at 15h), we will have Manolis Zampetakis, who will talk about the complexity of min-max optimization; see details below. Manolis is a great researcher who works on interesting and challenging questions, and the work he will present was not covered in the course, so I highly recommend you join.
Please note that we will start at our usual time in person, and the guest lecture will be streamed in the lecture room (for those of you joining remotely, use the 'zoom link (tatjana)' in the Materials tab).

Title: The Complexity of Constrained Min-Max Optimization

Abstract:

Despite its important applications in Machine Learning, min-max optimization of objective functions that are nonconvex-nonconcave remains elusive. We show that an approximate local min-max point of large enough approximation is guaranteed to exist, but finding one such point is PPAD-complete. The same is true of computing an approximate fixed point of the (Projected) Gradient Descent/Ascent update dynamics.
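For those unfamiliar with the update dynamics mentioned above, here is a minimal sketch of one simultaneous Projected Gradient Descent/Ascent step on a toy constrained problem. The bilinear objective, step size, and box constraint are illustrative assumptions for this sketch, not the setting of the talk or paper.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(z, lo, hi)

def pgda_step(x, y, grad_x, grad_y, eta=0.1):
    """One simultaneous Projected Gradient Descent/Ascent update:
    x takes a descent step on f, y takes an ascent step on f,
    and both are projected back onto the constraint set."""
    x_new = project_box(x - eta * grad_x(x, y))
    y_new = project_box(y + eta * grad_y(x, y))
    return x_new, y_new

# Toy bilinear objective f(x, y) = x * y on [-1, 1] x [-1, 1]
# (illustrative only; the talk concerns general nonconvex-nonconcave f).
grad_x = lambda x, y: y   # df/dx
grad_y = lambda x, y: x   # df/dy

x, y = np.array([0.5]), np.array([0.5])
for _ in range(100):
    x, y = pgda_step(x, y, grad_x, grad_y)

# An (approximate) fixed point of this update is the kind of point whose
# computation the talk shows to be PPAD-complete in general.
print(x, y)
```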

An important byproduct of our proof is to establish an unconditional hardness result in the Nemirovsky-Yudin oracle optimization model. In particular, we show that, given oracle access to some function and its gradient, every algorithm that finds an approximate local min-max point needs to make a number of queries that is exponential in the dimension or the accuracy. This comes in sharp contrast to minimization problems, where finding approximate local minima in the same setting can be done with Projected Gradient Descent using polynomially many queries. Our result is the first to show an exponential separation between these two fundamental optimization problems in the oracle model.

Joint work with Constantinos Daskalakis and Stratis Skoulakis.

Short Bio:

Manolis Zampetakis is currently an Assistant Professor at the Computer Science Department of Yale University. Before that, he was a post-doctoral researcher at the EECS Department of UC Berkeley working with Michael Jordan. He received his PhD from the EECS Department at MIT where he was advised by Constantinos Daskalakis. He has been awarded the Google PhD Fellowship and the ACM SIGEcom Doctoral Dissertation Award. He works on the foundations of machine learning (ML), statistics, and data science, with a focus on statistical analysis from biased data, optimization methods in multi-agent environments, and convergence properties of popular heuristic methods.

Paper: https://arxiv.org/abs/2009.09623
