Search Constraints
Results for:
Keywords: Convex programming
Language: English
1 - 2 of 2
Search Results
Video
Convex matrix optimization problems (MOP) arise in a wide variety of applications. The last three decades have seen dramatic advances in the theory and practice of matrix optimization because of its extremely powerful modeling capability. In particular, semidefinite programming (SDP) and its generalizations have been widely used to model problems in applications such as combinatorial and polynomial optimization, covariance matrix estimation, matrix completion and sensor network localization. The first part of the talk will describe the primal-dual interior-point methods (IPMs) implemented in SDPT3 for solving medium-scale SDP, followed by inexact IPMs (with the linear systems solved by iterative solvers) for large-scale SDP and a discussion of their inherent limitations. The second part will present algorithmic advances for solving large-scale SDP based on the proximal-point or augmented Lagrangian framework. In particular, we describe the design and implementation of an augmented Lagrangian based method (called SDPNAL+) for solving SDP problems with a large number of linear constraints. The last part of the talk will focus on recent advances in using a combination of local search methods and convex lifting to solve low-rank factorization models of SDP problems.
Event date: 11/10/2022
Speaker: Prof. Kim-Chuan Toh (National University of Singapore)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming; Semidefinite programming
- Resource Type:
- Video
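For readers who want to see what a small instance of the SDP models mentioned in this abstract looks like, the following is a minimal sketch of a nearest-correlation-matrix SDP written in Python with CVXPY. The toy problem, the random data, and the choice of CVXPY are illustrative assumptions; the talk itself concerns the MATLAB solvers SDPT3 and SDPNAL+, which are not used here.

```python
import numpy as np
import cvxpy as cp

# Toy SDP (assumed example): find the correlation matrix X (positive semidefinite,
# unit diagonal) closest in Frobenius norm to a given symmetric matrix A.
n = 4
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,            # X is positive semidefinite
               cp.diag(X) == 1]   # unit diagonal (linear equality constraints)
prob = cp.Problem(cp.Minimize(cp.norm(X - A, "fro")), constraints)
prob.solve()                      # CVXPY dispatches to an installed SDP-capable solver

print("optimal value:", prob.value)
print("nearest correlation matrix:\n", np.round(X.value, 3))
```

A problem of this size is trivial for interior-point methods; the augmented Lagrangian approach described in the talk targets SDPs whose number of linear constraints is far larger than the handful of unit-diagonal constraints in this toy model.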
Video
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence of the second-order method while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^{-3/2}) for satisfying the first-order and second-order conditions under a commonly used approximated Hessian assumption. We further show that this assumption can be removed if we perform one step of the Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited by various computational experiments in logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods, including SGD and Adam. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming; Nonconvex programming; Mathematical optimization
- Resource Type:
- Video
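To make the dimension-reduction idea in this abstract concrete, here is a minimal Python sketch of one DRSOM-style step: a 2x2 quadratic model is built from Hessian-vector products along the negative gradient and the previous step, and a damped version of the small subproblem is solved. This is an illustrative sketch under assumed simplifications (fixed radius, crude damping rule, no radius update), not the authors' DRSOM implementation; the function names are hypothetical.

```python
import numpy as np

def drsom_like_step(grad, hvp, x, d_prev, radius=1.0):
    """One dimension-reduced second-order step (illustrative sketch).

    Builds a 2x2 quadratic model of the objective on the subspace spanned by
    the negative gradient and the previous step, using only two
    Hessian-vector products, then solves the small damped subproblem.
    """
    g = grad(x)
    dirs = np.stack([-g, d_prev])                 # the two search directions
    Hd = np.stack([hvp(x, v) for v in dirs])      # Hessian-vector products
    c = dirs @ g                                  # reduced gradient (2-vector)
    Q = dirs @ Hd.T                               # reduced Hessian (2x2)
    Q = (Q + Q.T) / 2                             # symmetrize against round-off
    # Damp until the 2x2 model is convex and the step fits inside the radius
    # (a crude stand-in for an exact trust-region subproblem solve).
    lam = max(0.0, 1e-8 - np.linalg.eigvalsh(Q).min())
    while True:
        a = np.linalg.solve(Q + lam * np.eye(2), -c)
        if np.linalg.norm(a) <= radius:
            break
        lam = 2.0 * lam + 1e-3
    return x + a @ dirs                           # step is a combination of the two directions

# Toy usage (assumed test problem): minimize a strongly convex quadratic
# f(x) = 0.5 x^T A x - b^T x, whose exact minimizer is A^{-1} b.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
grad = lambda x: A @ x - b
hvp = lambda x, v: A @ v                          # exact Hessian-vector product
x, d = np.zeros(3), np.zeros(3)
for _ in range(30):
    x_new = drsom_like_step(grad, hvp, x, d)
    x, d = x_new, x_new - x
print("approximate minimizer:", np.round(x, 4))
```

Each step above costs one gradient and two Hessian-vector products, which is the sense in which the abstract says the per-iteration overhead stays comparable to first-order methods such as gradient descent.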