Results for Keywords: Mathematical optimization
1 - 5 of 5
Search Results
-
Video
Stanford Electrical Engineering Course on Convex Optimization.
- Course related:
- AMA4850 Optimization Methods
- Subjects:
- Mathematics and Statistics
- Keywords:
- Mathematical optimization; Convex functions
- Resource Type:
- Video
-
Video
Before the advent of computers around 1950, optimization centered either on small-dimensional problems solved by looking at zeroes of first derivatives and signs of second derivatives, or on infinite-dimensional problems about curves and surfaces. In both cases, "variations" were employed to understand how a local solution might be characterized. Computers changed the picture by opening the possibility of solving large-scale problems involving inequalities, instead of only equations. Inequalities had to be recognized as important because the decisions to be optimized were constrained by the need to respect many upper or lower bounds on their feasibility. A new kind of mathematical analysis, beyond traditional calculus, had to be developed to address these needs. It built first on the convexity of sets and functions, but went on to amazingly broad and successful concepts of variational geometry, subgradients, subderivatives, and variational convergence. This talk will explain these revolutionary developments and why they were essential.
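The classical pre-computer recipe the abstract mentions, finding a zero of the first derivative and checking the sign of the second derivative, can be sketched in a few lines. The function below is a made-up example, not from the talk:

```python
# Classical recipe for a smooth one-variable function: locate a
# critical point (f'(x) = 0) and classify it by the sign of f''(x).
# Example function f(x) = x^4 - 3x^2, chosen only for illustration.

def fprime(x):
    return 4 * x**3 - 6 * x

def fsecond(x):
    return 12 * x**2 - 6

# Newton's method applied to f'(x) = 0, started near x = 1.
x = 1.0
for _ in range(50):
    x -= fprime(x) / fsecond(x)

assert abs(fprime(x)) < 1e-10  # critical point found
assert fsecond(x) > 0          # positive curvature => local minimum
print(round(x, 6))             # prints 1.224745, i.e. sqrt(3/2)
```

Large-scale problems with inequality constraints resist exactly this kind of derivative-zero analysis, which is what motivated the variational machinery the talk describes.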
Event date: 1/11/2022
Speaker: Prof. Terry Rockafellar (University of Washington)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Computer science -- Mathematics; Mathematical optimization; Convex sets; Convex functions
- Resource Type:
- Video
-
Video
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence of the second-order method while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^(-3/2)) for satisfying the first-order and second-order conditions under a commonly used approximated Hessian assumption. We further show that this assumption can be removed if we perform one step of the Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited by various computational experiments in logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods, including SGD and ADAM. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
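The core idea, a second-order model restricted to a two-dimensional subspace built from just two Hessian-vector products, can be sketched as follows. This is an illustrative approximation and not the authors' DRSOM implementation: the two directions (current gradient and previous step), the finite-difference Hessian-vector product, and the quadratic test problem are all assumptions made for the sketch.

```python
import numpy as np

def hvp(grad, x, v, eps=1e-6):
    # Finite-difference Hessian-vector product: H v ≈ (∇f(x + eps v) − ∇f(x)) / eps,
    # so only gradient evaluations are needed.
    return (grad(x + eps * v) - grad(x)) / eps

def reduced_second_order_step(grad, x, d):
    # Minimize the quadratic model g^T s + 0.5 s^T H s over the 2-D
    # subspace s = -alpha g - beta d, using only H g and H d.
    g = grad(x)
    Hg, Hd = hvp(grad, x, g), hvp(grad, x, d)
    Q = np.array([[g @ Hg, g @ Hd],
                  [d @ Hg, d @ Hd]])       # 2x2 reduced Hessian
    c = np.array([g @ g, g @ d])            # reduced gradient
    alpha, beta = np.linalg.solve(Q + 1e-8 * np.eye(2), c)
    step = -alpha * g - beta * d
    return x + step, step

# Toy usage on a convex quadratic f(x) = 0.5 x^T A x (made-up data).
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x, d = np.array([1.0, 1.0, 1.0]), np.array([0.0, 0.0, 1.0])
for _ in range(20):
    x, d = reduced_second_order_step(grad, x, d)
print(np.linalg.norm(grad(x)))  # gradient norm shrinks toward 0
```

On a quadratic, subspace minimization over the gradient and the previous step reproduces conjugate-gradient-like behavior, which is why the cost stays at the level of a first-order method.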
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Nonconvex programming; Mathematical optimization; Convex programming
- Resource Type:
- Video
-
Courseware
This course aims to give students the tools and training to recognize convex optimization problems that arise in scientific and engineering applications, presenting the basic theory, and concentrating on modeling aspects and results that are useful in applications. Topics include convex sets, convex functions, optimization problems, least-squares, linear and quadratic programs, semidefinite programming, optimality conditions, and duality theory. Applications to signal processing, control, machine learning, finance, digital and analog circuit design, computational geometry, statistics, and mechanical engineering are presented. Students complete hands-on exercises using high-level numerical software.
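Least-squares, the simplest problem class the course covers, illustrates the kind of hands-on exercise done with high-level numerical software. A minimal sketch with synthetic data (not taken from the course materials):

```python
import numpy as np

# Least-squares: minimize ||A x - b||_2 over x.
# The data below is synthetic, generated from a known x_true.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

# NumPy solves the problem directly via np.linalg.lstsq.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_true))  # True: noiseless data, exact recovery
```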
- Subjects:
- Mathematics and Statistics
- Keywords:
- Mathematical optimization; Convex functions
- Resource Type:
- Courseware
-
e-book
Evolutionary algorithms have been successfully applied to a wide range of optimization problems in engineering, marketing, operations research, and the social sciences, including scheduling, genetics, material selection, and structural design. Apart from mathematical optimization problems, evolutionary algorithms have also been used as an experimental framework for studying biological evolution and natural selection in the field of artificial life.
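The basic loop of an evolutionary algorithm, selection plus random variation, can be sketched on a toy one-variable objective. This minimal example (mutation with truncation selection) is illustrative and not drawn from the book:

```python
import random

# Toy objective: maximize f(x) = -(x - 3)^2, optimum at x = 3.
def fitness(x):
    return -(x - 3.0) ** 2

random.seed(0)
population = [random.uniform(-10, 10) for _ in range(30)]

for _ in range(100):
    # Selection: keep the fitter half (elitist truncation).
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    # Variation: Gaussian mutation of randomly chosen parents.
    children = [p + random.gauss(0, 0.5) for p in random.choices(parents, k=15)]
    population = parents + children

best = max(population, key=fitness)
print(best)  # close to 3.0
```

Real evolutionary algorithms add crossover, adaptive mutation rates, and problem-specific encodings, but the select-and-vary structure is the same.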
- Subjects:
- Computing, Data Science and Artificial Intelligence
- Keywords:
- Mathematical optimization; Evolutionary programming (Computer science); Genetic algorithms
- Resource Type:
- e-book