Results for: Affiliation: Stanford University
1 - 3 of 3
Search Results
- Video
Universities conduct research for three reasons: to educate students, to contribute to society, and to understand the world. While society often holds a view of the scholar as a solitary and singular genius, in reality scholars today participate in a highly collaborative, worldwide search for shared understandings that stand the test of time and the scrutiny of others. The problems of the 21st century often demand effort by teams of researchers with resources at scale: laboratories and equipment, compute resources, and expert staffing. Working with faculty, students, and other stakeholders to identify the greatest opportunities and the resources needed to address them is both a privilege and a challenge for modern academic administrators. In this talk, I will share three examples: fostering collaborative proposal-writing; planning for shared capabilities in experimental facilities, data, and computation; and transforming academic structures.
Event date: 12/4/2023
Speaker: Prof. Kathryn Ann Moler
Hosted by: PolyU Academy for Interdisciplinary Research
- Subjects:
- Statistics and Research Methods
- Keywords:
- Research Science
- Resource Type:
- Video
- Video
Stanford Electrical Engineering Course on Convex Optimization.
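As a flavor of the course's subject matter, here is a minimal sketch of a convex problem expressed and solved with CVXPY, a Python convex-modeling library that originated in Stanford's convex optimization group; the data, dimensions, and constraint below are illustrative and are not taken from the course materials.

```python
import cvxpy as cp
import numpy as np

# Illustrative data: a small constrained least-squares problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))  # convex quadratic objective
constraints = [cp.norm(x, 1) <= 1.0]                # l1-ball constraint, also convex

problem = cp.Problem(objective, constraints)
problem.solve()
print("optimal value:", problem.value)
print("optimal x:", x.value)
```

Because the objective and constraint are both convex, any solution the solver returns is globally optimal, which is the central guarantee such a course builds on.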
- Course related:
- AMA4850 Optimization Methods
- Subjects:
- Mathematics and Statistics
- Keywords:
- Mathematical optimization
- Convex functions
- Resource Type:
- Video
- Video
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence of the second-order method while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^(-3/2)) for satisfying the first-order and second-order conditions, under a commonly used approximate Hessian assumption. We further show that this assumption can be removed if we perform one step of a Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited through computational experiments on logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods, including SGD and Adam. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
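To make the method's core idea concrete, the sketch below, written from the abstract alone, performs a dimension-reduced second-order step: the quadratic model of the objective is minimized over the two-dimensional subspace spanned by the negative gradient and the previous step, using exactly two Hessian-vector products per iteration. The finite-difference Hessian-vector product and the plain regularized 2x2 solve (standing in for the paper's trust-region subproblem and its radius updates) are assumptions, not the authors' implementation.

```python
import numpy as np

def hvp(grad_f, x, v, eps=1e-6):
    # Finite-difference Hessian-vector product: H(x) v ~ (grad_f(x + eps*v) - grad_f(x)) / eps.
    return (grad_f(x + eps * v) - grad_f(x)) / eps

def drsom_step_sketch(grad_f, x, x_prev, reg=1e-6):
    # Two search directions: the negative gradient and the previous step (momentum).
    g = grad_f(x)
    d1 = -g
    d2 = x - x_prev if x_prev is not None else d1
    D = np.stack([d1, d2], axis=1)        # n-by-2 basis of the reduced subspace
    # Reduced quadratic model built from exactly two Hessian-vector products.
    HD = np.stack([hvp(grad_f, x, d1), hvp(grad_f, x, d2)], axis=1)
    Q = D.T @ HD                          # 2-by-2 reduced Hessian
    c = D.T @ g                           # reduced gradient
    # Regularized solve; the paper instead solves a trust-region subproblem here.
    alpha = np.linalg.solve(Q + reg * np.eye(2), -c)
    return x + D @ alpha

# Quick check on a convex quadratic f(x) = 0.5 x^T A x - b^T x, minimized where A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x_prev, x = None, np.zeros(2)
for _ in range(10):
    x_prev, x = x, drsom_step_sketch(grad, x, x_prev)
print(x, np.linalg.solve(A, b))  # the iterate should approach the true minimizer
```

Because each step only ever forms and solves a 2x2 system, the per-iteration cost stays close to that of a first-order method, which is the trade-off the abstract highlights.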
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming
- Nonconvex programming
- Mathematical optimization
- Resource Type:
- Video