Search Constraints
- PolyU OER: Yes
- Resource Type: Video
- Tags: Deep Learning
- Year: 2022

1 - 2 of 2
Search Results
Video
Adaptive computation is of great importance in numerical simulations. The ideas behind adaptive computation date back to adaptive finite element methods in the 1970s. In this talk, we shall first review some recent developments in adaptive methods, together with applications. Then we will propose a deep adaptive sampling method for solving PDEs, in which deep neural networks are used to approximate the solutions. In particular, we propose failure-informed PINNs (FI-PINNs), which adaptively refine the training set with the goal of reducing the failure probability. Compared with neural network approximations obtained with uniformly distributed collocation points, the proposed algorithms significantly improve accuracy, especially for low-regularity and high-dimensional problems.
Event date: 18/10/2022
Speaker: Prof. Tao Tang (Beijing Normal University-Hong Kong Baptist University United International College)
Hosted by: Department of Applied Mathematics
- Subjects: Mathematics and Statistics
- Keywords: Sampling (Statistics); Differential equations, Partial -- Numerical solutions; Mathematical models; Adaptive computing systems
- Resource Type: Video
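For a concrete sense of the residual-driven refinement described in the abstract above, here is a minimal PyTorch sketch applied to the toy 1D Poisson problem -u''(x) = f(x) with zero boundary conditions. It is not the FI-PINN algorithm from the talk: the failure-probability criterion is replaced by a simple residual-magnitude proxy, and the network size, sample counts, and learning rate are all illustrative assumptions.

```python
# Sketch of residual-driven adaptive sampling for a PINN on the 1D
# Poisson problem -u''(x) = f(x), u(0) = u(1) = 0. This is NOT the
# FI-PINN algorithm from the talk: the failure probability is replaced
# by a simple residual-magnitude proxy.
import torch

torch.manual_seed(0)

# Small fully connected network approximating the PDE solution u(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):
    # Manufactured right-hand side so the exact solution is sin(pi x).
    return (torch.pi ** 2) * torch.sin(torch.pi * x)

def pde_residual(x):
    # r(x) = -u''(x) - f(x), computed with automatic differentiation.
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -d2u - f(x)

pts = torch.rand(64, 1)                 # uniformly sampled collocation points
bc = torch.tensor([[0.0], [1.0]])       # boundary points

for step in range(2000):
    opt.zero_grad()
    loss = pde_residual(pts).pow(2).mean() + net(bc).pow(2).mean()
    loss.backward()
    opt.step()

    # Adaptive refinement: every 500 steps, draw a candidate pool and
    # keep the points where the residual (the "failure" proxy) is largest.
    if (step + 1) % 500 == 0:
        cand = torch.rand(1024, 1)
        r = pde_residual(cand).abs().squeeze().detach()
        pts = torch.cat([pts, cand[r.topk(32).indices]]).detach()

print(f"final residual loss: {pde_residual(pts).pow(2).mean().item():.3e}")
```

Concentrating new collocation points where the residual is largest targets exactly the regions where a uniformly trained network performs worst, which is the intuition behind failure-informed refinement.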
Video
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence properties of second-order methods while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to that of first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^(-3/2)) for satisfying the first- and second-order optimality conditions, under a commonly used approximated-Hessian assumption. We further show that this assumption can be removed if we perform one step of the Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited in various computational experiments on logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation appears to gain computational advantages in training accuracy and iteration complexity over state-of-the-art first-order methods, including SGD and ADAM. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
- Subjects: Mathematics and Statistics
- Keywords: Convex programming; Nonconvex programming; Mathematical optimization
- Resource Type: Video
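To make the dimension-reduction idea concrete, the following toy sketch applies it to a regularized logistic-regression objective (one of the experiment settings mentioned in the abstract above). Each iteration minimizes a two-variable quadratic model of the objective over the span of the current negative gradient and the previous step, built from just two Hessian-vector products, approximated here by finite differences of the gradient. This omits the trust-region safeguards of the actual DRSOM, and the data, regularization, and tolerances are illustrative assumptions.

```python
# Toy sketch of the dimension-reduced second-order idea: minimize a
# quadratic model over the 2D subspace spanned by the gradient g and
# the previous step d, using only Hessian-vector products. This is a
# simplified illustration, not the DRSOM trust-region implementation.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = np.sign(X @ rng.normal(size=p) + 0.1 * rng.normal(size=n))
lam = 1e-2  # ridge term keeps the problem strongly convex

def grad(w):
    # Gradient of mean(log(1 + exp(-y * Xw))) + (lam/2) * |w|^2.
    z = np.clip(-y * (X @ w), -35.0, 35.0)  # clip for numerical stability
    s = 1.0 / (1.0 + np.exp(-z))            # sigmoid of the negative margin
    return X.T @ (-y * s) / n + lam * w

def hvp(w, v, eps=1e-6):
    # Hessian-vector product via a forward difference of the gradient.
    return (grad(w + eps * v) - grad(w)) / eps

w = np.zeros(p)
d = np.zeros(p)  # previous step, i.e. the momentum direction
for it in range(100):
    g = grad(w)
    if np.linalg.norm(g) < 1e-10:
        break
    # Two Hessian-vector products define the 2x2 quadratic model
    # m(a) = c^T a + 0.5 a^T Q a for the step s = -a[0]*g + a[1]*d.
    Hg, Hd = hvp(w, g), hvp(w, d)
    Q = np.array([[g @ Hg, -g @ Hd],
                  [-g @ Hd, d @ Hd]])
    c = np.array([-(g @ g), g @ d])
    a = np.linalg.solve(Q + 1e-12 * np.eye(2), -c)  # minimize the model
    s = -a[0] * g + a[1] * d
    w, d = w + s, s

print(f"stopped after {it} iterations, |grad| = {np.linalg.norm(grad(w)):.2e}")
```

Because each step needs only two Hessian-vector products, the per-iteration cost stays close to that of gradient descent, which is the point of the dimension-reduced construction highlighted in the abstract.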