1 - 8 of 8
Search Results
-
Video
In this lecture I consider the fundamental, challenging and largely unsolved problem of rigorously deriving the most popular kinetic equations from the laws governing the dynamics of microscopic systems. I plan to present classical and recent results, and to discuss some current perspectives.
Event date: 30/3/2023
Speaker: Prof. Mario Pulvirenti (University of Roma La Sapienza)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Mathematical models
- Kinetic theory of gases -- Mathematical models
- Resource Type:
- Video
-
Video
We investigate reversal and recirculation for the stationary Prandtl equations. Reversal describes the solution after the Goldstein singularity and is characterized by regions in which u > 0 and u < 0. The classical point of view of regarding the Prandtl equations as an evolution equation in x breaks down completely since u changes sign. Instead, we view the problem as a quasilinear, mixed-type, free-boundary problem. This is joint work with Sameer Iyer.
Event date: 14/3/2023
Speaker: Prof. Nader Masmoudi (New York University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Fluid dynamics -- Mathematical models
- Resource Type:
- Video
-
Video
In the context of hyperbolic systems of balance laws with dissipative source manifesting relaxation, recent progress will be reported in the program of passing to the limit, in the BV setting, as the relaxation time tends to zero.
Event date: 16/2/2023
Speaker: Prof. Constantine Dafermos (Brown University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Equilibrium -- Mathematical models
- Relaxation
- Differential equations, Hyperbolic
- Resource Type:
- Video
-
Video
Models arising in biology are often written in terms of Ordinary Differential Equations. The celebrated paper of Kermack-McKendrick (1927), founding mathematical epidemiology, showed the necessity of including parameters describing the state of the individuals, such as the time elapsed since infection. During the 70s, many mathematical studies were developed for equations structured by age, size or, more generally, a physiological trait. The renewal and growth-fragmentation equations are the most standard examples. The talk will present structured equations and show that a universal generalized relative entropy structure is available in the linear case, which imposes relaxation to a steady state under non-degeneracy conditions. In the nonlinear case, periodic solutions may occur, which can be interpreted in biological terms, e.g., as network activity in neuroscience. When the equations are conservation laws, a variant of the Monge-Kantorovich distance (the Fortet-Mourier distance) also gives a general non-expansion property of solutions.
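For concreteness, the simplest structured model mentioned above, the age-structured renewal equation, and its generalized relative entropy inequality can be sketched as follows. The notation is a standard textbook choice, not taken from the talk: n is the population density, d the death rate, B the birth rate, (N, φ) the positive direct and dual eigenelements for the first eigenvalue λ, and H any convex function.

```latex
% McKendrick--von Foerster renewal equation, structured by age x
\partial_t n(t,x) + \partial_x n(t,x) + d(x)\,n(t,x) = 0, \qquad
n(t,0) = \int_0^\infty B(x)\,n(t,x)\,\mathrm{d}x .

% Generalized relative entropy: the quantity below is non-increasing,
% which drives relaxation to the steady state under non-degeneracy
\frac{\mathrm{d}}{\mathrm{d}t}\int_0^\infty \phi(x)\,N(x)\,
  H\!\left(\frac{n(t,x)\,e^{-\lambda t}}{N(x)}\right)\mathrm{d}x \;\le\; 0 .
```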
Event date: 19/1/2023
Speaker: Prof. Benoît Perthame (Sorbonne University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Biology
- Mathematics and Statistics
- Keywords:
- Biomathematics
- Equations
- Resource Type:
- Video
-
Video
Before the advent of computers around 1950, optimization centered either on small-dimensional problems solved by looking at zeroes of first derivatives and signs of second derivatives, or on infinite-dimensional problems about curves and surfaces. In both cases, "variations" were employed to understand how a local solution might be characterized. Computers changed the picture by opening the possibility of solving large-scale problems involving inequalities, instead of only equations. Inequalities had to be recognized as important because the decisions to be optimized were constrained by the need to respect many upper or lower bounds on their feasibility. A new kind of mathematical analysis, beyond traditional calculus, had to be developed to address these needs. It built first on appealing to the convexity of sets and functions, but went on to amazingly broad and successful concepts of variational geometry, subgradients, subderivatives, and variational convergence beyond just that. This talk will explain these revolutionary developments and why they were essential.
Event date: 1/11/2022
Speaker: Prof. Terry Rockafellar (University of Washington)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex functions
- Convex sets
- Mathematical optimization
- Computer science -- Mathematics
- Resource Type:
- Video
-
Video
Adaptive computation is of great importance in numerical simulations. The ideas behind adaptive computation can be traced back to adaptive finite element methods in the 1970s. In this talk, we shall first review some recent developments in adaptive methods together with some applications. Then, we will propose a deep adaptive sampling method for solving PDEs, where deep neural networks are utilized to approximate the solutions. In particular, we propose failure-informed PINNs (FI-PINNs), which can adaptively refine the training set with the goal of reducing the failure probability. Compared with the neural network approximation obtained with uniformly distributed collocation points, the proposed algorithms can significantly improve the accuracy, especially for low-regularity and high-dimensional problems.
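The refinement loop described in the abstract can be sketched in a toy setting. This illustrates residual-driven adaptive sampling in general, not the FI-PINN algorithm itself: a least-squares polynomial stands in for the neural network, and the names `fit_surrogate`, `residual`, and `adaptive_sampling` are my own.

```python
import numpy as np

def fit_surrogate(x, y, deg=6):
    """Least-squares polynomial fit (a stand-in for PINN training)."""
    return np.polyfit(x, y, deg)

def residual(coeffs, x, f):
    """Pointwise residual of the surrogate against the target."""
    return np.abs(np.polyval(coeffs, x) - f(x))

def adaptive_sampling(f, n0=15, rounds=4, k=5):
    """After each fit, add the k candidate points with the largest
    residual (the 'failure region') to the training set and refit."""
    x = np.linspace(0.0, 1.0, n0)         # initial collocation points
    cand = np.linspace(0.0, 1.0, 201)     # dense candidate pool
    for _ in range(rounds):
        c = fit_surrogate(x, f(x))
        r = residual(c, cand, f)
        worst = cand[np.argsort(r)[-k:]]  # highest-residual candidates
        x = np.sort(np.concatenate([x, worst]))
    return x

f = lambda x: np.tanh(20.0 * (x - 0.5))   # low-regularity target
pts = adaptive_sampling(f)                # added points tend to cluster
                                          # near the sharp layer at x = 0.5
```

The point of the sketch is the loop structure: uniform points leave a large residual at the sharp layer, and the refinement concentrates new collocation points exactly there.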
Event date: 18/10/2022
Speaker: Prof. Tao Tang (Beijing Normal University-Hong Kong Baptist University United International College)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Sampling (Statistics)
- Differential equations, Partial -- Numerical solutions
- Mathematical models
- Adaptive computing systems
- Resource Type:
- Video
-
Video
Convex Matrix Optimization (MOP) arises in a wide variety of applications. The last three decades have seen dramatic advances in the theory and practice of matrix optimization because of its extremely powerful modeling capability. In particular, semidefinite programming (SDP) and its generalizations have been widely used to model problems in applications such as combinatorial and polynomial optimization, covariance matrix estimation, matrix completion and sensor network localization. The first part of the talk will describe the primal-dual interior-point methods (IPMs) implemented in SDPT3 for solving medium-scale SDP, followed by inexact IPMs (with linear systems solved by iterative solvers) for large-scale SDP and a discussion of their inherent limitations. The second part will present algorithmic advances for solving large-scale SDP based on the proximal-point or augmented Lagrangian framework. In particular, we describe the design and implementation of an augmented Lagrangian based method (called SDPNAL+) for solving SDP problems with a large number of linear constraints. The last part of the talk will focus on recent advances in using a combination of local search methods and convex lifting to solve low-rank factorization models of SDP problems.
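One concrete ingredient behind the augmented-Lagrangian and proximal-point frameworks mentioned in the abstract: their subproblems repeatedly require the Euclidean projection of a symmetric matrix onto the positive semidefinite cone, computed from an eigendecomposition. A minimal sketch (illustrative only, not the SDPNAL+ implementation):

```python
import numpy as np

def proj_psd(A):
    """Euclidean projection of a symmetric matrix onto the PSD cone:
    eigendecompose and clip negative eigenvalues at zero."""
    A = 0.5 * (A + A.T)                       # enforce symmetry
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T  # V diag(w+) V^T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])   # eigenvalues 3 and -1
P = proj_psd(A)              # -> [[1.5, 1.5], [1.5, 1.5]]
```

Dropping the negative eigenvalue -1 leaves only the rank-one part 3·vvᵀ with v = (1, 1)/√2, which is the nearest PSD matrix in Frobenius norm.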
Event date: 11/10/2022
Speaker: Prof. Kim-Chuan Toh (National University of Singapore)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming
- Semidefinite programming
- Resource Type:
- Video
-
Video
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence of second-order methods while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to that of first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^{-3/2}) for satisfying the first-order and second-order conditions under a commonly used approximated Hessian assumption. We further show that this assumption can be removed if we perform one step of the Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited in various computational experiments on logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods including SGD and ADAM. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
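The two-direction idea can be sketched as follows. This is a toy reconstruction under my own naming, not the authors' implementation: the step is restricted to span{-g, previous step}, the 2x2 reduced Hessian is built from two Hessian-vector products obtained by finite-differencing the gradient, and a fixed regularizer stands in for the trust-region control.

```python
import numpy as np

def drsom_like_step(grad, x, prev_step, eps=1e-6, reg=1e-8):
    """One dimension-reduced second-order step: minimize the local
    quadratic model over span{-g, prev_step}, using only two
    Hessian-vector products (finite-differenced here)."""
    g = grad(x)
    D = np.stack([-g, prev_step], axis=1)            # two directions
    HD = np.stack([(grad(x + eps * D[:, j]) - g) / eps
                   for j in range(2)], axis=1)       # H @ D, column-wise
    Q = 0.5 * (D.T @ HD + HD.T @ D)                  # 2x2 reduced Hessian
    c = D.T @ g                                      # reduced gradient
    alpha = np.linalg.solve(Q + reg * np.eye(2), -c) # model minimizer
    return D @ alpha                                 # step in full space

# toy usage: minimize the convex quadratic 0.5 * x^T A x
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x, step = np.array([1.0, 1.0, 1.0]), np.zeros(3)
for _ in range(20):
    step = drsom_like_step(grad, x, step)
    x = x + step
```

On a quadratic, exact minimization over the gradient-plus-momentum subspace reproduces conjugate-gradient-like behavior, which is why only two Hessian-vector products per iteration already give fast convergence here.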
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming
- Nonconvex programming
- Mathematical optimization
- Resource Type:
- Video