Search Constraints
Results for:
- Language: English
- Resource Type: Video
Search Results
-
Video
More than one hundred years ago, Albert Einstein published his Theory of General Relativity (GR). One year later, Karl Schwarzschild solved the GR equations for a non-rotating, spherical mass distribution; if this mass is sufficiently compact, even light cannot escape from within the so-called event horizon, and there is a mass singularity at the center. The theoretical concept of a 'black hole' was born, and was refined over the following decades by the work of Penrose, Wheeler, Kerr, Hawking and many others. The first indirect evidence for the existence of such black holes in our Universe came from observations of compact X-ray binaries and distant luminous quasars. I will discuss the forty-year journey my colleagues and I have undertaken to study the mass distribution in the center of our Milky Way from ever more precise, long-term studies of the motions of gas and stars as test particles of the spacetime. These studies show, beyond any reasonable doubt, the existence of a four-million-solar-mass object, which must be a single massive black hole.
Event date: 09/02/2023
Speaker: Prof. Reinhard GENZEL
Hosted by: PolyU Academy for Interdisciplinary Research
- Subjects:
- Cosmology and Astronomy and Physics
- Keywords:
- Astrophysics; Astronomy; Deep space -- Milky Way; Nobel Prize winners; General relativity (Physics); Black holes (Astronomy)
- Resource Type:
- Video
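The abstract above turns on the Schwarzschild radius r_s = 2GM/c^2, the event-horizon scale of a non-rotating mass. A minimal Python sketch of the arithmetic for a four-million-solar-mass object (the constants are standard values; the script is an illustrative addition, not part of the talk):

# Back-of-the-envelope Schwarzschild radius for a Sgr A*-like mass.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

M = 4.0e6 * M_sun        # four million solar masses
r_s = 2 * G * M / c**2   # Schwarzschild radius: r_s = 2GM/c^2
print(f"r_s = {r_s:.2e} m = {r_s / AU:.2f} AU")
# About 1.2e10 m (~0.08 AU): well inside the orbits of the stars
# used as test particles in the studies described above.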
-
Video
In this lesson, we'll be looking at the cell cycle: the lifespan of a eukaryotic somatic cell. A somatic cell is any cell in the body of an organism, except for sex cells such as sperm and egg cells. The cell cycle describes the sequence of cell growth and division. A cell spends most of its life in a state called interphase. Interphase has three phases: the G1, S, and G2 phases. Interphase is followed by cell division, which has one phase, the M phase. Together these four phases make up the entire cell cycle.

G1 of interphase is sometimes called growth 1 or gap phase 1. In G1, a cell is busy growing and carrying out whatever function it's supposed to perform. Note that some cells, such as muscle and nerve cells, exit the cell cycle after G1 because they do not divide again. A cell enters the S phase after it grows to the point where it's no longer able to function well and needs to divide. The S stands for synthesis, which means to make, because a copy of the DNA is being made during this phase. Once DNA replication is complete, the cell enters the shortest and last part of interphase, called G2, also known as growth 2 or gap phase 2. For now, it's enough to know that further preparations for cell division take place in the G2 phase.

With interphase over, the cell is ready for cell division, which happens in the M phase. The M phase has two events. The main one is mitosis, the division of the cell's nucleus, followed by cytokinesis, the division of the cytoplasm. So, at the end of the M phase, you have two daughter cells identical to each other and identical to the original cell.

Let's review. The cell cycle describes the life cycle of an individual cell. It has four phases: three in interphase and one for cell division. Most cell growth and function happen during G1. The cell enters the S phase when it needs to divide; in this phase the cell replicates its DNA. Replication just means the cell makes a copy of its DNA. In G2, the cell undergoes further preparations for cell division. Finally, we have cell division in the M phase. The M phase consists of mitosis, which is nuclear division, and cytokinesis, or division of the cytoplasm. We'll explore the details of mitosis and cytokinesis separately.
- Subjects:
- Biology
- Keywords:
- Cell cycle
- Resource Type:
- Video
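The phase sequence in the lesson above is easy to tabulate; a minimal Python sketch (the phase names and events come from the description, the structure is just illustrative):

# The eukaryotic cell cycle as an ordered list of (phase, stage, main event).
CELL_CYCLE = [
    ("G1", "interphase", "growth and normal cell function"),
    ("S",  "interphase", "DNA synthesis: the cell replicates its DNA"),
    ("G2", "interphase", "further preparations for cell division"),
    ("M",  "division",   "mitosis (nuclear division), then cytokinesis"),
]

for phase, stage, event in CELL_CYCLE:
    print(f"{phase:>2} ({stage}): {event}")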
-
Video
Stanford Electrical Engineering Course on Convex Optimization.
- Course related:
- AMA4850 Optimization Methods
- Subjects:
- Mathematics and Statistics
- Keywords:
- Mathematical optimization; Convex functions
- Resource Type:
- Video
-
Video
With calculus well behind us, it's time to enter the next major topic in any study of mathematics: Linear Algebra! The name doesn't sound very intimidating, but there are some pretty abstract concepts in this subject. Let's start nice and easy, simply by learning about what this subject covers and some basic terminology.
- Course related:
- COMP4434 Big Data Analytics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Algebras, Linear
- Resource Type:
- Video
-
Video
Lecture videos from Gilbert Strang's course on Linear Algebra at MIT.
- Course related:
- AMA1120 Basic Mathematics II - Calculus and Linear Algebra
- Subjects:
- Mathematics and Statistics
- Keywords:
- Algebras, Linear
- Resource Type:
- Video
-
Video
This course is an introduction to game theory and strategic thinking. Ideas such as dominance, backward induction, Nash equilibrium, evolutionary stability, commitment, credibility, asymmetric information, adverse selection, and signaling are discussed and applied to games played in class and to examples drawn from economics, politics, the movies, and elsewhere.
- Subjects:
- Mathematics and Statistics; Economics
- Keywords:
- Game theory
- Resource Type:
- Video
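To ground one of the ideas listed above, here is a minimal Python sketch that finds the pure-strategy Nash equilibria of a 2x2 game by checking for profitable unilateral deviations; the prisoner's-dilemma payoffs are a standard textbook example, not taken from the course:

import itertools

# payoffs[(row, col)] = (row player's payoff, column player's payoff);
# "C" = cooperate, "D" = defect, with standard prisoner's-dilemma values.
payoffs = {
    ("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
    ("D", "C"): (0, -3),  ("D", "D"): (-2, -2),
}
actions = ["C", "D"]

def is_nash(r, c):
    u_r, u_c = payoffs[(r, c)]
    # Nash equilibrium: neither player gains by deviating alone.
    row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in actions)
    col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in actions)
    return row_ok and col_ok

print([p for p in itertools.product(actions, actions) if is_nash(*p)])
# [('D', 'D')] -- mutual defection, even though ('C', 'C') pays both players more.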
-
Video
Before the advent of computers around 1950, optimization centered either on small-dimensional problems solved by looking at zeroes of first derivatives and signs of second derivatives, or on infinite-dimensional problems about curves and surfaces. In both cases, "variations" were employed to understand how a local solution might be characterized. Computers changed the picture by opening the possibility of solving large-scale problems involving inequalities, instead of only equations. Inequalities had to be recognized as important because the decisions to be optimized were constrained by the need to respect many upper or lower bounds for feasibility. A new kind of mathematical analysis, beyond traditional calculus, had to be developed to address these needs. It built first on appealing to the convexity of sets and functions, but went on to amazingly broad and successful concepts of variational geometry, subgradients, subderivatives, and variational convergence beyond just that. This talk will explain these revolutionary developments and why they were essential.
Event date: 01/11/2022
Speaker: Prof. Terry Rockafellar (University of Washington)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex functions; Convex sets; Mathematical optimization; Computer science -- Mathematics
- Resource Type:
- Video
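For orientation on the "subgradients" mentioned in the abstract, here is the standard definition together with the classic example, in LaTeX (textbook material added for context, not part of the talk):

% A vector v is a subgradient of a convex function f at x when the linear
% lower bound through (x, f(x)) holds globally:
\[
\partial f(x) = \{\, v : f(y) \ge f(x) + \langle v,\, y - x \rangle \ \text{for all } y \,\}
\]
% The classic example: f(x) = |x| is not differentiable at 0, yet
\[
\partial |x| =
\begin{cases}
\{\operatorname{sign}(x)\}, & x \ne 0,\\
[-1,\,1], & x = 0.
\end{cases}
\]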
-
Video
Adaptive computation is of great importance in numerical simulations. The ideas behind adaptive computation date back to the adaptive finite element methods of the 1970s. In this talk, we shall first review some recent developments in adaptive methods, together with applications. Then we will propose a deep adaptive sampling method for solving PDEs, where deep neural networks are used to approximate the solutions. In particular, we propose failure-informed PINNs (FI-PINNs), which can adaptively refine the training set with the goal of reducing the failure probability. Compared with the neural network approximation obtained with uniformly distributed collocation points, the proposed algorithms can significantly improve the accuracy, especially for low-regularity and high-dimensional problems.
Event date: 18/10/2022
Speaker: Prof. Tao Tang (Beijing Normal University-Hong Kong Baptist University United International College)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Sampling (Statistics); Differential equations, Partial -- Numerical solutions; Mathematical models; Adaptive computing systems
- Resource Type:
- Video
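A generic residual-driven refinement loop conveys the flavor of adaptive sampling for PINN-type solvers; the Python sketch below is a simplified illustration with a placeholder residual, not the FI-PINNs algorithm of the talk:

import numpy as np

rng = np.random.default_rng(0)

def pde_residual(x):
    # Placeholder: a real solver would evaluate |PDE(u_theta)(x)| for the
    # current network u_theta. A bump near x = 0.8 stands in for a region
    # where the approximation currently fails.
    return np.exp(-100.0 * (x - 0.8) ** 2)

train_set = rng.uniform(0.0, 1.0, size=50)          # initial collocation points
for _ in range(5):                                  # refinement rounds
    # ... train the network on train_set here ...
    candidates = rng.uniform(0.0, 1.0, size=1000)   # proposal points
    worst = candidates[np.argsort(pde_residual(candidates))[-20:]]
    train_set = np.concatenate([train_set, worst])  # refine where residual is large

near = np.mean((train_set > 0.7) & (train_set < 0.9))
print(f"{train_set.size} points, {near:.0%} concentrated near the failure region")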
-
Video
Convex matrix optimization (MOP) arises in a wide variety of applications. The last three decades have seen dramatic advances in the theory and practice of matrix optimization because of its extremely powerful modeling capability. In particular, semidefinite programming (SDP) and its generalizations have been widely used to model problems in applications such as combinatorial and polynomial optimization, covariance matrix estimation, matrix completion and sensor network localization. The first part of the talk will describe the primal-dual interior-point methods (IPMs) implemented in SDPT3 for solving medium-scale SDPs, followed by inexact IPMs (with linear systems solved by iterative solvers) for large-scale SDPs and discussions of their inherent limitations. The second part will present algorithmic advances for solving large-scale SDPs based on the proximal-point or augmented Lagrangian framework. In particular, we describe the design and implementation of an augmented Lagrangian based method (called SDPNAL+) for solving SDP problems with a large number of linear constraints. The last part of the talk will focus on recent advances in using a combination of local search methods and convex lifting to solve low-rank factorization models of SDP problems.
Event date: 11/10/2022
Speaker: Prof. Kim-Chuan Toh (National University of Singapore)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming Semidefinite programming
- Resource Type:
- Video
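For readers new to SDP, the covariance-estimation application mentioned above has a compact instance, the nearest correlation matrix: minimize ||X - C||_F subject to diag(X) = 1 and X positive semidefinite. A minimal Python sketch using the cvxpy modeling package (an illustrative choice; the solvers named in the talk, SDPT3 and SDPNAL+, are MATLAB packages):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                     # noisy symmetric "correlation" data

X = cp.Variable((n, n), PSD=True)     # decision variable: a PSD matrix
prob = cp.Problem(cp.Minimize(cp.norm(X - C, "fro")),
                  [cp.diag(X) == 1])  # unit diagonal
prob.solve()
print(prob.status, np.round(X.value, 2))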
-
Video
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence of the second-order method while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to that of first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^(-3/2)) for satisfying the first-order and second-order conditions under a commonly used approximated Hessian assumption. We further show that this assumption can be removed if we perform one step of the Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited in various computational experiments in logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods, including SGD and ADAM. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
- Subjects:
- Mathematics and Statistics
- Keywords:
- Convex programming; Nonconvex programming; Mathematical optimization
- Resource Type:
- Video
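To see why "Hessian-vector products in two directions" keep the overhead near first-order cost, consider a step that minimizes the quadratic model of f restricted to span{-g, d}, where g is the gradient and d a momentum direction; each Hessian-vector product needs only one extra gradient evaluation. The Python sketch below is a schematic illustration of this idea, not the authors' DRSOM (in particular, the trust-region safeguard is omitted):

import numpy as np

def hvp(f_grad, x, v, eps=1e-6):
    # Hessian-vector product by finite differences: Hv ≈ (g(x + eps*v) - g(x)) / eps.
    return (f_grad(x + eps * v) - f_grad(x)) / eps

def two_direction_step(f_grad, x, d):
    g = f_grad(x)
    dirs = [-g, d]                                       # the two search directions
    Hd = [hvp(f_grad, x, v) for v in dirs]               # two Hessian-vector products
    Q = np.array([[u @ Hv for Hv in Hd] for u in dirs])  # 2x2 reduced Hessian
    c = np.array([g @ v for v in dirs])                  # reduced gradient
    ab = np.linalg.solve(Q + 1e-8 * np.eye(2), -c)       # minimize the 2-D model
    return x + ab[0] * dirs[0] + ab[1] * dirs[1]

# Usage on a toy ill-conditioned quadratic f(x) = 0.5 x^T A x, with gradient A x.
A = np.diag([1.0, 10.0, 100.0])
f_grad = lambda x: A @ x
x, d = np.array([1.0, 1.0, 1.0]), np.zeros(3)
for _ in range(10):
    x_new = two_direction_step(f_grad, x, d)
    x, d = x_new, x_new - x                              # update iterate and momentum
print(np.linalg.norm(x))                                 # ≈ 0: the minimizer is reached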