Universities conduct research for three reasons: to educate students, to contribute to society, and to understand the world. While society often holds a view of the scholar as a solitary and singular genius, in reality scholars today participate in a highly collaborative, worldwide search for shared understandings that stand the test of time and the scrutiny of others. The problems in the 21st century often demand effort by teams of researchers with resources at scale: laboratories and equipment, compute resources, and expert staffing. Working with faculty, students, and other stakeholders to identify the greatest opportunities and the resources needed to address them is both a privilege and a challenge for modern academic administrators. In this talk, I will share three examples: fostering collaborative proposal-writing; planning for shared capabilities in experimental facilities, data, and computation; and transforming academic structures.
Event date: 12/4/2023
Speaker: Prof. Kathryn Ann Moler
Hosted by: PolyU Academy for Interdisciplinary Research
Focusing on tensions and links between national formation and international outlooks, this talk shows how classical world visions persisted even as China’s modernizers and revolutionaries adopted and revised the Western nation-state and cosmopolitanism. The concepts of tianxia (all under heaven) and datong (great harmony) have been updated into outlooks of global harmony that value unity, equality, and reciprocity as strategies for overcoming interstate conflict, national divides, and social fragmentation. The talk will delve into two debates: the embrace of the West vs. aspirations for a common world, and the difference between liberal cosmopolitanism and socialist internationalism.
Event date: 16/9/2022
Speaker: Prof. Ban Wang
Hosted by: Confucius Institute of Hong Kong, Department of Chinese Culture
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence of second-order methods while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to first-order methods such as gradient descent. We show that the method has local super-linear convergence and a global convergence rate of O(ε^{-3/2}) for satisfying the first-order and second-order conditions under a commonly used approximated Hessian assumption. We further show that this assumption can be removed if we perform one step of the Krylov subspace method at the end of the algorithm, which makes DRSOM the first first-order-type algorithm to achieve this complexity bound. The applicability and performance of DRSOM are exhibited by various computational experiments in logistic regression, L2-Lp minimization, sensor network localization, neural network training, and policy optimization in reinforcement learning. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods including SGD and ADAM. For policy optimization, our experiments show that DRSOM compares favorably with popular policy gradient methods in terms of effectiveness and robustness.
Event date: 19/09/2022
Speaker: Prof. Yinyu Ye (Stanford University)
Hosted by: Department of Applied Mathematics
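The two-direction subspace idea described in the DRSOM abstract above can be sketched on a strongly convex quadratic, where the model is exact. This is an illustrative sketch, not the speaker's implementation: the function name `drsom_quadratic`, the damping parameter `mu`, and the stopping rule are all assumptions. Each step minimizes the quadratic model over the 2-D subspace spanned by the current gradient g and the previous step d, so only two Hessian-vector products are needed per iteration.

```python
import numpy as np

def drsom_quadratic(A, b, x0, iters=100, mu=1e-10, tol=1e-10):
    """Sketch of a dimension-reduced second-order step on
    f(x) = 0.5 x'Ax - b'x (names and damping rule are assumptions)."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        g = A @ x - b                       # exact gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        d = x - x_prev                      # previous-step (momentum) direction
        if np.linalg.norm(d) < 1e-12:
            G = g[:, None]                  # first step: gradient direction only
        else:
            G = np.column_stack([g, d])     # 2-D subspace basis
        Q = G.T @ (A @ G)                   # reduced Hessian: two Hessian-vector products
        c = G.T @ g                         # reduced gradient
        # Tiny (at most 2x2) subproblem; mu acts as a trust-region-like
        # damping term, an assumption of this sketch.
        alpha = np.linalg.solve(Q + mu * np.eye(Q.shape[0]), c)
        x_prev, x = x, x - G @ alpha        # step within the subspace
    return x
```

On quadratics, minimizing over the gradient-plus-previous-step subspace reproduces conjugate-gradient-like behavior, so the iterate reaches the minimizer quickly; the abstract's point is that the same two Hessian-vector products suffice in the general nonconvex setting.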