Seminars run by PhD students and postdocs with the Control and Power Group

The Control and Power (CAP) Research Group is excited to be holding a recurring Research Roulette event in the upcoming academic year! We are looking for speakers who would be happy to talk about their research to others.

The event will take place every month in the seminar room of the Electrical and Electronic Engineering Department building. It is intended to be an opportunity for our cohort of MSc students, PhD students, and postdocs in the Control and Power community to share their research, get to know each other, exchange ideas, and learn from each other’s topics. 

The format of the event is a 20-40 minute presentation followed by a discussion. This will be an excellent opportunity for you, as a speaker, to practise presenting your research, prepare for ESA/LSR/conference presentations, and receive constructive feedback from colleagues. 

If you would like to speak at this event, please complete this form. Once you have expressed an interest in speaking at the event, we will be in touch with more details. We are confident that you will find participating in this event to be a positive and enriching experience.

Contacts: Hanqing Zhang (Control talks), Yanshu Niu (Power talks)

Most Recent/Upcoming Talk 

Title: Distributed Optimization with Imperfect Model Parameters

Speaker: Yaqun Yang

Venue: EENG 909B

Date and time: Friday, 20/02/2026, 2-3 pm

Abstract: Distributed optimization has emerged as a cornerstone for large-scale networked systems. By allowing agents to cooperatively minimize a global objective function using only local information and neighbor-to-neighbor communication, it effectively addresses concerns regarding data privacy and the lack of a central coordinator. However, most existing frameworks assume that the model parameters within the local objective functions are perfectly known, which is often impractical in dynamic or unknown environments. This presentation focuses on the coupled distributed optimization problem, where agents must simultaneously learn an unknown global parameter from the decentralized problem while optimizing their decision variables. We first introduce a Coupled Distributed Stochastic Approximation (CDSA) scheme and analyze its convergence rate, specifically identifying the "transient time" required for the distributed algorithm to achieve its dominant convergence rate. Furthermore, we propose a novel Distributed Fractional Bayesian Learning (DFBL) algorithm for adaptive optimization. Theoretical proofs and numerical experiments validate that our approach ensures agents' beliefs converge to the true parameter and their decision variables reach the global optimum efficiently.
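For readers unfamiliar with the setting, the "local information plus neighbor-to-neighbor communication" idea in the abstract can be illustrated with a minimal sketch of plain consensus-based distributed gradient descent. This is a generic textbook scheme, not the CDSA or DFBL algorithms presented in the talk; the ring topology, mixing weights, and quadratic local objectives below are illustrative assumptions.

```python
import numpy as np

# Minimal consensus-based distributed gradient descent on a ring of agents.
# Each agent i holds a private quadratic f_i(x) = (x - a_i)^2; the global
# objective sum_i f_i(x) is minimized at the mean of the a_i. Agents only
# average with their two ring neighbours (doubly stochastic mixing weights)
# and take a local gradient step -- no central coordinator sees all the a_i.

def distributed_gradient_descent(a, steps=2000, alpha=0.01):
    a = np.asarray(a, dtype=float)
    n = len(a)
    # Doubly stochastic mixing matrix for a ring: 1/2 self, 1/4 per neighbour.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    x = np.zeros(n)  # each agent's local estimate of the global minimiser
    for _ in range(steps):
        # Consensus averaging with neighbours, then a local gradient step.
        x = W @ x - alpha * 2.0 * (x - a)
    return x

estimates = distributed_gradient_descent([1.0, 2.0, 3.0, 6.0])
# With a constant step size, all agents end up in a small neighbourhood of
# the global optimum (the mean of the local targets, here 3.0).
```

The talk concerns the harder setting where, on top of this kind of scheme, the parameters inside each f_i are themselves unknown and must be learned online.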

Biography: Yaqun Yang is a Ph.D. candidate at the Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China. She is currently a visiting Ph.D. student at Imperial College London, UK. Her research focuses on distributed optimization, stochastic approximation, and Bayesian learning.