Non-convex Optimisation and Matrix Factorisation

The second biannual London Workshop on Signal Processing Theory and Methods will take place on 13 and 14 September 2018 at Imperial College London, featuring speakers from across the globe on the theme of Non-convex Optimisation and Matrix Factorisation. Join us for networking, learning and discussion with leading experts from academia, industry and research funding bodies.

Dates: 13-14 September 2018

Venue: Room 408, EEE Building, Imperial College London, South Kensington Campus (No. 16 on Campus Map)

Download the Programme


Speakers


Nigel Birch, Engineering and Physical Sciences Research Council, UK

Opportunities in Signal Processing


Nigel Birch's 2018 London Workshop talk, Opportunities in Signal Processing.

Helmut Bölcskei, Swiss Federal Institute of Technology, ETH Zurich, Switzerland

Harmonic analysis of deep convolutional neural networks 

Abstract

Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail formidable challenges in training, operating, and storing the network. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints, as is the case, e.g., in low-power embedded and mobile platforms. This talk develops a harmonic analysis approach to CNNs with the aim of understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction and expressivity capabilities.
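For a concrete handle on the objects studied here, below is a minimal numpy sketch of a deep convolutional feature extractor of the kind this line of work analyses: a cascade of convolutions, each followed by a modulus nonlinearity and low-pass pooling, in the spirit of scattering networks. The Gaussian-modulated filter bank, the averaging pooling and all sizes are illustrative choices, not the specific architectures treated in the talk.

```python
import numpy as np

def modulus_layer(x, filters):
    """One layer: convolve the input with each filter, then take the modulus."""
    return [np.abs(np.convolve(x, h, mode="same")) for h in filters]

def feature_extractor(x, filters, depth):
    """Cascade of modulus-of-convolution layers; each node is pooled by averaging.

    The concatenation of all pooled node outputs is the extracted feature vector,
    so depth and width directly control how many features are produced.
    """
    nodes, features = [x], []
    for _ in range(depth):
        new_nodes = []
        for node in nodes:
            outputs = modulus_layer(node, filters)
            features.extend(out.mean() for out in outputs)  # low-pass pooling
            new_nodes.extend(outputs)
        nodes = new_nodes
    return np.array(features)

# Illustrative filter bank: Gaussians modulated to three centre frequencies.
t = np.linspace(-4, 4, 33)
filters = [np.exp(-t**2) * np.cos(w * t) for w in (1.0, 2.0, 4.0)]
x = np.random.randn(256)                              # toy input signal
print(feature_extractor(x, filters, depth=2).shape)   # 3 + 9 = 12 features
```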

Bio 

Helmut Bölcskei was born in Mödling, Austria on May 29, 1970, and received the Dipl.-Ing. and Dr. techn. degrees in electrical engineering from Vienna University of Technology, Vienna, Austria, in 1994 and 1997, respectively. In 1998 he was with Vienna University of Technology. From 1999 to 2001 he was a postdoctoral researcher in the Information Systems Laboratory, Department of Electrical Engineering, and in the Department of Statistics, Stanford University, Stanford, CA. He was in the founding team of Iospan Wireless Inc., a Silicon Valley-based startup company (acquired by Intel Corporation in 2002) specialized in multiple-input multiple-output (MIMO) wireless systems for high-speed Internet access, and was a co-founder of Celestrius AG, Zurich, Switzerland. From 2001 to 2002 he was an Assistant Professor of Electrical Engineering at the University of Illinois at Urbana-Champaign. He has been with ETH Zurich since 2002, where he is a Professor of Electrical Engineering. He was a visiting researcher at Philips Research Laboratories Eindhoven, The Netherlands, ENST Paris, France, and the Heinrich Hertz Institute Berlin, Germany. His research interests are in information theory, mathematical signal processing, machine learning, and statistics.

He received the 2001 IEEE Signal Processing Society Young Author Best Paper Award, the 2006 IEEE Communications Society Leonard G. Abraham Best Paper Award, the 2010 Vodafone Innovations Award, the ETH "Golden Owl" Teaching Award, is a Fellow of the IEEE, a 2011 EURASIP Fellow, was a Distinguished Lecturer (2013-2014) of the IEEE Information Theory Society, an Erwin Schrödinger Fellow (1999-2001) of the Austrian National Science Foundation (FWF), was included in the 2014 Thomson Reuters List of Highly Cited Researchers in Computer Science, and is the 2016 Padovani Lecturer of the IEEE Information Theory Society. He served as an associate editor of the IEEE Transactions on Information Theory, the IEEE Transactions on Signal Processing, the IEEE Transactions on Wireless Communications, and the EURASIP Journal on Applied Signal Processing. He was editor-in-chief of the IEEE Transactions on Information Theory from 2010 to 2013. He served on the editorial board of the IEEE Signal Processing Magazine and is currently on the editorial boards of "Foundations and Trends in Networking" and "Foundations and Trends in Communications and Information Theory". He was TPC co-chair of the 2008 IEEE International Symposium on Information Theory and the 2016 IEEE Information Theory Workshop and served on the Board of Governors of the IEEE Information Theory Society. He has been a delegate of the president of ETH Zurich for faculty appointments since 2008.


Helmut Bölcskei's 2018 London Workshop talk, Harmonic analysis of deep convolutional neural networks.

 

Alex Bronstein, Technion - Israel Institute of Technology, Israel

Tradeoffs between Speed and Accuracy in Inverse Problems  

Abstract 

Solving a linear system of the type Ax + n = y with many more unknowns than equations is a fundamental ingredient in a plethora of applications. The classical approach to this inverse problem is to formulate an optimization problem comprising a data fidelity term and a signal prior, and to minimize it using an iterative algorithm. Imagine we have a wonderful iterative algorithm, but real-time constraints allow us to execute only five of its iterations. Will it achieve the best accuracy within this budget? Imagine another setting in which an iterative algorithm pursues a certain data model that is known to be accurate only to a certain degree. Can this knowledge be used to design faster iterations? In this talk, I will try to answer these questions by showing how the introduction of smartly controlled inaccuracy can significantly increase the convergence speed of iterative algorithms used to solve various inverse problems. I will also elucidate connections to deep learning and provide a theoretical justification of the very successful LISTA networks. Examples of applications in computational imaging and audio processing will be provided.
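For concreteness, here is a minimal sketch of the classical iterative baseline the abstract alludes to: ISTA applied to the sparsity-regularised least-squares formulation of Ax + n = y. The toy problem, the regularisation weight and the five-iteration budget are all illustrative assumptions, not the talk's experiments.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, y, lam, n_iters):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    With a hard iteration budget (e.g. n_iters=5, as in the talk's thought
    experiment), the question is how much accuracy each iteration buys.
    """
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Toy underdetermined system: 50 equations, 200 unknowns, sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.linalg.norm(ista(A, y, lam=0.02, n_iters=5) - x_true))
```

LISTA, mentioned in the abstract, unrolls a fixed number of such iterations into a network and learns the matrices and thresholds from data.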

Bio 

Alex Bronstein is an associate professor of computer science at the Technion – Israel Institute of Technology and a principal engineer at Intel Corporation. His research interests include numerical geometry, computer vision, and machine learning. Prof. Bronstein has authored over 100 publications in leading journals and conferences, over 30 patents and patent applications, and the research monograph "Numerical geometry of non-rigid shapes", and has edited several books. Highlights of his research were featured in CNN, SIAM News, and Wired. Prof. Bronstein is a Fellow of the IEEE for his contribution to 3D imaging and geometry processing. In addition to his academic activity, he co-founded and served as Vice President of Technology in the Silicon Valley start-up company Novafora (2005-2009), and was a co-founder and one of the main inventors and developers of the 3D sensing technology in the Israeli startup Invision, acquired by Intel in 2012. Prof. Bronstein's technology is now the core of the Intel RealSense 3D camera integrated into a variety of consumer electronic products. He is also a co-founder of Videocites, where he serves as Chief Scientist.


Alex Bronstein's 2018 London Workshop talk, Tradeoffs between Speed and Accuracy in Inverse Problems.

Yuxin Chen, Princeton University, USA

Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval 

Abstract

We consider the problem of solving systems of quadratic equations, namely, recovering an object of interest from a set of quadratic equations / samples. This problem, also dubbed phase retrieval, spans multiple domains including the physical sciences and machine learning. We investigate the efficiency of gradient descent designed for the nonconvex least squares problem. We prove that under Gaussian designs, gradient descent --- when randomly initialized --- converges to the truth within a logarithmic number of iterations given nearly minimal samples, thus achieving near-optimal computational and sample complexities at once. This provides the first global convergence guarantee concerning vanilla gradient descent for phase retrieval, without the need for (i) carefully designed initialization, (ii) sample splitting, or (iii) sophisticated saddle-point escaping schemes. All of this is achieved by exploiting the statistical models in analyzing optimization algorithms, via a leave-one-out approach that enables the decoupling of certain statistical dependencies between the gradient descent iterates and the data.
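A minimal sketch of the algorithm in question: vanilla gradient descent on the least-squares loss f(x) = (1/4m) Σ_i ((a_iᵀx)² − y_i)² under a Gaussian design, started from a random initialization with no spectral initialization. The problem sizes and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 1000                     # signal dimension, number of quadratic samples
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)     # planted signal on the unit sphere
A = rng.standard_normal((m, n))      # Gaussian design
y = (A @ x_star) ** 2                # quadratic samples y_i = (a_i^T x*)^2

# Vanilla gradient descent on f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2,
# started from a random initialization.
x = rng.standard_normal(n) / np.sqrt(n)
eta = 0.1                            # illustrative step size
for t in range(500):
    Ax = A @ x
    x -= eta * A.T @ ((Ax ** 2 - y) * Ax) / m

# Distance up to the unavoidable global sign ambiguity.
dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"relative error: {dist:.3e}")
```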

Bio

Yuxin Chen is currently an assistant professor in the Department of Electrical Engineering at Princeton University. Prior to joining Princeton, he was a postdoctoral scholar in the Department of Statistics at Stanford University, and he completed his Ph.D. in Electrical Engineering at Stanford University. His research interests include high-dimensional data analysis, convex and nonconvex optimization, statistical learning, and information theory.


Yuxin Chen's 2018 London Workshop talk, Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval.

 

Yuejie Chi, Carnegie Mellon University, USA

Geometry and Regularization in Nonconvex Statistical Estimation 

Abstract

Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation problems. The premise is that despite nonconvexity, the loss function may possess benign geometric properties (such as local strong convexity and local restricted convexity) that enable fast global convergence under carefully designed initializations. In many sample-starved problems, this benign geometry only holds over a restricted region of the parameter space with certain structural properties, yet gradient descent seems to follow a trajectory that stays within this nice region without explicit regularization, and is thus extremely computationally efficient. In this talk, we formally establish this "implicit regularization" phenomenon of gradient descent for a few fundamental statistical estimation problems, by exploiting statistical modeling in the analysis of iterative optimization algorithms via a leave-one-out perturbation argument. This is joint work with Yuxin Chen, Cong Ma, Yuanxin Li and Kaizheng Wang.
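The phenomenon is easy to observe numerically. The sketch below reuses the toy phase-retrieval setup from the example under Yuxin Chen's abstract (an illustrative construction, not the talk's experiments) and tracks the incoherence max_i |a_iᵀx_t| / ‖x_t‖ of the gradient-descent iterates, which implicit regularization predicts should remain of order √(log m), as if an explicit constraint were enforced.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 1000
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2

# Gradient descent on the phase-retrieval least-squares loss, tracking the
# incoherence of the iterates with respect to the sampling vectors.
x = rng.standard_normal(n) / np.sqrt(n)
for t in range(201):
    Ax = A @ x
    x -= 0.1 * A.T @ ((Ax ** 2 - y) * Ax) / m
    if t % 50 == 0:
        incoherence = np.max(np.abs(A @ x)) / np.linalg.norm(x)
        print(f"iter {t:3d}: max_i |a_i^T x| / ||x|| = {incoherence:.2f}, "
              f"sqrt(log m) = {np.sqrt(np.log(m)):.2f}")
```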

Bio

Dr Yuejie Chi received the PhD degree in Electrical Engineering from Princeton University in 2012, and the BE (Hon) degree in Electrical Engineering from Tsinghua University, Beijing, China, in 2007. Since January 2018, she has been Robert E. Doherty Career Development Professor and Associate Professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University, after spending five years at The Ohio State University. She is interested in the mathematics of data representation that takes advantage of structures and geometry to minimize complexity and improve performance. Specific topics include mathematical and statistical signal processing, machine learning, large-scale optimization, sampling and information theory, with applications in sensing, imaging and data science. More information can be found at Yuejie's website.


Yuejie Chi's 2018 London Workshop talk, Geometry and Regularization in Nonconvex Statistical Estimation.

 

Cédric Févotte, Institut de recherche en informatique de Toulouse (IRIT), France

Estimation with low-rank time-frequency synthesis models 

Abstract

Many state-of-the-art signal decomposition techniques rely on a low-rank factorisation of a time-frequency (t-f) transform. In particular, nonnegative matrix factorisation (NMF) of the spectrogram has been considered in many audio applications. This is an analysis approach in the sense that the factorisation is applied to the squared magnitude of the analysis coefficients returned by the t-f transform. In this talk, I will instead present a synthesis approach, where low-rankness is imposed on the synthesis coefficients of the data signal over a given t-f dictionary (such as a Gabor frame). As such, this offers a novel modelling paradigm that bridges t-f synthesis modelling and traditional analysis-based NMF approaches. The proposed generative model allows us in turn to design more sophisticated multilayer representations that can efficiently capture diverse forms of structure. Additionally, the generative modelling allows one to exploit t-f low-rankness for compressive sensing. We present efficient iterative shrinkage algorithms to perform estimation in the proposed models and illustrate the capabilities of the new modelling paradigm on audio signal processing examples.
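As a point of reference for the analysis approach the talk contrasts with, here is a minimal sketch of NMF with multiplicative updates (Lee-Seung, Euclidean loss) on a toy nonnegative matrix standing in for a power spectrogram. The loss, data and rank are illustrative; the talk's synthesis model instead imposes the low-rank structure on the variance of synthesis coefficients over a t-f dictionary.

```python
import numpy as np

def nmf(V, rank, n_iters=200, eps=1e-9):
    """Multiplicative-update NMF (Euclidean loss): V ≈ W @ H.

    In audio, V is typically the power spectrogram; columns of W act as
    spectral templates and rows of H as their activations in time.
    """
    rng = rng_state = np.random.default_rng(0)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram": two spectral templates with random temporal activations.
rng = np.random.default_rng(1)
V = rng.random((64, 2)) @ rng.random((2, 100))
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small reconstruction error
```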

Reference

C. Févotte and M. Kowalski. Estimation with low-rank time-frequency synthesis models. IEEE Trans. Signal Processing, 2018. 

Bio

Cédric Févotte is a CNRS senior researcher at Institut de Recherche en Informatique de Toulouse (IRIT). Previously, he was a CNRS researcher at Laboratoire Lagrange (Nice, 2013-2016) and at Télécom ParisTech (2007-2013), a research engineer at Mist-Technologies (the startup that became Audionamix, 2006-2007) and a postdoc at the University of Cambridge (2003-2006). He holds MEng and PhD degrees in EECS from École Centrale de Nantes. His research interests concern statistical signal processing and machine learning, in particular for source separation and inverse problems. He was a member of the IEEE Machine Learning for Signal Processing technical committee (2012-2018) and has been a member of the SPARS steering committee since 2018. He has been a member of the editorial board of the IEEE Transactions on Signal Processing since 2014, first as an associate editor and then as a senior area editor (since 2018). In 2014, he was the co-recipient of an IEEE Signal Processing Society Best Paper Award for his work on audio source separation using multichannel nonnegative matrix factorisation. He is the principal investigator of the European Research Council (ERC) project FACTORY (New paradigms for latent factor estimation, 2016-2021).


Cédric Févotte's 2018 London Workshop talk, Estimation with low-rank time-frequency synthesis models.

 

Reinhold Häb-Umbach, Paderborn University, Germany

Latent Structure Discovery in Speech using Hidden Markov Model Variational Autoencoders

Abstract

To leverage the expressiveness of neural networks and the interpretability of graphical models, we consider variational autoencoders (VAEs) to learn nonlinear low-dimensional manifolds in speech. Being generative models, VAEs not only allow the discovery of latent structure but also the generation of new data by sampling from the learnt model. To capture the sequential nature of speech, we impose structure on the latent space in the form of a hidden Markov model (HMM). The resulting HMM-VAE is used for the unsupervised discovery of the acoustic building blocks of a language. The model is then extended to a full Bayesian model, and its optimization using stochastic variational inference and natural gradients is discussed. In experiments, the resulting system is shown to outperform unsupervised acoustic unit discovery based on an HMM with Gaussian mixture model emission probabilities. Furthermore, the proposed approach is able to infer the number of acoustic units without supervision.
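For orientation, below is a minimal numpy sketch of the plain VAE objective (the evidence lower bound) for a single frame. The affine encoder and decoder are illustrative stand-ins for neural networks, and the dimensions are arbitrary; the HMM-VAE of the talk replaces the standard-normal prior with an HMM over the latent sequence, which couples the KL term across frames.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_z = 20, 2   # observed dimension (e.g. one feature frame) and latent dimension

# Illustrative affine encoder/decoder; in practice both are neural networks.
W_enc = 0.1 * rng.standard_normal((2 * d_z, d_x))   # produces [mu_z, log_var_z]
W_dec = 0.1 * rng.standard_normal((d_x, d_z))       # produces mu_x

def elbo(x):
    """Single-sample evidence lower bound E_q[log p(x|z)] - KL(q(z|x) || N(0, I)),
    with additive constants dropped and a unit-variance Gaussian likelihood."""
    h = W_enc @ x
    mu_z, log_var_z = h[:d_z], h[d_z:]
    z = mu_z + np.exp(0.5 * log_var_z) * rng.standard_normal(d_z)  # reparameterisation
    mu_x = W_dec @ z
    log_lik = -0.5 * np.sum((x - mu_x) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var_z) + mu_z ** 2 - 1.0 - log_var_z)
    return log_lik - kl

print(elbo(rng.standard_normal(d_x)))
```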

Bio

Reinhold Häb-Umbach is a professor at Paderborn University, Germany. After obtaining a PhD degree from RWTH Aachen University, he was a PostDoc at the IBM Almaden Research Laboratory, and from 1990 to 2001 he worked as a Senior Scientist at Philips Research, Aachen and Eindhoven. Since 2001 he has been Professor of Communications Engineering at Paderborn University. He has more than 200 scientific publications, and recently co-authored the book Robust Automatic Speech Recognition - A Bridge to Practical Applications (Academic Press, 2015). Since 2015 he has been a member of the IEEE Signal Processing Society Speech and Language Technical Committee. He has been and is on the organizing committees of Interspeech 2015, SLT 2016, ASRU 2017, and SLT 2018. He is a fellow of the International Speech Communication Association (ISCA). His main research interests are in the fields of statistical signal processing and machine learning, with applications to speech enhancement, automatic speech recognition and unsupervised learning from speech and audio. For more information, visit Reinhold's website.


Reinhold Häb-Umbach's 2018 London Workshop talk, Latent Structure Discovery in Speech using Hidden Markov Model Variational Autoencoders.

 

John R. Hershey, Google AI Perception, USA

Building a Brain, Starting at the Ears: Machine Hearing in the Integrative Era

Abstract

As we perceive the world around us we are generally unaware of how the brain combines different types of information to synthesize a coherent description of the world.  Historically, in machine perception, very different methods were used in each domain, leaving integration as an additional stumbling block. However, currently many tasks from microphone array signal processing to audio-visual scene understanding are utilizing similar deep learning methods.  As a result we have now entered an era where integrative tasks are the first-class subjects of end-to-end modeling efforts. Long-standing issues for integration, such as robust fusion of multiple inputs, and the question of the correspondence between percepts across modalities, are resurfacing to be addressed by new approaches from the deep learning toolbox. This talk will present recent attempts to integrate different combinations of beamforming, source separation, visual processing, and multi-lingual speech recognition. Experimental work will be presented on a variety of integrative tasks that attempt to push the envelope of what can be done within a single coherent system. We are at a point in time where such work raises as many questions as it answers; the talk will highlight open issues and new directions suggested by the current state of the art. 

Bio

John is a researcher at Google in Cambridge, Massachusetts, where he has led a research team in machine perception since joining in January 2018. Prior to that he spent seven years leading the speech and audio research team at MERL (Mitsubishi Electric Research Labs), and five years at IBM's T. J. Watson Research Center in New York, where he led a team of researchers in noise-robust speech recognition. He also spent a year as a visiting researcher in the speech group at Microsoft Research in 2004, after obtaining his PhD from UCSD. Over the years he has contributed to more than 100 publications and over 30 patents in the areas of machine perception, speech processing, speech recognition, and natural language understanding.


John Hershey's 2018 London Workshop talk, Building a Brain, Starting at the Ears: Machine Hearing in the Integrative Era.

 

Yue M. Lu, Harvard University, USA

Spectral Methods for Nonconvex Estimation: A mean-field analysis for structured sensing ensembles

Abstract

Spectral initialization methods are widely used in nonconvex optimization approaches to signal estimation. Examples include phase retrieval, blind deconvolution, and low-rank matrix/tensor recovery. In the case of generalized linear regression, and when the sensing matrix is drawn from the i.i.d. Gaussian distribution, an asymptotically precise characterization of the performance of the spectral method was obtained in Lu and Li (2018), which was further refined in Mondelli and Montanari (2018). Such analysis reveals a phase transition phenomenon that depends on the ratio between the number of samples and the signal dimension. When the ratio is below a minimum threshold, the estimates given by the spectral method are no better than random guesses drawn from a uniform distribution on the hypersphere, thus carrying no information; above a maximum threshold, the estimates become increasingly aligned with the target signal. The asymptotic characterization also allows one to design optimal shrinkage schemes to further improve the performance of the method.

In this talk, I will review these existing results in the literature and then present some recent work on analyzing the spectral method when the sensing matrix comes from more general and structured ensembles. Examples include Fourier transforms with random masks, and randomly subsampled orthogonal transforms. I will show how to obtain precise asymptotic characterizations for such non-i.i.d. ensembles by using mean-field methods from statistical physics. 
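A minimal sketch of the spectral method under discussion, for phase retrieval with an i.i.d. Gaussian ensemble: form D = (1/m) Σ_i T(y_i) a_i a_iᵀ and take its leading eigenvector as the estimate. The capping preprocessing T used below is one common choice, not the optimal shrinkage mentioned in the abstract, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 600
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))        # i.i.d. Gaussian sensing ensemble
y = (A @ x_star) ** 2                  # phaseless measurements

# Spectral estimator: leading eigenvector of D = (1/m) sum_i T(y_i) a_i a_i^T.
T = np.minimum(y, 3.0 * y.mean())      # illustrative capping preprocessing
D = (A.T * T) @ A / m                  # scales column i of A.T by T(y_i)
eigvals, eigvecs = np.linalg.eigh(D)
x_hat = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

# Cosine similarity with the target (sign-invariant); it grows with m/n and
# carries no information below the phase-transition threshold.
print(abs(x_hat @ x_star))
```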


Yue Lu's 2018 London Workshop talk, Spectral Methods for Nonconvex Estimation: A mean-field analysis for structured sensing ensembles.

 

Gongguo Tang, Colorado School of Mines, USA

Nonconvex Matrix Optimization: Centralized and Distributed Geometry 

Abstract

The past few years have seen a surge of interest in nonconvex reformulations of convex optimizations using nonlinear reparameterizations of the optimization variables. Compared with the convex formulations, the nonconvex ones typically involve many fewer variables, allowing them to scale to scenarios with millions of variables. However, one pays the price of having to solve nonconvex optimizations to global optimality, which is generally believed to be impossible. In this talk, I will characterize the nonconvex geometries of several low-rank matrix optimizations in both centralized and distributed settings. In particular, I will argue that under reasonable assumptions, each critical point of the nonconvex problems either corresponds to the global optimum of the original convex optimizations, or is a strict saddle point where the Hessian matrix has a negative eigenvalue. Such a geometric structure ensures that many centralized and distributed local search algorithms can converge to the global optimum with random initialisations.
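A minimal sketch of the factored (Burer-Monteiro-style) reformulation for low-rank PSD matrix sensing, one instance of the reparameterization described above: replace the matrix variable X with UUᵀ and run plain gradient descent from a random start. The problem instance, step size and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 30, 2, 400
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T                        # planted rank-r PSD matrix
A = rng.standard_normal((m, n, n))                # random measurement matrices
b = np.einsum('kij,ij->k', A, X_star)             # measurements b_k = <A_k, X*>

# Factored reformulation: min_U 0.25/m * sum_k (<A_k, U U^T> - b_k)^2.
# Under the geometry described in the talk, every critical point is either a
# global optimum or a strict saddle, so gradient descent from a random start
# is expected to reach the global optimum.
U = rng.standard_normal((n, r))
for _ in range(500):
    R = np.einsum('kij,ij->k', A, U @ U.T) - b    # residuals
    G = np.einsum('k,kij->ij', R, A)              # sum_k R_k * A_k
    U -= 0.002 * (G + G.T) @ U / m                # gradient step (constants folded
                                                  # into the illustrative step size)

print(np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```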

Bio

Dr Gongguo Tang has been an Assistant Professor in the Department of Electrical Engineering at Colorado School of Mines since 2014. He received his PhD in Electrical Engineering from Washington University in St. Louis in 2011. He was a Postdoctoral Research Associate at the Department of Electrical and Computer Engineering, University of Wisconsin-Madison from 2011 to 2013, and a visiting scholar at the University of California, Berkeley, in 2013. Dr Tang's research interests are in the area of optimisation, signal processing and machine learning, as well as their applications in big data analytics, optics, imaging and networks.


Gongguo Tang's 2018 London Workshop talk, Nonconvex Matrix Optimization: Centralized and Distributed Geometry.

 

John Wright, Columbia University, USA

Geometry and Symmetry in Short-and-Sparse Deconvolution 

Bio

John Wright is an Associate Professor in the Electrical Engineering Department at Columbia University. He received his PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign in October 2009, and was with Microsoft Research from 2009 to 2011. His research is in the area of high-dimensional data analysis. In particular, his recent research has focused on developing algorithms for robustly recovering structured signal representations from incomplete and corrupted observations, and applying them to practical problems in imaging and vision. His work has received a number of awards and honors, including the 2009 Lemelson-Illinois Prize for Innovation for his work on face recognition, the 2009 UIUC Martin Award for Excellence in Graduate Research, a 2008-2010 Microsoft Research Fellowship, and the 2012 COLT Best Paper Award (with Wang and Spielman).


John Wright's 2018 London Workshop talk, Geometry and Symmetry in Short-and-Sparse Deconvolution.

 


Timetable

Thursday, 13 September 2018
  • 10:00-10:30 Check-in and welcome coffee, EE 611
  • 10:30-11:00 Welcome from the organisers
  • 11:00-11:45 Yuejie Chi
  • 11:45-12:30 Gongguo Tang
  • 12:30-14:00 Lunch and poster session, EE 611
  • 14:00-14:45 Yue M. Lu
  • 14:45-15:30 Reinhold Häb-Umbach
  • 15:30-16:00 Coffee break, EE 611
  • 16:00-16:45 John Hershey
  • 16:45-17:30 Yuxin Chen
  • 17:30-19:00 Rooms 408 and 611 available for networking
  • 19:00-21:30 Dinner (by invitation only)

Friday, 14 September 2018
  • 9:00-9:30 Nigel Birch
  • 9:30-10:15 Cédric Févotte
  • 10:15-11:00 Helmut Bölcskei
  • 11:00-11:30 Coffee break, EE 611
  • 11:30-12:15 John Wright
  • 12:15-13:00 Alex Bronstein
  • 13:00-14:00 Lunch, EE 611
  • 14:00-15:00 Closing remarks from the organisers
All activities take place in EE 408 unless otherwise indicated.
2018 London Workshop Schedule.

Posters

  • Xin Deng and Junjie Huang (with Pier Luigi Dragotti)
  • Conghui Li and Shanxiang Lyu (with Cong Ling)
  • Alastair Moore (with Patrick Naylor) and Patrick Naylor
  • Panagiotis Barmpoutis (with Tania Stathaki)
  • Mohamed Suliman and Maxime Ferreira Da Costa (with Wei Dai)

Registration

Registration has now closed.

The deadline for securing your place at the 2018 London Workshop on Non-convex Optimisation and Matrix Factorisation has now passed.


Special thanks to...

...our sponsors:

  • The Communications and Signal Processing Group

  • Emeritus Professor Tony Constantinides for his kind and generous contribution worth £2,000.
  • Contributions from research funds of organisers Dr Wei Dai, Professor Pier Luigi Dragotti and Dr Patrick Naylor.
...our event organisers (in alphabetical order):
  • Melanie Albright
  • Wei Dai
  • Pier Luigi Dragotti
  • Christine Evers
  • Cong Ling
  • Patrick Naylor
...our volunteers (in alphabetical order):
  • Cheng Cheng for receptionist work
  • Maxime Ferreira Da Costa for photography
  • Hengyan Liu for receptionist work
  • Yang Lu for receptionist work
  • Yifan Ran for receptionist work
  • Jingyuan Xia for CIT support