Imperial College London

Dr Imad M. Jaimoukha

Faculty of Engineering, Department of Electrical and Electronic Engineering

Senior Lecturer
 
 
 

Contact

 

+44 (0)20 7594 6279
i.jaimouka
Website

 
 

Location

 

617, Electrical Engineering, South Kensington Campus



Publications

Citation

BibTeX format

@article{Xia:2023:10.1109/TNNLS.2022.3165627,
author = {Xia, J-Y and Li, S and Huang, J-J and Yang, Z and Jaimoukha, IM and Gunduz, D},
doi = {10.1109/TNNLS.2022.3165627},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
pages = {5366--5380},
title = {Metalearning-based alternating minimization algorithm for nonconvex optimization},
url = {http://dx.doi.org/10.1109/TNNLS.2022.3165627},
volume = {34},
year = {2023}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - In this article, we propose a novel solution for nonconvex problems of multiple variables, especially those typically solved by an alternating minimization (AM) strategy that splits the original optimization problem into a set of subproblems, one per variable, and then iteratively optimizes each subproblem using a fixed updating rule. However, due to the intrinsic nonconvexity of the original optimization problem, the optimization can be trapped in a spurious local minimum even when each subproblem is solved optimally at each iteration. Meanwhile, learning-based approaches, such as deep unfolding algorithms, have gained popularity for nonconvex optimization; however, they are highly limited by the availability of labeled data and by insufficient explainability. To tackle these issues, we propose a meta-learning based alternating minimization (MLAM) method that aims to minimize a part of the global losses over iterations instead of carrying out minimization on each subproblem, and that learns an adaptive strategy to replace the handcrafted counterpart, resulting in superior performance. The proposed MLAM maintains the original algorithmic principle, providing a degree of interpretability. We evaluate the proposed method on two representative problems, namely a bilinear inverse problem (matrix completion) and a nonlinear problem (Gaussian mixture models). The experimental results validate that the proposed approach outperforms AM-based methods.
AU - Xia,J-Y
AU - Li,S
AU - Huang,J-J
AU - Yang,Z
AU - Jaimoukha,IM
AU - Gunduz,D
DO - 10.1109/TNNLS.2022.3165627
EP - 5380
PY - 2023///
SN - 1045-9227
SP - 5366
TI - Metalearning-based alternating minimization algorithm for nonconvex optimization
T2 - IEEE Transactions on Neural Networks and Learning Systems
UR - http://dx.doi.org/10.1109/TNNLS.2022.3165627
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000785812200001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://ieeexplore.ieee.org/document/9760074
UR - http://hdl.handle.net/10044/1/96959
VL - 34
ER -
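
Illustrative sketch (Python)

The abstract above contrasts the proposed MLAM method with the classical alternating minimization (AM) baseline on a bilinear inverse problem, matrix completion. The sketch below shows only that handcrafted AM baseline: a partially observed matrix is factorized as U V^T by alternating ridge-regularized least-squares updates of U and V on the observed entries. It is not the paper's MLAM algorithm; all function names, parameters, and update rules here are illustrative assumptions.

# Minimal sketch of classical alternating minimization (AM) for matrix
# completion, the baseline the abstract contrasts MLAM against.
# Illustrative assumption only, not the paper's implementation.
import numpy as np

def am_matrix_completion(M, mask, rank=5, n_iters=50, reg=1e-3, seed=0):
    """Approximate M ~= U @ V.T on observed entries (mask == 1) by
    alternating regularized least-squares updates of U and V."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iters):
        # Subproblem 1: fix V, solve a small least-squares problem per row of U.
        for i in range(m):
            obs = mask[i] == 1
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ M[i, obs])
        # Subproblem 2: fix U, solve symmetrically for each row of V.
        for j in range(n):
            obs = mask[:, j] == 1
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ M[obs, j])
    return U, V

# Tiny usage example on a synthetic rank-3 matrix with ~50% observed entries.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
mask = (rng.random(M_true.shape) < 0.5).astype(int)
U, V = am_matrix_completion(M_true * mask, mask, rank=3)
err = np.linalg.norm((U @ V.T - M_true) * (1 - mask))
print(f"error on unobserved entries: {err:.3f}")

Per the abstract, MLAM replaces the fixed, handcrafted updating rule inside each subproblem with a learned, adaptive strategy driven by a global loss over iterations; the sketch above shows only the handcrafted AM loop that such a strategy would replace.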