Imperial College London

Professor Geoffrey Ye Li

Faculty of Engineering, Department of Electrical and Electronic Engineering

Chair in Wireless Systems

Contact


geoffrey.li


Location


804, Electrical Engineering, South Kensington Campus


Summary


Publications


558 results found

Zhang B, Qin Z, Li GY, 2023, Semantic Communications With Variable-Length Coding for Extended Reality, IEEE Journal of Selected Topics in Signal Processing, Vol: 17, Pages: 1038-1051, ISSN: 1932-4553

Journal article

Wang O, Gao J, Li GY, 2023, Learn to Adapt to New Environments From Past Experience and Few Pilot Blocks, IEEE Transactions on Cognitive Communications and Networking, Vol: 9, Pages: 373-385

Journal article

Zhou S, Li GY, 2023, Federated learning via inexact ADMM, IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN: 0162-8828

One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most current algorithms require full device participation and/or impose strong assumptions for convergence. Different from the widely used gradient-descent-based algorithms, in this paper we develop an inexact alternating direction method of multipliers (ADMM), which is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions. Furthermore, it achieves strong numerical performance compared with several state-of-the-art algorithms for federated learning.
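As a rough illustration of the consensus ADMM iteration underlying such methods, the following is a generic inexact-ADMM sketch on a toy least-squares problem, not the paper's algorithm; the problem sizes, step sizes, and number of rounds are all made-up assumptions. Each client replaces the exact local subproblem solve with a few gradient steps, which is what makes the update "inexact":

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: each of 4 clients holds a local least-squares loss
# f_i(x) = 0.5 * ||A_i x - b_i||^2; the goal is a consensus minimizer of sum_i f_i.
n_clients, dim = 4, 5
A = [rng.standard_normal((20, dim)) for _ in range(n_clients)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true for Ai in A]

rho, lr = 1.0, 0.01
z = np.zeros(dim)                                # global (server) model
x = [np.zeros(dim) for _ in range(n_clients)]    # local models
u = [np.zeros(dim) for _ in range(n_clients)]    # dual variables

for _ in range(200):
    for i in range(n_clients):
        # Inexact local update: a few gradient steps on the augmented Lagrangian
        # instead of solving the x-subproblem exactly (saves local computation).
        for _ in range(3):
            grad = A[i].T @ (A[i] @ x[i] - b[i]) + u[i] + rho * (x[i] - z)
            x[i] = x[i] - lr * grad
    # Server aggregates once per round (one upload per client per round).
    z = np.mean([x[i] + u[i] / rho for i in range(n_clients)], axis=0)
    for i in range(n_clients):
        u[i] = u[i] + rho * (x[i] - z)

print(np.linalg.norm(z - x_true))   # residual shrinks toward zero over rounds
```

Note that communication happens only once per outer round, regardless of how many inner gradient steps each client takes, which is the source of the communication savings claimed above.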

Journal article

Ye H, Liang L, Li GY, 2022, Decentralized Federated Learning With Unreliable Communications, IEEE Journal of Selected Topics in Signal Processing, Vol: 16, Pages: 487-500, ISSN: 1932-4553

Journal article

Zhou S, Luo Z, Xiu N, Li G et al., 2022, Computing one-bit compressive sensing via double-sparsity constrained optimization, IEEE Transactions on Signal Processing, Vol: 70, Pages: 1593-1608, ISSN: 1053-587X

One-bit compressive sensing is popular in signal processing and communications due to its low storage costs and hardware complexity. However, it has remained a challenging task since only one-bit (sign) information is available to recover the signal. In this paper, we formulate one-bit compressive sensing as a double-sparsity constrained optimization problem. First-order optimality conditions via the newly introduced τ-stationarity are established for this nonconvex and discontinuous problem, based on which a gradient projection subspace pursuit (GPSP) approach with global convergence and a fast convergence rate is proposed. Numerical experiments against other leading solvers illustrate the high efficiency of the proposed algorithm in terms of both computation time and quality of signal recovery.
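To make the one-bit recovery setting concrete, here is a sketch using binary iterative hard thresholding (BIHT), a standard baseline for this problem rather than the paper's GPSP method; it shows the same two ingredients of a gradient step followed by projection onto the sparsity constraint. All dimensions and the step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-bit compressive sensing instance: recover an s-sparse unit-norm
# signal from only the signs of its random projections.
m, n, s = 200, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_true /= np.linalg.norm(x_true)
y = np.sign(A @ x_true)

def hard_threshold(v, s):
    """Projection onto the s-sparsity constraint: keep the s largest entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

# BIHT: gradient step on the sign mismatch, projection onto the sparse set,
# then renormalization (the magnitude is unidentifiable from signs alone).
x = np.zeros(n)
tau = 1.0 / m
for _ in range(100):
    x = hard_threshold(x + tau * A.T @ (y - np.sign(A @ x)), s)
    x /= max(np.linalg.norm(x), 1e-12)

print(np.dot(x, x_true))   # correlation with the true signal, close to 1 on success
```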

Journal article

Gao J, Hu M, Zhong C, Li GY, Zhang Z et al., 2022, An Attention-Aided Deep Learning Framework for Massive MIMO Channel Estimation, IEEE Transactions on Wireless Communications, Vol: 21, Pages: 1823-1835, ISSN: 1536-1276

Journal article

Li GY, Saad W, Ozgur A, Kairouz P, Qin Z, Hoydis J, Han Z, Gunduz D, Elmirghani J et al., 2022, Series Editorial: The Fourth Issue of the Series on Machine Learning in Communications and Networks, IEEE Journal on Selected Areas in Communications, Vol: 40, Pages: 1-4, ISSN: 0733-8716

Journal article

Zhou S, Li GY, 2021, Communication-Efficient ADMM-based Federated Learning, Publisher: ArXiv

Federated learning has shown its advances over the last few years but is facing many challenges, such as how algorithms save communication resources, how they reduce computational costs, and whether they converge. To address these issues, this paper proposes exact and inexact ADMM-based federated learning. These methods are not only communication-efficient but also converge linearly under very mild conditions, such as being free of convexity assumptions and independent of the data distribution. Moreover, the inexact version has low computational complexity, thereby significantly alleviating the computational burden.

Working paper

Jiang P, Wen C-K, Jin S, Li GY et al., 2021, Dual CNN based channel estimation for MIMO-OFDM systems, IEEE Transactions on Communications, Vol: 69, Pages: 5859-5872, ISSN: 0090-6778

Recently, convolutional neural network (CNN)-based channel estimation (CE) for massive multiple-input multiple-output communication systems has achieved remarkable success. However, its complexity still needs to be reduced and its robustness can be further improved. Meanwhile, existing methods do not accurately explain which channel features help the denoising of CNNs. In this paper, we first compare the strengths and weaknesses of CNN-based CE in different domains. When complexity is limited, the channel sparsity in the angle-delay domain improves denoising and robustness, whereas large noise power and pilot contamination are handled well in the spatial-frequency domain. Thus, we develop a novel network, called dual CNN, to exploit the advantages of both domains. Furthermore, we introduce an extra neural network, called HyperNet, which learns to detect scenario changes from the same input as the dual CNN. HyperNet updates several parameters adaptively and combines the existing dual CNNs to improve robustness. Experimental results show improved estimation performance in time-varying scenarios. To further exploit the correlation in the time domain, a recurrent neural network framework is developed, and training strategies are provided to ensure robustness to changes in the temporal correlation. This design improves channel estimation performance while keeping complexity low.

Journal article

Li GY, Saad W, Ozgur A, Kairouz P, Qin Z, Hoydis J, Han Z, Gunduz D, Elmirghani J et al., 2021, Series Editorial: The Third Issue of the Series on Machine Learning in Communications and Networks, IEEE Journal on Selected Areas in Communications, Vol: 39, Pages: 2267-2270, ISSN: 0733-8716

Journal article

Li GY, Saad W, Ozgur A, Kairouz P, Qin Z, Hoydis J, Han Z, Gunduz D, Elmirghani J et al., 2021, Series Editorial: The Second Issue of the Series on Machine Learning in Communications and Networks, IEEE Journal on Selected Areas in Communications, Vol: 39, Pages: 1855-1857, ISSN: 0733-8716

Journal article

Jiang P, Wang T, Han B, Gao X, Zhang J, Wen C-K, Jin S, Li G et al., 2021, AI-aided online adaptive OFDM receiver: design and experimental results, IEEE Transactions on Wireless Communications, Vol: 20, Pages: 7655-7668, ISSN: 1536-1276

Orthogonal frequency division multiplexing (OFDM) has been widely applied in many wireless communication systems. Artificial intelligence (AI)-aided OFDM receivers have recently been brought to the forefront to replace and improve traditional OFDM receivers. In this paper, we first compare two AI-aided OFDM receivers, namely the data-driven fully connected deep neural network and the model-driven ComNet, through extensive simulation and real-time video transmission using a 5G rapid prototyping system for an over-the-air (OTA) test. We find a performance gap between the simulation and the OTA test, caused by the discrepancy between the channel model used for offline training and the real environment. We develop a novel online training system, called the SwitchNet receiver, to address this issue. This receiver has a flexible and extendable architecture and can adapt to real channels by training only a few parameters online. From the OTA test, the AI-aided OFDM receivers, especially the SwitchNet receiver, are robust to OTA environments and promising for future communication systems. Finally, we discuss potential challenges and future research directions inspired by this initial study.

Journal article

Clerckx B, Mao Y, Schober R, Jorswieck E, Love DJ, Yuan J, Hanzo L, Ye Li G, Larsson EG, Caire G et al., 2021, Is NOMA efficient in multi-antenna networks? A critical look at next generation multiple access techniques, IEEE Open Journal of the Communications Society, Vol: 2, Pages: 1310-1343, ISSN: 2644-125X

In the past few years, a large body of literature has been created on downlink Non-Orthogonal Multiple Access (NOMA), employing superposition coding and Successive Interference Cancellation (SIC), in multi-antenna wireless networks. Furthermore, the benefits of NOMA over Orthogonal Multiple Access (OMA) have been highlighted. In this paper, we take a critical and fresh look at the downlink Next Generation Multiple Access (NGMA) literature. Instead of contrasting NOMA with OMA, we contrast NOMA with two other multiple access baselines. The first is conventional Multi-User Linear Precoding (MU-LP), as used in Space-Division Multiple Access (SDMA) and multi-user Multiple-Input Multiple-Output (MIMO) in 4G and 5G. The second, called Rate-Splitting Multiple Access (RSMA), is based on multi-antenna Rate-Splitting (RS). It is also a non-orthogonal transmission strategy relying on SIC, developed in the past few years in parallel and independently from NOMA. We show that there is some confusion about the benefits of NOMA, and we dispel the associated misconceptions. First, we highlight why NOMA is inefficient in multi-antenna settings based on basic multiplexing gain analysis. We stress that the issue lies in how the NOMA literature, originally developed for single-antenna setups, has been hastily applied to multi-antenna setups, resulting in a misuse of spatial dimensions and therefore a loss in multiplexing gain and rate. Second, we show that NOMA incurs a severe multiplexing gain loss despite an increased receiver complexity due to an inefficient use of SIC receivers. Third, we emphasize that much of the merit of NOMA stems from the constant comparison to OMA instead of comparing it to MU-LP and RS baselines. We then expose the pivotal design constraint that multi-antenna NOMA requires one user to fully decode the messages of the other users.

Journal article

ElMossallamy MA, Seddik KG, Chen W, Wang L, Li GY, Han Z et al., 2021, RIS Optimization on the Complex Circle Manifold for Interference Mitigation in Interference Channels, IEEE Transactions on Vehicular Technology, Vol: 70, Pages: 6184-6189, ISSN: 0018-9545

Journal article

Gao S, Dong P, Pan Z, Li GY et al., 2021, Deep Multi-Stage CSI Acquisition for Reconfigurable Intelligent Surface Aided MIMO Systems, IEEE Communications Letters, Vol: 25, Pages: 2024-2028, ISSN: 1089-7798

Journal article

Gao F, Wang B, Xing C-W, An J-P, Li G et al., 2021, Wideband beamforming for hybrid massive MIMO terahertz communications, IEEE Journal on Selected Areas in Communications, Vol: 39, Pages: 1725-1740, ISSN: 0733-8716

The combination of the large bandwidth at terahertz (THz) frequencies and the large number of antennas in massive MIMO results in a non-negligible spatial wideband effect in the time domain, or the corresponding beam squint issue in the frequency domain, which causes severe performance degradation if not properly treated. In particular, for a phased-array based hybrid transceiver, there is a contradiction between the requirement to mitigate beam squint and the hardware implementation of the analog beamformer/combiner, which makes accurate beamforming an enormous challenge. In this paper, we propose two wideband hybrid beamforming approaches, based on virtual sub-arrays and true-time-delay (TTD) lines, respectively, to eliminate the impact of beam squint. The former divides the whole array into several virtual sub-arrays to generate a wider beam and provides an evenly distributed array gain across the whole operating frequency band. To further enhance the beamforming performance and thoroughly resolve the aforementioned contradiction, the latter introduces TTD lines and proposes a new hardware implementation of the analog beamformer/combiner. This TTD-aided hybrid implementation enables wideband beamforming and achieves near-optimal performance close to that of fully digital transceivers. Analytical and numerical results demonstrate the effectiveness of the two proposed wideband beamforming approaches.
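The beam squint effect itself is easy to reproduce numerically. The sketch below compares the array gain of a phase-shifter beamformer (phases fixed at the carrier frequency) against an ideal TTD beamformer (phases matched at every frequency) for a uniform linear array; the carrier, bandwidth, array size, and steering angle are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative beam-squint calculation for a half-wavelength ULA steered to 60 degrees.
c = 3e8
fc = 300e9                 # assumed 300 GHz carrier
bw = 30e9                  # assumed 30 GHz bandwidth
N = 128                    # number of antennas
d = c / fc / 2             # half-wavelength spacing at the carrier
theta = np.deg2rad(60)
n = np.arange(N)

freqs = fc + np.linspace(-bw / 2, bw / 2, 5)

def array_gain(weight_phase_freq, f):
    """Normalized gain at frequency f of weights whose phases match weight_phase_freq."""
    w = np.exp(-1j * 2 * np.pi * weight_phase_freq * n * d * np.sin(theta) / c)
    a = np.exp(-1j * 2 * np.pi * f * n * d * np.sin(theta) / c)
    return abs(np.vdot(w, a)) / N

for f in freqs:
    ps_gain = array_gain(fc, f)   # phase shifters: fixed at the carrier -> squint
    ttd_gain = array_gain(f, f)   # true-time-delay: frequency-matched phases
    print(f"{(f - fc) / 1e9:+5.1f} GHz  phase-shifter {ps_gain:.3f}  TTD {ttd_gain:.3f}")
```

At the band edges the phase-shifter gain collapses while the TTD gain stays at 1, which is exactly the contradiction the TTD-aided architecture is meant to resolve.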

Journal article

Xie H, Qin Z, Li G, Juang B-H et al., 2021, Deep learning enabled semantic communication systems, IEEE Transactions on Signal Processing, Vol: 69, Pages: 2663-2675, ISSN: 1053-587X

Recently, deep learning enabled end-to-end communication systems have been developed to merge all physical layer blocks of traditional communication systems, which makes joint transceiver optimization possible. Powered by deep learning, natural language processing has achieved great success in analyzing and understanding large amounts of language text. Inspired by research results in both areas, we aim to provide a new view on communication systems from the semantic level. In particular, we propose a deep learning based semantic communication system, named DeepSC, for text transmission. Based on the Transformer, DeepSC aims at maximizing the system capacity and minimizing semantic errors by recovering the meaning of sentences, rather than bit or symbol errors as in traditional communications. Moreover, transfer learning is used to ensure DeepSC is applicable to different communication environments and to accelerate the model training process. To judge the performance of semantic communications accurately, we also introduce a new metric, named sentence similarity. Compared with traditional communication systems that do not consider semantic information exchange, the proposed DeepSC is more robust to channel variation and is able to achieve better performance, especially in the low signal-to-noise ratio (SNR) regime, as demonstrated by extensive simulation results.
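To illustrate what a sentence-level fidelity metric measures, here is a minimal stand-in: cosine similarity of bag-of-words vectors. DeepSC's actual metric is computed from learned sentence embeddings; plain word counts are used here only to keep the sketch dependency-free, so this is an analogy rather than the paper's metric:

```python
import numpy as np
from collections import Counter

def sentence_similarity(s1, s2):
    """Cosine similarity of bag-of-words vectors (a crude proxy for a
    learned-embedding sentence similarity metric)."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    vocab = sorted(set(c1) | set(c2))
    v1 = np.array([c1[w] for w in vocab], dtype=float)
    v2 = np.array([c2[w] for w in vocab], dtype=float)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

sent = "the channel is noisy today"
recovered_good = "the channel is noisy today"
recovered_bad = "cat sat on a mat"
print(sentence_similarity(sent, recovered_good))  # 1.0 for exact recovery
print(sentence_similarity(sent, recovered_bad))   # 0.0 for unrelated text
```

The point of a semantic metric is precisely that a recovered sentence can score highly even when individual bits or symbols differ, which a bit-error rate cannot capture.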

Journal article

Li G, Hu Q, Gao F, Zhang H, Jin S et al., 2021, Deep learning for channel estimation: interpretation, performance, and comparison, IEEE Transactions on Wireless Communications, Vol: 20, Pages: 2398-2412, ISSN: 1536-1276

Deep learning (DL) has emerged as an effective tool for channel estimation in wireless communication systems, especially under imperfect environments. However, even with such unprecedented success, DL methods are often regarded as black boxes and lack explanations of their internal mechanisms, which severely limits their further improvement and extension. In this paper, we present a preliminary theoretical analysis of DL based channel estimation for single-input multiple-output (SIMO) systems to understand and interpret its internal mechanisms. As a deep neural network (DNN) with the rectified linear unit (ReLU) activation function is mathematically equivalent to a piecewise linear function, the corresponding DL estimator can achieve universal approximation of a large family of functions by making efficient use of piecewise linearity. We demonstrate that DL based channel estimation is not restricted to any specific signal model and asymptotically approaches the minimum mean-squared error (MMSE) estimate in various scenarios without requiring any prior knowledge of channel statistics. Therefore, DL based channel estimation outperforms, or is at least comparable with, traditional channel estimation, depending on the type of channel. Simulation results confirm the accuracy of the proposed interpretation and demonstrate the effectiveness of DL based channel estimation under both linear and nonlinear signal models.
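The piecewise-linearity claim can be checked directly on a tiny random ReLU network: once the active-unit mask at an input is fixed, the network coincides exactly with an affine map on that linear region. The network below is a generic one-hidden-layer sketch with made-up sizes, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny ReLU network f(x) = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# At any input x, the active-unit mask fixes a linear region: within it the
# network is exactly the affine map x -> (W2 * mask) @ W1 @ x + const.
x = rng.standard_normal(4)
mask = (W1 @ x + b1 > 0).astype(float)
A_eff = (W2 * mask) @ W1          # effective local linear map
c_eff = (W2 * mask) @ b1 + b2     # effective local offset
assert np.allclose(f(x), A_eff @ x + c_eff)
```

This is the sense in which a ReLU DNN estimator is "piecewise linear": it stitches together many such local affine estimators, one per activation pattern.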

Journal article

Lee M, Yu G, Li G, 2021, Graph embedding based wireless link scheduling with few training samples, IEEE Transactions on Wireless Communications, Vol: 20, Pages: 2282-2294, ISSN: 1536-1276

Link scheduling in device-to-device (D2D) networks is usually formulated as a non-convex combinatorial problem, which is generally NP-hard, making the optimal solution difficult to obtain. Traditional methods for this problem are mainly based on mathematical optimization techniques, which require accurate channel state information (CSI), usually obtained through channel estimation and feedback. To overcome the high computational complexity of traditional methods and eliminate the costly channel estimation stage, machine learning (ML) has recently been introduced to address wireless link scheduling problems. In this paper, we propose a novel graph embedding based method for link scheduling in D2D networks. We first construct a fully-connected directed graph for the D2D network, where each D2D pair is a node and the interference links among D2D pairs are the edges. Then we compute a low-dimensional feature vector for each node in the graph. The graph embedding process is based on the distances of both communication and interference links, and therefore does not require accurate CSI. By utilizing a multi-layer classifier, a scheduling strategy can be learned in a supervised manner from the graph embedding results for each node. We also propose an unsupervised approach to train the graph embedding based method to further improve its scalability, and develop a K-nearest neighbor graph representation method to reduce the computational complexity. Extensive simulation demonstrates that the proposed method is near-optimal compared with existing state-of-the-art methods while requiring only hundreds of training network layouts. It is also competitive in terms of scalability and generalizability to more complicated scenarios.
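The key idea of building node features from link distances rather than CSI can be sketched in a few lines. The layout, the single aggregation step, and the weights theta1/theta2 below are all hypothetical (a trained model would learn the aggregation parameters); this only illustrates the shape of the distance-based embedding:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy D2D layout: each pair has a transmitter and a receiver on a 100 m square.
K = 6
tx = rng.uniform(0, 100, size=(K, 2))
rx = rng.uniform(0, 100, size=(K, 2))

# Distance-based features only (no CSI): d[i, i] is the communication link of
# pair i, and d[i, j] (i != j) the interference link from transmitter i to receiver j.
d = np.linalg.norm(tx[:, None, :] - rx[None, :, :], axis=-1)

# One graph-embedding aggregation step with hypothetical weights: each node
# combines its own link distance with the mean of its interference-edge distances.
theta1, theta2 = 1.0, 0.5
own = d.diagonal()
interf = (d.sum(axis=1) - own) / (K - 1)
embedding = np.tanh(theta1 * own / 100 + theta2 * interf / 100)
print(embedding.shape)   # one scalar feature per D2D pair
```

A downstream classifier would then map each node's embedding to an on/off scheduling decision, which is where the supervised (or unsupervised) training enters.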

Journal article

Guo C, He W, Li GY, 2021, Optimal fairness-aware resource supply and demand management for mobile edge computing, IEEE Wireless Communications Letters, Vol: 10, Pages: 678-682, ISSN: 2162-2337

This letter focuses on fairness-aware resource management in a multi-user and multi-server mobile edge computing (MEC) network, where resource supply and demand are jointly considered through resource allocation and task assignment, respectively. In particular, we aim to minimize the maximum task execution latency over all users subject to task and resource constraints. Although the optimization problem includes power, spectrum, hashrate, and task variables and is nonconvex in its primal form, it can be equivalently transformed into a more tractable program. Then, a low-complexity iterative algorithm is proposed to find the global optimum of the primal problem, since only a convex feasibility problem is tackled in each iteration. Simulation results in typical scenarios show that the proposed resource management strategy can reduce the maximum task execution latency of users by more than 15% compared with available baseline approaches.
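The "iterate over a convex feasibility problem" pattern for min-max objectives can be sketched with bisection on the latency target. The model below is a deliberate oversimplification, assuming each user's latency is work_i / rate_i with rates sharing a total budget R, which is not the letter's full problem but shows the algorithmic skeleton:

```python
# Bisection on the min-max latency: at each step, only a feasibility
# check for the candidate latency target t is required.
work = [4.0, 7.0, 2.0, 5.0]   # hypothetical per-user workloads
R = 10.0                      # hypothetical total rate budget

def feasible(t):
    # Target t is achievable iff the minimum required rates w_i / t fit in R.
    return sum(w / t for w in work) <= R

lo, hi = 1e-6, 1e6
for _ in range(100):
    mid = (lo + hi) / 2
    if feasible(mid):
        hi = mid          # target achievable -> try a smaller latency
    else:
        lo = mid
print(hi)                 # optimal max latency; here sum(work) / R = 1.8
```

In this toy model the feasibility check is a single inequality; in the letter's setting it is a convex feasibility problem over power, spectrum, hashrate, and task variables, but the outer iteration plays the same role.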

Journal article

Ye H, Li GY, Juang B-HF, 2021, Deep learning based end-to-end wireless communication systems without pilots, IEEE Transactions on Cognitive Communications and Networking, ISSN: 2332-7731

The recent development in machine learning, especially in deep neural networks (DNN), has enabled learning-based end-to-end communication systems, where DNNs are employed to substitute all modules at the transmitter and receiver. In this article, two end-to-end frameworks for frequency-selective channels and multi-input and multi-output (MIMO) channels are developed, where the wireless channel effects are modeled with an untrainable stochastic convolutional layer. The end-to-end framework is trained with mini-batches of input data and channel samples. Instead of using pilot information to implicitly or explicitly estimate the unknown channel parameters as in current communication systems, the transmitter DNN learns to transform the input data in a way that is robust to various channel conditions. The receiver consists of two DNN modules used for channel information extraction and data recovery, respectively. A bilinear production operation is employed to combine the features extracted from the channel information extraction module and the received signals. The combined features are further utilized in the data recovery module to recover the transmitted data. Compared with the conventional communication systems, performance improvement has been shown for frequency-selective channels and MIMO channels. Furthermore, the end-to-end system can automatically leverage the correlation in the channels and in the source data to improve the overall performance.

Journal article

Liu R, Yuan J, Li G, 2021, Resource management for mmWave ultra-reliable and low-latency communications, IEEE Transactions on Communications, Vol: 69, Pages: 1094-1108, ISSN: 0090-6778

Many mission-critical and latency-sensitive applications require ultra-reliable and low-latency communications (URLLC), which has been listed as a new service category of 5G New Radio (NR). To guarantee stringent latency and reliability constraints, URLLC services always exclusively occupy the spectrum and have priority over enhanced mobile broadband (eMBB) communications in the current coexistence scenario, which greatly affects the performance of eMBB services and degrades the utilization efficiency of the spectrum resource. On the other hand, millimeter-wave (mmWave) communications can fulfill the enormous throughput requirements of 5G cellular communications. In this paper, we introduce mmWave communications into URLLC systems to provide a more efficient coexistence of eMBB and URLLC. A novel mmWave URLLC system is first developed, where URLLC users are allowed to share spectrum resources with eMBB users. Besides, multi-connectivity technology, which enables users to access multiple base stations simultaneously, is introduced into the mmWave URLLC system to enhance reliability. Then, a resource management problem is formulated, which maximizes the throughput of eMBB users while guaranteeing the latency and reliability requirements of URLLC users. To obtain optimal solutions, we first divide it into three subproblems, i.e., power allocation, resource matching, and user pairing, and then solve them respectively. Simulation results demonstrate the data rate improvement compared with the traditional coexistence scenario without a reusing strategy. Moreover, the multi-connectivity functionality has a great effect on guaranteeing the latency and reliability requirements of URLLC users.

Journal article

Wu W, Gao X, Sun C, Li GY, Li G et al., 2021, Shallow underwater acoustic massive MIMO communications, IEEE Transactions on Signal Processing, Vol: 69, Pages: 1124-1139, ISSN: 1053-587X

The potential benefits of massive multiple-input multiple-output (MIMO) make it possible to achieve high-quality underwater acoustic (UWA) communications. Nevertheless, due to the wideband nature of UWA channels, existing massive MIMO techniques for radio frequency cannot be directly applied to UWA communications. This paper investigates a UWA massive MIMO system in the shallow-water environment, deploying large array apertures at both the transmitter and the receiver. We propose a beam-based UWA massive MIMO channel model and analyze its properties. Based on this model, we reveal that the transmit design for rate maximization can be performed in a dimension-reduced space related to the channel taps. Then, we prove that beam-domain transmission is optimal for rate maximization when the number of transducers is unlimited. Furthermore, if the number of hydrophones also tends to infinity, the optimal power allocation can be obtained simply by the water-filling algorithm, and the corresponding rate is positively correlated with the number of channel taps in the high signal-to-noise-ratio regime. Moreover, we devise a low-complexity algorithm to optimize the input covariance matrix for general cases. Simulation results illustrate the significant performance of the proposed algorithm and the high throughput achieved by massive MIMO.
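The water-filling allocation referred to in the asymptotic result is the classic one over parallel channels. A minimal sketch, with made-up channel gains and a unit power budget (not values from the paper):

```python
import numpy as np

def water_filling(gains, P):
    """Maximize sum log2(1 + p_k * g_k) subject to sum p_k = P, p_k >= 0."""
    gains = np.asarray(gains, dtype=float)
    g = np.sort(gains)[::-1]
    # Find the water level mu by testing how many channels stay active.
    for k in range(len(g), 0, -1):
        mu = (P + np.sum(1.0 / g[:k])) / k
        if mu - 1.0 / g[k - 1] >= 0:   # weakest active channel gets p >= 0
            break
    return np.maximum(mu - 1.0 / gains, 0.0)

p = water_filling([4.0, 1.0, 0.25], P=1.0)
print(p, p.sum())   # strongest channel gets the most power; powers sum to P
```

For the gains above, the weakest channel is switched off entirely and the remaining power is split so that power plus inverse gain is equal across the active channels, which is the defining property of the water level.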

Journal article

Li GY, Saad W, Ozgur A, Kairouz P, Qin Z, Hoydis J, Han Z, Gunduz D, Elmirghani J et al., 2021, Series editorial: inauguration issue of the series on machine learning in communications and networks, IEEE Journal on Selected Areas in Communications, Vol: 39, Pages: 1-3, ISSN: 0733-8716

In the era of the new generation of communication systems, data traffic is expected to continuously strain the capacity of future communication networks. Along with the remarkable growth in data traffic, new applications, such as wearable devices, autonomous systems, and the Internet of Things (IoT), continue to emerge and generate even more data traffic with vastly different requirements. This growth in the application domain brings forward an inevitable need for more intelligent processing, operation, and optimization of future communication networks.

Journal article

Gong Z, Wu L, Zhang Z, Dang J, Zhu B, Jiang H, Li GY et al., 2021, Joint TOA and DOA Estimation With CFO Compensation Using Large-Scale Array, IEEE Transactions on Signal Processing, Vol: 69, Pages: 4204-4218, ISSN: 1053-587X

Journal article

Liang Y-C, Zhang Q, Larsson EG, Li GY et al., 2020, Symbiotic Radio: Cognitive Backscattering Communications for Future Wireless Networks, IEEE Transactions on Cognitive Communications and Networking, Vol: 6, Pages: 1242-1255

Journal article

He H, Zhang M, Jin S, Wen C-K, Li GY et al., 2020, Model-Driven Deep Learning for Massive MU-MIMO With Finite-Alphabet Precoding, IEEE Communications Letters, Vol: 24, Pages: 2216-2220, ISSN: 1089-7798

Journal article

He Y, Zhang J, Jin S, Wen C-K, Li GY et al., 2020, Model-Driven DNN Decoder for Turbo Codes: Design, Simulation, and Experimental Results, IEEE Transactions on Communications, Vol: 68, Pages: 6127-6140, ISSN: 0090-6778

Journal article

Li Z, Qi C, Li GY, 2020, Low-Complexity Multicast Beamforming for Millimeter Wave Communications, IEEE Transactions on Vehicular Technology, Vol: 69, Pages: 12317-12320, ISSN: 0018-9545

Journal article

ElMossallamy MA, Zhang H, Song L, Seddik KG, Han Z, Li GY et al., 2020, Reconfigurable Intelligent Surfaces for Wireless Communications: Principles, Challenges, and Opportunities, IEEE Transactions on Cognitive Communications and Networking, Vol: 6, Pages: 990-1002

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
