Imperial College London

Professor Deniz Gündüz

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor in Information Processing
 
 
 

Contact

 

+44 (0)20 7594 6218 | d.gunduz | Website

 
 

Assistant

 

Ms Joan O'Brien +44 (0)20 7594 6316

 

Location

 

1016, Electrical Engineering, South Kensington Campus



Publications


375 results found

Bharath BN, Nagananda KG, Gunduz D, Poor HV et al., 2018, Learning-based content caching with time-varying popularity profiles, IEEE Global Communications Conference (GLOBECOM), Publisher: IEEE, ISSN: 2334-0983

Content caching at the small-cell base stations (sBSs) in a heterogeneous wireless network is considered. A cost function is proposed that captures the backhaul link load called the "offloading loss", which measures the fraction of the requested files that are not available in the sBS caches. Previous approaches minimize this offloading loss assuming that the popularity profile of the content is time-invariant and perfectly known. However, in many practical applications, the popularity profile is unknown and time-varying. Therefore, the analysis of caching with non-stationary and statistically dependent popularity profiles (assumed unknown, and hence, estimated) is studied in this paper from a learning-theoretic perspective. A probably approximately correct (PAC) result is derived, in which a high probability bound on the offloading loss difference, i.e., the error between the estimated (outdated) and the optimal offloading loss, is investigated. The difference is a function of the Rademacher complexity of the set of all probability measures on the set of cached content items, the β-mixing coefficient, 1/√t (t is the number of time slots), and a measure of discrepancy between the estimated and true popularity profiles.

Conference paper
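The "offloading loss" in the entry above has a simple operational form: the probability mass of requests that fall outside the cached set. A minimal sketch of that quantity, assuming a known popularity profile (the function name and example numbers are hypothetical, not from the paper):

```python
import numpy as np

def offloading_loss(popularity, cached):
    """Expected fraction of requests that miss the sBS cache.

    popularity: probability of each file being requested (sums to 1).
    cached: indices of the files stored in the cache.
    """
    popularity = np.asarray(popularity, dtype=float)
    mask = np.ones(len(popularity), dtype=bool)
    mask[list(cached)] = False          # True where the file is NOT cached
    return popularity[mask].sum()       # mass that must be served over backhaul

# Zipf-like popularity over 5 files; cache the 2 most popular ones
pop = np.array([0.4, 0.25, 0.15, 0.12, 0.08])
print(offloading_loss(pop, cached=[0, 1]))  # -> 0.35
```

The paper's learning-theoretic contribution is to bound how far this loss can drift from the optimum when `popularity` is estimated from non-stationary request data rather than known exactly.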

Ozfatura E, Gündüz D, 2018, Mobility-aware coded storage and delivery

Content caching at small-cell base stations (SBSs) is a promising method to mitigate the excessive backhaul load and delay, particularly for on-demand video streaming applications. A cache-enabled heterogeneous cellular network architecture is considered in this paper, where mobile users connect to multiple SBSs during a video downloading session, and the SBSs request files, or fragments of files, from the macro-cell base station (MBS) according to the user requests they receive. A novel coded storage and delivery scheme is introduced to reduce the load on the backhaul link from the MBS to the SBSs. The achievable backhaul delivery rate as well as the number of sub-files required to achieve this rate are studied for the proposed coded delivery scheme, and it is shown that the proposed scheme provides significant reduction in the number of sub-files required, making it more viable for practical applications.

Conference paper

Shi J, Liu L, Gündüz D, Ling C et al., 2018, Polar codes and polar lattices for the Heegard-Berger problem, Pages: 100-105

Explicit coding schemes are proposed to achieve the rate-distortion bound for the Heegard-Berger problem using polar codes. Specifically, a nested polar code construction is employed to achieve the rate-distortion bound for the binary case. The nested structure contains two optimal polar codes for lossy source coding and channel coding, respectively. Moreover, a similar nested polar lattice construction is employed for the Gaussian case. The proposed polar lattice is constructed by nesting a quantization polar lattice and an AWGN capacity-achieving polar lattice.

Conference paper

Mital N, Gunduz D, Ling C, 2018, Coded caching in a multi-server system with random topology, IEEE Wireless Communications and Networking Conference (WCNC), Publisher: IEEE, ISSN: 1525-3511

Cache-aided content delivery is studied in a multi-server system with P servers and K users, each equipped with a local cache memory. In the delivery phase, each user connects randomly to any ρ out of the P servers. Thanks to the availability of multiple servers, which model small base stations (SBSs) with limited storage capacity, user demands can be satisfied with reduced storage capacity at each server and reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to a single server serving all the users simultaneously. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery latency is studied for both successive and simultaneous transmission from the servers. It is shown that, with successive transmission, the achievable average delivery latency is comparable to that achieved by a single server; the gap between the two depends on ρ, the available redundancy across servers, and can be reduced by increasing the storage capacity at the SBSs.

Conference paper

Mohammadi Amiri M, Gunduz D, 2018, Cache-aided content delivery over erasure broadcast channels, IEEE Transactions on Communications, Vol: 66, Pages: 370-381, ISSN: 0090-6778

A cache-aided broadcast network is studied, in which a server delivers contents to a group of receivers over a packet erasure broadcast channel (BC). The receivers are divided into two sets with regards to their channel qualities: the weak and strong receivers, where all the weak receivers have statistically worse channel qualities than all the strong receivers. The weak receivers, in order to compensate for the high erasure probability they encounter over the channel, are equipped with cache memories of equal size, while the receivers in the strong set have no caches. Data can be pre-delivered to weak receivers’ caches over the off-peak traffic period before the receivers reveal their demands. Allowing arbitrary erasure probabilities for the weak and strong receivers, a joint caching and channel coding scheme, which divides each file into several subfiles, and applies a different caching and delivery scheme for each subfile, is proposed. It is shown that all the receivers, even those without any cache memories, benefit from the presence of caches across the network. An information theoretic trade-off between the cache size and the achievable rate is formulated. It is shown that the proposed scheme improves upon the state-of-the-art in terms of the achievable trade-off.

Journal article

Sezgin A, Gündüz D, 2018, Welcome

Conference paper

Giaconi G, Gunduz D, Poor HV, 2018, Smart meter privacy with renewable energy and an energy storage device, IEEE Transactions on Information Forensics and Security, Vol: 13, Pages: 129-142, ISSN: 1556-6013

A smart meter (SM) measures a consumer’s electricity consumption and reports it automatically to a utility provider (UP) in almost real time. Despite many advantages of SMs, their use also leads to serious concerns about consumer privacy. In this paper, SM privacy is studied by considering the presence of a renewable energy source (RES) and a rechargeable battery (RB), which can be used to partially hide the consumer’s energy consumption behavior. Privacy is measured by the information leakage rate, which denotes the average mutual information between the user’s real energy consumption and the energy requested from the grid, which the SM reads and reports to the UP. The impact of the knowledge of the amount of energy generated by the RES at the UP is also considered. The minimum information leakage rate is characterized as a computable information theoretic single-letter expression in the two extreme cases, that is, when the battery capacity is infinite or zero. Numerical results are presented for the finite battery capacity case to illustrate the potential privacy gains from the existence of an RB. It is shown that, while the information leakage rate decreases with increasing availability of an RES, larger storage capacity is needed to fully exploit the available energy to improve the privacy.

Journal article
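The information leakage rate used above is an average mutual information between the user's real consumption and the energy drawn from the grid. For a single-letter joint distribution it can be computed directly; a minimal sketch, with hypothetical toy distributions (not from the paper):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of the real demand X
    py = joint.sum(axis=0, keepdims=True)   # marginal of the grid load Y
    nz = joint > 0                          # skip zero-probability cells
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

# A perfectly masked grid load (independent of demand) leaks nothing...
independent = np.outer([0.5, 0.5], [0.5, 0.5])
print(mutual_information(independent))   # -> 0.0
# ...while reporting the demand verbatim leaks its full entropy (1 bit here).
verbatim = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(verbatim))      # -> 1.0
```

The renewable source and battery in the paper let the energy management policy shape this joint distribution, trading storage capacity for a smaller leakage rate.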

Varasteh M, Rassouli B, Simeone O, Gunduz D et al., 2017, Zero-delay source-channel coding with a one-bit ADC front end and correlated receiver side information, IEEE Transactions on Communications, Vol: 65, Pages: 5429-5444, ISSN: 0090-6778

Zero-delay transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel is considered with a one-bit analog-to-digital converter (ADC) front end and a correlated side information at the receiver. The design of the optimal encoder and decoder is studied for two different performance criteria, namely the mean squared error (MSE) distortion and the distortion outage probability (DOP), under an average power constraint on the channel input. For both criteria, necessary optimality conditions for the encoder and the decoder are derived, which are then used to numerically obtain encoder and decoder mappings that satisfy these conditions. Using these conditions, it is observed that the numerically optimized encoder (NOE) under the MSE distortion criterion is periodic, and its period increases with the correlation between the source and the receiver side information. For the DOP, it is instead seen that the NOE mappings periodically acquire positive and negative values, which decay to zero with increasing source magnitude, and the interval over which the mapping takes non-zero values becomes wider with the correlation between the source and the side information. Finally, inspired by the mentioned properties of the NOE mappings, parameterized encoder mappings with a small number of degrees of freedom are proposed for both distortion criteria, and their performance is compared with that of the NOE mappings.

Journal article

Rosas De Andraca F, Chen K-C, 2017, Social learning against data falsification in sensor networks, Conference on Complex Networks 2017, Publisher: Springer Verlag, Pages: 704-716, ISSN: 1860-949X

Sensor networks generate large amounts of geographically-distributed data. The conventional approach to exploit this data is to first gather it in a special node that then performs processing and inference. However, what happens if this node is destroyed, or even worse, if it is hijacked? To explore this problem, in this work we consider a smart attacker who can take control of critical nodes within the network and use them to inject false information. In order to face this critical security threat, we propose a novel scheme that enables data aggregation and decision-making over networks based on social learning, where the sensor nodes act resembling how agents make decisions in social networks. Our results suggest that social learning enables high network resilience, even when a significant portion of the nodes has been compromised by the attacker.

Conference paper

Ozfatura E, Gunduz D, 2017, Mobility and popularity-aware coded small-cell caching, IEEE Communications Letters, Vol: 22, Pages: 288-291, ISSN: 1089-7798

In heterogeneous cellular networks with caching capability, due to mobility of users and storage constraints of small-cell base stations (SBSs), users may not be able to download all of their requested content from the SBSs within the delay deadline of the content. In that case, the users are directed to the macro-cell base station (MBS) in order to satisfy the service quality requirement. Coded caching is exploited here to minimize the amount of data downloaded from the MBS, taking into account the mobility of the users as well as the popularity of the contents. An optimal distributed caching policy is presented when the delay deadline is below a certain threshold, and a distributed greedy caching policy is proposed when the delay deadline is relaxed.

Journal article

Abad MSH, Gunduz D, Ercetin O, 2017, Communication over a time correlated channel with an energy harvesting transmitter, International Symposium on Wireless Communication Systems (ISWCS), Publisher: IEEE, Pages: 331-336, ISSN: 2154-0225

In this work, communication over a time-correlated point-to-point wireless channel is studied for an energy harvesting (EH) transmitter. The model takes into account the time and energy cost of acquiring channel state information. At the beginning of each time slot, the EH transmitter has to choose among three possible actions: i) deferring the transmission to save its energy for future use, ii) transmitting without sensing, and iii) sensing the channel before transmission. The transmitter chooses its action to maximize the total expected discounted number of bits transmitted over an infinite time horizon. This problem is formulated as a partially observable Markov decision process (POMDP), which is then converted to an ordinary MDP by introducing a belief on the channel state, and the optimal policy is shown to exhibit a threshold behavior on the belief state, with battery-dependent threshold values. Optimal threshold values and the corresponding optimal performance are characterized through numerical simulations, and it is shown that having the sensing action, and intelligently using it to track the channel state, improves the achievable long-term throughput significantly.

Conference paper
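The belief-threshold structure described above can be illustrated with a two-state (Gilbert-Elliott) channel, which is a common model for time-correlated channels; the transition probabilities, threshold values, and function names below are all hypothetical, chosen only to show the mechanics:

```python
def propagate_belief(b, p_gg, p_bg):
    """One-step prediction of P(channel is GOOD) for a two-state Markov
    channel, when no new observation is made this slot.

    p_gg: P(good -> good), p_bg: P(bad -> good).
    """
    return b * p_gg + (1.0 - b) * p_bg

def choose_action(belief, battery, thresholds):
    """Battery-dependent double-threshold rule on the belief state.
    Illustrative only: the paper characterizes the optimal thresholds
    numerically, not in closed form.
    """
    defer_th, sense_th = thresholds[battery]  # one (defer, sense) pair per level
    if belief < defer_th:
        return "defer"     # channel likely bad: save the harvested energy
    if belief < sense_th:
        return "sense"     # uncertain: pay the energy/time cost to learn the state
    return "transmit"      # confident enough to transmit without sensing

# Thresholds > 1 at battery level 0 force "defer" when there is no energy.
thresholds = {0: (1.1, 1.1), 1: (0.3, 0.7), 2: (0.2, 0.5)}
b = propagate_belief(0.9, p_gg=0.8, p_bg=0.3)          # -> 0.75
print(b, choose_action(b, battery=2, thresholds=thresholds))
```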

Guler B, Gunduz D, Yener A, 2017, On the necessary conditions for transmitting correlated sources over a multiple access channel, IEEE Int’l Symposium on Information Theory (ISIT) 2017, Publisher: IEEE

We study the lossy communication of correlated sources over a multiple access channel (MAC). In particular, we provide a new set of necessary conditions for the achievability of a distortion pair over a given channel. The necessary conditions are then specialized to the case of bivariate Gaussian sources and doubly symmetric binary sources over a Gaussian multiple access channel. Our results indicate that the new necessary conditions provide the tightest conditions to date in certain cases.

Conference paper

Mohammadi Amiri M, Gunduz D, 2017, Decentralized caching and coded delivery over Gaussian broadcast channels, International Symposium on Information Theory (ISIT) 2017, Publisher: IEEE, Pages: 2785-2789, ISSN: 2157-8117

A cache-aided K-user Gaussian broadcast channel (BC) is considered. The transmitter has a library of N equal-rate files, from which each user demands one. The impact of the equal-capacity receiver cache memories on the minimum required transmit power to satisfy all user demands is studied. Decentralized caching with uniformly random demands is considered, and both the minimum average power (averaged over all demand combinations) and the minimum peak power (minimum power required to satisfy the worst-case demand combination) are studied. Upper and lower bounds are presented on the minimum required average and peak transmit power as a function of the cache capacity, assuming uncoded cache placement. The gaps between the upper and lower bounds on both the minimum peak and average power values are shown to be relatively small through numerical results, particularly for large cache capacities.

Conference paper

Rassouli B, Varasteh M, Gunduz D, 2017, Capacity region of a one-bit quantized Gaussian multiple access channel, ISIT 2017, Publisher: IEEE, Pages: 2633-2637

The capacity region of a two-transmitter Gaussian multiple access channel (MAC) under average input power constraints is studied, when the receiver employs a zero-threshold one-bit analog-to-digital converter (ADC). It is proved that the input distributions that achieve the boundary points of the capacity region are discrete. Based on the position of a boundary point, upper bounds on the number of the mass points of the corresponding distributions are derived. Finally, a conjecture on the sufficiency of K mass points in a point-to-point real AWGN with a K-bin ADC front end (symmetric or asymmetric) is settled.

Conference paper

Li Z, Oechtering T, Gunduz D, 2017, Smart meter privacy based on adversarial hypothesis testing, IEEE Int’l Symposium on Information Theory (ISIT) 2017, Publisher: IEEE

Privacy-preserving energy management is studied in the presence of a renewable energy source. It is assumed that the energy demand/supply from the energy provider is tracked by a smart meter. The resulting privacy leakage is measured through the probabilities of error in a binary hypothesis test, which tries to detect the consumer behavior based on the meter readings. An optimal privacy-preserving energy management policy maximizes the minimal Type II probability of error subject to a constraint on the Type I probability of error. When the privacy-preserving energy management policy is based on all the available information of energy demands, energy supplies, and hypothesis, the asymptotic exponential decay rate of the maximum minimal Type II probability of error is characterized by a divergence rate expression. Two special privacy-preserving energy management policies, the memoryless hypothesis-aware policy and the hypothesis-unaware policy with memory, are then considered and their performances are compared. Further, it is shown that the energy supply alphabet can be constrained to the energy demand alphabet without loss of optimality for the evaluation of a single-letter-divergence privacy-preserving guarantee.

Conference paper

Mohammadi Amiri M, Yang Q, Gunduz D, 2017, Decentralized caching and coded delivery with distinct cache capacities, IEEE Transactions on Communications, Vol: 65, Pages: 4657-4669, ISSN: 0090-6778

Decentralized proactive caching and coded delivery is studied in a content delivery network, where each user is equipped with a cache memory, not necessarily of equal capacity. Cache memories are filled in advance during the off-peak traffic period in a decentralized manner, i.e., without the knowledge of the number of active users, their identities, or their particular demands. User demands are revealed during the peak traffic period, and are served simultaneously through an error-free shared link. The goal is to find the minimum delivery rate during the peak traffic period that is sufficient to satisfy all possible demand combinations. A group-based decentralized caching and coded delivery scheme is proposed, and it is shown to improve upon the state-of-the-art in terms of the minimum required delivery rate when there are more users in the system than files. Numerical results indicate that the improvement is more significant as the cache capacities of the users become more skewed. A new lower bound on the delivery rate is also presented, which provides a tighter bound than the classical cut-set bound.

Journal article

Amiri MM, Gunduz D, 2017, Cache-aided data delivery over erasure broadcast channels, 2017 IEEE International Conference on Communications (ICC), Publisher: IEEE, ISSN: 1550-3607

A cache-aided erasure broadcast channel is studied. The receivers are divided into two sets: the weak and strong receivers, where the receivers in the same set all have the same erasure probability. The weak receivers, in order to compensate for the high erasure probability, are equipped with cache memories of equal size, while the receivers in the strong set have no caches. Data can be pre-delivered to weak receivers' caches over the off-peak traffic period before the receivers reveal their demands. A joint caching and channel coding scheme is proposed such that all the receivers, even the receivers without any cache memories, benefit from the presence of caches across the network. The trade-off between the cache size and the achievable rate is studied, and it is shown that the proposed scheme significantly improves the achievable trade-off upon the state-of-the-art.

Conference paper

Yang Q, Mohammadi Amiri M, Gunduz D, 2017, Audience retention rate aware coded video caching, IEEE ICC Workshop, Publisher: IEEE, ISSN: 2474-9133

Users often do not watch an online video content in its entirety, and abort the video before it is completed. This is captured by the notion of audience retention rate, which indicates the portion of a video users watch on average. A decentralized coded caching scheme, called partial coded caching (PCC), is proposed here to take into account both the popularity, and the audience retention rate of the video files in a database. The achievable average delivery rate of PCC is characterised over all possible demand combinations. Two different cache allocation schemes, called the optimal cache allocation (OCA) and the popularity based cache allocation (PCA), are proposed to allocate cache capacities among the different chunks of video files. Numerical results validate that the proposed coded caching scheme, either with the OCA or the PCA, outperforms conventional uncoded caching, as well as the state-of-the-art coded caching schemes that consider only file popularities.

Conference paper

Somuyiwa S, Gyorgy A, Gunduz D, 2017, Improved policy representation and policy search for proactive content caching in wireless networks, 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks, Publisher: IEEE

We study the problem of proactively pushing contents into a finite capacity cache memory of a user equipment in order to reduce the long-term average energy consumption in a wireless network. We consider an online social network (OSN) framework, in which new contents are generated over time and each content remains relevant to the user for a random time period, called the lifetime of the content. The user accesses the OSN through a wireless network at random time instants to download and consume all the relevant contents. Downloading contents has an energy cost that depends on the channel state and the number of downloaded contents. Our aim is to reduce the long-term average energy consumption by proactively caching contents at favorable channel conditions. In previous work, it was shown that the optimal caching policy is infeasible to compute (even with the complete knowledge of a stochastic model describing the system), and a simple family of threshold policies was introduced and optimised using the finite difference method. In this paper we improve upon both components of this approach: we use linear function approximation (LFA) to better approximate the considered family of caching policies, and apply the REINFORCE algorithm to optimise its parameters. Numerical simulations show that the new approach provides reduction in both the average energy cost and the running time for policy optimisation.

Conference paper
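The policy-search component above combines linear function approximation with the REINFORCE policy gradient. A minimal sketch of one REINFORCE update for a linearly parameterized cache/skip policy; the feature choice, episode data, and learning rate are hypothetical, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reinforce_step(theta, episode, lr=0.05):
    """One REINFORCE update for a Bernoulli cache/skip policy with linear
    function approximation: pi(cache | s) = sigmoid(theta . phi(s)).

    episode: list of (features, action, return) triples, where the return
    is e.g. the negated energy cost accumulated after that decision.
    """
    grad = np.zeros_like(theta)
    for phi, action, ret in episode:
        p_cache = sigmoid(theta @ phi)
        # ret * d/dtheta log pi(action | s), the REINFORCE score-function term
        grad += ret * (action - p_cache) * phi
    return theta + lr * grad

# Hypothetical 2-d features: (channel quality, remaining content lifetime)
theta = np.zeros(2)
episode = [(np.array([0.9, 0.5]), 1, 1.0),   # cached at a good channel: rewarded
           (np.array([0.1, 0.5]), 1, -1.0)]  # cached at a bad channel: penalized
theta = reinforce_step(theta, episode)
print(theta)   # weight on channel quality moves up
```

After the update, the policy caches more eagerly when the channel-quality feature is high, which is the qualitative behavior the threshold policies in these two papers formalize.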

Somuyiwa S, Gyorgy A, Gunduz D, 2017, Energy-efficient wireless content delivery with proactive caching, 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), Publisher: IEEE

We propose an intelligent proactive content caching scheme to reduce the energy consumption in wireless downlink. We consider an online social network (OSN) setting where new contents are generated over time, and remain relevant to the user for a random lifetime. Contents are downloaded to the user equipment (UE) through a time-varying wireless channel at an energy cost that depends on the channel state and the number of contents downloaded. The user accesses the OSN at random time instants, and consumes all the relevant contents. To reduce the energy consumption, we propose proactive caching of contents under favorable channel conditions to a finite capacity cache memory. Assuming that the channel quality (or equivalently, the cost of downloading data) is memoryless over time slots, we show that the optimal caching policy, which may replace contents in the cache with shorter remaining lifetime with contents at the server that remain relevant longer, has a threshold structure with respect to the channel quality. Since the optimal policy is computationally demanding in practice, we introduce a simplified caching scheme and optimize its parameters using policy search. We also present two lower bounds on the energy consumption. We demonstrate through numerical simulations that the proposed caching scheme significantly reduces the energy consumption compared to traditional reactive caching tools, and achieves close-to-optimal performance for a wide variety of system parameters.

Conference paper

Varasteh M, Rassouli B, Simeone O, Gunduz D et al., 2017, Zero-delay source-channel coding with a low-resolution ADC front end, IEEE Transactions on Information Theory, Vol: 64, Pages: 1241-1261, ISSN: 0018-9448

Motivated by the practical constraints arising in emerging sensor network and Internet-of-Things (IoT) applications, the zero-delay transmission of a Gaussian measurement over a real single-input multiple-output (SIMO) additive white Gaussian noise (AWGN) channel is studied with a low-resolution analog-to-digital converter (ADC) front end. Joint optimization of the encoder and the decoder mapping is tackled under both the mean squared error (MSE) distortion and the distortion outage probability (DOP) criteria, with an average power constraint on the channel input. Optimal encoder and decoder mappings are identified for a one-bit ADC front end under both criteria. For the MSE distortion, the optimal encoder mapping is shown to be non-linear in general, while it tends to a linear encoder in the low signal-to-noise ratio (SNR) regime, and to an antipodal digital encoder in the high SNR regime. This is in contrast to the optimality of linear encoding at all SNR values in the presence of a full-precision front end. For the DOP criterion, it is shown that the optimal encoder mapping is piecewise constant and can take only two opposite values when it is non-zero. For both the MSE distortion and the DOP criteria, necessary optimality conditions are then derived for K-level ADC front ends as well as front ends with multiple one-bit ADCs. These conditions are used to obtain numerically optimized solutions. Extensive numerical results are also provided in order to gain insights into the structure of the optimal encoding and decoding mappings.

Journal article

Murin Y, Kaspi Y, Dabora R, Gunduz D et al., 2017, On the energy-distortion tradeoff of Gaussian broadcast channels with feedback, Entropy, Vol: 19, ISSN: 1099-4300

This work studies the relationship between the energy allocated for transmitting a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless channel output feedback (GBCF) and the resulting distortion at the receivers. Our goal is to characterize the minimum transmission energy required for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. This minimum transmission energy is defined as the energy-distortion tradeoff (EDT). We derive a lower bound and three upper bounds on the optimal EDT. For the upper bounds, we analyze the EDT of three transmission schemes: two schemes are based on separate source-channel coding and apply encoding over multiple samples of source pairs, and the third scheme is a joint source-channel coding scheme that applies uncoded linear transmission on a single source-sample pair and is obtained by extending the Ozarow–Leung (OL) scheme. Numerical simulations show that the EDT of the OL-based scheme is close to that of the better of the two separation-based schemes, which makes the OL scheme attractive for energy-efficient, low-latency and low-complexity source transmission over GBCFs.

Journal article

Sreekumar S, Gunduz D, 2017, Distributed hypothesis testing over noisy channels, 2017 IEEE International Symposium on Information Theory (ISIT), Publisher: IEEE

A distributed binary hypothesis testing (HT) problem, in which multiple observers transmit their observations to a detector over noisy channels, is studied. The goal of the detector, which also has access to its own observations, is to decide between two hypotheses for the joint distribution of the data. Single-letter upper and lower bounds on the optimal type 2 error exponent (T2-EE), when the type 1 error probability vanishes with the block-length, are obtained. These bounds coincide and characterize the optimal T2-EE when only a single helper is involved. Our result shows that the optimal T2-EE depends on the marginal distributions of the data and the channels rather than their joint distribution. However, an operational separation between HT and channel coding does not hold, and the optimal T2-EE is achieved by generating channel inputs correlated with observed data.

Conference paper

Murin Y, Kaspi Y, Dabora R, Gunduz D et al., 2017, Finite-length linear schemes for joint source-channel coding over Gaussian broadcast channels with feedback, IEEE Transactions on Information Theory, Vol: 63, Pages: 2737-2772, ISSN: 0018-9448

We study linear encoding for a pair of correlated Gaussian sources transmitted over a two-user Gaussian broadcast channel in the presence of unit-delay noiseless feedback, abbreviated as the GBCF. Each pair of source samples is transmitted using a linear transmission scheme in a finite number of channel uses. We investigate three linear transmission schemes: a scheme based on the Ozarow-Leung (OL) code, a scheme based on the linear quadratic Gaussian (LQG) code of Ardestanizadeh et al., and a novel scheme derived in this work using a dynamic programming (DP) approach. For the OL and LQG schemes we present lower and upper bounds on the minimal number of channel uses needed to achieve a target mean-square error (MSE) pair. For the LQG scheme in the symmetric setting, we identify the optimal scaling of the sources, which results in a significant improvement of its finite horizon performance, and, in addition, characterize the (exact) minimal number of channel uses required to achieve a target MSE. Finally, for the symmetric setting, we show that for any fixed and finite number of channel uses, the DP scheme achieves an MSE lower than the MSE achieved by either the LQG or the OL schemes.

Journal article

Mohammadi Amiri M, Yang Q, Gunduz D, 2017, Decentralized coded caching with distinct cache capacities, Asilomar Conference on Signals, Systems and Computers, Publisher: IEEE, Pages: 734-738, ISSN: 1058-6393

Decentralized coded caching is studied for a content server with N files, each of size F bits, serving K active users, each equipped with a cache of distinct capacity. It is assumed that the users' caches are filled in advance during the off-peak traffic period without the knowledge of the number of active users, their identities, or the particular demands. User demands are revealed during the peak traffic period, and are served simultaneously through an error-free shared link. A new decentralized coded caching scheme is proposed for this scenario, and it is shown to improve upon the state-of-the-art in terms of the required delivery rate over the shared link, when there are more users in the system than the number of files. Numerical results indicate that the improvement becomes more significant as the cache capacities of the users become more skewed.

Conference paper

Zhao J, Simeone O, Gunduz D, Gomez-Barquero et al., 2017, Non-orthogonal unicast and broadcast transmission via joint beamforming and LDM in cellular networks, 2016 IEEE Global Communications Conference (GLOBECOM)

Research efforts to incorporate multicast and broadcast transmission into the cellular network architecture are gaining momentum, particularly for multimedia streaming applications. Layered division multiplexing (LDM), a form of nonorthogonal multiple access (NOMA), can potentially improve unicast throughput and broadcast coverage with respect to traditional orthogonal frequency division multiplexing (FDM) or time division multiplexing (TDM), by simultaneously using the same frequency and time resources for multiple unicast or broadcast transmissions. In this paper, the performance of LDM-based unicast and broadcast transmission in a cellular network is studied by assuming a single frequency network (SFN) operation for the broadcast layer, while allowing for arbitrarily clustered cooperation for the transmission of unicast data streams. Beamforming and power allocation between unicast and broadcast layers, and hence the so-called injection level in the LDM literature, are optimized with the aim of minimizing the sum-power under constraints on the user-specific unicast rates and on the common broadcast rate. The problem is tackled by means of successive convex approximation (SCA) techniques, as well as through the calculation of performance upper bounds by means of semidefinite relaxation (SDR). Numerical results are provided to compare the orthogonal and non-orthogonal multiplexing of broadcast and unicast traffic.

Conference paper

Mohammadi Amiri M, Gunduz D, 2017, Improved delivery rate-cache capacity trade-off for centralized coded caching, International Symposium on Information Theory and Its Applications (ISITA), Publisher: IEEE

The centralized coded caching problem, in which a server holding N distinct files, each of size F bits, serves K users, each equipped with a cache of capacity MF bits, is considered. The server is allowed to proactively cache contents into the user terminals during the placement phase, without knowing the particular user requests. After the placement phase, each user requests one of the N files from the server, and all the requests are satisfied simultaneously by the server through an error-free shared link during the delivery phase. A novel coded caching algorithm is proposed, which is shown to achieve a smaller delivery rate than the existing coded caching schemes in the literature for a range of N and K values, particularly when the number of files is larger than the number of users in the system.
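The baseline such schemes are measured against is the well-known Maddah-Ali and Niesen scheme, whose worst-case delivery rate at the memory points M = tN/K (t an integer, with at least as many files as users) is K(1 - M/N)/(1 + KM/N). A quick calculator for that baseline rate (an assumption-laden sketch, not the paper's improved scheme):

```python
from fractions import Fraction

def mn_delivery_rate(N, K, M):
    """Worst-case delivery rate (in files) of the Maddah-Ali--Niesen
    centralized scheme at the memory points M = tN/K, t = 0..K.
    This is the baseline the paper improves on, not its proposed scheme."""
    t = Fraction(M) * K / N
    assert t.denominator == 1, "exact only at M = tN/K with integer t"
    t = int(t)
    return Fraction(K - t, t + 1)  # = K(1 - M/N) / (1 + KM/N)

# Example: N = 10 files, K = 5 users, M = 2 (t = 1) gives rate 2 files.
```

At M = 0 the rate is K (every file sent uncoded), and at M = N it is 0; the multicast coding gain is the 1/(t + 1) factor.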

Conference paper

Mohammadi Amiri M, Gunduz D, 2017, Fundamental limits of coded caching: improved delivery rate-cache capacity tradeoff, IEEE Transactions on Communications, Vol: 65, Pages: 806-815, ISSN: 0090-6778

A centralized coded caching system, consisting of a server delivering N popular files, each of size F bits, to K users through an error-free shared link, is considered. It is assumed that each user is equipped with a local cache memory with capacity MF bits, and contents can be proactively cached into these caches over a low traffic period, however, without the knowledge of the user demands. During the peak traffic period, each user requests a single file from the server. The goal is to minimize the number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity of M = (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature, when K > N ≥ 3. Furthermore, we argue that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K ≤ M ≤ (N -1)/K, when K > N ≥ 3.

Journal article

Mohammadi Amiri M, Gunduz D, 2017, Cache-aided data delivery over erasure broadcast channels, IEEE International Conference on Communications, Publisher: Institute of Electrical and Electronics Engineers (IEEE), ISSN: 0536-1486

A cache-aided erasure broadcast channel is studied. The receivers are divided into two sets, the weak and the strong receivers, where all the receivers in the same set have the same erasure probability. To compensate for their high erasure probability, the weak receivers are equipped with cache memories of equal size, while the receivers in the strong set have no caches. Data can be pre-delivered to the weak receivers' caches over the off-peak traffic period, before the receivers reveal their demands. A joint caching and channel coding scheme is proposed such that all the receivers, even those without any cache memories, benefit from the presence of caches across the network. The trade-off between the cache size and the achievable rate is studied, and it is shown that the proposed scheme significantly improves upon the state-of-the-art trade-off.
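A back-of-the-envelope model (not the paper's joint caching and channel coding scheme) shows why caches at the weak receivers help: over a memoryless erasure channel with erasure probability ε, roughly B/(1 - ε) channel uses are needed to deliver B bits that are not already cached:

```python
def channel_uses_needed(file_bits, cached_bits, erasure_prob):
    """Approximate channel uses to deliver a file over a memoryless erasure
    channel when `cached_bits` of it already sit in the receiver's cache.
    A crude point-to-point estimate, ignoring the broadcast coding gains
    the paper exploits."""
    return (file_bits - cached_bits) / (1 - erasure_prob)

weak = channel_uses_needed(1000, 400, 0.5)    # weak receiver, 40% cached
strong = channel_uses_needed(1000, 0, 0.1)    # strong receiver, no cache
```

In this toy model the cache lets the weak receiver (erasure probability 0.5) finish in a comparable number of channel uses to the strong one (erasure probability 0.1); the paper's scheme goes further by coding jointly so that even cacheless strong receivers benefit.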

Conference paper

Tan O, Gomez-Vilardebo J, Gunduz D, 2017, Privacy-cost trade-offs in demand-side management with storage, IEEE Transactions on Information Forensics & Security, Vol: 12, Pages: 1458-1469, ISSN: 1556-6013

Demand-side energy management (EM) is studied from a privacy-cost trade-off perspective, considering time-of-use pricing and the presence of an energy storage unit. Privacy is measured as the variation of the power withdrawn from the grid from a fixed target value. Assuming non-causal knowledge of the household's aggregate power demand profile and the electricity prices at the energy management unit (EMU), the privacy-cost trade-off is formulated as a convex optimization problem, and a low-complexity backward water-filling algorithm is proposed to compute the optimal EM policy. The problem is also studied in the online setting, assuming that the power demand profile is known to the EMU only causally, and the optimal EM policy is obtained numerically through dynamic programming (DP). Due to the high computational cost of DP, a low-complexity heuristic EM policy with performance close to the optimal online solution is also proposed, exploiting the water-filling algorithm obtained in the offline setting. As an alternative, the information theoretic leakage rate is also evaluated, and is shown to follow a similar trend to the load variance, which supports the validity of the load variance as a measure of privacy. Finally, the privacy-cost trade-off, and the impact of the size of the storage unit on this trade-off, are studied through numerical simulations using real smart meter data in both the offline and online settings.
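As a toy illustration of the load-flattening idea (not the paper's water-filling or DP policies, and ignoring prices), a greedy causal battery policy can push the grid load toward a fixed target and so reduce the load-variance privacy metric:

```python
def flatten_load(demand, target, capacity, level=0.0):
    """Toy causal EM policy: charge the battery when demand is below the
    target and discharge when above, within the energy capacity. Purely
    illustrative; the paper uses offline water-filling and online DP."""
    grid = []
    for d in demand:
        desired = target - d                           # >0: charge, <0: discharge
        delta = max(-level, min(capacity - level, desired))
        level += delta
        grid.append(d + delta)                         # power drawn from the grid
    return grid

def load_variance(grid, target):
    """Privacy metric from the paper: variance of grid load around the target."""
    return sum((g - target) ** 2 for g in grid) / len(grid)

demand = [2.0, 1.0, 4.0, 3.0]
grid = flatten_load(demand, target=2.5, capacity=2.0)
```

On this toy profile the battery fully flattens the grid load at the target, driving the load variance to zero, so the grid learns nothing about the within-day demand shape; with a smaller `capacity` the residual variance grows, which is the storage-size effect the paper quantifies.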

Journal article

