Publications
63 results found
Vlaski S, Sayed AH, 2021, Graph-homomorphic perturbations for private decentralized learning, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 5240-5244
Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information through repeated local exchanges of intermediate estimates. Such structures are particularly appealing in situations where agents may be hesitant to share raw data due to privacy concerns. Nevertheless, in the absence of additional privacy-preserving mechanisms, the exchange of local estimates, which are generated based on private data, can allow for the inference of the data itself. The most common mechanism for guaranteeing privacy is the addition of perturbations to local estimates before broadcasting. These perturbations are generally chosen independently at every agent, resulting in a significant performance loss. We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible (to first order in the step-size) to the network centroid, while preserving privacy guarantees. The analysis allows for general nonconvex loss functions, and is hence applicable to a large number of machine learning and signal processing problems, including deep learning.
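The nullspace idea in the abstract above can be illustrated with a toy sketch (not the paper's exact construction): assuming a doubly-stochastic combination matrix, the network centroid is the plain average of the agents' estimates, so perturbations that sum to zero across agents leave the centroid untouched while still masking each individual estimate. All dimensions and noise levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 5  # number of agents, parameter dimension

# Local intermediate estimates held by each agent (one row per agent).
estimates = rng.standard_normal((K, d))

# The common scheme: each agent adds its own independent perturbation.
independent_noise = rng.standard_normal((K, d))

# Centroid-invisible alternative: project the noise onto the subspace of
# vectors that sum to zero across agents. With a doubly-stochastic
# combination matrix the centroid is the plain average, so zero-sum noise
# leaves it unchanged.
zero_sum_noise = independent_noise - independent_noise.mean(axis=0)

centroid_before = estimates.mean(axis=0)
centroid_after = (estimates + zero_sum_noise).mean(axis=0)

assert np.allclose(centroid_before, centroid_after)
```

Each individual row of `estimates + zero_sum_noise` is still perturbed, which is what provides the masking; only the average across the network is preserved.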
Ntemos K, Bordignon V, Vlaski S, et al., 2021, Social learning under inferential attacks, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 5479-5483
A common assumption in the social learning literature is that agents exchange information in an unselfish manner. In this work, we consider the scenario where a subset of agents aims at driving the network beliefs to the wrong hypothesis. The adversaries are unaware of the true hypothesis. However, they will "blend in" by behaving similarly to the other agents and will manipulate the likelihood functions used in the belief update process to launch inferential attacks. We characterize the conditions under which the network is misled, and then show that such attacks can succeed by constructing strategies that the malicious agents can adopt for this purpose. We examine situations in which the agents have either minimal or no information about the network model.
Vlaski S, Sayed AH, 2021, Distributed learning in non-convex environments-Part II: polynomial escape from saddle-points, IEEE Transactions on Signal Processing, Vol: 69, Pages: 1257-1270, ISSN: 1053-587X
The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with exchange of iterates over neighborhoods. In Part I [3] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point. We established expected descent in non-convex environments in the large-gradient regime and introduced a short-term model to examine the dynamics over finite-time horizons. Using this model, we establish in this work that the diffusion strategy is able to escape from strict saddle-points in O(1/μ) iterations, where μ denotes the step-size; it is also able to return approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on the polynomial escape from saddle-points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process.
Vlaski S, Sayed AH, 2021, Distributed learning in non-convex environments-Part I: agreement at a linear rate, IEEE Transactions on Signal Processing, Vol: 69, Pages: 1242-1256, ISSN: 1053-587X
Driven by the need to solve increasingly complex optimization problems in signal processing and machine learning, there has been increasing interest in understanding the behavior of gradient-descent algorithms in non-convex environments. Most available works on distributed non-convex optimization problems focus on the deterministic setting where exact gradients are available at each agent. In this work and its Part II, we consider stochastic cost functions, where exact gradients are replaced by stochastic approximations and the resulting gradient noise persistently seeps into the dynamics of the algorithm. We establish that the diffusion learning strategy continues to yield meaningful estimates in non-convex scenarios in the sense that the iterates generated by the individual agents will cluster in a small region around the network centroid. We use this insight to motivate a short-term model for network evolution over a finite horizon. In Part II of this work, we leverage this model to establish descent of the diffusion strategy through saddle points in O(1/μ) steps, where μ denotes the step-size, and the return of approximately second-order stationary points in a polynomial number of iterations.
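The diffusion strategy analyzed in Parts I and II follows an adapt-then-combine pattern: each agent takes a local stochastic-gradient step, then averages intermediate iterates over its neighborhood. The sketch below is a minimal, hypothetical instance with quadratic local costs and a ring topology; the step-size, topology, and noise level are illustrative choices, not the papers' setup. It exhibits the clustering behavior described above: all iterates end up in a small region around the network centroid.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d, mu, T = 5, 3, 0.01, 2000  # agents, dimension, step-size, iterations

# Common minimizer of the (illustrative) quadratic costs E||w - x_k||^2.
w_star = np.ones(d)

# Doubly-stochastic combination matrix for a ring topology with self-loops.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k + 1) % K] = 0.25
    A[k, (k - 1) % K] = 0.25

W = rng.standard_normal((K, d))  # one iterate per agent (rows)
for _ in range(T):
    # Adapt: local stochastic-gradient step; noisy samples stand in for
    # the stochastic gradient approximation.
    samples = w_star + 0.5 * rng.standard_normal((K, d))
    psi = W - mu * (W - samples)
    # Combine: average intermediate iterates over neighborhoods.
    W = A @ psi

centroid = W.mean(axis=0)
spread = np.max(np.linalg.norm(W - centroid, axis=1))
```

Despite persistent gradient noise, `spread` is small (on the order of the step-size), while the centroid itself hovers near `w_star`.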
Shumovskaia V, Ntemos K, Vlaski S, et al., 2021, Online Graph Learning from Social Interactions, Pages: 1263-1267, ISSN: 1058-6393
Social learning algorithms provide models for the formation of opinions over social networks resulting from local reasoning and peer-to-peer exchanges. Interactions occur over an underlying graph topology, which describes the flow of information and relative influence between pairs of agents. For a given graph topology, these algorithms allow for the prediction of formed opinions. In this work, we study the inverse problem. Given a social learning model and observations of the evolution of beliefs over time, we aim at identifying the underlying graph topology. The learned graph allows for the inference of pairwise influence between agents, the overall influence agents have over the behavior of the network, as well as the flow of information through the social network. The proposed algorithm is online in nature and can adapt dynamically to changes in the graph topology or the true hypothesis.
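The forward social-learning model whose belief trajectories such a graph-learning algorithm would observe is standard: a local Bayesian update with the private observation, followed by geometric (log-linear) averaging of neighbors' intermediate beliefs over the graph. The sketch below is an illustrative instance with Gaussian likelihoods and a ring topology (all parameters are assumptions for the example, not from the paper); the beliefs of all agents concentrate on the true hypothesis.

```python
import numpy as np

rng = np.random.default_rng(3)
K, H, T = 4, 3, 250  # agents, hypotheses, iterations
true_hyp = 0

# Doubly-stochastic combination matrix (ring with self-loops).
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k + 1) % K] = 0.25
    A[k, (k - 1) % K] = 0.25

# Gaussian likelihood models: hypothesis h predicts observation mean h.
means = np.arange(H, dtype=float)

beliefs = np.full((K, H), 1.0 / H)  # uniform priors
for _ in range(T):
    obs = means[true_hyp] + rng.standard_normal(K)  # private observations
    # Local Bayesian update with the agent's own likelihood function.
    lik = np.exp(-0.5 * (obs[:, None] - means[None, :]) ** 2)
    psi = beliefs * lik
    psi /= psi.sum(axis=1, keepdims=True)
    # Geometric averaging of neighbors' intermediate beliefs.
    log_mu = A @ np.log(np.clip(psi, 1e-300, None))
    beliefs = np.exp(log_mu - log_mu.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)
```

Given the sequence of `beliefs` (or the intermediate `psi`) over time, the inverse problem studied in the abstract is to recover the matrix `A`.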
Kayaalp M, Vlaski S, Sayed AH, 2021, Distributed meta-learning with networked agents, 29th European Signal Processing Conference (EUSIPCO), Publisher: European Association for Signal Processing (EURASIP), Pages: 1361-1365, ISSN: 2076-1465
- Citations: 3
Rizk E, Vlaski S, Sayed AH, 2020, Dynamic federated learning, 21st IEEE International Workshop on Signal Processing Advances in Wireless Communications (IEEE SPAWC), Publisher: IEEE, Pages: 1-5, ISSN: 2325-3789
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments. While many federated learning architectures process data in an online manner, and are hence adaptive by nature, most performance analyses assume static optimization problems and offer no guarantees in the presence of drifts in the problem solution or data characteristics. We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data. Under a nonstationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm. The results clarify the trade-off between convergence and tracking performance.
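The federated model described above, in which a random subset of available agents performs local updates at every iteration before the server aggregates, can be sketched as follows. The quadratic local costs, participation rate, number of local steps, and step-size are illustrative assumptions for the example, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
K, d, mu = 20, 4, 0.05          # agents, dimension, step-size
local_steps, rounds = 5, 300    # local SGD steps per round, rounds

w_star = np.full(d, 2.0)        # minimizer of the aggregate problem
w = np.zeros(d)                 # global model held by the server

for _ in range(rounds):
    # A random subset of agents participates in this round.
    participants = rng.choice(K, size=K // 4, replace=False)
    local_models = []
    for _k in participants:
        w_local = w.copy()
        for _ in range(local_steps):
            # Noisy sample stands in for the stochastic gradient on
            # the agent's local data.
            sample = w_star + 0.5 * rng.standard_normal(d)
            w_local -= mu * (w_local - sample)
        local_models.append(w_local)
    # Server aggregates the participating agents' models.
    w = np.mean(local_models, axis=0)

error = np.linalg.norm(w - w_star)
```

In a nonstationary variant, `w_star` would drift between rounds (the random-walk model in the abstract), and the residual `error` would then trade off against the learning rate `mu` as described.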
Nassif R, Vlaski S, Richard C, et al., 2020, Multitask learning over graphs: an approach for distributed, streaming machine learning, IEEE Signal Processing Magazine, Vol: 37, Pages: 14-25, ISSN: 1053-5888
- Citations: 43
Nassif R, Vlaski S, Richard C, et al., 2020, Learning over multitask graphs-Part II: performance analysis, IEEE Open Journal of Signal Processing, Vol: 1, Pages: 46-63
- Citations: 6
Nassif R, Vlaski S, Richard C, et al., 2020, Learning over multitask graphs-Part I: stability analysis, IEEE Open Journal of Signal Processing, Vol: 1, Pages: 28-45
- Citations: 15
Vlaski S, Rizk E, Sayed AH, 2020, Second-order guarantees in federated learning, 54th Asilomar Conference on Signals, Systems and Computers, Publisher: IEEE, Pages: 915-922, ISSN: 1058-6393
- Citations: 1
Vlaski S, Sayed AH, 2020, Second-order guarantees in centralized, federated and decentralized nonconvex optimization, Communications in Information and Systems, Vol: 20, Pages: 353-388, ISSN: 1526-7555
- Citations: 4
Vlaski S, Rizk E, Sayed AH, 2020, Tracking performance of online stochastic learners, IEEE Signal Processing Letters, Vol: 27, Pages: 1385-1389, ISSN: 1070-9908
- Citations: 2
Vlaski S, Sayed AH, 2020, Linear speedup in saddle-point escape for decentralized non-convex optimization, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Publisher: IEEE, Pages: 8589-8593, ISSN: 1520-6149
Nassif R, Vlaski S, Sayed AH, 2020, Adaptation and learning over networks under subspace constraints-Part II: performance analysis, IEEE Transactions on Signal Processing, Vol: 68, Pages: 2948-2962, ISSN: 1053-587X
- Citations: 5
Nassif R, Vlaski S, Sayed AH, 2020, Adaptation and learning over networks under subspace constraints-Part I: stability analysis, IEEE Transactions on Signal Processing, Vol: 68, Pages: 1346-1360, ISSN: 1053-587X
- Citations: 16
Nassif R, Vlaski S, Richard C, et al., 2019, A regularization framework for learning over multitask graphs, IEEE Signal Processing Letters, Vol: 26, Pages: 297-301, ISSN: 1070-9908
- Citations: 13
Ying B, Yuan K, Vlaski S, et al., 2019, Stochastic learning under random reshuffling with constant step-sizes, IEEE Transactions on Signal Processing, Vol: 67, Pages: 474-489, ISSN: 1053-587X
- Citations: 10
Vlaski S, Sayed AH, 2019, Polynomial escape-time from saddle points in distributed non-convex optimization, 8th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Publisher: IEEE, Pages: 171-175
- Citations: 1
Nassif R, Vlaski S, Sayed AH, 2019, Distributed inference over networks under subspace constraints, 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 5232-5236, ISSN: 1520-6149
- Citations: 9
Vlaski S, Sayed AH, 2019, Diffusion learning in non-convex environments, 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 5262-5266, ISSN: 1520-6149
- Citations: 8
Nassif R, Vlaski S, Sayed AH, 2019, Distributed learning over networks under subspace constraints, 53rd Asilomar Conference on Signals, Systems, and Computers (ACSSC), Publisher: IEEE, Pages: 194-198, ISSN: 1058-6393
Merched R, Vlaski S, Sayed AH, 2019, Enhanced diffusion learning over networks, 27th European Signal Processing Conference (EUSIPCO), Publisher: IEEE, ISSN: 2076-1465
Vlaski S, Maretic HP, Nassif R, et al., 2018, Online graph learning from sequential data, IEEE Data Science Workshop (DSW), Publisher: IEEE, Pages: 190-194
- Citations: 18
Nassif R, Vlaski S, Sayed AH, 2018, Distributed inference over multitask graphs under smoothness, IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Publisher: IEEE, Pages: 631-635, ISSN: 2325-3789
- Citations: 3
Ying B, Yuan K, Vlaski S, et al., 2017, On the performance of random reshuffling in stochastic learning, Information Theory and Applications Workshop (ITA), Publisher: IEEE
- Citations: 1
Basir-Kazeruni S, Vlaski S, Salami H, et al., 2017, A blind adaptive stimulation artifact rejection (ASAR) engine for closed-loop implantable neuromodulation systems, 8th International IEEE/EMBS Conference on Neural Engineering (NER), Publisher: IEEE, Pages: 186-189, ISSN: 1948-3546
- Citations: 15
Yuan K, Ying B, Vlaski S, et al., 2016, Stochastic gradient descent with finite samples sizes, 26th IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Publisher: IEEE, ISSN: 2161-0363
- Citations: 4
Vlaski S, Vandenberghe L, Sayed AH, 2016, Diffusion stochastic optimization with non-smooth regularizers, 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 4149-4153, ISSN: 1520-6149
- Citations: 16
Vlaski S, Ying B, Sayed AH, 2016, The brain strategy for online learning, IEEE Global Conference on Signal and Information Processing (GlobalSIP), Publisher: IEEE, Pages: 1285-1289, ISSN: 2376-4066
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.