Citation

BibTeX format

@article{DiGiovanni:2023,
author = {Di Giovanni, F. and Rowbottom, J. and Chamberlain, B. P. and Markovich, T. and Bronstein, M. M.},
journal = {Transactions on Machine Learning Research},
title = {Understanding convolution on graphs via energies},
volume = {2023},
year = {2023}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB  - Graph Neural Networks (GNNs) typically operate by message-passing, where the state of a node is updated based on the information received from its neighbours. Most message-passing models act as graph convolutions, where features are mixed by a shared, linear transformation before being propagated over the edges. On node-classification tasks, graph convolutions have been shown to suffer from two limitations: poor performance on heterophilic graphs, and over-smoothing. It is a common belief that both phenomena occur because such models behave as low-pass filters, meaning that the Dirichlet energy of the features decreases along the layers, incurring a smoothing effect that ultimately makes features no longer distinguishable. In this work, we rigorously prove that simple graph-convolutional models can actually enhance high frequencies and even lead to an asymptotic behaviour we refer to as over-sharpening, the opposite of over-smoothing. We do so by showing that linear graph convolutions with symmetric weights minimize a multi-particle energy that generalizes the Dirichlet energy; in this setting, the weight matrices induce edge-wise attraction (repulsion) through their positive (negative) eigenvalues, thereby controlling whether the features are being smoothed or sharpened. We also extend the analysis to non-linear GNNs, and demonstrate that some existing time-continuous GNNs are instead always dominated by the low frequencies. Finally, we validate our theoretical findings through ablations and real-world experiments.
AU  - Di Giovanni, F.
AU  - Rowbottom, J.
AU  - Chamberlain, B. P.
AU  - Markovich, T.
AU  - Bronstein, M. M.
PY  - 2023///
TI  - Understanding convolution on graphs via energies
T2  - Transactions on Machine Learning Research
VL  - 2023
ER  - 
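
Note on the Dirichlet energy

For readers of this record, the Dirichlet energy mentioned in the abstract can be sketched in one common normalized form (the notation below is illustrative; the exact normalization and the generalized multi-particle energy are defined in the article itself and are not reproduced here):

\mathcal{E}^{\mathrm{Dir}}(\mathbf{F}) = \frac{1}{2} \sum_{(i,j) \in E} \left\lVert \frac{\mathbf{f}_i}{\sqrt{d_i}} - \frac{\mathbf{f}_j}{\sqrt{d_j}} \right\rVert^2

where \mathbf{f}_i is the feature vector of node i and d_i its degree. In the gradient-flow reading described in the abstract, a layer update that moves features along -\nabla \mathcal{E} decreases this energy (smoothing, i.e. low-pass behaviour), while the over-sharpening regime the paper identifies corresponds to the opposite asymptotic behaviour, with the energy growing along the layers.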