
Third CodeGNN Day

This edition took place in Caen on December 8, 2023.

  • Jason Piquenot: G²N²: Grammatical Graph Neural Network and more.
  • This presentation introduces a framework for formally establishing a connection between a fragment of an algebraic language and a Graph Neural Network (GNN). The framework leverages Context-Free Grammars (CFGs) to organize algebraic operations into generative rules that can be translated into a GNN layer model. As CFGs derived directly from a language tend to contain redundant rules and variables, we present a grammar reduction scheme. By applying this strategy to the matrix language MATLANG, we define a CFG that conforms to the third-order Weisfeiler-Lehman (3-WL) test. From this 3-WL CFG, we derive a GNN model, named G²N², which is provably 3-WL compliant. Through various experiments, we demonstrate the superior efficiency of G²N² compared to other 3-WL GNNs across numerous downstream tasks. In particular, one experiment highlights the benefits of grammar reduction within our framework; a toy illustration of the grammar-to-layer idea is sketched after this list. This work is related to a submission to ICLR 2024.

  • Yann Tirad Gatel: GNNs on spatio-temporal graphs for action recognition.
  • Stevan Stanovic: Impact of pooling methods on over-squashing and over-smoothing.
  • Convolutional Neural Networks (CNNs) have enabled major advances in image classification through convolution and pooling. In particular, image pooling transforms a connected discrete lattice into a reduced lattice with the same connectivity, and allows reduction functions to take every pixel of an image into account. However, there is no pooling that satisfies these properties for graphs. Indeed, traditional graph pooling methods suffer from at least one of the following drawbacks: graph disconnection or over-connection, a low decimation ratio, or the deletion of large parts of the graph. In this presentation, we present three pooling methods and the effect of pooling on two major shortcomings of convolution operations, namely over-smoothing and over-squashing; a small numerical illustration of over-smoothing follows this list.
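
To make the grammar-to-layer idea from the first talk concrete, here is a minimal, self-contained Python sketch. It is not the G²N² implementation: the two productions, the operation set, and the derive helper below are illustrative assumptions, showing only how context-free rules over MATLANG-style matrix operations can enumerate candidate expressions for a GNN layer.

```python
# Toy sketch of grammar-generated layer operations (illustrative only:
# these productions are NOT the actual grammar of the G²N² paper).
import numpy as np

def matmul(M, N):     # MATLANG-style matrix product
    return M @ N

def hadamard(M, N):   # MATLANG-style element-wise product
    return M * N

def derive(ops, A, steps):
    """Enumerate matrices derivable from the terminals {A, I} by applying
    every binary production M -> op(M, M) for `steps` rewriting rounds."""
    terms = [A, np.eye(A.shape[0])]
    for _ in range(steps):
        terms = terms + [op(M, N) for M in terms for N in terms for op in ops]
    return terms

# Example: adjacency matrix of a 4-node path graph.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

terms = derive([matmul, hadamard], A, steps=1)
print(f"{len(terms)} derivable matrices after one round")

# A @ A counts 2-walks; its diagonal holds node degrees on a simple graph,
# the kind of structural quantity a 3-WL-aligned grammar can express.
print(A @ A)
```

Even in this tiny example the enumeration is redundant (matmul(A, I) reproduces A; hadamard(A, I) and hadamard(I, A) coincide), which is exactly the kind of redundancy that a grammar reduction scheme, as mentioned in the abstract, is meant to prune.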
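
One of the two phenomena named in the last talk is easy to observe numerically. Below is a minimal sketch (a toy setup of ours, not the presentation's code) of over-smoothing: repeated mean aggregation over neighbourhoods drives all node features toward the same value, so deep stacks of convolutions lose node-level information.

```python
# Minimal over-smoothing demo: repeated neighbourhood averaging
# makes node features indistinguishable.
import numpy as np

# Adjacency of a 4-node path graph, with self-loops added for the averaging.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
A_hat = A + np.eye(4)
P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalized: mean aggregation

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))  # random 2-d node features

for layer in range(1, 31):
    X = P @ X  # one linear convolution step (no weights, no nonlinearity)
    if layer in (1, 5, 30):
        spread = np.ptp(X, axis=0).max()  # largest feature range across nodes
        print(f"after {layer:2d} layers: node-feature spread = {spread:.5f}")

# The spread decays toward 0: all nodes converge to the same representation,
# the over-smoothing effect that the presentation relates to pooling design.
```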