Latent Topology Inference

Claudio Battiloro
10 min read · Feb 19, 2024

How combinatorial topological spaces can be used to promote a paradigm shift from inferring pairwise to multiway latent relationships in data.

Several problems in machine learning call for methods able to infer and exploit multi-way, higher-order relationships hidden in the data. We propose the new beyond-graph paradigm of Latent Topology Inference, which aims to learn latent higher-order combinatorial topological spaces describing multi-way interactions among data points. To make Latent Topology Inference implementable, we introduce the Differentiable Cell Complex Module, a novel learnable function able to infer a latent cell complex to improve the downstream task.

This post is based on the paper by Claudio Battiloro, Indro Spinelli, et al., "From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module", ICLR 2024, a collaboration with Lev Telyatnikov, Michael Bronstein, Simone Scardapane, and Paolo Di Lorenzo. The code can be found here.

Graph Neural Networks (GNNs) have become prominent in several domains, due to their ability to merge the flexibility of neural networks with the knowledge of data relationships modeled through graphs. However, GNNs typically require a fixed input graph (which is assumed to be optimal for the task at hand), and they can explicitly model only pairwise interactions.

Very recently, a series of works tackled the former limitation by introducing techniques for Latent Graph Inference (LGI) [4]. The intuition is that data can have an underlying but unknown latent graph structure, both in cases where only a point cloud of data is available and in cases where the given graph is suboptimal for the downstream task.

On the other hand, Topological Deep Learning (TDL) [1] is gaining traction for its ability to capture higher-order interactions beyond the pairwise structure of graphs, using tools from algebraic topology, especially combinatorial topological spaces. However, Topological Neural Networks (TNNs) often rely on deterministic approaches to integrate graphs into higher-order complexes, thus implicitly assuming that these (task-agnostic, deterministic) complexes are optimal for the downstream task.

Motivated by the above reasons, in our recent paper [2] we introduced the notion of Latent Topology Inference (LTI) by generalizing LGI to higher-order spaces. The goal of LTI is not (only) learning a graph structure describing pairwise interactions but rather learning a higher-order combinatorial topological space describing multi-way interactions among data points. As a first instance of LTI, we introduced the Differentiable Cell Complex Module (DCM), a novel deep learning architecture that dynamically learns a latent cell complex to improve the downstream task.

“The goal of LTI is not (only) learning a graph structure describing pairwise interactions but rather learning a combinatorial topological space describing higher-order interactions among data points.”

Why Do Higher-order Interactions Matter?

TDL promotes significant advancements in understanding complex data relationships, particularly by addressing the limitations of traditional methods that focus solely on pairwise interactions. The importance of TNNs lies in their ability to model higher-order interactions, which are crucial for capturing the multifaceted nature of real-world systems.

For instance, in biological networks, the interactions between proteins, genes, and other biomolecules often involve more than two entities and can have a pretty long range, leading to significant implications for understanding diseases and developing treatments. In social networks, group dynamics and community structures cannot be fully understood through pairwise relationships alone. Similarly, in recommender systems, the context and collective preferences of groups offer insights beyond individual pairwise recommendations.

By leveraging the capacity to model these complex relationships, TDL offers a more accurate and detailed representation of the data, leading to improved predictive performance and deeper insights.

“The importance of Topological Deep Learning lies in its ability to model higher-order interactions, crucial for capturing the multifaceted nature of real-world systems”

Message Passing Networks over Regular Cell Complexes

The foundational block of TNNs is again message passing, this time not over graphs, but rather over more general, higher-order combinatorial topological spaces, e.g. simplicial and regular cell complexes. Although regular cell complexes are, in general, rather sophisticated objects, here we are interested in a particular, simple case. For the sake of readability, we define it (very) informally; please refer to our paper [2] or to [1][3] for a more detailed mathematical treatment.

Suppose we have a set of entities, which we refer to as nodes. An edge between two nodes is a subset of nodes of cardinality 2, and it is useful to model pairwise interactions among entities. A triangle among three nodes is a subset of nodes of cardinality 3, and it is useful to model triadic interactions. In general, a subset of k > 2 nodes, which we refer to as a polygon, is useful to model k-adic interactions. We refer to each of these subsets, including the nodes, i.e. subsets of cardinality 1, as cells. A regular cell complex is then a collection of cells that must satisfy certain properties. We are interested in a specific, simple case of cell complexes that can be obtained by "lifting" a graph.

In particular, given a graph, we obtain a cell complex by considering its nodes, its edges, and some of its induced cycles (i.e. chordless cycles) as polygons. Before our work, the common choice was to consider all the induced cycles of the underlying graph as polygons, implicitly assuming that the resulting cell complex was optimal for the downstream task. In the experimental section, we will show that this choice is indeed reasonable in terms of performance, yet almost always significantly improvable.

Example of a cell complex obtained by lifting a graph. The nodes are in red, the edges are in blue, and the polygons are in various colors based on their length. As stated above, not all the cycles are polygons.
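To make the lifting concrete, here is a minimal Python sketch. It assumes NetworkX >= 3.1, which provides nx.chordless_cycles; the length cap and the dictionary container are illustrative choices, not the paper's actual data structures.

```python
# A minimal sketch of the graph-to-cell-complex lifting described above.
# Assumes NetworkX >= 3.1 (nx.chordless_cycles); `max_len` is illustrative.
import networkx as nx

def lift_graph(G, max_len=4):
    """Lift a graph into a cell complex: nodes (0-cells), edges (1-cells),
    and chordless (induced) cycles up to length `max_len` as polygons (2-cells)."""
    return {
        "nodes": list(G.nodes),
        "edges": [tuple(sorted(e)) for e in G.edges],
        "polygons": [tuple(c) for c in nx.chordless_cycles(G, length_bound=max_len)],
    }

G = nx.cycle_graph(4)   # the 4-cycle 0-1-2-3: one induced square, no triangles
G.add_edge(0, 2)        # adding the chord (0, 2) splits it into two triangles
print(lift_graph(G)["polygons"])   # two triangles; the square is no longer chordless
```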

In a cell complex, we can leverage a way more sophisticated adjacency structure among cells than the usual node adjacency in graphs. Two nodes are neighbors if they are connected by an edge; a node and an edge are neighbors if the node is an endpoint of the edge; two edges are neighbors if they share an endpoint node (in this case they are said to be lower neighbors) or if they are part of the same polygon (in this case they are said to be upper neighbors); and two polygons are neighbors if they share a side (edge).
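These neighborhood notions can be computed directly from a lifted complex. The sketch below is purely illustrative (a toy complex and brute-force set operations), but it makes the lower/upper distinction for edges concrete.

```python
# Lower/upper edge neighbors on a toy complex: the 4-cycle 0-1-2-3 with
# chord (0, 2), lifted into two triangles as in the previous sketch.
complex_ = {
    "edges": [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)],
    "polygons": [(0, 1, 2), (0, 2, 3)],
}

def polygon_sides(poly):
    """Consecutive node pairs of a polygon, as sorted tuples."""
    return {tuple(sorted((poly[i], poly[(i + 1) % len(poly)])))
            for i in range(len(poly))}

def edge_neighbors(cx, edge):
    """Return (lower, upper) neighbors of `edge` in the cell complex `cx`."""
    e = tuple(sorted(edge))
    # Lower neighbors: edges sharing an endpoint node with e.
    lower = [f for f in cx["edges"] if f != e and set(f) & set(e)]
    # Upper neighbors: edges lying on a common polygon with e.
    upper = set()
    for poly in cx["polygons"]:
        sides = polygon_sides(poly)
        if e in sides:
            upper |= sides - {e}
    return lower, sorted(upper)

lower, upper = edge_neighbors(complex_, (0, 2))
print(lower)  # edges touching node 0 or node 2
print(upper)  # the other sides of the two triangles containing (0, 2)
```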

TNNs are then Message Passing (MP) networks over cell complexes, in which nodes can exchange messages with nodes and edges, edges can exchange messages with nodes, edges, and polygons, and polygons can exchange messages with polygons and edges.

As an example, an edge-to-edge MP round can be written as

$$\mathbf{h}_i^{(l+1)} = \mathrm{UPDATE}\Big(\mathbf{h}_i^{(l)},\ \bigoplus_{\mathcal{N}\in\{\mathcal{N}_\downarrow,\,\mathcal{N}_\uparrow\}}\ \bigotimes_{j\in\mathcal{N}(i)} \psi_{\mathcal{N}}\big(\mathbf{h}_i^{(l)},\mathbf{h}_j^{(l)}\big)\Big),$$
where the inner operator $\bigotimes$ is an aggregation over the edges $j$ in the lower neighborhood $\mathcal{N}_\downarrow(i)$ or upper neighborhood $\mathcal{N}_\uparrow(i)$ of edge $i$, while the outer operator $\bigoplus$ is an aggregation over the two neighborhoods. In other words, the embedding of edge $i$ is updated by first exchanging messages with its lower and upper neighbors via a neighborhood-dependent message function $\psi_{\mathcal{N}}$, and then by aggregating them before the update.
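To make this concrete, here is a minimal PyTorch sketch of one edge-to-edge MP round. The dense lower/upper edge-adjacency matrices, the sum aggregations, and the linear message/update functions are simplifying assumptions for illustration, not the paper's exact choices.

```python
import torch
import torch.nn as nn

class EdgeToEdgeMP(nn.Module):
    """One edge-to-edge message-passing round over a cell complex (toy sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.psi_low = nn.Linear(2 * dim, dim)  # lower-neighborhood message function
        self.psi_up = nn.Linear(2 * dim, dim)   # upper-neighborhood message function
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, h, A_low, A_up):
        # h: (n_edges, dim) edge embeddings; A_low, A_up: (n_edges, n_edges)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        # Inner aggregation: sum the messages from lower / upper neighbors.
        m_low = torch.einsum("ij,ijd->id", A_low, self.psi_low(pairs))
        m_up = torch.einsum("ij,ijd->id", A_up, self.psi_up(pairs))
        # Outer aggregation over the two neighborhoods, then update.
        m = m_low + m_up
        return torch.relu(self.update(torch.cat([h, m], dim=-1)))

mp = EdgeToEdgeMP(dim=16)
h = torch.randn(5, 16)                    # embeddings of 5 edges
A_low = torch.ones(5, 5) - torch.eye(5)   # toy lower adjacency
A_up = torch.zeros(5, 5)
A_up[:3, :3] = 1 - torch.eye(3)           # edges 0-2 bound a common (toy) polygon
print(mp(h, A_low, A_up).shape)           # torch.Size([5, 16])
```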

An example of edge adjacency. The lower (left) and upper (right) neighbors of the edge connecting the two colored polygons. [7]

“In a cell complex, we can leverage a way more sophisticated adjacency structure among cells than the usual node adjacency in graphs.”

The Differentiable Cell Complex Module

Architecture

The proposed two-step procedure for Latent Topology Inference (LTI) via the DCM.

We now introduce the Differentiable Cell Complex Module (DCM), the first architecture able to perform Latent Topology Inference. The proposed layer integrating the DCM takes as input the node features (we refer to the features of each node as a data point) and, optionally, an input graph, and gives as output the updated node features and the inferred latent cell complex.

The DCM first learns a latent graph describing the pairwise interactions among data points through a novel, improved version of the Differentiable Graph Module (DGM) [4], dubbed alpha-DGM, which is able to infer an irregular and sparse graph.
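As an illustration of the idea behind alpha-DGM (a sketch under simplifying assumptions, not its exact implementation), one can embed the nodes into an auxiliary space, score all pairs by similarity, and sparsify the scores row-wise with alpha-entmax, so that most candidate edges receive exactly zero probability. The snippet uses entmax_bisect from the open-source entmax package (pip install entmax).

```python
import torch
import torch.nn as nn
from entmax import entmax_bisect  # pip install entmax

class SparseGraphInference(nn.Module):
    """Toy edge inference: pairwise similarity scores sparsified by alpha-entmax."""
    def __init__(self, in_dim, emb_dim, alpha=1.5):
        super().__init__()
        self.embed = nn.Linear(in_dim, emb_dim)  # auxiliary node features
        self.alpha = alpha

    def forward(self, x):
        z = self.embed(x)                             # (n, emb_dim)
        scores = -torch.cdist(z, z)                   # similarity = negative distance
        scores = scores - 1e9 * torch.eye(z.size(0))  # effectively exclude self-loops
        # Row-wise alpha-entmax: many entries become exactly zero.
        return entmax_bisect(scores, alpha=self.alpha, dim=-1)

x = torch.randn(8, 32)               # 8 data points with 32 features
A = SparseGraphInference(32, 16)(x)  # sparse row-stochastic adjacency
print((A > 0).sum(dim=-1))           # number of nonzero (sampled) neighbors per node
```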

Once the latent graph is inferred, it is used as the underlying graph (1-skeleton) of a regular cell complex whose set of polygons, i.e. a subset of its induced cycles, is learned via the Polygon Inference Module (PIM), again to improve the downstream task.
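The following is a hedged sketch of the polygon-inference idea under simplifying assumptions: pool the (auxiliary) features of each candidate cycle's boundary edges, score the cycle, and decide keep/drop with a two-way alpha-entmax, so that dropped cycles can get exactly zero weight. Mean pooling and the linear scorer are illustrative choices, not the paper's exact PIM.

```python
import torch
import torch.nn as nn
from entmax import entmax_bisect

class PolygonInference(nn.Module):
    """Toy polygon inference: score candidate cycles, keep/drop them sparsely."""
    def __init__(self, edge_dim, alpha=1.5):
        super().__init__()
        self.score = nn.Linear(edge_dim, 2)  # logits for {keep, drop}
        self.alpha = alpha

    def forward(self, edge_feats, cycles):
        # edge_feats: (n_edges, edge_dim); cycles: list of boundary-edge index lists
        pooled = torch.stack([edge_feats[idx].mean(dim=0) for idx in cycles])
        logits = self.score(pooled)                         # (n_cycles, 2)
        p = entmax_bisect(logits, alpha=self.alpha, dim=-1)
        return p[:, 0]  # keep-probability: exactly 0 for dropped cycles

edge_feats = torch.randn(10, 16)                 # auxiliary features of 10 edges
cycles = [[0, 1, 2], [3, 4, 5, 6], [2, 7, 8]]    # candidate induced cycles
print(PolygonInference(16)(edge_feats, cycles))  # sparse keep-probabilities
```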

The inferred topology is then used in a cell complex MP network, operating at node and edge levels to solve the downstream task. The whole architecture is trained in an end-to-end fashion.

The DCM is able to infer a sparse topology thanks to differentiable sampling schemes (for both edges and polygons) based on the alpha-entmax family of functions [6]. A sophisticated system of auxiliary and actual cell features is designed to drive the sampling on a similarity basis.
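The key property enabling this is shown below: unlike softmax, alpha-entmax [6] can assign exactly zero probability to low-scoring candidates while remaining differentiable, so unsampled cells drop out of message passing yet gradients still flow through the sampled ones.

```python
import torch
from entmax import entmax_bisect

scores = torch.tensor([[2.0, 1.0, 0.1, -1.0, -2.0]], requires_grad=True)
print(torch.softmax(scores, dim=-1))             # every entry strictly positive
probs = entmax_bisect(scores, alpha=1.5, dim=-1)
print(probs)                                     # low-scoring entries are exactly 0.0
(probs * torch.arange(5.0)).sum().backward()     # toy downstream objective
print(scores.grad)                               # gradients flow through kept entries
```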

We employ the aforementioned two-step (first edges, then polygons) inference procedure to keep the computational complexity tractable. Directly sampling among all possible polygons, thus trivially generalizing the DGM framework [4], would lead to intractable complexity: even at the triangle level, the sampling would have cubic complexity in the number of nodes, since every triple of nodes is a candidate triangle. For instance, a graph with just 1,000 nodes already has about 166 million candidate triangles.

“The DCM is able to infer a sparse topology thanks to differentiable sampling schemes. Auxiliary and actual cell features drive the sampling on a similarity basis.”

The proposed layer. The input is (optionally) a graph and node features. The output is the updated node features and the inferred latent cell complex.

Experimental Results

We evaluated the effectiveness of the DCM on several heterophilic and homophilic graph benchmarks. Graph homophily is the principle that nodes with similar labels are more likely to be connected. Traditional Graph/Cell Complex Neural Networks implicitly rely on the homophily assumption, and performance drops are typically observed in heterophilic settings.
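For concreteness, here is a minimal sketch of (edge) homophily as the fraction of edges whose endpoints share a label; the toy graph and labels are illustrative.

```python
# Edge homophily: the fraction of edges whose endpoints share a label.
import networkx as nx

def edge_homophily(G, labels):
    same = sum(labels[u] == labels[v] for u, v in G.edges)
    return same / G.number_of_edges()

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])
print(edge_homophily(G, {0: "a", 1: "a", 2: "b", 3: "b"}))  # 0.5
```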

Since the DCM is a generalization of the DGM, we compare against the original (discrete) DGM [4] (denoted DGM-E) and its recently introduced non-Euclidean version [5] (denoted DGM-M). A series of other variants and baselines are tested as well; please refer to the paper [2] for the details.

We first focus on standard node classification datasets with high homophily levels. We then test our method on several challenging heterophilic datasets, with low homophily levels ranging between 0.11 and 0.23. All the models were tested in two settings: one in which the original graph is available, and a more challenging one in which the input graph is assumed to be unavailable.

Homophilic-graph node classification benchmarks. Test accuracy in %, averaged over 10 splits.
Heterophilic-graph node classification benchmarks. Test accuracy in %, averaged over 10 splits.

The DCM exhibits consistently superior performance compared to alternative methods across both homophilic and heterophilic datasets, with and without provided input graphs. As expected, our method achieves top performance without an input graph on the heterophilic datasets and with an input graph on the homophilic datasets. The results with an input graph on the heterophilic datasets are particularly remarkable: despite exposing the model to the "wrong" input graphs (in the sense that the topology does not match the structure of the node features), the overall performance of the DCM remains stable. Overall, we can appreciate how the DCM exploits the available "good" graphs in the homophilic cases while being less sensitive to the "wrong" ones in the heterophilic cases.

Our performance on the heterophilic datasets suggests that the learned latent complex has a homophilic 1-skeleton. To corroborate this hypothesis, we show the evolution of the latent topology during training for the Texas dataset, along with the node degree distribution, the percentage of sampled polygons, and the homophily level, which rises from the initial 0.11 to 0.99 on the final inferred graph. Moreover, the fact that most of the inferred polygons connect nodes of the same class, together with the fairly wide spread of the degree distribution, further confirms the effectiveness of the DCM. The reader can refer to the paper for further insights into the interpretability of the inferred latent complex.

Evolution of the latent complex for the Texas dataset, along with homophily level and node degree distribution. Edges in orange, triangles in lilac, squares in purple.

Conclusions

The DCM is the first example of Latent Topology Inference. It is simple yet very effective. Most importantly, it further validates the intuition in which TDL is grounded: higher-order multiway interactions matter, and topology is an excellent tool to devise principled models. However, the DCM also shows that there is still a lot of work to do: on the one hand, to change the common (arguably ill-defined) belief that graphs are always all we need; on the other hand, to devise TDL models that wisely consider how, and whether, to use input graphs. Finally, LTI models could help to model and embed data over combinatorial topological spaces, a key feature in light of the scarcity of datasets natively modeled over higher-order structures, despite the ubiquity of higher-order interactions. The several research directions that the LTI paradigm inaugurates represent an appealing opportunity to build further bridges between algebraic topology, differential geometry, and deep learning.

“The DCM validates the intuition in which TDL is grounded: higher-order multiway interactions matter, and topology is an excellent tool to devise principled models.”

References

[1] Mustafa Hajij, et al. "Topological Deep Learning: Going Beyond Graph Data." arXiv preprint, 2022.

[2] Claudio Battiloro, Indro Spinelli, et al. "From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module." ICLR 2024.

[3] Cristian Bodnar, et al. "Weisfeiler and Lehman Go Cellular: CW Networks." NeurIPS 2021.

[4] Anees Kazi, et al. "Differentiable Graph Module (DGM) for Graph Convolutional Networks." IEEE TPAMI, 2023.

[5] Haitz Sáez de Ocáriz Borde, et al. "Latent Graph Inference Using Product Manifolds." ICLR 2023.

[6] Ben Peters, et al. "Sparse Sequence-to-Sequence Models." ACL 2019.

[7] Lorenzo Giusti, et al. "Cell Attention Networks." IJCNN 2023.


Claudio Battiloro

Claudio is a postdoctoral fellow at Harvard University. Passionate about topology for signal processing and learning, talks about simplices, cells, and sheaves.