# Empirical Validations of Graph Structure Learning methods for Citation Network Applications
# DGM_pytorch
Code for the BSc Thesis "Empirical Validations of Graph Structure Learning methods for Citation Network Applications" by Hoang Thien Ly at Faculty of Mathematics and Information Science, Warsaw University of Technology.
Code for the paper "Differentiable Graph Module (DGM) for Graph Convolutional Networks" by Anees Kazi*, Luca Cosmo*, Seyed-Ahmad Ahmadi, Nassir Navab, and Michael Bronstein
## Abstract
This Bachelor's Thesis examines the classification accuracy of graph structure learning methods in the graph neural network domain, with a focus on classifying papers in citation network datasets. Graph neural networks (GNNs) have recently emerged as a powerful machine learning concept that generalizes successful deep neural architectures to non-Euclidean structured data with high performance. However, one limitation of most current GNNs is the assumption that the underlying graph is known and fixed. In practice, real-world graphs are often noisy and incomplete, or may even be completely unknown. In such cases, it is helpful to infer the graph structure directly from the data. Additionally, graph structure learning permits learning latent structures, which may improve the understanding of GNN models by providing edge weights among entities in the graph structure, allowing modellers to perform further analysis.

As part of this work, we:
* review the current state-of-the-art graph structure learning (GSL) methods.
* empirically validate GSL methods by accuracy scores on citation network datasets.
* analyze the mechanism of these approaches and the influence of hyperparameters on the models' behavior.
For the dDGM framework, to train a model with the default options, run the following command:
```
python train.py
```
Code for the other frameworks is provided in the Jupyter notebook.
## Notes
The graph sampling code is based on a modified version of the KeOps library (www.kernel-operations.io) to speed up the computation. In particular, the argKmin function of the original library has been modified to handle the stochasticity of the sampling strategy, adding samples drawn from a Gumbel distribution to the input before performing the reduction.
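The idea behind this stochastic reduction can be sketched as follows. This is an illustrative NumPy reimplementation of the Gumbel top-k trick, not the modified KeOps code; the function name and the `temperature` parameter are assumptions for the sketch:

```python
import numpy as np

def gumbel_argkmin(dist, k, temperature=1.0, rng=None):
    """Stochastic k-nearest-neighbour sampling via the Gumbel top-k trick.

    `dist` is an (n, m) matrix of pairwise distances. Instead of returning
    the deterministic k smallest entries per row, we treat -dist/temperature
    as logits, add Gumbel(0, 1) noise, and keep the k largest perturbed
    logits. This samples k neighbours per row without replacement, with
    probability proportional to exp(-dist / temperature).
    """
    rng = np.random.default_rng() if rng is None else rng
    logits = -np.asarray(dist, dtype=float) / temperature
    perturbed = logits + rng.gumbel(size=logits.shape)
    # argpartition finds the indices of the k largest perturbed logits
    # per row (unordered) in linear time.
    return np.argpartition(-perturbed, k - 1, axis=1)[:, :k]
```

As `temperature` goes to zero, the noise becomes negligible relative to the logits and the sampling collapses to deterministic kNN.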
1. Results for the experiments in Section 5.1 (5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5) with discrete DGM are not consistent.
2. Point Cloud 3D experiments are based on DGCNN: https://github.com/WangYueFt/dgcnn
- The DGCNN experiments were successfully re-run.
- Section 5.3 describes how the kNN sampling scheme of DGCNN was replaced by the discrete sampling strategy of DGM. This experiment is demonstrated only for the **discrete** case.
3. The zero-shot learning application is also carried out with the **discrete** DGM. Not enough information was provided to reproduce the experiments in this section.
**Two co-authors have been asked about this issue; no update yet.**