Monday 2:00-2:50, room: MEB 128, building: Mechanical Engineering Building
Machine learning on graphs (MLG) is an exciting and growing research topic, mainly for two reasons: (1) many relevant real-world problems (in recommendation, infrastructure, healthcare, etc.) have structure that can be captured as nodes, edges, and their attributes; and (2) machine learning has been the dominant topic in computer science over the past decade and has impacted the entire field. In this seminar, we will discuss some of the most recent papers on deep learning on graphs (Graph Neural Networks), which has arguably become the go-to approach for MLG. Many of these developments can be posed as 'How can I do X on a graph?', where 'X' might be convolution, pooling, reinforcement learning, etc. However, deep learning also provides a common framework with the potential to solve most (if not all) problems in network science, graph mining, and even graph algorithms by fitting well-designed functions to data (sometimes lots of it). In the last six years, at least three books, 20 surveys, and hundreds of journal and conference papers supporting this claim have been published. This seminar is an opportunity to read and discuss the state of the art in machine learning on graphs.
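Most of the papers we will read build on the message-passing template: each node repeatedly aggregates its neighbors' feature vectors and transforms the result with learned weights. As a rough illustration only (not course material; the mean aggregator, function names, and toy graph below are my own assumptions), a single layer might look like:

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer with mean aggregation.

    A: (n, n) binary adjacency matrix
    H: (n, d_in) node feature matrix
    W: (d_in, d_out) learned weight matrix
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # neighborhood sizes
    H_agg = (A_hat / deg) @ H                # mean over each node's neighborhood
    return np.maximum(H_agg @ W, 0.0)        # linear transform + ReLU

# Toy example: a 3-node path graph 0-1-2 with 2-dimensional features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)  # identity weights, so the output is just the aggregation
print(gnn_layer(A, H, W))
```

Stacking several such layers lets information flow across multi-hop neighborhoods; the papers in this seminar vary the aggregator, the transform, and what is learned.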
Credit Hours: 1-3
This is a one credit seminar. It can be converted to 3 credits with the addition of a class project or a (high quality) survey (both subject to the instructor’s approval).
Read and discuss recent papers on machine learning on graphs.
- Graduate students working on machine learning, network science, graphs, etc. who want to broaden their knowledge of machine learning with graphs;
- Undergraduate students who want to be exposed to machine learning beyond the classical problems (clustering, classification, regression, etc.);
- Anyone else who is interested.
A course on machine learning (some examples are COMP 502, COMP 540, COMP 542, COMP 576, COMP 602, COMP 640-3, COMP 680).
The seminar will be focused on research papers that are publicly available.
The seminar will be composed of two introductory lectures and paper presentations with discussions. Each presentation will be 30 minutes long and will be followed by 20 minutes of discussion. Each student should submit reviews for five papers of their choice during the semester. The reviews should follow the NeurIPS'22 guidelines and should be submitted by the day before the presentation.
Tips for a good presentation:
- Provide the relevant background;
- Make sure the problem, motivation, and assumptions are clear;
- Skip the details and focus on the major contribution;
- Don't limit yourself to the paper, make connections with other work;
- Criticize the paper and point out how it could be improved.
You can contribute to the discussion in the following ways:
- Complementing the presentation with relevant information;
- Criticizing the paper in terms of contributions, soundness, presentation, ethics, etc.;
- Asking questions you had while reading the paper;
- Proposing novel research based on the paper;
- Suggesting other papers for further reading;
- Commenting on how the paper fits the authors' broader research agenda.
Grading will be Pass/Fail.
- Read each paper presented;
- Write a review for 5 out of 10 papers;
- Participate in the paper discussions;
- Present at least one paper.
Paper reviews and slides should be submitted on Canvas. Questions should be submitted on Piazza. For personal matters, please email the instructor.
Rice Honor Code
Students are expected to adhere to the Rice Honor Code. You are encouraged to collaborate and to find resources online. However, all material to be graded is expected to be original. The occasional use of someone else's work, such as a figure, should be properly acknowledged.
Students with Disabilities
If you have a documented disability that may affect academic performance, you should: 1) make sure this documentation is on file with Disability Support Services (Allen Center, Room 111 / email@example.com / x5841) to determine the accommodations you need; and 2) meet with me to discuss your accommodation needs.
Related Courses at Rice
- COMP 559: Machine Learning with Graphs (taught by Prof. Arlei Silva in the Spring);
- ELEC 573: Network Science and Analytics (taught by Prof. Segarra in the Fall);
- ECE 677-001: Distributed Optimization and ML (taught by Prof. Uribe in the Fall);
- CAAM 570: Graph Theory (taught by Prof. Hicks in the Spring).
- 08/22: Logistics, overview, intro
- 08/29: Graph Neural Networks
- 09/05: Labor Day (no class)
- 09/12: Paper presentation 1
- 09/19: Paper presentation 2
- 09/26: Paper presentation 3
- 10/03: Paper presentation 4
- 10/10: Midterm recess
- 10/17: Paper presentation 5
- 10/24: Paper presentation 6
- 10/31: Paper presentation 7
- 11/07: Paper presentation 8
- 11/14: Paper presentation 9
- 11/21: Paper presentation 10
- 11/28: Discussion
Suggested topics and papers
Architecture: e.g. pooling, aggregation, depth
- Yang et al. Graph neural networks inspired by classical iterative algorithms. ICML, 2021.
- Alon and Yahav. On the bottleneck of graph neural networks and its practical implications. ICLR, 2020.
Theory: e.g. expressive power, invariances
- Ganea et al. Independent SE(3)-equivariant models for end-to-end rigid protein docking. ICLR, 2021.
- Xu et al. Optimization of graph neural networks: Implicit acceleration by skip connections and more depth. ICML, 2021.
Supervised learning: e.g. node classification, graph classification, link prediction
- Qu et al. Neural structured prediction for inductive node classification. ICLR, 2021.
- Baek et al. Accurate learning of graph representations with graph multiset pooling. ICLR, 2020.
Scalability and systems: e.g. distributed algorithms, simplified architectures
- Sriram et al. Towards training billion parameter graph neural networks for atomic simulations. ICLR, 2021.
- Kaler et al. Accelerating training and inference of graph neural networks with fast sampling and pipelining. MLSys, 2022.
Fairness and privacy: e.g. information leakage, private computation
- Agarwal et al. Towards a unified framework for fair and stable graph representation learning. UAI, 2021.
- Liao et al. Information obfuscation of graph neural networks. ICML, 2021.
Reinforcement learning: e.g. policy learning
- Paliwal et al. Reinforced genetic algorithm learning for optimizing computation graphs. ICLR, 2019.
- Trivedi et al. GraphOpt: Learning optimization models of graph formation. ICML, 2020.
Alternatives: e.g. label propagation, Markov Random Fields, transformers
- Ying et al. Do transformers really perform badly for graph representation? NeurIPS, 2021.
- Huang et al. Combining label propagation and simple models out-performs graph neural networks. ICLR, 2020.
Interpretability: e.g. GNN explainers
- Miao et al. Interpretable and generalizable graph learning via stochastic attention mechanism. ICML, 2022.
Generalizations: e.g. hypergraphs, heterogeneous graphs, dynamic graphs
- Chen et al. TAMP-S2GCNets: Coupling time-aware multipersistence knowledge representation with spatio-supra graph convolutional networks for time-series forecasting. ICLR, 2021.
- Bodnar et al. Weisfeiler and Lehman go topological: Message passing simplicial networks. ICML, 2021.
Unsupervised learning: e.g. autoencoders, self-supervised learning
- Hassani and Khasahmadi. Contrastive multi-view representation learning on graphs. ICML, 2020.
- Zhang and Li. Nested graph neural networks. NeurIPS, 2021.
Adversarial learning: e.g. attacks, robustness
- Wu et al. Graph information bottleneck. NeurIPS, 2020.
- Xi et al. Graph backdoor. USENIX Security, 2021.
Healthcare and bioinformatics: e.g. drug discovery, diagnosis, epidemics forecasting
- Kim et al. Learning dynamic graph representation of brain connectome with spatio-temporal attention. NeurIPS, 2021.
NLP and vision: e.g. translation, word embeddings, question-answering, text classification
- Shen et al. Unsupervised dependency graph network. ACL, 2022.
- Meng et al. GNN-LM: Language modeling based on global contexts via GNN. ICLR, 2021.
Knowledge graphs: e.g. completion, alignment, reasoning
- Yu et al. KG-FiD: Infusing knowledge graph in fusion-in-decoder for open-domain question answering. ACL, 2022.
- Zhang et al. GreaseLM: Graph reasoning enhanced language models. ICLR, 2021.
Traffic and mobility: e.g. traffic forecasting, trajectory prediction
- Wang et al. Metro passenger flow prediction via dynamic hypergraph convolution networks. IEEE Transactions on Intelligent Transportation Systems, 2021.
- Bai et al. Adaptive graph convolutional recurrent network for traffic forecasting. NeurIPS, 2020.
Physical sciences: e.g. physical reasoning, simulation, PDE solving
- Lienen and Günnemann. Learning the dynamics of physical systems from sparse observations with finite element networks. ICLR, 2021.
Program analysis: e.g. type inference, code summarization, bug detection
- Dinella et al. Hoppity: Learning graph transformations to detect and fix bugs in programs. ICLR, 2020.
- Pashakhanloo et al. CodeTrek: Flexible modeling of code using an extensible relational representation. ICLR, 2021.
Hyperparameter tuning and AutoML: e.g. architecture search, graph learning
- Chen et al. A unified lottery ticket hypothesis for graph neural networks. ICML, 2021.
- Zhang et al. Deep and flexible graph neural architecture search. ICML, 2022.
Robotics: e.g. motion planning
- Zhou et al. Multi-robot collaborative perception with graph neural networks. IEEE Robotics and Automation Letters, 2022.
- Yu and Gao. Reducing collision checking for sampling-based motion planning using graph neural networks. NeurIPS, 2021.
Algorithms: e.g. learning graph algorithms
- Bai et al. GLSearch: Maximum common subgraph detection via learning to search. ICML, 2021.
- Meirom et al. Optimizing tensor network contraction using reinforcement learning. ICML, 2022.
- William L. Hamilton. Graph Representation Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, Morgan & Claypool Publishers, 2020.
- Yao Ma and Jiliang Tang. Deep Learning on Graphs. Cambridge University Press, 2020.
- Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A Comprehensive Survey on Graph Neural Networks. arXiv, 2019.
- Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep Learning on Graphs: A Survey. arXiv, 2018.
- Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine Learning on Graphs: A Model and Comprehensive Taxonomy. arXiv, 2020.
- Faezeh Faez, Yassaman Ommi, Mahdieh Soleymani Baghshah, and Hamid R. Rabiee. Graph Deep Learning: State of the Art and Challenges. IEEE Access, 2021.
- N. A. Asif et al. Graph Neural Network: A Comprehensive Review on Non-Euclidean Space. IEEE Access, 2021.