PhD Candidate
Department of Computer Science and Engineering
Shanghai Jiao Tong University
echo740 [AT] sjtu.edu.cn
Github | Google Scholar | DBLP | Twitter | ResearchGate | Medium | Zhihu
I am a final-year PhD student in the Department of Computer Science and Engineering at Shanghai Jiao Tong University (SJTU), advised by Junchi Yan. I have also collaborated extensively with David Wipf and Hongyuan Zha. I received my Bachelor's degrees (Microelectronics, Mathematics) and Master's degree (Computer Science) from SJTU, and have worked as a research intern at Tencent WeChat, Amazon AI Lab and BioMap.
My research interests predominantly revolve around machine learning foundations and applications. On the foundation side, I build theoretically principled and practically useful methodology, particularly for learning with complex structures and distribution shifts. I also explore intersections with interdisciplinary areas such as the life sciences (e.g., drug discovery and healthcare) and recommender systems, and draw inspiration from physics. My research is supported by the Microsoft Research PhD Fellowship and the Baidu PhD Fellowship.
My recent works aim at machine learning with complex structured data, especially making models more expressive, generalizable and reliable in both closed-world and open-world regimes.
My most recent works can be found on Google Scholar.
DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf and Junchi Yan
International Conference on Learning Representations (ICLR) 2023 spotlight presentation (avg. ranking among top 0.5%)
Summary: We propose a geometric diffusion framework with energy constraints and show that its solution aligns with widely used attention networks, upon which we build diffusion-based Transformers.
NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
Qitian Wu, Wentao Zhao, Zenan Li, David Wipf and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022 spotlight presentation (less than 5%)
Summary: We propose a scalable graph Transformer with efficient all-pair message passing achieved in O(N) complexity. The global attention over 2M nodes requires only 4GB of memory.
Handling Distribution Shifts on Graphs: An Invariance Perspective
Qitian Wu, Hengrui Zhang, Junchi Yan and David Wipf
International Conference on Learning Representations (ICLR) 2022
Summary: We formulate out-of-distribution generalization on graphs and discuss how to leverage the (causal) invariance principle for handling graph-based distribution shifts.
SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations
Qitian Wu, Wentao Zhao, Chenxiao Yang, Hengrui Zhang, Fan Nie, Haitian Jiang, Yatao Bian and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2023
Summary: We propose an efficient Transformer that only uses one-layer global attention and significantly reduces computation cost for large-graph representations.
DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf and Junchi Yan
International Conference on Learning Representations (ICLR) 2023 spotlight presentation (avg. ranking among top 0.5%)
Summary: We propose a geometric diffusion framework with energy constraints and show that its solution aligns with widely used attention networks, upon which we build diffusion-based Transformers.
Energy-based Out-of-Distribution Detection for Graph Neural Networks
Qitian Wu, Yiting Chen, Chenxiao Yang, and Junchi Yan
International Conference on Learning Representations (ICLR) 2023
Summary: We extract an OOD detection model from a GNN classifier through energy-based models and energy-based belief propagation, reducing FPR95 over SOTA methods by up to 44.8%.
Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and Multi-Layer Perceptrons
Chenxiao Yang, Qitian Wu, Jiahua Wang and Junchi Yan
International Conference on Learning Representations (ICLR) 2023
Summary: We identify that the efficacy of GNNs over MLPs largely stems from their inherently better generalization, as demonstrated by experiments on sixteen benchmarks and neural tangent kernel theory.
NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
Qitian Wu, Wentao Zhao, Zenan Li, David Wipf and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022 spotlight presentation (less than 5%)
Summary: We propose a scalable graph Transformer with efficient all-pair message passing achieved in O(N) complexity. The global attention over 2M nodes requires only 4GB of memory.
Learning Substructure Invariance for Out-of-Distribution Molecular Representations
Nianzu Yang, Kaipeng Zeng, Qitian Wu, Xiaosong Jia and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022 spotlight presentation (less than 5%)
Summary: We propose an invariant learning approach for molecular property prediction under distribution shifts, achieving SOTA results on OGB-mol and DrugOOD benchmarks.
Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks
Chenxiao Yang, Qitian Wu and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022
Summary: We explore geometric knowledge distillation, empowered by neural heat kernels, which generalizes topological knowledge from larger GNNs to smaller ones.
Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment
Chenxiao Yang, Qitian Wu, Qingsong Wen, Zhiqiang Zhou, Liang Sun and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022
Summary: We use causal analysis to reveal the limitations of Maximum Likelihood Estimation for sequential prediction under distribution shifts and propose an effective treatment based on backdoor adjustment.
Handling Distribution Shifts on Graphs: An Invariance Perspective
Qitian Wu, Hengrui Zhang, Junchi Yan and David Wipf
International Conference on Learning Representations (ICLR) 2022
Summary: We formulate out-of-distribution generalization on graphs and discuss how to leverage the (causal) invariance principle for handling graph-based distribution shifts.
Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach
Qitian Wu, Chenxiao Yang and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2021
Summary: We propose a graph representation learning approach for handling feature space expansion from training data to testing data.
From Canonical Correlation Analysis to Self-supervised Graph Neural Networks
Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf and Philip S. Yu
Advances in Neural Information Processing Systems (NeurIPS) 2021
Summary: We introduce a simple-yet-effective contrastive objective for self-supervised learning on graphs.
Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach
Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Junchi Yan and Hongyuan Zha
International Conference on Machine Learning (ICML) 2021
Summary: We propose a latent structure inference model to handle new unseen users in the testing phase of recommender systems.
Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling
Qitian Wu, Zixuan Zhang, Xiaofeng Gao, Junchi Yan and Guihai Chen
Advances in Neural Information Processing Systems (NeurIPS) 2019
Dual Sequential Prediction Models Linking Sequential Recommendation and Information Dissemination
Qitian Wu, Yirui Gao, Xiaofeng Gao, Paul Weng and Guihai Chen
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2019
Summary: We use a duality perspective for unifying two sequential information retrieval problems into one model with mutual enhancement enabled.
Dual Graph Attention Networks for Deep Latent Representation of Multifaceted Social Effects in Recommender Systems
Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Peng He, Paul Weng, Han Gao and Guihai Chen
The Web Conference (WWW) 2019 long oral presentation (only 1/2 among all accepted long papers)
Summary: We propose a dual graph attention model that captures the social homophily and influence effects among users and items in recommender systems.
Learning Divergence Fields for Shift-Robust Message Passing
Qitian Wu, Fan Nie, Chenxiao Yang and Junchi Yan
International Conference on Machine Learning (ICML) 2024
How Graph Neural Networks Learn: Lessons from Training Dynamics
Chenxiao Yang, Qitian Wu, David Wipf, Ruoyu Sun and Junchi Yan
International Conference on Machine Learning (ICML) 2024
Graph Out-of-Distribution Detection Goes Neighborhood Shaping
Tianyi Bao, Qitian Wu, Zetian Jiang, Yiting Chen, Jiawei Sun and Junchi Yan
International Conference on Machine Learning (ICML) 2024
Graph Out-of-Distribution Generalization via Causal Intervention
Qitian Wu, Fan Nie, Chenxiao Yang, Tianyi Bao and Junchi Yan
The Web Conference (WWW) 2024 oral presentation
Rethinking Cross-Domain Sequential Recommendation Under Open-World Assumptions
Wujiang Xu, Qitian Wu, Runzhong Wang, Mingming Ha, Qiongxu Ma, Linxun Chen, Bing Han and Junchi Yan
The Web Conference (WWW) 2024
SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations
Qitian Wu, Wentao Zhao, Chenxiao Yang, Hengrui Zhang, Fan Nie, Haitian Jiang, Yatao Bian and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2023
Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift
Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He
Advances in Neural Information Processing Systems (NeurIPS) 2023
GraphGlow: Universal and Generalizable Structure Learning for Graph Neural Networks
Wentao Zhao, Qitian Wu, Chenxiao Yang and Junchi Yan
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2023
MoleRec: Combinatorial Drug Recommendation with Substructure-Aware Molecular Representation Learning
Nianzu Yang, Kaipeng Zeng, Qitian Wu, Junchi Yan
The Web Conference (WWW) 2023
DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf and Junchi Yan
International Conference on Learning Representations (ICLR) 2023 spotlight presentation (avg. ranking among top 0.5%)
Energy-based Out-of-Distribution Detection for Graph Neural Networks
Qitian Wu, Yiting Chen, Chenxiao Yang, and Junchi Yan
International Conference on Learning Representations (ICLR) 2023
Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and Multi-Layer Perceptrons
Chenxiao Yang, Qitian Wu, Jiahua Wang and Junchi Yan
International Conference on Learning Representations (ICLR) 2023
NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
Qitian Wu, Wentao Zhao, Zenan Li, David Wipf and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022 spotlight presentation (less than 5%)
Learning Substructure Invariance for Out-of-Distribution Molecular Representations
Nianzu Yang, Kaipeng Zeng, Qitian Wu, Xiaosong Jia and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022 spotlight presentation (less than 5%)
Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks
Chenxiao Yang, Qitian Wu and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022
Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment
Chenxiao Yang, Qitian Wu, Qingsong Wen, Zhiqiang Zhou, Liang Sun and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022
GraphDE: A Generative Framework for Debiased Learning and Out-of-Distribution Detection on Graphs
Zenan Li, Qitian Wu, Fan Nie and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2022
Variational Inference for Training Graph Neural Networks in Low-Data Regime through Joint Structure-Label Estimation
Danning Lao, Xinyu Yang, Qitian Wu, Junchi Yan
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2022
DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness
Qibing Ren, Yiting Chen, Yichuan Mo, Qitian Wu, Junchi Yan
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2022
Handling Distribution Shifts on Graphs: An Invariance Perspective
Qitian Wu, Hengrui Zhang, Junchi Yan and David Wipf
International Conference on Learning Representations (ICLR) 2022
Trading Hard Negatives and True Negatives: A Debiased Contrastive Collaborative Filtering Approach
Chenxiao Yang, Qitian Wu, Jipeng Jin, Junwei Pan, Xiaofeng Gao, and Guihai Chen
International Joint Conference on Artificial Intelligence (IJCAI) 2022
ScaleGCN: Efficient and Effective Graph Convolution via Channel-Wise Scale Transformation
Tianqi Zhang, Qitian Wu, Junchi Yan, Yunan Zhao and Bing Han
IEEE Transactions on Neural Networks and Learning Systems (TNNLS) 2022
Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach
Qitian Wu, Chenxiao Yang and Junchi Yan
Advances in Neural Information Processing Systems (NeurIPS) 2021
From Canonical Correlation Analysis to Self-supervised Graph Neural Networks
Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf and Philip S. Yu
Advances in Neural Information Processing Systems (NeurIPS) 2021
Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators
Qitian Wu, Han Gao and Hongyuan Zha
Advances in Neural Information Processing Systems (NeurIPS) 2021
Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach
Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Junchi Yan and Hongyuan Zha
International Conference on Machine Learning (ICML) 2021
Seq2Bubbles: Region-Based Embedding Learning for User Behaviors in Sequential Recommenders
Qitian Wu, Chenxiao Yang, Shuodian Yu, Xiaofeng Gao and Guihai Chen
ACM International Conference on Information & Knowledge Management (CIKM) 2021 spotlight presentation (only 1/3 among accepted papers)
Learning High-Order Graph Convolutional Networks via Adaptive Layerwise Aggregation Combination
Tianqi Zhang, Qitian Wu and Junchi Yan
IEEE Transactions on Neural Networks and Learning Systems (TNNLS) 2021
SentiMem: Attentive Memory Networks for Sentiment Classification in User Reviews
Xiaosong Jia, Qitian Wu, Xiaofeng Gao and Guihai Chen
International Conference on Database Systems for Advanced Applications (DASFAA) 2020
Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling
Qitian Wu, Zixuan Zhang, Xiaofeng Gao, Junchi Yan and Guihai Chen
Advances in Neural Information Processing Systems (NeurIPS) 2019
Feature Evolution Based Multi-Task Learning for Collaborative Filtering with Social Trust
Qitian Wu, Lei Jiang, Xiaofeng Gao, Xiaochun Yang and Guihai Chen
International Joint Conference on Artificial Intelligence (IJCAI) 2019
Dual Sequential Prediction Models Linking Sequential Recommendation and Information Dissemination
Qitian Wu, Yirui Gao, Xiaofeng Gao, Paul Weng and Guihai Chen
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2019
Dual Graph Attention Networks for Deep Latent Representation of Multifaceted Social Effects in Recommender Systems
Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Peng He, Paul Weng, Han Gao and Guihai Chen
The Web Conference (WWW) 2019 long oral presentation (only 1/2 among all accepted long papers)
EPAB: Early Pattern Aware Bayesian Model for Social Content Popularity Prediction
Qitian Wu, Chaoqi Yang, Xiaofeng Gao, Peng He and Guihai Chen
IEEE International Conference on Data Mining (ICDM) 2018
Adversarial Training Model Unifying Feature Driven and Point Process Perspectives for Event Popularity Prediction
Qitian Wu, Chaoqi Yang, Hengrui Zhang, Xiaofeng Gao, Paul Weng and Guihai Chen
ACM International Conference on Information & Knowledge Management (CIKM) 2018
EPOC: A Survival Perspective Early Pattern Detection Model for Outbreak Cascades
Chaoqi Yang, Qitian Wu, Xiaofeng Gao and Guihai Chen
International Conference on Database and Expert Systems Applications (DEXA) 2018
Academic Star in SJTU (the highest academic award for PhD students across all research areas), 2023
National Scholarship (only 0.2% for PhD students in China), 2022, 2023
Baidu PhD Fellowship (only 10 recipients worldwide), 2021 [link]
Microsoft Research PhD Fellowship (only 11 recipients in Asia), 2021 [link]
Global Top 100 Rising Star in Artificial Intelligence, 2021 [link]
Yuanqing Yang Scholarship (only 3 master students in the CS department), 2019
Lixin Tang Scholarship (only 60 students across all academic levels in the university), 2017, 2018
National Scholarship (only 1% for undergraduate students), 2016, 2017
The 1st-Class Academic Excellence Scholarship (top 1 in the department), 2016, 2017
Outstanding Winner, INFORMS Awards, Mathematical Contest in Modeling, Data Insights Problem (top 3 out of 4748 teams, the INFORMS Awards selects one team among all the participants), 2018 [link]
National Second Award, China Undergraduate Mathematical Contest in Modeling, 2016
First Award, Physics Competition of Chinese College Students, 2015
Outstanding Graduate of Shanghai (only 5%), 2018
Outstanding Thesis of Undergraduates (only 20%), 2018
Learning with Non-IID Data from Physics Principles, Oct. 2023, ByteDance AI Lab [slides]
Transformers Induced by Energy-Constrained Diffusion, Mar. 2023, Amazon AI Lab
Learning on Graphs Under Open-world Assumptions, Mar. 2022, Bosch AI Center [slides]
Recent Advances in Graph Machine Learning, Nov. 2022, Huawei Noah's Ark Lab
Scalable Graph Transformers with Linear Complexity, Nov. 2022, AI Times [video]
Out-of-Distribution Generalization and Extrapolation on Graphs, Oct. 2022, Learning on Graphs Seminar [video] [slides]
Out-of-Distribution Generalization and Extrapolation on Graphs, May 2022, Alipay, Alibaba
Program Committee/Reviewer
ICML: 2021, 2022, 2023
NeurIPS: 2021, 2022, 2023
ICLR: 2022, 2023, 2024
SIGKDD: 2023
WWW: 2023
AAAI: 2021, 2022, 2023
IJCAI: 2021, 2022, 2023
CVPR: 2021, 2022, 2023
ICCV: 2021
TKDE
TNNLS