John Hopcroft Center for Computer Science, School of Electronic Information and Electrical Engineering
Shanghai Jiao Tong University
Email: zqs1022 [AT] sjtu.edu.cn [Zhihu]
Admissions: Prospective Ph.D., Master's, and undergraduate students: I am looking for highly motivated students to work with me on the interpretability of neural networks, unsupervised and weakly-supervised learning, graph mining, and other frontier topics in machine learning and computer vision. Please read "写给学生" (A Letter to Students) and send me your CV and transcripts.
Biography (Curriculum Vitae) I am currently an associate professor at Shanghai Jiao Tong University. Before that, I received the B.S. degree in machine intelligence from Peking University, China, in 2009, and the M.Eng. and Ph.D. degrees from the University of Tokyo in 2011 and 2014, respectively, under the supervision of Prof. Ryosuke Shibasaki. In 2014, I became a postdoctoral associate at the University of California, Los Angeles, under the supervision of Prof. Song-Chun Zhu.
I now lead a research group on explainable AI.
Research Interests My research mainly focuses on machine learning and computer vision, with a special interest in explainable AI. Related research topics include explainable AI theories, interpretable neural networks, symbolic/semantic explanations of neural networks, and explaining the representation power (e.g., the adversarial robustness and generalization power) of neural networks.
Tutorials & Invited Talks on Explainable AI
World Artificial Intelligence Conference (WAIC), Trustworthy AI Forum, Panel Discussion [Website]
VALSE 2021 Tutorial on Interpretable Machine Learning [Website]
IJCAI 2021 Tutorial on Theoretically Unifying Conceptual Explanation and Generalization of DNNs [Website]
IJCAI 2020 Tutorial on Trustworthiness of Interpretable Machine Learning [Website] [Video]
PRCV 2020 Tutorial on Robust and Explainable Artificial Intelligence [Website]
ICML 2020 Online Panel Discussion: “Baidu AutoDL: Automated and Interpretable Deep Learning”
A Few Selected Studies
1. Interpretable Convolutional Neural Networks. We add additional losses to force each convolutional filter in our interpretable CNN to represent a specific object part. In comparison, a filter in an ordinary CNN usually represents a mixture of parts and textures. We learn the interpretable CNN without any part annotations for supervision. Clear semantic meanings of middle-layer filters are of significant value in real applications.
Activation regions of two convolutional filters in the interpretable CNN across different frames.
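The idea of penalizing mixed-part filters can be illustrated with a toy sketch. The snippet below is not the actual filter loss from our paper; it is a simplified, hypothetical penalty that measures how spatially concentrated a filter's activation map is (a filter firing on a single object part gives a peaked map and a low score, while a filter mixing parts and textures gives a diffuse map and a high score).

```python
import numpy as np

def filter_concentration_loss(feature_map):
    """Toy spatial-concentration penalty for one convolutional filter.

    Treats the (H, W) activation map as an unnormalized distribution over
    spatial locations and returns its entropy. A part-like filter peaks at
    one location (low entropy); a filter mixing parts and textures spreads
    its activation (high entropy). This is an illustrative stand-in, not
    the loss used in the interpretable-CNN paper.
    """
    flat = feature_map.reshape(-1).astype(float)
    # Softmax over spatial positions, numerically stabilized.
    p = np.exp(flat - flat.max())
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# A peaked map (one strong part location) scores lower than a diffuse one.
peaked = np.zeros((7, 7)); peaked[3, 3] = 10.0
diffuse = np.ones((7, 7))
assert filter_concentration_loss(peaked) < filter_concentration_loss(diffuse)
```

In training, such a penalty would be added to the task loss so that gradient descent pushes each filter toward a single, consistent part.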
2. Explanatory Graphs for CNNs. We transform traditional CNN representations into interpretable graph representations, i.e., explanatory graphs, in an unsupervised manner. Given a pre-trained CNN, we disentangle the feature representations of each convolutional filter into a number of object parts. We use graph nodes to represent the disentangled part components and use graph edges to encode the spatial and co-activation relationships between nodes of different conv-layers. In this way, the explanatory graph encodes the potential knowledge hierarchy hidden inside the middle layers of the CNN.
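The graph construction can be sketched in a much-simplified form. The code below is a hypothetical illustration, not our published algorithm: nodes stand for disentangled part components in two adjacent conv-layers, and an edge is added when two components' activation strengths co-vary strongly across images, mimicking the co-activation edges of an explanatory graph.

```python
import numpy as np

def build_explanatory_graph(acts_lower, acts_upper, threshold=0.8):
    """Toy sketch of an explanatory graph between two conv-layers.

    acts_lower / acts_upper: (num_components, num_images) arrays of scalar
    part-activation strengths, one row per disentangled part component.
    Nodes are the part components; an edge links a lower-layer node to an
    upper-layer node when their activations correlate above `threshold`
    across images. Spatial relationships are omitted for brevity.
    """
    graph = {"nodes_lower": list(range(len(acts_lower))),
             "nodes_upper": list(range(len(acts_upper))),
             "edges": []}
    for i, a in enumerate(acts_lower):
        for j, b in enumerate(acts_upper):
            r = np.corrcoef(a, b)[0, 1]  # Pearson correlation across images
            if r > threshold:
                graph["edges"].append((i, j, round(float(r), 3)))
    return graph

# Lower-layer component 0 co-activates perfectly with the upper component;
# component 1 is anti-correlated and gets no edge.
g = build_explanatory_graph(np.array([[1., 2., 3., 4.], [4., 3., 2., 1.]]),
                            np.array([[2., 4., 6., 8.]]))
```

In the full method, such a graph is built layer by layer over the whole network, so paths through the graph trace how part patterns compose into larger parts.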
Workshops
ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, 2021 (https://icml2021-xai.github.io/)
CVPR Workshop on Explainable AI, 2019
AAAI Workshop on Network Interpretability for Deep Learning, 2019 (http://networkinterpretability.org)
CVPR Workshop on Language and Vision, 2018 (http://languageandvision.com/)
CVPR Workshop on Language and Vision, 2017 (http://languageandvision.com/2017.html)
Journal Reviewer: Nature Communications, IEEE Transactions on Pattern Analysis and Machine Intelligence, International Journal of Computer Vision, Journal of Machine Learning Research, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Multimedia, IEEE Transactions on Signal Processing, IEEE Signal Processing Letters, IEEE Robotics and Automation Letters, Neurocomputing