Time-Aware Robustness of Temporal Graph Neural Networks for Link Prediction (Extended Abstract)

Authors: Marco Sälzer, Silvia Beddar-Wiesing



Author Details

Marco Sälzer
  • School of Electrical Engineering and Computer Science, University of Kassel, Germany
  • marcosaelzer.github.io
Silvia Beddar-Wiesing
  • School of Electrical Engineering and Computer Science, University of Kassel, Germany

Cite As

Marco Sälzer and Silvia Beddar-Wiesing. Time-Aware Robustness of Temporal Graph Neural Networks for Link Prediction (Extended Abstract). In 30th International Symposium on Temporal Representation and Reasoning (TIME 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 278, pp. 19:1-19:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/LIPIcs.TIME.2023.19

Abstract

Motivated by recent work on time-aware attacks on Temporal Graph Neural Networks (TGNN) used for link prediction, we present a first notion of a time-aware robustness property for TGNN, a recently popular framework for computing functions over continuous- or discrete-time graphs. Furthermore, we discuss promising verification approaches for the presented or similar safety properties, as well as possible next steps in this direction of research.
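To make the flavour of such a property concrete: a natural reading of time-aware robustness is that a link predictor's answer should be stable under small, bounded perturbations of the event timestamps. The following sketch is purely illustrative and not taken from the paper; the names (`predict_link`, `time_shift_neighbourhood`, `is_time_aware_robust`) and the specific perturbation model (each timestamp may shift by at most `delta`) are assumptions made for this example.

```python
from itertools import product

# A temporal graph is modelled here as a list of timestamped edges (u, v, t).
# This encoding and the brute-force check below are illustrative assumptions,
# not the formalisation used in the paper.

def time_shift_neighbourhood(events, delta):
    """Yield every variant of `events` in which each integer timestamp
    is shifted by at most `delta` (a simple time-aware perturbation set)."""
    per_event = [
        [(u, v, t + d) for d in range(-delta, delta + 1)]
        for (u, v, t) in events
    ]
    for variant in product(*per_event):
        yield list(variant)

def is_time_aware_robust(predict_link, events, query, delta):
    """True iff `predict_link` (a hypothetical model interface taking a
    temporal graph and a queried node pair) returns the same answer for
    `query` on every delta-bounded timestamp perturbation of `events`."""
    reference = predict_link(events, query)
    return all(
        predict_link(variant, query) == reference
        for variant in time_shift_neighbourhood(events, delta)
    )
```

A predictor that ignores timestamps is trivially robust in this sense, while one whose answer hinges on an exact event time is not; real TGNN verification would of course replace this exhaustive enumeration with a symbolic or certified method, as the abstract's discussion of verification approaches suggests.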

Subject Classification

ACM Subject Classification
  • Computing methodologies → Neural networks
  • Security and privacy → Logic and verification
Keywords
  • graph neural networks
  • temporal
  • verification

References

  1. Artale et al. First-order temporal logic on finite traces: Semantic properties, decidable fragments, and applications. CoRR, abs/2202.00610, 2022. URL: https://arxiv.org/abs/2202.00610.
  2. Bojchevski et al. Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1003-1013. PMLR, July 2020. URL: https://proceedings.mlr.press/v119/bojchevski20a.html.
  3. Chen et al. Time-aware gradient attack on dynamic network link prediction. IEEE Trans. Knowl. Data Eng., 35(2):2091-2102, 2023. URL: https://doi.org/10.1109/TKDE.2021.3110580.
  4. Kazemi et al. Representation learning for dynamic graphs: A survey. J. Mach. Learn. Res., 21:70:1-70:73, 2020. URL: http://jmlr.org/papers/v21/19-447.html.
  5. Longa et al. Graph Neural Networks for Temporal Graphs: State of the Art, Open Challenges, and Opportunities. CoRR, abs/2302.01018, 2023. URL: https://arxiv.org/abs/2302.01018.
  6. Skarding et al. Foundations and Modeling of Dynamic Networks using Dynamic Graph Neural Networks: A Survey. IEEE Access, 9:79143-79168, 2021.
  7. Wu et al. A comprehensive survey on graph neural networks. IEEE Trans. Neural Networks Learn. Syst., 32(1):4-24, 2021. URL: https://doi.org/10.1109/TNNLS.2020.2978386.
  8. Giuseppe De Giacomo and Moshe Y. Vardi. Linear temporal logic and linear dynamic logic on finite traces. In IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, China, August 3-9, 2013, pages 854-860. IJCAI/AAAI, 2013. URL: http://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6997.
  9. Marco Sälzer and Martin Lange. Fundamental limits in formal verification of message-passing neural networks. In The Eleventh International Conference on Learning Representations, 2023. URL: https://openreview.net/forum?id=WlbG820mRH-.
  10. Daniel Zügner and Stephan Günnemann. Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pages 246-256. ACM, 2019. URL: https://doi.org/10.1145/3292500.3330905.