Learning Concepts Described By Weight Aggregation Logic

Authors: Steffen van Bergerem, Nicole Schweikardt

  • 18 pages

Author Details

Steffen van Bergerem
  • RWTH Aachen University, Germany
Nicole Schweikardt
  • Humboldt-Universität zu Berlin, Germany


We thank Martin Grohe and Sandra Kiefer for helpful discussions on the subject.

Cite As

Steffen van Bergerem and Nicole Schweikardt. Learning Concepts Described By Weight Aggregation Logic. In 29th EACSL Annual Conference on Computer Science Logic (CSL 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 183, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


We consider weighted structures, which extend ordinary relational structures by assigning weights, i.e. elements of a particular group or ring, to the tuples present in the structure. We introduce an extension of first-order logic that allows one to aggregate the weights of tuples, compare such aggregates, and use them to build more complex formulas. We establish locality properties of fragments of this logic, including Feferman-Vaught decompositions and a Gaifman normal form for a fragment called FOW₁, as well as a localisation theorem for a larger fragment called FOWA₁. This fragment can express concepts from various machine learning scenarios. Using the locality properties, we show that concepts definable in FOWA₁ over a weighted background structure of at most polylogarithmic degree are agnostically PAC-learnable in polylogarithmic time after pseudo-linear time preprocessing.
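As a loose illustration of the setting described above (not the paper's formalism), a weighted structure can be sketched as a relational structure whose tuples carry weights from a group such as (ℤ, +), and an FOWA₁-style concept as a formula that aggregates weights and compares the aggregate to a threshold. All names and the threshold below are hypothetical, chosen only for the sketch.

```python
# Hypothetical sketch: a directed graph as a weighted structure, where each
# edge (a 2-tuple) carries an integer weight, i.e. an element of (Z, +).
class WeightedStructure:
    def __init__(self):
        self.edges = {}  # maps (u, v) -> weight in Z

    def add_edge(self, u, v, weight):
        self.edges[(u, v)] = weight

    def aggregate_out(self, u):
        """Aggregate (sum in the group (Z, +)) the weights of edges leaving u."""
        return sum(w for (a, _), w in self.edges.items() if a == u)


# A concept in the spirit of weight aggregation logic: the unary concept
# "the aggregated weight of x's outgoing edges exceeds a threshold t".
def concept(structure, x, t=5):
    return structure.aggregate_out(x) > t


g = WeightedStructure()
g.add_edge("a", "b", 3)
g.add_edge("a", "c", 4)
g.add_edge("b", "c", 1)
print(concept(g, "a"))  # aggregate 3 + 4 = 7 > 5 -> True
print(concept(g, "b"))  # aggregate 1 is not > 5 -> False
```

In the learning scenario of the paper, such a concept would be the target hypothesis, and the learner receives labelled elements of the background structure rather than the formula itself.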

Subject Classification

ACM Subject Classification
  • Theory of computation → Logic
  • Theory of computation → Complexity theory and logic
  • Computing methodologies → Logical and relational learning
  • Computing methodologies → Supervised learning
Keywords and Phrases
  • first-order definable concept learning
  • agnostic probably approximately correct learning
  • classification problems
  • locality
  • Feferman-Vaught decomposition
  • Gaifman normal form
  • first-order logic with counting
  • weight aggregation logic




  24. Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems 4, [NIPS Conference, Denver, Colorado, USA, December 2-5, 1991], pages 831-838, 1991. URL: http://papers.nips.cc/paper/506-principles-of-risk-minimization-for-learning-theory.