Using Embeddings to Improve Named Entity Recognition Classification with Graphs

Authors: Gabriel Silva, Mário Rodrigues, António Teixeira, Marlene Amorim




File

OASIcs.SLATE.2024.1.pdf
  • Filesize: 0.82 MB
  • 11 pages


Author Details

Gabriel Silva
  • IEETA, DETI, University of Aveiro, Portugal
  • LASI – Intelligent System Associate Laboratory, Portugal
Mário Rodrigues
  • IEETA, ESTGA, University of Aveiro, Portugal
  • LASI – Intelligent System Associate Laboratory, Portugal
António Teixeira
  • IEETA, DETI, University of Aveiro, Portugal
  • LASI – Intelligent System Associate Laboratory, Portugal
Marlene Amorim
  • GOVCOPP, DEGEIT, University of Aveiro, Portugal

Cite As

Gabriel Silva, Mário Rodrigues, António Teixeira, and Marlene Amorim. Using Embeddings to Improve Named Entity Recognition Classification with Graphs. In 13th Symposium on Languages, Applications and Technologies (SLATE 2024). Open Access Series in Informatics (OASIcs), Volume 120, pp. 1:1-1:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/OASIcs.SLATE.2024.1

Abstract

Richer information has the potential to improve the performance of NLP (Natural Language Processing) tasks such as Named Entity Recognition. A linear sequence of words can be enriched with sentence structure as well as syntactic structure. However, traditional NLP methods do not contemplate this kind of information. With Knowledge Graphs, all this information can be represented and made use of by Graph ML (Machine Learning) techniques. Previous experiments using only graphs with their syntactic structure as input to current state-of-the-art Graph ML models failed to demonstrate the potential of the technology. In this paper, the use of word embeddings is therefore explored as an additional enrichment of the graph and, in consequence, of the input to the classification models. These embeddings add a layer of context that was previously missing when using only syntactic information. The proposed method was assessed on the CoNLL dataset, and the results showed noticeable improvements in performance when adding embeddings: the best accuracy with embeddings reached 94.73%, compared to 88.58% without them, while metrics such as Macro-F1, Precision and Recall improved by over 20%. We also test these models with different numbers of classes to assess whether their quality degrades. Because inductive learning methods (such as GraphSAGE) are used, the resulting models can be applied in real-world scenarios: there is no need to re-train on the whole graph to predict on new data points, as is the case with traditional Graph ML methods (for example, Graph Convolutional Networks).
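The core idea the abstract describes, attaching word embeddings as node features and aggregating them over syntactic neighbours in an inductive, GraphSAGE-like fashion, can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the toy graph, dimensions, and weight matrices are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sentence graph: 4 word nodes joined by syntactic dependency edges.
# Each node carries a (here randomly initialised) word embedding as its feature.
emb_dim = 8
node_features = rng.normal(size=(4, emb_dim))   # one embedding per word
edges = [(0, 1), (1, 2), (1, 3)]                # undirected dependency arcs

# Build a neighbour list from the edge set.
neighbours = {i: [] for i in range(4)}
for u, v in edges:
    neighbours[u].append(v)
    neighbours[v].append(u)

def sage_layer(feats, neighbours, w_self, w_neigh):
    """One GraphSAGE-style layer: mean-aggregate neighbour features,
    combine with the node's own features, then apply a ReLU."""
    out = []
    for i in range(feats.shape[0]):
        if neighbours[i]:
            agg = feats[neighbours[i]].mean(axis=0)
        else:
            agg = np.zeros_like(feats[i])
        h = feats[i] @ w_self + agg @ w_neigh
        out.append(np.maximum(h, 0.0))           # ReLU
    return np.stack(out)

hidden_dim = 4
w_self = rng.normal(size=(emb_dim, hidden_dim))
w_neigh = rng.normal(size=(emb_dim, hidden_dim))

h = sage_layer(node_features, neighbours, w_self, w_neigh)
print(h.shape)   # one hidden vector per word node, ready for an NER classifier
```

Because the layer uses only local neighbourhoods and shared weights, the same trained parameters apply to sentence graphs never seen during training, which is the inductive property the abstract relies on for real-world use.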

ACM Subject Classification
  • Information systems → Document representation
  • Information systems → Ontologies
Keywords
  • Knowledge graphs
  • Enriched data
  • Natural language processing
  • Named Entity Recognition


References

  1. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
  2. Manuel Carbonell, Pau Riba, Mauricio Villegas, Alicia Fornés, and Josep Lladós. Named entity recognition and relation extraction with graph neural networks in semi structured documents. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 9622-9627, 2021. URL: https://doi.org/10.1109/ICPR48806.2021.9412669.
  3. Alberto Cetoli, Stefano Bragaglia, Andrew O'Harney, and Marc Sloan. Graph convolutional networks for named entity recognition. In Jan Hajič, editor, Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 37-45, Prague, Czech Republic, 2017. URL: https://aclanthology.org/W17-7607.
  4. Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, and Jiliang Tang. Exploring the potential of large language models (LLMs) in learning on graphs. SIGKDD Explor., 25(2):42-61, March 2024. URL: https://doi.org/10.1145/3655103.3655110.
  5. Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4585-4592, Reykjavik, Iceland, May 2014. European Language Resources Association (ELRA). URL: http://www.lrec-conf.org/proceedings/lrec2014/pdf/1062_Paper.pdf.
  6. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics, 2019. URL: https://api.semanticscholar.org/CorpusID:52967399.
  7. Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
  8. William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2018. URL: https://arxiv.org/abs/1706.02216.
  9. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017. URL: https://arxiv.org/abs/1412.6980.
  10. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
  11. Guillaume Lachaud, Patricia Conde-Cespedes, and Maria Trocan. Comparison between inductive and transductive learning in a real citation network using graph neural networks. In Proceedings of the 2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '22, pages 534-540. IEEE Press, 2023. URL: https://doi.org/10.1109/ASONAM55673.2022.10068589.
  12. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360, 2016. URL: https://arxiv.org/abs/1603.01360.
  13. Jinhyuk Lee, Zhuyun Dai, Xiaoqi Ren, Blair Chen, Daniel Cer, Jeremy R. Cole, Kai Hui, Michael Boratko, Rajvi Kapadia, Wen Ding, Yi Luan, Sai Meher Karthik Duddu, Gustavo Hernandez Abrego, Weiqiang Shi, Nithi Gupta, Aditya Kusupati, Prateek Jain, Siddhartha Reddy Jonnalagadda, Ming-Wei Chang, and Iftekhar Naim. Gecko: Versatile text embeddings distilled from large language models, 2024. https://arxiv.org/abs/2403.20327, URL: https://doi.org/10.48550/arXiv.2403.20327.
  14. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online, July 2020. Association for Computational Linguistics. URL: https://doi.org/10.18653/v1/2020.acl-main.703.
  15. Monica Madan, Ashima Rani, and Neha Bhateja. Applications of named entity recognition using graph convolution network. SN Computer Science, 4(3):266, 2023. URL: https://doi.org/10.1007/S42979-023-01739-8.
  16. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations, 2013. URL: https://api.semanticscholar.org/CorpusID:5959482.
  17. Mark A. Musen. The protégé project: a look back and a look forward. AI Matters, 1(4):4-12, 2015. URL: https://doi.org/10.1145/2757001.2757003.
  18. OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
  19. Shon Otmazgin, Arie Cattan, and Yoav Goldberg. F-coref: Fast, accurate and easy to use coreference resolution, 2022. https://arxiv.org/abs/2209.04280, URL: https://doi.org/10.48550/arXiv.2209.04280.
  20. Gabriel Silva, Mário Rodrigues, António Teixeira, and Marlene Amorim. A Framework for Fostering Easier Access to Enriched Textual Information. In Alberto Simões, Mario Marcelo Berón, and Filipe Portela, editors, 12th Symposium on Languages, Applications and Technologies (SLATE 2023), volume 113 of Open Access Series in Informatics (OASIcs), pages 2:1-2:14, Dagstuhl, Germany, 2023. Schloss Dagstuhl - Leibniz-Zentrum für Informatik. URL: https://doi.org/10.4230/OASIcs.SLATE.2023.2.
  21. Gabriel Silva, Mário Rodrigues, António Teixeira, and Marlene Amorim. First assessment of graph machine learning approaches to Portuguese named entity recognition. In Pablo Gamallo, Daniela Claro, António Teixeira, Livy Real, Marcos Garcia, Hugo Gonçalo Oliveira, and Raquel Amaro, editors, Proceedings of the 16th International Conference on Computational Processing of Portuguese, pages 563-567, Santiago de Compostela, Galicia/Spain, March 2024. Association for Computational Linguistics. URL: https://aclanthology.org/2024.propor-1.61.
  22. Erik F. Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147, 2003. URL: https://aclanthology.org/W03-0419.
  23. Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings, 2016. URL: https://arxiv.org/abs/1603.08861.