From Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP (Dagstuhl Seminar 17042)

Authors Phil Blunsom, Kyunghyun Cho, Chris Dyer, Hinrich Schütze and all authors of the abstracts in this report

File

DagRep.7.1.129.pdf
  • Filesize: 1.17 MB
  • 29 pages

Cite As

Phil Blunsom, Kyunghyun Cho, Chris Dyer, and Hinrich Schütze. From Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP (Dagstuhl Seminar 17042). In Dagstuhl Reports, Volume 7, Issue 1, pp. 129-157, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)
https://doi.org/10.4230/DagRep.7.1.129

Abstract

This report documents the program and the outcomes of Dagstuhl Seminar 17042 "From Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP". The seminar brought together researchers from different fields, including natural language processing, computational linguistics, deep learning and general machine learning. 31 participants from 22 academic and industrial institutions discussed the advantages and challenges of using characters, i.e., "raw text", as input for deep learning models instead of language-specific tokens. Eight talks provided overviews of different topics, approaches and challenges in current natural language processing research. In five working groups, the participants discussed current natural language processing and understanding topics in the context of character-based modeling, namely, morphology, machine translation, representation learning, end-to-end systems and dialogue. In most of the discussions, the need for a more detailed model analysis was pointed out. Especially for character-based input, it is important to analyze what a deep learning model is able to learn about language: about tokens, morphology, or syntax in general. For an efficient and effective understanding of language, it might furthermore be beneficial to share representations learned from multiple objectives, enabling models to focus on their specific understanding task instead of needing to learn syntactic regularities of language first. Therefore, benefits and challenges of transfer learning were an important topic of the working groups as well as of the panel discussion and the final plenary discussion.
Keywords
  • Natural Language Understanding
  • Artificial Intelligence
  • Deep Learning
  • Natural Language Processing
  • Representation Learning
