Search Results

Documents authored by Yarlott, W. Victor H.


Comparing Extant Story Classifiers: Results & New Directions

Authors: Joshua D. Eisenberg, W. Victor H. Yarlott, and Mark A. Finlayson

Published in: OASIcs, Volume 53, 7th Workshop on Computational Models of Narrative (CMN 2016)


Abstract
Having access to a large set of stories is a necessary first step for robust and wide-ranging computational narrative modeling; happily, language data, including stories, are increasingly available in electronic form. Unhappily, the process of automatically separating stories from other forms of written discourse is not straightforward, and has resulted in a data collection bottleneck. Researchers have therefore sought to develop reliable, robust automatic algorithms for identifying story text mixed with other, non-story text. In this paper we report on the reimplementation and experimental comparison of two approaches to this task: Gordon's unigram classifier and Corman's semantic triplet classifier. We cross-analyze their performance on both Gordon's and Corman's corpora, discuss similarities, differences, and gaps in the performance of these classifiers, and point the way toward improving both approaches.

Cite as

Joshua D. Eisenberg, W. Victor H. Yarlott, and Mark A. Finlayson. Comparing Extant Story Classifiers: Results & New Directions. In 7th Workshop on Computational Models of Narrative (CMN 2016). Open Access Series in Informatics (OASIcs), Volume 53, pp. 6:1-6:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{eisenberg_et_al:OASIcs.CMN.2016.6,
  author =	{Eisenberg, Joshua D. and Yarlott, W. Victor H. and Finlayson, Mark A.},
  title =	{{Comparing Extant Story Classifiers: Results \& New Directions}},
  booktitle =	{7th Workshop on Computational Models of Narrative (CMN 2016)},
  pages =	{6:1--6:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-020-0},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{53},
  editor =	{Miller, Ben and Lieto, Antonio and Ronfard, R\'{e}mi and Ware, Stephen G. and Finlayson, Mark A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.CMN.2016.6},
  URN =		{urn:nbn:de:0030-drops-67079},
  doi =		{10.4230/OASIcs.CMN.2016.6},
  annote =	{Keywords: Story Detection, Machine Learning, Natural Language Processing, Perceptron Learning}
}
Learning a Better Motif Index: Toward Automated Motif Extraction

Authors: W. Victor H. Yarlott and Mark A. Finlayson

Published in: OASIcs, Volume 53, 7th Workshop on Computational Models of Narrative (CMN 2016)


Abstract
Motifs are distinctive recurring elements found in folklore, which folklorists use to categorize and find tales across cultures and to track the genetic relationships of tales over time. Motifs have significance beyond folklore as communicative devices found in news, literature, press releases, and propaganda that concisely imply a large constellation of culturally relevant information. Until now, folklorists have only extracted motifs from narratives manually, and the conceptual structure of motifs has not been formally laid out. In this short paper we propose that it is possible to automate the extraction of both existing and new motifs from narratives using supervised learning techniques, and thereby possible to learn a computational model of how folklorists determine motifs. Automatic extraction would enable the construction of a truly comprehensive motif index, which does not yet exist, as well as the automatic detection of motifs in cultural materials, opening up a new world of narrative information for analysis by anyone interested in narrative and culture. We outline an experimental design, and report on our efforts to produce a structured form of Thompson's motif index, as well as a development annotation of motifs in a small collection of Russian folklore. We propose several initial computational, supervised approaches, and describe several possible metrics of success. We describe lessons learned and difficulties encountered so far, and outline our plan going forward.

Cite as

W. Victor H. Yarlott and Mark A. Finlayson. Learning a Better Motif Index: Toward Automated Motif Extraction. In 7th Workshop on Computational Models of Narrative (CMN 2016). Open Access Series in Informatics (OASIcs), Volume 53, pp. 7:1-7:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{yarlott_et_al:OASIcs.CMN.2016.7,
  author =	{Yarlott, W. Victor H. and Finlayson, Mark A.},
  title =	{{Learning a Better Motif Index: Toward Automated Motif Extraction}},
  booktitle =	{7th Workshop on Computational Models of Narrative (CMN 2016)},
  pages =	{7:1--7:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-020-0},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{53},
  editor =	{Miller, Ben and Lieto, Antonio and Ronfard, R\'{e}mi and Ware, Stephen G. and Finlayson, Mark A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.CMN.2016.7},
  URN =		{urn:nbn:de:0030-drops-67088},
  doi =		{10.4230/OASIcs.CMN.2016.7},
  annote =	{Keywords: Text analysis, automated feature extraction, folklore, narrative, Russian folktales}
}
ProppML: A Complete Annotation Scheme for Proppian Morphologies

Authors: W. Victor H. Yarlott and Mark A. Finlayson

Published in: OASIcs, Volume 53, 7th Workshop on Computational Models of Narrative (CMN 2016)


Abstract
We give a preliminary description of ProppML, an annotation scheme designed to capture all the components of a Proppian-style morphological analysis of narratives. This work represents the first fully complete annotation scheme for Proppian morphologies, going beyond previous annotation schemes such as PftML, ProppOnto, Bod et al., and our own prior work. Using ProppML we have annotated Propp's morphology on fifteen tales (18,862 words) drawn from his original corpus of Russian folktales. This is a significantly larger set of data than annotated in previous studies. This pilot corpus was constructed via double annotation by two highly trained annotators, whose annotations were then combined after discussion with a third highly trained adjudicator, resulting in gold standard data which is appropriate for training machine learning algorithms. Agreement measures calculated between both annotators show very good agreement (F_1>0.75, kappa>0.9 for functions; F_1>0.6 for moves; and F_1>0.8, kappa>0.6 for dramatis personae). This is the first robust demonstration of reliable annotation of Propp's system.

Cite as

W. Victor H. Yarlott and Mark A. Finlayson. ProppML: A Complete Annotation Scheme for Proppian Morphologies. In 7th Workshop on Computational Models of Narrative (CMN 2016). Open Access Series in Informatics (OASIcs), Volume 53, pp. 8:1-8:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{yarlott_et_al:OASIcs.CMN.2016.8,
  author =	{Yarlott, W. Victor H. and Finlayson, Mark A.},
  title =	{{ProppML: A Complete Annotation Scheme for Proppian Morphologies}},
  booktitle =	{7th Workshop on Computational Models of Narrative (CMN 2016)},
  pages =	{8:1--8:19},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-020-0},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{53},
  editor =	{Miller, Ben and Lieto, Antonio and Ronfard, R\'{e}mi and Ware, Stephen G. and Finlayson, Mark A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.CMN.2016.8},
  URN =		{urn:nbn:de:0030-drops-67094},
  doi =		{10.4230/OASIcs.CMN.2016.8},
  annote =	{Keywords: Narrative structure, Computational folkloristics, Russian folktales}
}