OASIcs, Volume 21

1st Symposium on Languages, Applications and Technologies



Event

SLATE 2012, June 21-22, 2012, Braga, Portugal

Editors

Alberto Simões
Ricardo Queirós
Daniela da Cruz

Publication Details

  • published at: 2012-06-21
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-40-8
  • DBLP: db/conf/slate/slate2012

Documents

Document
Complete Volume
OASIcs, Volume 21, SLATE'12, Complete Volume

Authors: Alberto Simões, Ricardo Queirós, and Daniela da Cruz


Abstract
OASIcs, Volume 21, SLATE'12, Complete Volume

Cite as

1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@Proceedings{simoes_et_al:OASIcs.SLATE.2012,
  title =	{{OASIcs, Volume 21, SLATE'12, Complete Volume}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012},
  URN =		{urn:nbn:de:0030-drops-35839},
  doi =		{10.4230/OASIcs.SLATE.2012},
  annote =	{Keywords: Interoperability, Programming Languages, Natural Language Processing}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Committees, List of Authors

Authors: Alberto Simões, Ricardo Queirós, and Daniela da Cruz


Abstract
Frontmatter, Table of Contents, Preface, Committees, List of Authors

Cite as

1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. i-xvii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{simoes_et_al:OASIcs.SLATE.2012.i,
  author =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  title =	{{Frontmatter, Table of Contents, Preface, Committees, List of Authors}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{i--xvii},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.i},
  URN =		{urn:nbn:de:0030-drops-35085},
  doi =		{10.4230/OASIcs.SLATE.2012.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Committees, List of Authors}
}
Document
Keynote
The New Generation of Algorithmic Debuggers (Keynote)

Authors: Josep Silva Galiana


Abstract
Algorithmic debugging is a debugging technique that has been extended to practically all programming paradigms. Roughly speaking, the technique constructs an internal representation of all (sub)computations performed during the execution of a buggy program and then asks the programmer about the correctness of those computations. The programmer's answers guide the search for the bug until it is isolated by discarding the correct parts of the program. After twenty years of research in algorithmic debugging, many different techniques have appeared to improve the original proposal. Recent advances in the internal architecture of algorithmic debuggers tackle the problem of scalability, with great improvements in performance thanks to static transformations of the internal data structures used. The talk will present a detailed comparison of the latest algorithmic debugging techniques, analyzing their differences, their costs, and how they can be integrated into a real algorithmic debugger.
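
A minimal Python sketch of the classic top-down algorithmic-debugging loop may help fix the idea (illustrative only; the node values and intended results below are made up, and this is not the debugger architecture discussed in the talk): the execution is kept as a tree of (sub)computations, an oracle stands in for the programmer's answers, and correct subtrees are discarded until a node that is wrong while all of its children are right is isolated.

class Node:
    def __init__(self, call, result, children=None):
        self.call = call                 # which (sub)computation this node represents
        self.result = result             # the result it produced during the buggy run
        self.children = children or []

def ask_oracle(node):
    """Stands in for the programmer's yes/no answers (intended results are hypothetical)."""
    intended = {("sum_of_squares", 3): 14, ("sum_of_squares", 2): 5,
                ("square", 3): 9, ("square", 2): 4}
    return intended[node.call] == node.result

def debug(node):
    """Return a buggy node: one judged incorrect whose children are all correct."""
    if ask_oracle(node):
        return None                      # correct subtree: discard it
    for child in node.children:
        culprit = debug(child)
        if culprit is not None:
            return culprit
    return node                          # incorrect, but every child is correct

tree = Node(("sum_of_squares", 3), 15,
            [Node(("square", 3), 10),
             Node(("sum_of_squares", 2), 5, [Node(("square", 2), 4)])])
print(debug(tree).call)                  # ('square', 3) is the buggy computation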

Cite as

Josep Silva Galiana. The New Generation of Algorithmic Debuggers (Keynote). In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, p. 3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{silvagaliana:OASIcs.SLATE.2012.3,
  author =	{Silva Galiana, Josep},
  title =	{{The New Generation of Algorithmic Debuggers}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{3--3},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.3},
  URN =		{urn:nbn:de:0030-drops-35097},
  doi =		{10.4230/OASIcs.SLATE.2012.3},
  annote =	{Keywords: Debugging, Programming, Program Correction}
}
Document
Keynote
From Program Execution to Automatic Reasoning: Integrating Ontologies into Programming Languages (Keynote)

Authors: Alexander Paar


Abstract
Since their standardization by the W3C, the Extensible Markup Language (XML) and XML Schema Definition (XSD) have been widely adopted as a format to describe data and to define programming language agnostic data types and content models. Several other W3C standards such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL) are based on XML and XSD. At the same time, statically typed object-oriented programming languages such as Java and C# are among the most widely used for software development. This talk will delineate the conceptual bases of XML Schema Definition and the Web Ontology Language and how they differ from Java or C#. It will be shown how XSD facilitates the definition of data types based on value space constraints and how OWL ontologies are amenable to automatic reasoning. The superior modeling features of XSD and OWL will be elucidated through exemplary comparisons with frame logic-based models. A significant shortcoming will become obvious: the deficient integration of XSD and OWL with the type systems of object-oriented programming languages. Finally, the Zhi# approach will be presented, which integrates XSD and OWL into the C# programming language. In Zhi#, value space-based data types and ontological concept descriptions are first-class citizens; compile time and runtime support is readily available for XSD and OWL. Thus, the execution of Zhi# programs is directly controlled by the artificial intelligence inherent in ontological models: Zhi# programs don't just execute, they reason.
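
As a toy illustration of the kind of inference ontological typing enables (hypothetical class names; this is neither Zhi# nor a real OWL reasoner), the sketch below infers class membership through a transitive subClassOf hierarchy, the kind of conclusion a purely asserted type system does not draw by itself.

subclass_of = {                  # asserted class hierarchy (child -> parent), made up
    "GraduateStudent": "Student",
    "Student": "Person",
}

def superclasses(cls):
    """All classes a given class is subsumed by, following subClassOf transitively."""
    found = set()
    while cls in subclass_of:
        cls = subclass_of[cls]
        found.add(cls)
    return found

def is_instance_of(asserted_class, queried_class):
    """An individual asserted to be of one class is also an instance of every superclass."""
    return queried_class == asserted_class or queried_class in superclasses(asserted_class)

print(is_instance_of("GraduateStudent", "Person"))   # True: inferred, not asserted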

Cite as

Alexander Paar. From Program Execution to Automatic Reasoning: Integrating Ontologies into Programming Languages (Keynote). In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, p. 5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{paar:OASIcs.SLATE.2012.5,
  author =	{Paar, Alexander},
  title =	{{From Program Execution to Automatic Reasoning: Integrating Ontologies into Programming Languages}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{5--5},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.5},
  URN =		{urn:nbn:de:0030-drops-35103},
  doi =		{10.4230/OASIcs.SLATE.2012.5},
  annote =	{Keywords: Ontologies, OO programming languages, Automatic reasoning}
}
Document
On Extending a Linear Tabling Framework to Support Batched Scheduling

Authors: Miguel Areias and Ricardo Rocha


Abstract
Tabled evaluation is a recognized and powerful technique that overcomes some limitations of traditional Prolog systems in dealing with recursion and redundant sub-computations. During tabled execution, several decisions have to be made, and these are determined by the scheduling strategy. Whereas a strategy can achieve very good performance for certain applications, for others it might add overheads and even lead to unacceptable inefficiency. The two most successful tabling scheduling strategies are local scheduling and batched scheduling. In previous work, we developed a framework, on top of the Yap system, that supports the combination of different linear tabling strategies for local scheduling. In this work, we propose an extension of our framework to support batched scheduling. In particular, we are interested in the two most successful linear tabling strategies, the DRA and DRE strategies. To the best of our knowledge, no single tabling Prolog system supports both strategies simultaneously for batched scheduling.
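
As a rough illustration of the core idea behind tabled evaluation (written in Python rather than Prolog, and not the authors' linear tabling framework), the sketch below keeps an answer table per subgoal and re-evaluates until a fixpoint, which is, in spirit, how a tabling engine copes with a cycle that would loop forever under plain SLD resolution.

edges = {"a": ["b"], "b": ["c", "a"], "c": []}       # cyclic graph: a -> b -> a

# answer table: one entry per subgoal reach(X), holding the answers found so far
table = {node: set(succs) for node, succs in edges.items()}

def complete(table):
    """Combine tabled answers repeatedly until no new answer appears (a fixpoint)."""
    changed = True
    while changed:
        changed = False
        for node, reached in table.items():
            for mid in list(reached):
                new = table[mid] - reached           # reuse answers already tabled for `mid`
                if new:
                    reached |= new
                    changed = True
    return table

complete(table)
print(sorted(table["a"]))   # ['a', 'b', 'c']
print(sorted(table["b"]))   # ['a', 'b', 'c']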

Cite as

Miguel Areias and Ricardo Rocha. On Extending a Linear Tabling Framework to Support Batched Scheduling. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 9-24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{areias_et_al:OASIcs.SLATE.2012.9,
  author =	{Areias, Miguel and Rocha, Ricardo},
  title =	{{On Extending a Linear Tabling Framework to Support Batched Scheduling}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{9--24},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.9},
  URN =		{urn:nbn:de:0030-drops-35113},
  doi =		{10.4230/OASIcs.SLATE.2012.9},
  annote =	{Keywords: Linear Tabling, Scheduling, Implementation}
}
Document
Mode-Directed Tabling and Applications in the YapTab System

Authors: João Santos and Ricardo Rocha


Abstract
Tabling is an implementation technique that solves some limitations of Prolog's operational semantics in dealing with recursion and redundant sub-computations. Tabling works by memorizing generated answers and then by reusing them on similar calls that appear during the resolution process. In a traditional tabling system, all the arguments of a tabled subgoal call are considered when storing answers into the table space. Traditional tabling systems are thus very good for problems that require finding all answers. Mode-directed tabling is an extension to the tabling technique that supports the definition of selective criteria for specifying how answers are inserted into the table space. Implementations of mode-directed tabling are already available in systems like ALS-Prolog, B-Prolog and XSB. In this paper, we propose a more general approach to the declaration and use of mode-directed tabling, implemented on top of the YapTab tabling system, and we show applications of our approach to problems involving Justification, Preferences and Answer Subsumption.
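
A small sketch of the intuition behind mode-directed tabling (hypothetical subgoals and answers; this is not YapTab code): a mode such as min tells the table to keep only the best answer for a subgoal instead of every answer, as in shortest-path style problems.

table = {}                                 # subgoal -> best (minimal) answer so far

def insert_answer(subgoal, answer, mode="min"):
    """Insert an answer under a mode: 'min' keeps only the smallest, 'all' keeps everything."""
    if mode == "min":
        if subgoal not in table or answer < table[subgoal]:
            table[subgoal] = answer
            return True                    # a new or better answer, worth propagating
        return False                       # subsumed by the existing answer, discard it
    table.setdefault(subgoal, set()).add(answer)
    return True

insert_answer(("path", "a", "c"), 7)
insert_answer(("path", "a", "c"), 4)       # replaces 7 under the 'min' mode
insert_answer(("path", "a", "c"), 9)       # discarded
print(table[("path", "a", "c")])           # 4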

Cite as

João Santos and Ricardo Rocha. Mode-Directed Tabling and Applications in the YapTab System. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 25-40, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{santos_et_al:OASIcs.SLATE.2012.25,
  author =	{Santos, Jo\~{a}o and Rocha, Ricardo},
  title =	{{Mode-Directed Tabling and Applications in the YapTab System}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{25--40},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.25},
  URN =		{urn:nbn:de:0030-drops-35123},
  doi =		{10.4230/OASIcs.SLATE.2012.25},
  annote =	{Keywords: Tabling, Mode Operators, Applications}
}
Document
Generating flex Lexical Scanners for Perl Parse::Yapp

Authors: Alberto Simões, Nuno Ramos Carvalho, and José João Almeida


Abstract
Perl is known for its versatile regular expressions. Nevertheless, using Perl regular expressions to create a fast lexical analyzer is not easy. As an alternative, the authors advocate the automated generation of the lexical analyzer for a well-known, fast tool (flex), based on a simple definition embedded in the Perl syntactic analyzer. In this paper we extend the syntax used by Parse::Yapp, one of the most widely used parser generators for Perl, making the automatic generation of flex lexical scanners possible. We explain how this is performed and conclude with some benchmarks that show the relevance of the approach.
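
The following rough sketch (written in Python purely for illustration, with a hypothetical token table) shows the kind of translation involved: a set of token names and regular expressions is turned into a flex rules section. Deriving that table from the extended Parse::Yapp declarations is the paper's contribution and is not reproduced here.

tokens = [                                   # hypothetical token definitions
    ("NUM",   "[0-9]+"),
    ("IDENT", "[a-zA-Z_][a-zA-Z0-9_]*"),
    ("PLUS",  r"\+"),
]

def to_flex(tokens):
    # one flex rule per token; the token codes (NUM, IDENT, ...) are assumed to be
    # defined elsewhere, e.g. in a header produced by the parser generator
    rules = "\n".join("{:<24}{{ return {}; }}".format(pattern, name)
                      for name, pattern in tokens)
    skip = "[ \\t\\n]               { /* skip whitespace */ }"
    return "%%\n" + rules + "\n" + skip + "\n%%\n"

print(to_flex(tokens))                       # save as scanner.l and run `flex scanner.l`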

Cite as

Alberto Simões, Nuno Ramos Carvalho, and José João Almeida. Generating flex Lexical Scanners for Perl Parse::Yapp. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 41-50, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{simoes_et_al:OASIcs.SLATE.2012.41,
  author =	{Sim\~{o}es, Alberto and Carvalho, Nuno Ramos and Almeida, Jos\'{e} Jo\~{a}o},
  title =	{{Generating flex Lexical Scanners for Perl Parse::Yapp}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{41--50},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.41},
  URN =		{urn:nbn:de:0030-drops-35133},
  doi =		{10.4230/OASIcs.SLATE.2012.41},
  annote =	{Keywords: flex, Perl, yapp, lexical analyzer}
}
Document
A Purely Functional Combinator Language for Software Quality Assessment

Authors: Pedro Martins, João Paulo Fernandes, and João Saraiva


Abstract
Quality assessment of open source software is becoming an important and active research area. One of the reasons for this recent interest is the popularity of the Internet. Nowadays, programming also involves searching the large set of open source libraries and tools that may be reused when developing our software applications. In order to reuse such open source software artifacts, programmers not only need the guarantee that the reused artifact is certified, but also that independently developed artifacts can be easily combined into a coherent piece of software. In this paper we describe a domain specific language that allows programmers to describe, at an abstract level, how software artifacts can be combined into powerful software certification processes. This domain specific language is the building block of a web-based, open-source software certification portal. This paper introduces the embedding of such a domain specific language as a combinator library written in the Haskell programming language. The semantics of this language is expressed via attribute grammars that are embedded in Haskell, which provide a modular and incremental setting to define the combination of software artifacts.
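
A minimal sketch of the combinator idea (in Python rather than Haskell, with made-up metrics): individual analyses are functions from a software artifact to a report, and combinators compose them into larger certification processes.

def loc(source):
    return {"loc": len(source.splitlines())}

def todo_count(source):
    return {"todos": source.count("TODO")}

def both(analysis1, analysis2):
    """Run two analyses on the same artifact and merge their reports."""
    return lambda source: {**analysis1(source), **analysis2(source)}

def then(analysis, post):
    """Feed the report of one analysis into a post-processing step."""
    return lambda source: post(analysis(source))

def grade(report):
    return {**report, "ok": report["todos"] == 0}

process = then(both(loc, todo_count), grade)    # a small certification pipeline
print(process("x = 1\n# TODO: refactor\n"))     # {'loc': 2, 'todos': 1, 'ok': False}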

Cite as

Pedro Martins, João Paulo Fernandes, and João Saraiva. A Purely Functional Combinator Language for Software Quality Assessment. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 51-69, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{martins_et_al:OASIcs.SLATE.2012.51,
  author =	{Martins, Pedro and Fernandes, Jo\~{a}o Paulo and Saraiva, Jo\~{a}o},
  title =	{{A Purely Functional Combinator Language for Software Quality Assessment}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{51--69},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.51},
  URN =		{urn:nbn:de:0030-drops-35149},
  doi =		{10.4230/OASIcs.SLATE.2012.51},
  annote =	{Keywords: Process Management, Combinators, Attribute Grammars, Functional Programming}
}
Document
PH-Helper - a Syntax-Directed Editor for Hoshimi Programming Language, HL

Authors: Mariano Luzza, Mario Marcelo Beron, and Pedro Rangel Henriques


Abstract
It is well known that students face many difficulties when they have to learn programming. Generally, these difficulties have two main causes: i) the kind of exercises proposed by the teacher, and ii) the programming language used for solving those exercises. The first problem is overcome by selecting an application domain that is interesting for the students. The second problem is tackled by using programming languages specialized for teaching. Nowadays, there are many programming languages aimed at simplifying the learning process. However, many of them still have the same drawbacks as traditional programming languages: the language used to write the statements is different from the programmers' native language, and the syntactic rules impose many tricky restrictions that are not easy to follow. This paper presents an approach for solving the problems previously mentioned. The approach consists of using an application domain that motivates the students, the Project Hoshimi (PH), together with a programming environment, PH-Helper, which is a simple and user-friendly syntax-directed editor and compiler for Hoshimi Language (HL), the actual PH programming language.

Cite as

Mariano Luzza, Mario Marcelo Beron, and Pedro Rangel Henriques. PH-Helper - a Syntax-Directed Editor for Hoshimi Programming Language, HL. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 71-89, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{luzza_et_al:OASIcs.SLATE.2012.71,
  author =	{Luzza, Mariano and Beron, Mario Marcelo and Henriques, Pedro Rangel},
  title =	{{PH-Helper - a Syntax-Directed Editor for Hoshimi Programming Language, HL}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{71--89},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.71},
  URN =		{urn:nbn:de:0030-drops-35153},
  doi =		{10.4230/OASIcs.SLATE.2012.71},
  annote =	{Keywords: Syntax-directed Editors, Visual Programming Environments, DSL}
}
Document
Problem Domain Oriented Approach for Program Comprehension

Authors: Maria João Varanda Pereira, Mario Marcelo Beron, Daniela da Cruz, Nuno Oliveira, and Pedro Rangel Henriques


Abstract
This paper is concerned with an ontology-driven approach to Program Comprehension that starts by picking up concepts from the problem domain ontology, analyzes the source code and, after locating problem concepts in the code, goes up and links them to the programming language ontology. Different location techniques are used to search for concepts embedded in comments, in the code (identifier names and execution traces), and in string literals associated with I/O statements. The expected result is a mapping between problem domain concepts and code slices. This mapping can be visualized using graph-based approaches such as navigation facilities over a System Dependency Graph. The paper also describes Quixote, a Program Comprehension tool suite that implements the proposed approach.

Cite as

Maria João Varanda Pereira, Mario Marcelo Beron, Daniela da Cruz, Nuno Oliveira, and Pedro Rangel Henriques. Problem Domain Oriented Approach for Program Comprehension. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 91-105, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{varandapereira_et_al:OASIcs.SLATE.2012.91,
  author =	{Varanda Pereira, Maria Jo\~{a}o and Beron, Mario Marcelo and da Cruz, Daniela and Oliveira, Nuno and Henriques, Pedro Rangel},
  title =	{{Problem Domain Oriented Approach for Program Comprehension}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{91--105},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.91},
  URN =		{urn:nbn:de:0030-drops-35161},
  doi =		{10.4230/OASIcs.SLATE.2012.91},
  annote =	{Keywords: Program Comprehension, Ontology-based SW development, Problem and Program domain mapping, Code Analysis, Software Visualization}
}
Document
The Impact of Programming Languages in Code Cloning

Authors: Jaime Filipe Jorge and António Menezes Leitão


Abstract
Code cloning is the duplication of source code fragments that frequently occurs in large software systems. Although different studies provide evidence of cloning benefits, several others expose its harmfulness, especially under inconsistent clone management. One important cause for the creation of software clones is the inherent abstraction capability and terseness of the programming language being used. This paper focuses on the features of two different programming languages, namely Java and Scala, and studies how different language constructs can induce or reduce code cloning. This study was supported by our tool Kamino, which provided clone detection and concrete values.

Cite as

Jaime Filipe Jorge and António Menezes Leitão. The Impact of Programming Languages in Code Cloning. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 107-122, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{jorge_et_al:OASIcs.SLATE.2012.107,
  author =	{Jorge, Jaime Filipe and Leit\~{a}o, Ant\'{o}nio Menezes},
  title =	{{The Impact of Programming Languages in Code Cloning}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{107--122},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.107},
  URN =		{urn:nbn:de:0030-drops-35178},
  doi =		{10.4230/OASIcs.SLATE.2012.107},
  annote =	{Keywords: Clone Detection, Software Engineering, Programming Languages, Software Management}
}
Document
HandSpy - a system to manage experiments on cognitive processes in writing

Authors: Carlos Monteiro and José Paulo Leal


Abstract
Experiments on cognitive processes require a detailed analysis of the contribution of many participants. In the case of cognitive processes in writing, these experiments require special software tools to collect gestures performed with a pen or a stylus and recorded with special hardware. These tools produce different kinds of data files in binary and proprietary formats that need to be managed on a workstation file system for further processing with generic tools, such as spreadsheets and statistical analysis software. The lack of common formats and open repositories hinders the possibility of distributing the workload among researchers within the research group, of re-processing the collected data with software developed by other research groups, and of sharing results with the rest of the cognitive process research community. This paper presents HandSpy, a collaborative environment for managing experiments on cognitive processes in writing. This environment was designed to cover all the stages of the experiment, from the definition of tasks to be performed by participants to the synthesis of results. Collaboration in HandSpy is enabled by a rich web interface developed with the Google Web Toolkit. To decouple the environment from existing hardware devices for collecting written production, namely digitizing tablets and smart pens, HandSpy is based on the InkML standard, an XML data format for representing digital ink. This design choice shaped many of the features in HandSpy, such as the use of an XML database for managing application data and the use of XML transformations. XML transformations convert between the persistent data representations used for storage and the transient data representations required by the widgets on the user interface. This paper also presents an ongoing use case of HandSpy in which the environment is being used to manage an experiment involving hundreds of primary school participants performing different tasks.
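
For readers unfamiliar with the format, the following small sketch (with a hand-written minimal document, not HandSpy's actual data) shows how pen strokes can be read from InkML, the W3C digital-ink format the system builds on.

import xml.etree.ElementTree as ET

INKML_NS = "{http://www.w3.org/2003/InkML}"

doc = """<ink xmlns="http://www.w3.org/2003/InkML">
  <trace>10 0, 9 14, 8 28, 7 42</trace>
  <trace>-20 3, -19 20, -17 37</trace>
</ink>"""

strokes = []
for trace in ET.fromstring(doc).findall(INKML_NS + "trace"):
    # each <trace> holds comma-separated points; each point is a list of coordinates
    points = [tuple(float(c) for c in point.split()) for point in trace.text.split(",")]
    strokes.append(points)

print(len(strokes), strokes[0][:2])   # 2 [(10.0, 0.0), (9.0, 14.0)]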

Cite as

Carlos Monteiro and José Paulo Leal. HandSpy - a system to manage experiments on cognitive processes in writing. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 123-132, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{monteiro_et_al:OASIcs.SLATE.2012.123,
  author =	{Monteiro, Carlos and Leal, Jos\'{e} Paulo},
  title =	{{HandSpy - a system to manage experiments on cognitive processes in writing}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{123--132},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.123},
  URN =		{urn:nbn:de:0030-drops-35180},
  doi =		{10.4230/OASIcs.SLATE.2012.123},
  annote =	{Keywords: InkML, collaborative environment, XML data processing}
}
Document
Computing Semantic Relatedness using DBPedia

Authors: José Paulo Leal, Vânia Rodrigues, and Ricardo Queirós


Abstract
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia - a community effort to extract structured information from Wikipedia. Several approaches to extract semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
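
As a minimal illustration of the kind of data the approach works on (a sketch only; Shakti's path-finding and weighting algorithm is not reproduced here), the snippet below asks the DBpedia SPARQL endpoint for properties directly connecting two resources.

import requests   # any HTTP client works; requests is used here for brevity

ENDPOINT = "https://dbpedia.org/sparql"

def direct_links(resource_a, resource_b):
    """Properties directly connecting two DBpedia resources, in either direction."""
    query = """
    SELECT ?p WHERE {
      { <%s> ?p <%s> } UNION { <%s> ?p <%s> }
    }""" % (resource_a, resource_b, resource_b, resource_a)
    response = requests.get(ENDPOINT, params={
        "query": query,
        "format": "application/sparql-results+json",
    })
    data = response.json()
    return [row["p"]["value"] for row in data["results"]["bindings"]]

# requires network access to the public endpoint
print(direct_links("http://dbpedia.org/resource/Radiohead",
                   "http://dbpedia.org/resource/Thom_Yorke"))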

Cite as

José Paulo Leal, Vânia Rodrigues, and Ricardo Queirós. Computing Semantic Relatedness using DBPedia. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 133-147, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{leal_et_al:OASIcs.SLATE.2012.133,
  author =	{Leal, Jos\'{e} Paulo and Rodrigues, V\^{a}nia and Queir\'{o}s, Ricardo},
  title =	{{Computing Semantic Relatedness using DBPedia}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{133--147},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.133},
  URN =		{urn:nbn:de:0030-drops-35190},
  doi =		{10.4230/OASIcs.SLATE.2012.133},
  annote =	{Keywords: semantic similarity, processing wikipedia data, ontology generation, web recommendation}
}
Document
Query Matching Evaluation in an Infobot for University Admissions Processing

Authors: Peter Hancox and Nikolaos Polatidis


Abstract
"Infobots" are small-scale natural language question answering systems drawing inspiration from ELIZA-type systems. Their key distinguishing feature is the extraction of meaning from users' queries without the use of syntactic or semantic representations. Two approaches to identifying the users' intended meanings were investigated: keyword-based systems and Jaro-based string similarity algorithms. These were measured against a corpus of queries contributed by users of a WWW-hosted infobot for responding to questions about applications to MSc courses. The most effective system was Jaro with stemmed input (78.57%). It also was able to process ungrammatical input and offer scalability.

Cite as

Peter Hancox and Nikolaos Polatidis. Query Matching Evaluation in an Infobot for University Admissions Processing. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 149-161, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{hancox_et_al:OASIcs.SLATE.2012.149,
  author =	{Hancox, Peter and Polatidis, Nikolaos},
  title =	{{Query Matching Evaluation in an Infobot for University Admissions Processing}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{149--161},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.149},
  URN =		{urn:nbn:de:0030-drops-35206},
  doi =		{10.4230/OASIcs.SLATE.2012.149},
  annote =	{Keywords: chatbot, infobot, question-answering, Jaro string similarity, Jaro-Winkler string similarity}
}
Document
Predicting Market Direction from Direct Speech by Business Leaders

Authors: Brett M. Drury and José João Almeida


Abstract
Direct quotations from business leaders can communicate to the wider public the latent state of their organization as well as the beliefs of the organization's leaders. Candid quotes from business leaders can have dramatic effects upon the share price of their organization. For example, in 1991 Gerald Ratner stated that his company's products were "crap" and, consequently, his company (Ratners) lost in excess of 500 million pounds in market value. Information in quotes from business leaders can be used to estimate the organization's immediate future financial prospects and can therefore form part of a trading strategy. This paper describes a contextual classification strategy to label direct quotes from business leaders contained in news stories. The quotes are labelled as positive, negative or neutral. A trading strategy aggregates the quote classifications to issue a buy, sell or hold instruction. The quote-based trading strategy is evaluated on the NASDAQ market index against trading strategies based upon whole news story classification. The evaluation shows a clear advantage for the quote classification strategy over the competing strategies.
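
A toy sketch of the aggregation step only, with hypothetical thresholds (the quote classifier itself is not reproduced here): classified quotes for a period are turned into a buy, sell or hold instruction.

def trading_signal(labels, margin=0.2):
    """labels: list of 'positive' / 'negative' / 'neutral' quote classifications."""
    scored = [label for label in labels if label != "neutral"]
    if not scored:
        return "hold"
    balance = (scored.count("positive") - scored.count("negative")) / len(scored)
    if balance > margin:
        return "buy"
    if balance < -margin:
        return "sell"
    return "hold"

print(trading_signal(["positive", "neutral", "positive", "negative"]))   # buy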

Cite as

Brett M. Drury and José João Almeida. Predicting Market Direction from Direct Speech by Business Leaders. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 163-172, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{drury_et_al:OASIcs.SLATE.2012.163,
  author =	{Drury, Brett M. and Almeida, Jos\'{e} Jo\~{a}o},
  title =	{{Predicting Market Direction from Direct Speech by Business Leaders}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{163--172},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.163},
  URN =		{urn:nbn:de:0030-drops-35215},
  doi =		{10.4230/OASIcs.SLATE.2012.163},
  annote =	{Keywords: Sentiment, Direct Speech, Trading, Business, Markets}
}
Document
Learning Spaces for Knowledge Generation

Authors: Nuno Oliveira, Maria João Varanda Pereira, Alda Lopes Gancarski, and Pedro Rangel Henriques


Abstract
As the Internet becomes the main point of access to information, libraries, museums and similar institutions are preserving their collections as digital object repositories. In that way, the important information associated with digital objects can be delivered as Internet content over portals equipped with modern interfaces and navigation features. This enables the virtualization of real information exhibition spaces, giving rise to new learning paradigms. Geny is a project aiming at defining domain-specific languages and developing tools to generate web-based learning spaces from existing digital object repositories and their associated semantics. The motto for Geny is "Generating learning spaces to generate knowledge". Our objective within this project is to use (i) ontologies - one to give semantics to the digital object repository and another to describe the information to exhibit - and (ii) special languages to define the exhibition space, in order to enable the automatic construction of the learning space supported by a web browser. This paper presents the proposal of the Geny project along with a review of the state of the art concerning learning spaces and their virtualization. Geny is currently under evaluation by Fundação para a Ciência e a Tecnologia (FCT), the main Portuguese scientific funding institution.

Cite as

Nuno Oliveira, Maria João Varanda Pereira, Alda Lopes Gancarski, and Pedro Rangel Henriques. Learning Spaces for Knowledge Generation. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 175-184, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{oliveira_et_al:OASIcs.SLATE.2012.175,
  author =	{Oliveira, Nuno and Varanda Pereira, Maria Jo\~{a}o and Gancarski, Alda Lopes and Henriques, Pedro Rangel},
  title =	{{Learning Spaces for Knowledge Generation}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{175--184},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.175},
  URN =		{urn:nbn:de:0030-drops-35228},
  doi =		{10.4230/OASIcs.SLATE.2012.175},
  annote =	{Keywords: Learning Spaces, Knowledge Acquisition, Digital Object Repositories, Ontology-based semantic Web-pages Generation}
}
Document
Automatic Test Generation for Space

Authors: Ulisses Araújo Costa, Daniela da Cruz, and Pedro Rangel Henriques


Abstract
The European Space Agency (ESA) uses an engine to perform tests in the Ground Segment infrastructure, especially the Operational Simulator. This engine uses many different tools to support the development of a regression testing infrastructure, and these tests perform black-box testing of the C++ simulator implementation. VST (VisionSpace Technologies) is one of the companies that provide these services to ESA, and they need a tool to automatically infer tests from the existing C++ code, instead of manually writing scripts to perform tests. With this motivation in mind, this paper explores automatic testing approaches and tools in order to propose a system that satisfies VST's needs.

Cite as

Ulisses Araújo Costa, Daniela da Cruz, and Pedro Rangel Henriques. Automatic Test Generation for Space. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 185-203, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{costa_et_al:OASIcs.SLATE.2012.185,
  author =	{Costa, Ulisses Ara\'{u}jo and da Cruz, Daniela and Henriques, Pedro Rangel},
  title =	{{Automatic Test Generation for Space}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{185--203},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.185},
  URN =		{urn:nbn:de:0030-drops-35234},
  doi =		{10.4230/OASIcs.SLATE.2012.185},
  annote =	{Keywords: Automatic Test Generation, UML/OCL, White-box testing, Black-box testing}
}
Document
Interoperability in eLearning Contexts. Interaction between LMS and PLE

Authors: Miguel Conde, Francisco García-Peñalvo, Jordi Piguillem, María Casany, and Marc Alier


Abstract
The emergence of Information and Communication Technologies, and their application in several areas with varying success, has led to the definition of a great number of software systems. Such systems are implemented in very different programming languages, use distinct types of resources, and so on. Learning and teaching is one of those application areas, with different learning platforms, repositories, tools, types of content, etc. These systems should interoperate with each other to provide better and more useful learning services to students and teachers, and to do so web services and interoperability specifications are needed. This paper presents a service-based framework to facilitate interoperability between Learning Management Systems and Personal Learning Environments, which has been implemented as a proof of concept and evaluated through several pilot experiences. These experiences show that interoperability between personal and institutional environments is possible and that, in this way, learners can learn independently without accessing the institutional site, while teachers gain information about learning that happens in informal activities.

Cite as

Miguel Conde, Francisco García-Peñalvo, Jordi Piguillem, María Casany, and Marc Alier. Interoperability in eLearning Contexts. Interaction between LMS and PLE. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 205-223, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{conde_et_al:OASIcs.SLATE.2012.205,
  author =	{Conde, Miguel and Garc{\'\i}a-Pe\~{n}alvo, Francisco and Piguillem, Jordi and Casany, Mar{\'\i}a and Alier, Marc},
  title =	{{Interoperability in eLearning Contexts. Interaction between LMS and PLE}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{205--223},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.205},
  URN =		{urn:nbn:de:0030-drops-35248},
  doi =		{10.4230/OASIcs.SLATE.2012.205},
  annote =	{Keywords: interoperability specifications, web services, LMS, PLE, personalization, BLTI}
}
Document
Enhancing Coherency of Specification Documents from Automotive Industry

Authors: Jean-Noël Martin and Damien Martin-Guillerez


Abstract
A specification describes how a system should behave. If a specification is incorrect or wrongly implemented, then the resulting system will contain errors that can lead to catastrophic states, especially in sensitive systems like the ones embedded in cars. This paper presents a method to construct a formal model from a specification written in natural language. This requires the specification to be sufficiently accurate to be incorporated into a model, so as to find the inconsistencies in the specification; sufficiently accurate here means that the error rate is below 2%, and the error counting method is discussed in the paper. A definition of specification consistency is also given in this paper. The method used to construct the model is automatic and points out to the user the inconsistencies of the specification. Moreover, once the model is constructed, the general test plan reflecting the specification is produced. This test plan will ensure that the system that implements the specification meets the requirements.

Cite as

Jean-Noël Martin and Damien Martin-Guillerez. Enhancing Coherency of Specification Documents from Automotive Industry. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 225-237, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{martin_et_al:OASIcs.SLATE.2012.225,
  author =	{Martin, Jean-No\"{e}l and Martin-Guillerez, Damien},
  title =	{{Enhancing Coherency of Specification Documents from Automotive Industry}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{225--237},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.225},
  URN =		{urn:nbn:de:0030-drops-35251},
  doi =		{10.4230/OASIcs.SLATE.2012.225},
  annote =	{Keywords: coherency, specification, model generation, automatic text processing}
}
Document
Probabilistic SynSet Based Concept Location

Authors: Nuno Ramos Carvalho, José João Almeida, Maria João Varanda Pereira, and Pedro Rangel Henriques


Abstract
Concept location is a common task in program comprehension techniques, essential in many approaches used for software care and software evolution. An important goal of this process is to discover a mapping between source code and human-oriented concepts. Although programs are written in a strict and formal language, natural language terms and sentences, like identifiers (variable or function names), constant strings or comments, can still be found embedded in programs. Using terminology concepts and natural language processing techniques, these terms can be exploited to discover clues about which real-world concepts the source code is addressing. This work extends symbol tables built by compilers with ontology-driven constructs, and extends synonym sets as defined in linguistics with automatically created Probabilistic SynSets from software-domain parallel corpora. Then, using a relational algebra, it creates semantic bridges between program elements and human-oriented concepts to enhance concept location tasks.
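
An illustrative sketch, with made-up data, of how such bridges can be built: program identifiers are split into natural language terms and scored against probabilistic synonym sets, yielding a ranked mapping from program elements to domain concepts.

import re

prob_synsets = {                       # hypothetical probabilistic SynSets
    "invoice": {"invoice": 0.6, "bill": 0.3, "receipt": 0.1},
    "customer": {"customer": 0.7, "client": 0.2, "buyer": 0.1},
}

def split_identifier(name):
    """Break camelCase / snake_case identifiers into lower-case natural language terms."""
    parts = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return [p.lower() for p in parts.split()]

def concept_scores(identifier):
    """Score each concept by the probability mass its synset assigns to the identifier's terms."""
    terms = split_identifier(identifier)
    return {concept: sum(synset.get(t, 0.0) for t in terms)
            for concept, synset in prob_synsets.items()}

print(concept_scores("printClientBill"))   # {'invoice': 0.3, 'customer': 0.2}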

Cite as

Nuno Ramos Carvalho, José João Almeida, Maria João Varanda Pereira, and Pedro Rangel Henriques. Probabilistic SynSet Based Concept Location. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 239-253, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{carvalho_et_al:OASIcs.SLATE.2012.239,
  author =	{Carvalho, Nuno Ramos and Almeida, Jos\'{e} Jo\~{a}o and Varanda Pereira, Maria Jo\~{a}o and Henriques, Pedro Rangel},
  title =	{{Probabilistic SynSet Based Concept Location}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{239--253},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.239},
  URN =		{urn:nbn:de:0030-drops-35267},
  doi =		{10.4230/OASIcs.SLATE.2012.239},
  annote =	{Keywords: program comprehension, program visualization, concept location, code inspection, synonym sets, probabilistic synonym sets, translation dictionary}
}
Document
A Multimedia Parallel Corpus of English-Galician Film Subtitling

Authors: Patricia Sotelo Dios and Xavier Gómez Guinovart


Abstract
In this paper, we present an ongoing research project focused on the building, processing and exploitation of a multimedia parallel corpus of English-Galician film subtitling, showing the TMX-based XML specification designed to encode both audiovisual features and translation alignments in the corpus, and the solutions adopted for making the data available over the web in multimedia format.
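
As a minimal illustration of the underlying format (plain TMX with an invented example pair; the corpus' additional audiovisual attributes, such as time codes, are not shown), the sketch below reads aligned subtitle segments.

import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

doc = """<tmx version="1.4"><body>
  <tu>
    <tuv xml:lang="en"><seg>Where are we going?</seg></tuv>
    <tuv xml:lang="gl"><seg>Onde imos?</seg></tuv>
  </tu>
</body></tmx>"""

pairs = []
for tu in ET.fromstring(doc).iter("tu"):
    # each translation unit holds one <tuv> per language, with the text in <seg>
    segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
    pairs.append((segs.get("en"), segs.get("gl")))

print(pairs)   # [('Where are we going?', 'Onde imos?')]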

Cite as

Patricia Sotelo Dios and Xavier Gómez Guinovart. A Multimedia Parallel Corpus of English-Galician Film Subtitling. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 255-266, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{sotelodios_et_al:OASIcs.SLATE.2012.255,
  author =	{Sotelo Dios, Patricia and G\'{o}mez Guinovart, Xavier},
  title =	{{A Multimedia Parallel Corpus of English-Galician Film Subtitling}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{255--266},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.255},
  URN =		{urn:nbn:de:0030-drops-35274},
  doi =		{10.4230/OASIcs.SLATE.2012.255},
  annote =	{Keywords: corpora, multimedia, translation, subtitling, XML}
}
Document
Investigating the Possibilities of Using SMT for Text Annotation

Authors: László Laki


Abstract
In this paper I examine the applicability of statistical machine translation (SMT) methodology to part-of-speech disambiguation and lemmatization in Hungarian. After the baseline system was created, different methods and possibilities were used to improve the efficiency of the system. I also applied methods to decrease the size of the target dictionary and to find a proper solution for handling out-of-vocabulary words. The results show that such a lightweight system achieves results comparable to other state-of-the-art systems.
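
A toy sketch of the framing only, with made-up data (this is not the actual SMT pipeline evaluated in the paper): part-of-speech disambiguation treated as monotone "translation" from words to tags, with a tiny phrase table and a suffix-based guess standing in for the out-of-vocabulary handling discussed above.

phrase_table = {                      # word -> {tag: probability}, all made up
    "a": {"DET": 0.9, "NOUN": 0.1},
    "man": {"NOUN": 1.0},
    "walks": {"VERB": 0.8, "NOUN": 0.2},
}

def guess_oov(word):
    """Crude fallback for out-of-vocabulary words."""
    return "VERB" if word.endswith("s") else "NOUN"

def tag(sentence):
    """Pick the most probable 'translation' (tag) independently for each word."""
    return [max(phrase_table[w], key=phrase_table[w].get) if w in phrase_table
            else guess_oov(w)
            for w in sentence.split()]

print(tag("a man runs"))              # ['DET', 'NOUN', 'VERB']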

Cite as

László Laki. Investigating the Possibilities of Using SMT for Text Annotation. In 1st Symposium on Languages, Applications and Technologies. Open Access Series in Informatics (OASIcs), Volume 21, pp. 267-283, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{laki:OASIcs.SLATE.2012.267,
  author =	{Laki, L\'{a}szl\'{o}},
  title =	{{Investigating the Possibilities of Using SMT for Text Annotation}},
  booktitle =	{1st Symposium on Languages, Applications and Technologies},
  pages =	{267--283},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-40-8},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{21},
  editor =	{Sim\~{o}es, Alberto and Queir\'{o}s, Ricardo and da Cruz, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2012.267},
  URN =		{urn:nbn:de:0030-drops-35285},
  doi =		{10.4230/OASIcs.SLATE.2012.267},
  annote =	{Keywords: SMT, POS-tagging, Lemmatization, Target language set, OOV}
}
