OASIcs, Volume 74

8th Symposium on Languages, Applications and Technologies (SLATE 2019)




Event

SLATE 2019, June 27-28, 2019, Coimbra, Portugal

Editors

Ricardo Rodrigues
  • CISUC, University of Coimbra, Portugal
  • Polytechnic Institute of Coimbra, Portugal
Jan Janoušek
  • Czech Technical University, Prague, Czech Republic
Luís Ferreira
  • Instituto Politécnico do Cávado e Ave, Barcelos, Portugal
Luísa Coheur
  • Instituto Superior Técnico, Lisbon, Portugal
  • INESC - ID, Lisbon, Portugal
Fernando Batista
  • ISCTE - IUL, Lisbon, Portugal
  • INESC - ID, Lisbon, Portugal
Hugo Gonçalo Oliveira
  • CISUC, University of Coimbra, Portugal

Publication Details

  • Published: 2019-07-24
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-114-6
  • DBLP: db/conf/slate/slate2019

Document
Complete Volume
OASIcs, Volume 74, SLATE'19, Complete Volume

Authors: Ricardo Rodrigues, Jan Janoušek, Luís Ferreira, Luísa Coheur, Fernando Batista, and Hugo Gonçalo Oliveira


Abstract
OASIcs, Volume 74, SLATE'19, Complete Volume

Cite as

8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Proceedings{rodrigues_et_al:OASIcs.SLATE.2019,
  title =	{{OASIcs, Volume 74, SLATE'19, Complete Volume}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019},
  URN =		{urn:nbn:de:0030-drops-109008},
  doi =		{10.4230/OASIcs.SLATE.2019},
  annote =	{Keywords: Computing methodologies, Natural language processing, Software and its engineering, Compilers; Information systems, World Wide Web}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Ricardo Rodrigues, Jan Janoušek, Luís Ferreira, Luísa Coheur, Fernando Batista, and Hugo Gonçalo Oliveira


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 0:i-0:xviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{rodrigues_et_al:OASIcs.SLATE.2019.0,
  author =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{0:i--0:xviii},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.0},
  URN =		{urn:nbn:de:0030-drops-108679},
  doi =		{10.4230/OASIcs.SLATE.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Graph-of-Entity: A Model for Combined Data Representation and Retrieval

Authors: José Devezas, Carla Lopes, and Sérgio Nunes


Abstract
Managing large volumes of digital documents along with the information they contain, or are associated with, can be challenging. As systems become more intelligent, it increasingly makes sense to power retrieval through all available data, where every lead makes it easier to reach relevant documents or entities. Modern search is heavily powered by structured knowledge, but users still query using keywords or, at the very best, telegraphic natural language. As search becomes increasingly dependent on the integration of text and knowledge, novel approaches for a unified representation of combined data present the opportunity to unlock new ranking strategies. We tackle entity-oriented search using graph-based approaches for representation and retrieval. In particular, we propose the graph-of-entity, a novel approach for indexing combined data, where terms, entities and their relations are jointly represented. We compare the graph-of-entity with the graph-of-word, a text-only model, verifying that, overall, it does not yet achieve better performance, despite obtaining higher precision. Our assessment was based on a small subset of the INEX 2009 Wikipedia Collection, created from a sample of 10 topics and their judged documents. The offline evaluation presented here complements its counterpart from the TREC 2017 OpenSearch track, where, during our participation, we assessed the graph-of-entity in an online setting, through team-draft interleaving.
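
As an illustration of the joint representation described above, the following minimal sketch (not the authors' implementation) builds a small graph in which term nodes, entity nodes and the edges between them coexist; the node and edge labels are invented for the example, and the networkx library is assumed to be available.

import networkx as nx

g = nx.Graph()

# Document terms, linked by adjacency in the text (as in a graph-of-word).
terms = ["coimbra", "hosts", "slate", "symposium"]
for a, b in zip(terms, terms[1:]):
    g.add_node(a, kind="term")
    g.add_node(b, kind="term")
    g.add_edge(a, b, kind="term-term")

# Entities and their relations, plus links from terms that mention them.
g.add_node("Coimbra", kind="entity")
g.add_node("SLATE 2019", kind="entity")
g.add_edge("Coimbra", "SLATE 2019", kind="entity-entity")    # hosted-in relation
g.add_edge("coimbra", "Coimbra", kind="term-entity")         # mention link
g.add_edge("slate", "SLATE 2019", kind="term-entity")

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")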

Cite as

José Devezas, Carla Lopes, and Sérgio Nunes. Graph-of-Entity: A Model for Combined Data Representation and Retrieval. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{devezas_et_al:OASIcs.SLATE.2019.1,
  author =	{Devezas, Jos\'{e} and Lopes, Carla and Nunes, S\'{e}rgio},
  title =	{{Graph-of-Entity: A Model for Combined Data Representation and Retrieval}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{1:1--1:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.1},
  URN =		{urn:nbn:de:0030-drops-108686},
  doi =		{10.4230/OASIcs.SLATE.2019.1},
  annote =	{Keywords: Entity-oriented search, graph-based models, collection-based graph}
}
Document
Using Lucene for Developing a Question-Answering Agent in Portuguese

Authors: Hugo Gonçalo Oliveira, Ricardo Filipe, Ricardo Rodrigues, and Ana Alves


Abstract
Given the limitations of available platforms for creating conversational agents, and since a question-answering agent suffices in many scenarios, we take advantage of the Information Retrieval library Lucene for developing such an agent for Portuguese. The solution described answers natural language questions based on an indexed list of FAQs. Its adaptation to different domains is a matter of changing the underlying list. Different configurations of this solution, mostly at the language analysis level, resulted in different search strategies, which were tested for answering questions about economic activity in Portugal. In addition to comparing the different search strategies, we concluded that, in order to obtain better answers, it is fruitful to combine the results of different strategies with a voting method.
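
To make the retrieval step concrete, the sketch below indexes a hypothetical FAQ list and returns the answer attached to the most similar question; it uses scikit-learn's TF-IDF purely as a stand-in for the Lucene index and analyzers used in the paper, and the FAQ entries are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ list: each entry pairs a question with its canned answer.
faqs = [
    ("Como abrir uma empresa?", "Pode registar a empresa no portal Empresa Online."),
    ("Que impostos paga uma empresa?", "IRC, IVA e contribuições para a segurança social."),
]

questions = [q for q, _ in faqs]
vectorizer = TfidfVectorizer()                 # one "search strategy"; others would change
index = vectorizer.fit_transform(questions)    # the analysis (stemming, stopwords, n-grams, ...)

def answer(user_question):
    scores = cosine_similarity(vectorizer.transform([user_question]), index)[0]
    return faqs[scores.argmax()][1]            # answer attached to the best-matching FAQ

print(answer("Como posso abrir a minha empresa?"))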

Cite as

Hugo Gonçalo Oliveira, Ricardo Filipe, Ricardo Rodrigues, and Ana Alves. Using Lucene for Developing a Question-Answering Agent in Portuguese. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{goncalooliveira_et_al:OASIcs.SLATE.2019.2,
  author =	{Gon\c{c}alo Oliveira, Hugo and Filipe, Ricardo and Rodrigues, Ricardo and Alves, Ana},
  title =	{{Using Lucene for Developing a Question-Answering Agent in Portuguese}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{2:1--2:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.2},
  URN =		{urn:nbn:de:0030-drops-108692},
  doi =		{10.4230/OASIcs.SLATE.2019.2},
  annote =	{Keywords: information retrieval, question answering, natural language interface, natural language processing, natural language understanding}
}
Document
Tracing Naming Semantics in Unit Tests of Popular Github Android Projects

Authors: Matej Madeja and Jaroslav Porubän


Abstract
Tests are so closely linked to the source code that we can consider them up-to-date documentation. Developers are aware of recommended naming conventions and other best practices that should be used to write tests. In this paper we focus on how developers test in practice and which conventions they use. Five very popular Android projects from GitHub were selected for the analysis. The results show that 49% of tests contain the full unit under test (UUT) method name in their name and 76% contain a partial one. Furthermore, we observed that a UUT was only rarely tested by multiple test classes, and then only when the tester wanted to distinguish the way he or she worked with the tested object. The analysis also shows that the word "test" in the test title is not a reliable metric for identifying a test. Apart from assertions, developers use statements such as verify, try-catch and throw exception to verify the correctness of UUT functionality. We also found that test titles contain keywords which could lead to the identification of the UUT, the use case of the test, or the data used by the test, and that the words in a test title very often appear in its body and, to a smaller extent, in the UUT body, which indicates the use of a similar vocabulary in tests and UUTs.
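
The core naming check described above can be illustrated with a few lines of Python; the classification into full/partial matches below is a simplified reading of the paper's criteria, and the example names are invented.

import re

def name_parts(identifier):
    """Split a camelCase identifier into lower-cased words."""
    return [w.lower() for w in re.findall(r"[A-Z]?[a-z0-9]+", identifier)]

def match_kind(test_name, uut_name):
    test = test_name.lower()
    if uut_name.lower() in test:                      # full UUT method name present
        return "full"
    if any(part in test for part in name_parts(uut_name)):
        return "partial"                              # only part of the UUT name present
    return "none"

# Hypothetical examples, not taken from the analysed projects.
print(match_kind("testParseDateReturnsNull", "parseDate"))   # full
print(match_kind("testInvalidDateInput", "parseDate"))       # partial
print(match_kind("testHappyPath", "parseDate"))              # none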

Cite as

Matej Madeja and Jaroslav Porubän. Tracing Naming Semantics in Unit Tests of Popular Github Android Projects. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{madeja_et_al:OASIcs.SLATE.2019.3,
  author =	{Madeja, Matej and Porub\"{a}n, Jaroslav},
  title =	{{Tracing Naming Semantics in Unit Tests of Popular Github Android Projects}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{3:1--3:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.3},
  URN =		{urn:nbn:de:0030-drops-108705},
  doi =		{10.4230/OASIcs.SLATE.2019.3},
  annote =	{Keywords: unit tests, android, real testing practices, unit tests, program comprehension}
}
Document
Robust Phoneme Recognition with Little Data

Authors: Christopher Dane Shulby, Martha Dais Ferreira, Rodrigo F. de Mello, and Sandra Maria Aluisio


Abstract
A common belief in the community is that deep learning requires large datasets to be effective. We show that, with careful parameter selection, deep feature extraction can be applied even to small datasets. We also explore exactly how much data is necessary to guarantee learning, by convergence analysis and by calculating the shattering coefficient for the algorithms used. Another problem is that state-of-the-art results are rarely reproducible because they use proprietary datasets, pretrained networks and/or weight initializations from other larger networks. We present a two-fold novelty for this situation, where a carefully designed CNN architecture, together with a knowledge-driven classifier, achieves nearly state-of-the-art phoneme recognition results with absolutely no pretraining or external weight initialization. We also beat the best replication study of the state of the art with a 28% FER. More importantly, we are able to achieve transparent, reproducible frame-level accuracy and, additionally, perform a convergence analysis to show the generalization capacity of the model, providing statistical evidence that our results are not obtained by chance. Furthermore, we show how algorithms with strong learning guarantees can not only benefit from raw data extraction but also contribute more robust results.

Cite as

Christopher Dane Shulby, Martha Dais Ferreira, Rodrigo F. de Mello, and Sandra Maria Aluisio. Robust Phoneme Recognition with Little Data. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 4:1-4:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{shulby_et_al:OASIcs.SLATE.2019.4,
  author =	{Shulby, Christopher Dane and Ferreira, Martha Dais and de Mello, Rodrigo F. and Aluisio, Sandra Maria},
  title =	{{Robust Phoneme Recognition with Little Data}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{4:1--4:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.4},
  URN =		{urn:nbn:de:0030-drops-108715},
  doi =		{10.4230/OASIcs.SLATE.2019.4},
  annote =	{Keywords: feature extraction, acoustic modeling, phoneme recognition, statistical learning theory}
}
Document
Towards European Portuguese Conversational Assistants for Smart Homes

Authors: Maksym Ketsmur, António Teixeira, Nuno Almeida, and Samuel Silva


Abstract
Nowadays, smart environments, such as Smart Homes, are becoming a reality, due to the access to a wide variety of smart devices at a low cost. These devices are connected to the home network and inhabitants can interact with them using smartphones, tablets and smart assistants, a feature with rising popularity. The diversity of devices, the user’s expectations regarding Smart Homes, and assistants' requirements pose several challenges. In this context, a Smart Home Assistant capable of conversation and device integration can be a valuable help to the inhabitants, not only for smart device control, but also to obtain valuable information and have a broader picture of how the house and its devices behave. This paper presents the current stage of development of one such assistant, targeting European Portuguese, not only supporting the control of home devices, but also providing a potentially more natural way to access a variety of information regarding the home and its devices. The development has been made in the scope of Smart Green Homes (SGH) project.

Cite as

Maksym Ketsmur, António Teixeira, Nuno Almeida, and Samuel Silva. Towards European Portuguese Conversational Assistants for Smart Homes. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{ketsmur_et_al:OASIcs.SLATE.2019.5,
  author =	{Ketsmur, Maksym and Teixeira, Ant\'{o}nio and Almeida, Nuno and Silva, Samuel},
  title =	{{Towards European Portuguese Conversational Assistants for Smart Homes}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{5:1--5:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.5},
  URN =		{urn:nbn:de:0030-drops-108725},
  doi =		{10.4230/OASIcs.SLATE.2019.5},
  annote =	{Keywords: Smart Homes, Conversational Assistants, Ontology}
}
Document
Acquiring Domain-Specific Knowledge for WordNet from a Terminological Database

Authors: Alberto Simões and Xavier Gómez Guinovart


Abstract
In this research we explore a terminological database (Termoteca) in order to expand the Portuguese and Galician wordnets (PULO and Galnet) with the addition of new synset variants (word forms for a concept), usage examples for the variants, and synset glosses or definitions. The methodology applied in this experiment is based on the alignment between concepts of WordNet (synsets) and concepts described in Termoteca (terminological records), taking into account the lexical forms in both resources, their morphological category and their knowledge domains, using the information provided by the WordNet Domains Hierarchy and the Termoteca field domains to reduce the incidence of polysemy and homography in the results of the experiment. The results obtained confirm our hypothesis that the combined use of the semantic domain information included in both resources makes it possible to minimise the problem of lexical ambiguity and to obtain a very acceptable index of precision in terminological information extraction tasks, attaining a precision above 89% when there are two or more different languages sharing at least one lexical form between the synset in Galnet and the Termoteca record.
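
The alignment criterion described above (shared lexical forms plus a compatible knowledge domain) can be sketched as follows; all synset and record data in the example are invented, and the real experiment relies on the WordNet Domains Hierarchy and the Termoteca field domains rather than plain string labels.

synsets = {
    "galnet:00123": {"forms": {"rato", "mouse"}, "domain": "computer_science"},
    "galnet:00456": {"forms": {"rato"}, "domain": "zoology"},
}

term_record = {"forms": {"rato", "mouse", "ratón"}, "domain": "computer_science"}

def candidate_synsets(record, synsets, min_shared_forms=1):
    # Keep only synsets that share lexical forms and agree on the domain.
    for sid, s in synsets.items():
        shared = record["forms"] & s["forms"]
        if len(shared) >= min_shared_forms and record["domain"] == s["domain"]:
            yield sid, shared

print(list(candidate_synsets(term_record, synsets)))
# Only the computer_science synset survives: the shared domain filters out the
# ambiguous zoological sense of "rato".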

Cite as

Alberto Simões and Xavier Gómez Guinovart. Acquiring Domain-Specific Knowledge for WordNet from a Terminological Database. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{simoes_et_al:OASIcs.SLATE.2019.6,
  author =	{Sim\~{o}es, Alberto and G\'{o}mez Guinovart, Xavier},
  title =	{{Acquiring Domain-Specific Knowledge for WordNet from a Terminological Database}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{6:1--6:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.6},
  URN =		{urn:nbn:de:0030-drops-108735},
  doi =		{10.4230/OASIcs.SLATE.2019.6},
  annote =	{Keywords: WordNet, Terminology, Lexical Resources, Natural Language Processing}
}
Document
Definite Clause Grammars with Parse Trees: Extension for Prolog

Authors: Falco Nogatz, Dietmar Seipel, and Salvador Abreu


Abstract
Definite Clause Grammars (DCGs) are a convenient way to specify possibly non-context-free grammars for natural and formal languages. They can be used to progressively build a parse tree as grammar rules are applied, by providing an extra argument in the DCG rule's head. In the simplest case, this is a structure that contains the name of the used nonterminal. This extension of a DCG has been proposed for natural language processing in the past and can be done automatically in Prolog using term expansion. We extend this approach with a meta-nonterminal to specify optional nonterminals and sequences of nonterminals, as these structures are common in grammars for formal, domain-specific languages. We specify a term expansion that represents these sequences as lists while preserving the grammar's ability to be used both for parsing and serialising, i.e., to create a parse tree from given source code and vice versa. We show that this mechanism can be used to lift grammars specified in extended Backus-Naur form (EBNF) to generate parse trees. As a case study, we present a parser for the Prolog programming language itself, based only on the grammars given in the ISO Prolog standard, which produces corresponding parse trees.

Cite as

Falco Nogatz, Dietmar Seipel, and Salvador Abreu. Definite Clause Grammars with Parse Trees: Extension for Prolog. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{nogatz_et_al:OASIcs.SLATE.2019.7,
  author =	{Nogatz, Falco and Seipel, Dietmar and Abreu, Salvador},
  title =	{{Definite Clause Grammars with Parse Trees: Extension for Prolog}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{7:1--7:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.7},
  URN =		{urn:nbn:de:0030-drops-108743},
  doi =		{10.4230/OASIcs.SLATE.2019.7},
  annote =	{Keywords: Definite Clause Grammar, Prolog, Term Expansion, Parse Tree, EBNF}
}
Document
A Conceptual Generic Framework to Debugging in the Domain-Specific Modeling Languages for Multi-Agent Systems

Authors: Baris Tekin Tezel and Geylani Kardas


Abstract
Despite the existence of many agent programming environments and platforms, developers may still encounter difficulties when implementing Multi-agent Systems (MASs), due to the complexity of agent features and agent interactions inside MAS organizations. Working at a higher abstraction layer and modeling agent components within a model-driven engineering (MDE) process before going into the depths of MAS implementation may facilitate MAS development. Perhaps the most popular way of applying MDE to MAS is based on creating Domain-specific Modeling Languages (DSMLs), including appropriate integrated development environments (IDEs) in which both modeling and code generation for the system to be developed can be performed properly. Although the IDEs of these MAS DSMLs provide some checks on the modeled systems according to the related DSML's syntax and semantics descriptions, they currently do not have built-in support for debugging these MAS models. This deficiency leaves agent developers unsure about the correctness of the prepared MAS model at the design phase. To help fill this gap, we introduce a conceptual generic debugging framework supporting the design of agent components inside the modeling environments of MAS DSMLs. The debugging framework is composed of four different metamodels and a simulator. Use of the proposed framework starts with modeling a MAS using a design language and transforming design model instances into a run-time model. According to the framework, the run-time model is simulated on a built-in simulator for debugging. The framework also provides a control mechanism for the simulation in the form of a simulation environment model.

Cite as

Baris Tekin Tezel and Geylani Kardas. A Conceptual Generic Framework to Debugging in the Domain-Specific Modeling Languages for Multi-Agent Systems. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{tezel_et_al:OASIcs.SLATE.2019.8,
  author =	{Tezel, Baris Tekin and Kardas, Geylani},
  title =	{{A Conceptual Generic Framework to Debugging in the Domain-Specific Modeling Languages for Multi-Agent Systems}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{8:1--8:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.8},
  URN =		{urn:nbn:de:0030-drops-108755},
  doi =		{10.4230/OASIcs.SLATE.2019.8},
  annote =	{Keywords: debugging, domain-specific modeling languages, multi-agent systems, simulation}
}
Document
From Lexical to Semantic Features in Paraphrase Identification

Authors: Pedro Fialho, Luísa Coheur, and Paulo Quaresma


Abstract
The task of paraphrase identification has been applied to diverse scenarios in Natural Language Processing, such as Machine Translation, summarization, or plagiarism detection. In this paper we present a comparative study on the performance of lexical, syntactic and semantic features in the task of paraphrase identification in the Microsoft Research Paraphrase Corpus. In our experiments, semantic features do not represent a gain in results, and syntactic features lead to the best results, but only if combined with lexical features.
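
The feature-combination setup can be pictured with the toy sketch below; the two feature functions are deliberately simplistic placeholders (word overlap and a first-token check), not the lexical and syntactic features evaluated in the paper, and the training pairs are invented.

from sklearn.linear_model import LogisticRegression

def lexical_features(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    overlap = len(ta & tb) / max(len(ta | tb), 1)      # Jaccard word overlap
    return [overlap, abs(len(ta) - len(tb))]

def syntactic_features(a, b):
    # Placeholder: a real system would compare parse trees or PoS sequences.
    return [float(a.split()[0].lower() == b.split()[0].lower())]

pairs = [("the cat sat on the mat", "a cat was sitting on the mat", 1),
         ("the cat sat on the mat", "stock markets fell sharply today", 0)]

X = [lexical_features(a, b) + syntactic_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)    # combined lexical + syntactic feature vector
print(clf.predict(X))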

Cite as

Pedro Fialho, Luísa Coheur, and Paulo Quaresma. From Lexical to Semantic Features in Paraphrase Identification. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 9:1-9:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{fialho_et_al:OASIcs.SLATE.2019.9,
  author =	{Fialho, Pedro and Coheur, Lu{\'\i}sa and Quaresma, Paulo},
  title =	{{From Lexical to Semantic Features in Paraphrase Identification}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{9:1--9:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.9},
  URN =		{urn:nbn:de:0030-drops-108763},
  doi =		{10.4230/OASIcs.SLATE.2019.9},
  annote =	{Keywords: paraphrase identification, lexical features, syntactic features, semantic features}
}
Document
Learning JavaScript in a Local Playground

Authors: Ricardo Queirós


Abstract
JavaScript is currently one of the most popular languages worldwide. Its meteoric rise is mainly due to the fact that the language is no longer bound to the limits of the browser and can now be used on several platforms. This growth has led to its increasing use by companies and, consequently, to its inclusion in school curricula. Meanwhile, in the teaching-learning process of computer programming, teachers continue to use automatic code evaluation systems to relieve their time-consuming and error-prone evaluation work. However, these systems reveal a number of issues: they are very generic (one size fits all), they have scarce features to foster exercise authoring, they do not adhere to interoperability standards (e.g. LMS communication), and they rely solely on remote evaluators, which exposes them to single-point-of-failure problems and reduces application performance and user experience, an aspect especially valued by mobile users. In this context, LearnJS is presented as a Web playground for practicing the JavaScript language. The system uses a local evaluator (the user's own browser), keeping response times small and thus benefiting the user experience. LearnJS also uses a sophisticated authoring system that allows the teacher to quickly create new exercises and aggregate them into gamified activities. Finally, LearnJS includes universal LMS connectors based on international specifications. In order to validate its use, an evaluation was carried out by a group of students from the Porto Polytechnic, aiming to validate the usability of its graphical user interface.

Cite as

Ricardo Queirós. Learning JavaScript in a Local Playground. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 10:1-10:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{queiros:OASIcs.SLATE.2019.10,
  author =	{Queir\'{o}s, Ricardo},
  title =	{{Learning JavaScript in a Local Playground}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{10:1--10:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.10},
  URN =		{urn:nbn:de:0030-drops-108775},
  doi =		{10.4230/OASIcs.SLATE.2019.10},
  annote =	{Keywords: programming languages, gamification, e-learning, automatic evaluation, web development}
}
Document
Scaling up a Programmers' Profile Tool

Authors: Martinho Aragão, Maria João Varanda Pereira, and Pedro Rangel Henriques


Abstract
The style of programming, proficiency in the programming language, the conciseness of the solution, the use of comments, and so on allow programmers to be compared through static analysis of their code. The Programmer Profiler Tool, commonly named PP Tool, is an open-source profiling tool for the Java language in which a programmer's ability can be classified into one of five possible profiles, distinguished by the levels of both skill and readability. Taking a set of correct solutions, the comparison between solutions for the same problems is fundamental to evaluate proficiency on the analysed criteria. As such, there was a need to tune the tool so that it can handle, simultaneously, a larger number of programs and a wider scope of solutions. By scaling up PP Tool, it will be possible to apply it in a far wider range of situations, as it will be able to cope with programmers from different geographies, with or without formal education, and with between 1 and 20 years of experience, among other factors. To that end, a set of features was implemented and tested, and is described in this paper.
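
As a rough illustration of the kind of static measurements such a profiler aggregates, the toy function below computes a few surface metrics from a Java snippet; the actual PP Tool criteria and their weighting are richer than this, and the snippet is invented.

def simple_metrics(source):
    lines = [l for l in source.splitlines() if l.strip()]
    comments = [l for l in lines if l.strip().startswith("//")]
    return {
        "loc": len(lines),                                      # conciseness of the solution
        "comment_ratio": len(comments) / max(len(lines), 1),    # readability hint
        "avg_line_length": sum(map(len, lines)) / max(len(lines), 1),
    }

java_snippet = """\
// computes the factorial iteratively
int fact(int n) {
    int r = 1;
    for (int i = 2; i <= n; i++) r *= i;
    return r;
}
"""
print(simple_metrics(java_snippet))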

Cite as

Martinho Aragão, Maria João Varanda Pereira, and Pedro Rangel Henriques. Scaling up a Programmers' Profile Tool. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 11:1-11:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{aragao_et_al:OASIcs.SLATE.2019.11,
  author =	{Arag\~{a}o, Martinho and Pereira, Maria Jo\~{a}o Varanda and Henriques, Pedro Rangel},
  title =	{{Scaling up a Programmers' Profile Tool}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{11:1--11:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.11},
  URN =		{urn:nbn:de:0030-drops-108781},
  doi =		{10.4230/OASIcs.SLATE.2019.11},
  annote =	{Keywords: Programmers Profiling, Code Analysis, Programming Skills, Code Readability}
}
Document
Beyond Classical Parallel Programming Frameworks: Chapel vs Julia

Authors: Rok Novosel and Boštjan Slivnik


Abstract
Although parallel programming languages have existed for decades, (scientific) parallel programming is still dominated by Fortran and C/C++ augmented with parallel programming frameworks, e.g., MPI, OpenMP, OpenCL and CUDA. This paper contains a comparative study of Chapel and Julia, two languages quite different from one another as well as from Fortran and C, in regard to parallel programming on distributed and shared memory computers. The study is carried out using test cases that expose the need for different approaches to parallel programming. Test cases are implemented in Chapel and Julia, and in C augmented with MPI and OpenMP. It is shown that both languages, Chapel and Julia, represent a viable alternative to Fortran and C/C++ augmented with parallel programming frameworks: the programmer’s efficiency is considerably improved while the speed of programs is not significantly affected.

Cite as

Rok Novosel and Boštjan Slivnik. Beyond Classical Parallel Programming Frameworks: Chapel vs Julia. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 12:1-12:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{novosel_et_al:OASIcs.SLATE.2019.12,
  author =	{Novosel, Rok and Slivnik, Bo\v{s}tjan},
  title =	{{Beyond Classical Parallel Programming Frameworks: Chapel vs Julia}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{12:1--12:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.12},
  URN =		{urn:nbn:de:0030-drops-108796},
  doi =		{10.4230/OASIcs.SLATE.2019.12},
  annote =	{Keywords: parallel programming languages, Chapel, Julia}
}
Document
Knowledge Representation of Crime-Related Events: a Preliminary Approach

Authors: Gonçalo Carnaz, Vitor Beires Nogueira, and Mário Antunes


Abstract
Crime is reported in every daily newspaper and, in particular, in the criminal investigation reports produced by several police departments, creating a large amount of data to be processed by humans. Other research studies related to relation extraction (a branch of information retrieval) in Portuguese have arisen over the years, but with few extracted relations and diverse computational approaches, which could be improved by recent features to achieve better performance. This paper presents the ongoing work on populating the SEM (Simple Event Model) ontology with instances retrieved from crime-related documents, supported by an SVO (Subject, Verb, Object) algorithm that uses hand-crafted rules to extract events, achieving a performance of 0.86 (F-measure).
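
For readers unfamiliar with SVO extraction, the sketch below shows the general idea using spaCy's dependency labels; it is a simplified stand-in for the hand-crafted rules mentioned above and assumes the Portuguese model pt_core_news_sm is installed.

import spacy

nlp = spacy.load("pt_core_news_sm")   # assumed Portuguese pipeline

def svo_triples(text):
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubj:pass")]
            objects = [c for c in token.children if c.dep_ in ("obj", "dobj")]
            for s in subjects:
                for o in objects:
                    yield (s.text, token.lemma_, o.text)   # (Subject, Verb, Object)

print(list(svo_triples("O suspeito roubou o carro na terça-feira.")))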

Cite as

Gonçalo Carnaz, Vitor Beires Nogueira, and Mário Antunes. Knowledge Representation of Crime-Related Events: a Preliminary Approach. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 13:1-13:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{carnaz_et_al:OASIcs.SLATE.2019.13,
  author =	{Carnaz, Gon\c{c}alo and Nogueira, Vitor Beires and Antunes, M\'{a}rio},
  title =	{{Knowledge Representation of Crime-Related Events: a Preliminary Approach}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{13:1--13:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.13},
  URN =		{urn:nbn:de:0030-drops-108809},
  doi =		{10.4230/OASIcs.SLATE.2019.13},
  annote =	{Keywords: SEM Ontology, Relation Extraction, Crime-Related Events, SVO Algorithm, Ontology Population}
}
Document
Distinguishing Different Classes of Utterances - the UC-PT Corpus

Authors: Mariana Gaspar Fernandes, Cátia Dias, and Luísa Coheur


Abstract
Conversational bots are being used in many scenarios, and we can find them playing museum guides or providing customer support, for instance. These bots base their answers on specific information related to their domain of expertise, but each user request also carries general information that, when properly identified, can help the agent decide what to answer. For example, the bot's search for a response will probably differ depending on whether the user is asking a question or uttering a statement. In this paper we present three corpora for the Portuguese language - the UC-PT corpus - that can be used to help conversational bots distinguish: a) questions from non-questions; b) yes/no questions from other types of questions; and c) personal from non-personal questions. With this information, the agent can decide, for instance, not to answer, to redirect the question to a persona chatbot, or to answer it with a simple "yes", "no" or "maybe". In addition, we benchmark the classification process on these corpora. The corpora will be made publicly available.

Cite as

Mariana Gaspar Fernandes, Cátia Dias, and Luísa Coheur. Distinguishing Different Classes of Utterances - the UC-PT Corpus. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 14:1-14:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{fernandes_et_al:OASIcs.SLATE.2019.14,
  author =	{Fernandes, Mariana Gaspar and Dias, C\'{a}tia and Coheur, Lu{\'\i}sa},
  title =	{{Distinguishing Different Classes of Utterances - the UC-PT Corpus}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{14:1--14:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.14},
  URN =		{urn:nbn:de:0030-drops-108817},
  doi =		{10.4230/OASIcs.SLATE.2019.14},
  annote =	{Keywords: Corpora, Questions, Conversational Agents, Portuguese Language}
}
Document
Digital Collection Creator, Visualizer and Explorer

Authors: Luís F. Martins, Cristiana Araújo, and Pedro Rangel Henriques


Abstract
In this paper we introduce and discuss a recent project, called CortaColaEspia, aimed at extending the 'Ontology-based Collection Processor', developed previously in the context of a Compilers course, with some extra relevant features. The basic processor, based on the OntoDL tool, was able to read the ontological description of a small collection of objects (cards, pencils, toys, etc.) and automatically produce a web-based exhibition space to display the objects, providing a conceptual navigation through them. The extension under discussion is intended to create a new DSL to describe the details of the exhibition room organization (what concepts and relations to show; where and how to show them; etc.). A second objective consists of a new module to merge two collections, or to enrich a collection with extra information about the collected objects. The last requirement is the incorporation of a natural language processor to analyze the objects' captions or short inscriptions, in order to extract information that can create knowledge about a specific domain, a society or an epoch.

Cite as

Luís F. Martins, Cristiana Araújo, and Pedro Rangel Henriques. Digital Collection Creator, Visualizer and Explorer. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 15:1-15:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{martins_et_al:OASIcs.SLATE.2019.15,
  author =	{Martins, Lu{\'\i}s F. and Ara\'{u}jo, Cristiana and Henriques, Pedro Rangel},
  title =	{{Digital Collection Creator, Visualizer and Explorer}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{15:1--15:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.15},
  URN =		{urn:nbn:de:0030-drops-108829},
  doi =		{10.4230/OASIcs.SLATE.2019.15},
  annote =	{Keywords: Digital Collections, Ontology, DSL, Program Generation}
}
Document
Urban Evolution of Fafe in the Last Two Centuries

Authors: João Filipe C. Lameiras, Mónica Guimarães, and Pedro Rangel Henriques


Abstract
Human beings love to collect, store and preserve documents for later exploration, which has led to the creation of archives. Consulting municipal archives' assets, seeking information in order to explore the knowledge implicit in their documents, is the main reason for the existence of these memory institutions. On the other hand, it is known that the movement of people from dispersed living to concentration in urban environments has a strong impact both on human civilization and on the environment. This motivates Social Science researchers to study the urban evolution of cities. In this context, and having noticed that Fafe's Archive holds an important collection of municipal records (since the XIX century) concerning applications for authorization to construct or reconstruct private or public buildings, we decided to create a digital repository of those documents to enable their analysis. An information system shall be developed around it for information retrieval and knowledge exploration; it is also desirable that this application provides features to visualize the extracted information in convenient ways, such as positioning buildings on a map. This paper discusses the development of the referred Web-based system to study the urban evolution of Fafe in the XIX and XX centuries, focusing on the ontology created to understand the domain to be explored. The definition of a markup language (an XML dialect) to annotate the archive's documents, enabling automatic data extraction and semantic search, is also one of the paper's topics. It will be discussed that this annotation was not defined from scratch; instead, its design followed the ontology. It is actually an ontology-driven system. Finally, the current state of the Web interface (the system's front-end) developed so far will be presented.
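
To give a flavour of the ontology-driven annotation, the snippet below shows a hypothetical record in an XML dialect of the kind described, together with its extraction using the Python standard library; the element and attribute names are invented, since the project's dialect follows its own ontology.

import xml.etree.ElementTree as ET

record = """
<processo ano="1898">
  <requerente>José da Silva</requerente>
  <pedido tipo="construção">
    <edificio uso="habitação"/>
    <local rua="Rua de Santo António" freguesia="Fafe"/>
  </pedido>
</processo>
"""

root = ET.fromstring(record)
print(root.get("ano"),
      root.findtext("requerente"),
      root.find("pedido/local").get("rua"))   # data ready for semantic search or map placement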

Cite as

João Filipe C. Lameiras, Mónica Guimarães, and Pedro Rangel Henriques. Urban Evolution of Fafe in the Last Two Centuries. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 16:1-16:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{lameiras_et_al:OASIcs.SLATE.2019.16,
  author =	{Lameiras, Jo\~{a}o Filipe C. and Guimar\~{a}es, M\'{o}nica and Henriques, Pedro Rangel},
  title =	{{Urban Evolution of Fafe in the Last Two Centuries}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{16:1--16:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.16},
  URN =		{urn:nbn:de:0030-drops-108836},
  doi =		{10.4230/OASIcs.SLATE.2019.16},
  annote =	{Keywords: Urban Evolution, Urban Research, Urban morphology, Ontology, XML}
}
Document
Alexa, How Can I Reason with Prolog?

Authors: Falco Nogatz, Julia Kübert, Dietmar Seipel, and Salvador Abreu


Abstract
As with Amazon's Echo and its conversational agent Alexa, smart voice-controlled devices are becoming ever more present in daily life, and many different applications can be integrated into this platform. In this paper, we present a framework that eases the development of skills in Prolog. As Prolog has a long history in natural language processing, we may integrate well-established techniques, such as reasoning about knowledge with Attempto Controlled English, instead of depending on example phrases and pre-defined slots.

Cite as

Falco Nogatz, Julia Kübert, Dietmar Seipel, and Salvador Abreu. Alexa, How Can I Reason with Prolog?. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 17:1-17:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{nogatz_et_al:OASIcs.SLATE.2019.17,
  author =	{Nogatz, Falco and K\"{u}bert, Julia and Seipel, Dietmar and Abreu, Salvador},
  title =	{{Alexa, How Can I Reason with Prolog?}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{17:1--17:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.17},
  URN =		{urn:nbn:de:0030-drops-108841},
  doi =		{10.4230/OASIcs.SLATE.2019.17},
  annote =	{Keywords: Prolog, Attempto Controlled English, Voice-Controlled Agents, Controlled Natural Language}
}
Document
Improving NLTK for Processing Portuguese

Authors: João Ferreira, Hugo Gonçalo Oliveira, and Ricardo Rodrigues


Abstract
Python has a growing community of users, especially in the AI and ML fields. Yet, Computational Processing of Portuguese in this programming language is limited, in both available tools and results. This paper describes NLPyPort, a NLP pipeline in Python, primarily based on NLTK, and focused on Portuguese. It is mostly assembled from pre-existent resources or their adaptations, but improves over the performance of existing alternatives in Python, namely in the tasks of tokenization, PoS tagging, lemmatization and NER.
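
The first stages of such a pipeline can be reproduced with NLTK alone, as in the hedged sketch below (tokenization and sentence splitting only; NLPyPort's improved PoS tagging, lemmatization and NER are not shown). The required resource name may differ across NLTK versions.

import nltk

nltk.download("punkt", quiet=True)   # newer NLTK versions may require "punkt_tab" instead

text = "O João vive em Coimbra. Ele estuda na universidade."
for sentence in nltk.sent_tokenize(text, language="portuguese"):
    print(nltk.word_tokenize(sentence, language="portuguese"))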

Cite as

João Ferreira, Hugo Gonçalo Oliveira, and Ricardo Rodrigues. Improving NLTK for Processing Portuguese. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 18:1-18:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{ferreira_et_al:OASIcs.SLATE.2019.18,
  author =	{Ferreira, Jo\~{a}o and Gon\c{c}alo Oliveira, Hugo and Rodrigues, Ricardo},
  title =	{{Improving NLTK for Processing Portuguese}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{18:1--18:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.18},
  URN =		{urn:nbn:de:0030-drops-108852},
  doi =		{10.4230/OASIcs.SLATE.2019.18},
  annote =	{Keywords: NLP, Tokenization, PoS tagging, Lemmatization, Named Entity Recognition}
}
Document
Quarmic: A Data-Driven Web Development Framework

Authors: Pedro Miguel Pereira Cunha and José Paulo Leal


Abstract
Quarmic is a web framework for rapid prototyping of web applications. Its main goal is to facilitate the development of web applications by providing a high level of abstraction that hides Web communication complexities. This framework allows developers to build scalable applications capable of handling data communication in different models, data persistence and authentication, requiring them to use just simple annotations. Quarmic's approach consists of replicating a shared object among clients and the server, so that they communicate through the execution of its methods. Annotations, namely decorators, are used to indicate the concern (model or view) that each method addresses and to implement the framework's inversion of control. Indicating a method's concern enables its execution to be separated between the clients (responsible for the view) and the server (responsible for the model), which facilitates state management and code maintenance.
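
As an analogy only (Quarmic itself is a web framework and this is not its API), the Python sketch below shows how decorators can tag methods with the concern they address; the framework uses such tags to decide whether a method runs on the server (model) or on the clients (view).

def concern(name):
    def mark(method):
        method.concern = name          # record where the method should run
        return method
    return mark

class SharedCounter:
    def __init__(self):
        self.value = 0

    @concern("model")                  # would run on the server, state persisted
    def increment(self):
        self.value += 1

    @concern("view")                   # would run on each client, updating the UI
    def render(self):
        print(f"counter = {self.value}")

c = SharedCounter()
c.increment()
c.render()
print(SharedCounter.increment.concern, SharedCounter.render.concern)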

Cite as

Pedro Miguel Pereira Cunha and José Paulo Leal. Quarmic: A Data-Driven Web Development Framework. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 19:1-19:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{cunha_et_al:OASIcs.SLATE.2019.19,
  author =	{Cunha, Pedro Miguel Pereira and Leal, Jos\'{e} Paulo},
  title =	{{Quarmic: A Data-Driven Web Development Framework}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{19:1--19:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.19},
  URN =		{urn:nbn:de:0030-drops-108869},
  doi =		{10.4230/OASIcs.SLATE.2019.19},
  annote =	{Keywords: web development, framework, data-driven}
}
Document
Identifying Causal Relations in Legal Documents with Dependency Syntactic Analysis

Authors: Pablo Gamallo, Patricia Martín-Rodilla, and Beatriz Calderón


Abstract
This article describes a method for enriching a dependency-based parser with causal connectors. Our specific objective is to identify causal relationships between elementary discourse units in Spanish legal texts. For this purpose, the approach we follow is to search for specific discourse connectives which are taken as causal dependencies relating an effect event (head) with a verbal or nominal cause (dependent). As a result, we turn a specific syntactic parser into a discourse parser aimed at recognizing causal structures.

Cite as

Pablo Gamallo, Patricia Martín-Rodilla, and Beatriz Calderón. Identifying Causal Relations in Legal Documents with Dependency Syntactic Analysis. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 20:1-20:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{gamallo_et_al:OASIcs.SLATE.2019.20,
  author =	{Gamallo, Pablo and Mart{\'\i}n-Rodilla, Patricia and Calder\'{o}n, Beatriz},
  title =	{{Identifying Causal Relations in Legal Documents with Dependency Syntactic Analysis}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{20:1--20:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.20},
  URN =		{urn:nbn:de:0030-drops-108870},
  doi =		{10.4230/OASIcs.SLATE.2019.20},
  annote =	{Keywords: Dependency Analysis, Discourse Analysis, Causal Markers, Legal Documents}
}
Document
Quantitative Analysis of Suffix Variability of Comparative Adjectives in Russian

Authors: Timur I. Galeev and Vladimir V. Bochkarev


Abstract
There are two variants of the productive suffix of comparative adjectives used in modern Russian: a full two-syllable form and a reduced one-syllable suffix. Both variants are normative, but they differ slightly in terms of stylistics: the suffix -ee makes the word sound neutral, whereas a word with the suffix -ei sounds more colloquial. The article presents a quantitative study of the variability of the suffixes of comparative adjectives and analyzes the linguistic and extralinguistic factors that influence the frequency of the variants. The authors conclude that the previously anticipated influence of phonetic and morphological factors on the choice of the adjective suffix is absent in bookish speech.

Cite as

Timur I. Galeev and Vladimir V. Bochkarev. Quantitative Analysis of Suffix Variability of Comparative Adjectives in Russian. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 21:1-21:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{galeev_et_al:OASIcs.SLATE.2019.21,
  author =	{Galeev, Timur I. and Bochkarev, Vladimir V.},
  title =	{{Quantitative Analysis of Suffix Variability of Comparative Adjectives in Russian}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{21:1--21:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.21},
  URN =		{urn:nbn:de:0030-drops-108886},
  doi =		{10.4230/OASIcs.SLATE.2019.21},
  annote =	{Keywords: Adjectives, language change, variability, Google Books Ngram, Russian language}
}
Document
Hunting Ancestors: A Unified Approach for Discovering Genealogical Information

Authors: José João Almeida and Rui Castro Mendes


Abstract
This paper presents a unified approach for discovering genealogical information. It presents a framework for storing information concerning ancestors, locations, dates and documents. It also intends to provide a framework able to perform inference about dates by using constraints, and to handle relations, locations and sources. The DSL presented also aims to help users store information from heterogeneous sources along with the evidence contained therein.
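
The date inference mentioned above boils down to interval narrowing; the sketch below shows the idea with invented constraints (the actual DSL expresses such constraints declaratively over stored records).

birth = [1700, 1900]                       # initial birth-year interval [earliest, latest]

def apply_constraint(interval, low=None, high=None):
    lo, hi = interval
    if low is not None:
        lo = max(lo, low)
    if high is not None:
        hi = min(hi, high)
    return [lo, hi]

# "married in 1852, at least 16 years old" -> born no later than 1836
birth = apply_constraint(birth, high=1852 - 16)
# "listed as a child in the 1820 census" -> born in or after 1805 (assumed reading)
birth = apply_constraint(birth, low=1805)
print(birth)                               # -> [1805, 1836]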

Cite as

José João Almeida and Rui Castro Mendes. Hunting Ancestors: A Unified Approach for Discovering Genealogical Information. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 22:1-22:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{almeida_et_al:OASIcs.SLATE.2019.22,
  author =	{Almeida, Jos\'{e} Jo\~{a}o and Mendes, Rui Castro},
  title =	{{Hunting Ancestors: A Unified Approach for Discovering Genealogical Information}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{22:1--22:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.22},
  URN =		{urn:nbn:de:0030-drops-108890},
  doi =		{10.4230/OASIcs.SLATE.2019.22},
  annote =	{Keywords: Genealogy, Domain Specific Language, Temporal Constraints}
}
Document
SeCoGen - A Service Code Generator

Authors: Ricardo Queirós


Abstract
The architectural pattern of micro-services is being increasingly adopted by developers, facilitating the maintenance and scalability of systems' code. The adoption and consumption of these micro-services are often seen in the front-end code of Web applications. Nevertheless, this adoption obliges web designers/developers to know where to look for those web services, to read their documentation, and to write the request/response code, as well as to control the corresponding UI rendering. This whole process is time-consuming and error-prone. This article introduces SeCoGen, an interactive code generator for Web service parsing and consumption. The generator benefits from an HTTP request template, a query normalizer and dynamic UI templates. In order to validate the generator's feasibility and usefulness, a REST API to search for countries is used.
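
A minimal sketch of the template-filling step is given below: an HTTP request template is instantiated into client code for a country-search endpoint. The template, the function name and the endpoint URL are illustrative choices, not SeCoGen's actual artefacts.

from string import Template

request_template = Template(
    "import requests\n"
    "\n"
    "def ${function_name}(${parameter}):\n"
    "    response = requests.get(f\"${base_url}/{${parameter}}\")\n"
    "    response.raise_for_status()\n"
    "    return response.json()\n"
)

generated = request_template.substitute(
    function_name="search_country",
    parameter="name",
    base_url="https://restcountries.com/v3.1/name",
)
print(generated)   # the generated snippet can then be dropped into front-end code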

Cite as

Ricardo Queirós. SeCoGen - A Service Code Generator. In 8th Symposium on Languages, Applications and Technologies (SLATE 2019). Open Access Series in Informatics (OASIcs), Volume 74, pp. 23:1-23:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{queiros:OASIcs.SLATE.2019.23,
  author =	{Queir\'{o}s, Ricardo},
  title =	{{SeCoGen - A Service Code Generator}},
  booktitle =	{8th Symposium on Languages, Applications and Technologies (SLATE 2019)},
  pages =	{23:1--23:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-114-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{74},
  editor =	{Rodrigues, Ricardo and Janou\v{s}ek, Jan and Ferreira, Lu{\'\i}s and Coheur, Lu{\'\i}sa and Batista, Fernando and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2019.23},
  URN =		{urn:nbn:de:0030-drops-108905},
  doi =		{10.4230/OASIcs.SLATE.2019.23},
  annote =	{Keywords: Code Generation, Web services, micro-services}
}
