OASIcs, Volume 51

5th Symposium on Languages, Applications and Technologies (SLATE'16)




Event

SLATE 2016, June 20-21, 2016, Maribor, Slovenia

Editors

Marjan Mernik
José Paulo Leal
Hugo Gonçalo Oliveira

Publication Details

  • published at: 2016-06-21
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-006-4
  • DBLP: db/conf/slate/slate2016

Documents
Document
Complete Volume
OASIcs, Volume 51, SLATE'16, Complete Volume

Authors: Marjan Mernik, José Paulo Leal, and Hugo Gonçalo Oliveira



Cite as

5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@Proceedings{mernik_et_al:OASIcs.SLATE.2016,
  title =	{{OASIcs, Volume 51, SLATE'16, Complete Volume}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016},
  URN =		{urn:nbn:de:0030-drops-60617},
  doi =		{10.4230/OASIcs.SLATE.2016},
  annote =	{Keywords: Natural Language Processing, Programming Languages}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Program Committee, List of Authors

Authors: Marjan Mernik, José Paulo Leal, and Hugo Gonçalo Oliveira



Cite as

5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{mernik_et_al:OASIcs.SLATE.2016.0,
  author =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  title =	{{Front Matter, Table of Contents, Preface, Program Committee, List of Authors}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{0:i--0:xiv},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.0},
  URN =		{urn:nbn:de:0030-drops-60052},
  doi =		{10.4230/OASIcs.SLATE.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Program Committee, List of Authors}
}
Document
Co-Bidding Graphs for Constrained Paper Clustering

Authors: Tadej Škvorc, Nada Lavrač, and Marko Robnik-Šikonja


Abstract
The information needed to solve many important problems can be found in various formats and modalities: besides the standard tabular form, these also include text and graphs. Solving such problems requires the fusion of different data sources. We demonstrate a methodology that enriches textual information with graph-based data and uses both in an innovative machine-learning application: clustering. The proposed solution helps in the organization of academic conferences by automating one of their time-consuming tasks. Conference organizers can currently choose from a small number of software tools that manage the paper review process but offer little or no support for automated conference scheduling. We present a two-tier constrained clustering method for automatic conference scheduling that assigns paper presentations to predefined schedule slots instead of requiring the program chairs to assign them manually. The method uses clustering algorithms to group papers based on two types of similarity: text similarity (computed from paper abstracts and titles) and graph similarity based on reviewers' co-bidding information collected during the reviewing phase. In this way, reviewers' preferences serve as a proxy for the preferences of conference attendees. As a result of the proposed two-tier clustering process, similar papers are assigned to predefined conference schedule slots. We show that using graph-based information in addition to text-based similarity increases clustering performance. The source code of the solution is freely available.
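As an illustration of the fusion step described above (invented toy data and a simplified greedy routine, not the paper's implementation), the two similarity sources can be combined with a weight and fed to a constrained clustering procedure:

```python
# Toy sketch: fuse text and co-bidding similarities, then cluster papers
# into a fixed number of schedule slots. Data and weights are invented.

text_sim = {("p1", "p2"): 0.9, ("p1", "p3"): 0.1, ("p2", "p3"): 0.2}
bid_sim  = {("p1", "p2"): 0.8, ("p1", "p3"): 0.7, ("p2", "p3"): 0.1}

def combined(a, b, alpha=0.5):
    """Weighted fusion of the text and co-bidding similarity sources."""
    key = (a, b) if (a, b) in text_sim else (b, a)
    return alpha * text_sim[key] + (1 - alpha) * bid_sim[key]

def cluster(papers, n_slots):
    """Greedy constrained clustering: seed one paper per slot, then assign
    each remaining paper to the slot holding its most similar paper."""
    slots = [[p] for p in papers[:n_slots]]
    for p in papers[n_slots:]:
        best = max(slots, key=lambda s: max(combined(p, q) for q in s))
        best.append(p)
    return slots

print(cluster(["p1", "p2", "p3"], 2))  # p3 joins p1: co-bidding outweighs text
```

With alpha = 0.5, p3 ends up with p1 even though their text similarity is low, because the co-bidding graph links them strongly, which is exactly the kind of effect the paper exploits.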

Cite as

Tadej Škvorc, Nada Lavrač, and Marko Robnik-Šikonja. Co-Bidding Graphs for Constrained Paper Clustering. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{skvorc_et_al:OASIcs.SLATE.2016.1,
  author =	{\v{S}kvorc, Tadej and Lavra\v{c}, Nada and Robnik-\v{S}ikonja, Marko},
  title =	{{Co-Bidding Graphs for Constrained Paper Clustering}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{1:1--1:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.1},
  URN =		{urn:nbn:de:0030-drops-60062},
  doi =		{10.4230/OASIcs.SLATE.2016.1},
  annote =	{Keywords: Text mining, data fusion, scheduling, constrained clustering, conference}
}
Document
A Re-Ranking Method Based on Irrelevant Documents in Ad-Hoc Retrieval

Authors: Rabeb Mbarek, Mohamed Tmar, Hawete Hattab, and Mohand Boughanem


Abstract
In this paper, we propose a novel approach to document re-ranking that relies on the concept of negative feedback represented by irrelevant documents. In a previous paper, a pseudo-relevance feedback method was introduced using an absorbing document ~d that best fits the user's need; the document ~d is orthogonal to the majority of irrelevant documents. In this paper, this document is used to re-rank the initial set of ranked documents in ad-hoc retrieval. An evaluation carried out on a standard document collection shows the effectiveness of the proposed approach.
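The central idea, a vector orthogonal to the irrelevant documents that is then used for ranking, can be sketched as follows (a toy Gram-Schmidt construction on invented vectors, not the authors' implementation):

```python
# Toy sketch: build an "absorbing" vector orthogonal to irrelevant document
# vectors, then re-rank documents by cosine similarity to it.

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def sub(u, v): return [x - y for x, y in zip(u, v)]
def scale(u, c): return [c * x for x in u]

def orthogonal_to(irrelevant, seed):
    """Remove from `seed` the components spanned by the irrelevant vectors
    (Gram-Schmidt), leaving a vector orthogonal to all of them."""
    basis = []
    for v in irrelevant:
        for b in basis:
            v = sub(v, scale(b, dot(v, b)))
        n = dot(v, v) ** 0.5
        if n > 1e-12:
            basis.append(scale(v, 1 / n))   # orthonormal basis of irrelevance
    d = seed
    for b in basis:
        d = sub(d, scale(b, dot(d, b)))
    return d

def rerank(docs, d):
    """Sort document vectors by cosine similarity to the absorbing vector d."""
    nd = dot(d, d) ** 0.5
    def cos(u):
        nu = dot(u, u) ** 0.5
        return dot(u, d) / (nu * nd) if nu and nd else 0.0
    return sorted(docs, key=cos, reverse=True)

irrelevant = [[1, 0, 0], [0, 1, 0]]       # toy irrelevant documents
d = orthogonal_to(irrelevant, [1, 1, 1])  # only the z component survives
```

Documents aligned with the irrelevant subspace score near zero against d, so they sink in the ranking; this is the "absorption of irrelevance" named in the keywords, in miniature.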

Cite as

Rabeb Mbarek, Mohamed Tmar, Hawete Hattab, and Mohand Boughanem. A Re-Ranking Method Based on Irrelevant Documents in Ad-Hoc Retrieval. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 2:1-2:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{mbarek_et_al:OASIcs.SLATE.2016.2,
  author =	{Mbarek, Rabeb and Tmar, Mohamed and Hattab, Hawete and Boughanem, Mohand},
  title =	{{A Re-Ranking Method Based on Irrelevant Documents in Ad-Hoc Retrieval}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{2:1--2:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.2},
  URN =		{urn:nbn:de:0030-drops-60072},
  doi =		{10.4230/OASIcs.SLATE.2016.2},
  annote =	{Keywords: Re-ranking, absorption of irrelevance, vector product}
}
Document
Comparing the Performance of Different NLP Toolkits in Formal and Social Media Text

Authors: Alexandre Pinto, Hugo Gonçalo Oliveira, and Ana Oliveira Alves


Abstract
Nowadays, there are many toolkits available for performing common natural language processing tasks, which enable the development of more powerful applications without having to start from scratch. In fact, for English, there is no need to develop tools such as tokenizers, part-of-speech (POS) taggers, chunkers or named entity recognizers (NER). The current challenge is to select which one to use out of the range of available tools. This choice may depend on several aspects, including the kind and source of text, where the level of formality may influence the performance of such tools. In this paper, we assess a range of natural language processing toolkits with their default configuration on a set of standard tasks (tokenization, POS tagging, chunking and NER), over popular datasets that cover newspaper and social network text. The obtained results are analyzed and, while we could not settle on a single toolkit, this exercise was very helpful in narrowing our choice.

Cite as

Alexandre Pinto, Hugo Gonçalo Oliveira, and Ana Oliveira Alves. Comparing the Performance of Different NLP Toolkits in Formal and Social Media Text. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{pinto_et_al:OASIcs.SLATE.2016.3,
  author =	{Pinto, Alexandre and Gon\c{c}alo Oliveira, Hugo and Oliveira Alves, Ana},
  title =	{{Comparing the Performance of Different NLP Toolkits in Formal and Social Media Text}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{3:1--3:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.3},
  URN =		{urn:nbn:de:0030-drops-60086},
  doi =		{10.4230/OASIcs.SLATE.2016.3},
  annote =	{Keywords: Natural language processing, toolkits, formal text, social media, benchmark}
}
Document
Comparing and Benchmarking Semantic Measures Using SMComp

Authors: Teresa Costa and José Paulo Leal


Abstract
The goal of semantic measures is to compare pairs of concepts, words, sentences or named entities. Their categorization depends on what they measure: a measure that considers only taxonomic relationships is a similarity measure; one that considers all types of relationships is a relatedness measure. The evaluation of these measures usually relies on semantic gold standards: datasets of word pairs with human-assigned ratings, used to assess how well a semantic measure performs. A few frameworks provide tools to compute and analyze several well-known measures. This paper presents a novel tool, SMComp, a testbed designed for path-based semantic measures. In its current state, it is a domain-specific tool using three different versions of WordNet. SMComp has two views: one to compute semantic measures for a pair of words and another to assess a semantic measure using a dataset. The first view offers several measures described in the literature, as well as the possibility of creating a new measure by introducing Java code snippets in the GUI. The second view offers a large set of semantic benchmarks to use in the assessment process, and also allows uploading a custom dataset for assessment.
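To make "path-based semantic measure" concrete, here is a minimal sketch over an invented toy taxonomy (not SMComp's code, which works over WordNet):

```python
# Toy path-based similarity: score = 1 / (1 + length of shortest is-a path).
# The taxonomy below (child -> parent) is invented for illustration.

TAXONOMY = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
}

def path_to_root(concept):
    """Concepts from `concept` up to the taxonomy root, in order."""
    path = [concept]
    while path[-1] in TAXONOMY:
        path.append(TAXONOMY[path[-1]])
    return path

def path_similarity(a, b):
    """Find the lowest common ancestor and score by the path length through it."""
    pa, pb = path_to_root(a), path_to_root(b)
    ancestors = set(pb)
    for i, node in enumerate(pa):
        if node in ancestors:                     # lowest common ancestor
            return 1.0 / (1 + i + pb.index(node))
    return 0.0                                    # disconnected concepts

print(path_similarity("dog", "wolf"))  # siblings under "canine": 1/3
print(path_similarity("dog", "cat"))   # meet only at "mammal": 1/5
```

Because only taxonomic (is-a) edges are traversed, this is a similarity measure in the paper's terminology; a relatedness measure would also follow other relationship types.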

Cite as

Teresa Costa and José Paulo Leal. Comparing and Benchmarking Semantic Measures Using SMComp. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{costa_et_al:OASIcs.SLATE.2016.4,
  author =	{Costa, Teresa and Leal, Jos\'{e} Paulo},
  title =	{{Comparing and Benchmarking Semantic Measures Using SMComp}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{4:1--4:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.4},
  URN =		{urn:nbn:de:0030-drops-60090},
  doi =		{10.4230/OASIcs.SLATE.2016.4},
  annote =	{Keywords: Semantic similarity, semantic relatedness, testbed, web application}
}
Document
LLLR Parsing: a Combination of LL and LR Parsing

Authors: Boštjan Slivnik


Abstract
A new parsing method called LLLR parsing is defined, and a method for producing LLLR parsers is described. An LLLR parser uses an LL parser as its backbone and parses as much of its input string with LL parsing as possible. To resolve LL conflicts, it triggers small embedded LR parsers. An embedded LR parser starts parsing the remaining input and, once the LL conflict is resolved, produces the left parse of the substring it has just parsed and passes control back to the backbone LL parser. An LLLR(k) parser can be constructed for any LR(k) grammar. It produces the left parse of the input string without any backtracking and, if used for syntax-directed translation, evaluates semantic actions using the top-down strategy, just like a canonical LL(k) parser. An LLLR(k) parser is appropriate for grammars where the LL(k)-conflicting nonterminals either appear relatively close to the bottom of the derivation trees or produce short substrings. In such cases, an LLLR parser can perform significantly better error recovery than an LR parser, since most of the input string is parsed with the backbone LL parser. LLLR parsing is similar to LL(*) parsing, except that it (a) uses LR(k) parsers instead of finite automata to resolve the LL(k) conflicts and (b) does not perform any backtracking.

Cite as

Boštjan Slivnik. LLLR Parsing: a Combination of LL and LR Parsing. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{slivnik:OASIcs.SLATE.2016.5,
  author =	{Slivnik, Bo\v{s}tjan},
  title =	{{LLLR Parsing: a Combination of LL and LR Parsing}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{5:1--5:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.5},
  URN =		{urn:nbn:de:0030-drops-60106},
  doi =		{10.4230/OASIcs.SLATE.2016.5},
  annote =	{Keywords: LL parsing, LR languages, left parse}
}
Document
Locating User Interface Concepts in Source Code

Authors: Matúš Sulír and Jaroslav Porubän


Abstract
Developers often start their work by exploring the graphical user interface (GUI) of a program. They spot a textual label of interest in the GUI and try to find it in the source code, as a straightforward form of feature location. We performed a study on four Java applications, asking a simple question: are the strings displayed in the GUI of a running program present in its source code? We came to the conclusion that the majority of strings are present there; they occur mainly in Java and "properties" files.

Cite as

Matúš Sulír and Jaroslav Porubän. Locating User Interface Concepts in Source Code. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 6:1-6:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{sulir_et_al:OASIcs.SLATE.2016.6,
  author =	{Sul{\'\i}r, Mat\'{u}\v{s} and Porub\"{a}n, Jaroslav},
  title =	{{Locating User Interface Concepts in Source Code}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{6:1--6:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.6},
  URN =		{urn:nbn:de:0030-drops-60110},
  doi =		{10.4230/OASIcs.SLATE.2016.6},
  annote =	{Keywords: Source code, graphical user interfaces, feature location}
}
Document
Declarative Rules for Annotated Expert Knowledge in Change Management

Authors: Dietmar Seipel, Rüdiger von der Weth, Salvador Abreu, Falco Nogatz, and Alexander Werner


Abstract
In this paper, we use declarative, domain-specific languages for representing expert knowledge in the field of change management in organisational psychology. Expert rules obtained in practical case studies are represented as declarative rules in a deductive database and annotated with information describing their provenance and confidence. Additional provenance information for the whole rule base, or parts of it, can be given by ontologies. Deductive databases allow the semantics of the expert knowledge to be defined declaratively with rules; the evaluation of the rules can be optimised and the inference mechanisms can be changed, since they are specified in an abstract way. As the logical syntax of rules had been a problem in previous applications of deductive databases, we use specially designed domain-specific languages to make the rule syntax easier for non-programmers. The semantics of the whole knowledge base is declarative. On the data level, the rules are written declaratively in datalogs, an extension of the well-known deductive database language datalog; additional datalogs rules can configure the processing of the annotated rules and the ontologies.

Cite as

Dietmar Seipel, Rüdiger von der Weth, Salvador Abreu, Falco Nogatz, and Alexander Werner. Declarative Rules for Annotated Expert Knowledge in Change Management. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{seipel_et_al:OASIcs.SLATE.2016.7,
  author =	{Seipel, Dietmar and von der Weth, R\"{u}diger and Abreu, Salvador and Nogatz, Falco and Werner, Alexander},
  title =	{{Declarative Rules for Annotated Expert Knowledge in Change Management}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{7:1--7:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.7},
  URN =		{urn:nbn:de:0030-drops-60124},
  doi =		{10.4230/OASIcs.SLATE.2016.7},
  annote =	{Keywords: declarative, datalog, prolog, domain-specific, change management}
}
Document
A Metamodel for Jason BDI Agents

Authors: Baris Tekin Tezel, Moharram Challenger, and Geylani Kardas


Abstract
In this paper, a metamodel is introduced that can be used for modeling Belief-Desire-Intention (BDI) agents working on the Jason platform. The metamodel supports modeling agents including their belief bases, plans, sets of events, rules and actions. We believe that the work presented herein contributes to current multi-agent system (MAS) metamodeling efforts by taking into account another BDI agent platform not considered in existing platform-specific MAS modeling approaches. A graphical concrete syntax and a modeling tool based on the proposed metamodel are also developed in this study. MAS models can be checked against the constraints originating from the Jason metamodel definitions, and hence conformance of instance models is ensured by the tool. Use of the syntax and the modeling tool is demonstrated with the design of a cleaning robot, a well-known example of the Jason BDI architecture.

Cite as

Baris Tekin Tezel, Moharram Challenger, and Geylani Kardas. A Metamodel for Jason BDI Agents. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 8:1-8:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{tezel_et_al:OASIcs.SLATE.2016.8,
  author =	{Tezel, Baris Tekin and Challenger, Moharram and Kardas, Geylani},
  title =	{{A Metamodel for Jason BDI Agents}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{8:1--8:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.8},
  URN =		{urn:nbn:de:0030-drops-60132},
  doi =		{10.4230/OASIcs.SLATE.2016.8},
  annote =	{Keywords: metamodel, BDI agent, multi-agent system, Jason}
}
Document
Profile Detection Through Source Code Static Analysis

Authors: Daniel Ferreira Novais, Maria João Varanda Pereira, and Pedro Rangel Henriques


Abstract
This article reflects the progress of an ongoing master's dissertation on language engineering. The main goal of the work described here is to infer a programmer's profile through the analysis of their source code. After such analysis, the programmer is placed on a scale that characterizes their language abilities. There are several potential applications for such profiling, namely the evaluation of a programmer's skills and proficiency in a given language, or the continuous evaluation of a student's progress in a programming course. Throughout this project, and as a proof of concept, a tool that allows the automatic profiling of a Java programmer is under development. This tool is also introduced in the paper and its preliminary outcomes are discussed.

Cite as

Daniel Ferreira Novais, Maria João Varanda Pereira, and Pedro Rangel Henriques. Profile Detection Through Source Code Static Analysis. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 9:1-9:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{ferreiranovais_et_al:OASIcs.SLATE.2016.9,
  author =	{Ferreira Novais, Daniel and Varanda Pereira, Maria Jo\~{a}o and Rangel Henriques, Pedro},
  title =	{{Profile Detection Through Source Code Static Analysis}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{9:1--9:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.9},
  URN =		{urn:nbn:de:0030-drops-60142},
  doi =		{10.4230/OASIcs.SLATE.2016.9},
  annote =	{Keywords: Static analysis, metrics, programmer profiling}
}
Document
Context-Free Grammars: Exercise Generation and Probabilistic Assessment

Authors: José João Almeida, Eliana Grande, and Georgi Smirnov


Abstract
In this paper we present a metagrammar based algorithm for exercise generation in the domain of context-free grammars. We also propose a probabilistic assessment algorithm based on a new identity theorem for formal series, a matrix version of the well-known identity theorem from the theory of analytic functions.

Cite as

José João Almeida, Eliana Grande, and Georgi Smirnov. Context-Free Grammars: Exercise Generation and Probabilistic Assessment. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 10:1-10:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{almeida_et_al:OASIcs.SLATE.2016.10,
  author =	{Almeida, Jos\'{e} Jo\~{a}o and Grande, Eliana and Smirnov, Georgi},
  title =	{{Context-Free Grammars: Exercise Generation and Probabilistic Assessment}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{10:1--10:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.10},
  URN =		{urn:nbn:de:0030-drops-60159},
  doi =		{10.4230/OASIcs.SLATE.2016.10},
  annote =	{Keywords: Exercise generation, context-free grammars, assessment}
}
Document
A Model-Driven Engineering Technique for Developing Composite Content Applications

Authors: Moharram Challenger, Ferhat Erata, Mehmet Onat, Hale Gezgen, and Geylani Kardas


Abstract
Composite Content Applications (CCA) are cross-functional process solutions built on top of Enterprise Content Management systems and assembled from pre-built components. Considering the complexity of CCAs, their analysis and development need a higher level of abstraction. Model-driven engineering techniques, covering the use of Domain-specific Modeling Languages (DSMLs), can provide the abstraction in question by moving software development from code to models, which may increase productivity and reduce development costs. Hence, in this paper, we present MDD4CCA, a DSML for developing CCAs. The DSML presents an abstract syntax, a concrete syntax, and an operational semantics, including model-to-model and model-to-code transformations for CCA implementations. Use of the proposed language is evaluated within an industrial case study.

Cite as

Moharram Challenger, Ferhat Erata, Mehmet Onat, Hale Gezgen, and Geylani Kardas. A Model-Driven Engineering Technique for Developing Composite Content Applications. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 11:1-11:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{challenger_et_al:OASIcs.SLATE.2016.11,
  author =	{Challenger, Moharram and Erata, Ferhat and Onat, Mehmet and Gezgen, Hale and Kardas, Geylani},
  title =	{{A Model-Driven Engineering Technique for Developing Composite Content Applications}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{11:1--11:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.11},
  URN =		{urn:nbn:de:0030-drops-60168},
  doi =		{10.4230/OASIcs.SLATE.2016.11},
  annote =	{Keywords: Domain-specific modelling languages, composite content applications, model transformation, code generation}
}
Document
Eshu: An Extensible Web Editor for Diagrammatic Languages

Authors: José Paulo Leal, Helder Correia, and José Carlos Paiva


Abstract
The cornerstone of a language development environment is an editor. For programming languages, several code editors are readily available for integration in Web applications; however, only a few editors exist for diagrammatic languages. Eshu is an extensible diagram editor, embeddable in Web applications that require diagram interaction, such as modeling tools or e-learning environments. Eshu is a JavaScript library with an API that supports its integration with other components, including importing/exporting diagrams in JSON. Eshu has already been integrated in a pedagogical environment with automated diagram assessment, configured for extended entity-relationship diagrams, which served as the basis for a usability evaluation.

Cite as

José Paulo Leal, Helder Correia, and José Carlos Paiva. Eshu: An Extensible Web Editor for Diagrammatic Languages. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 12:1-12:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{leal_et_al:OASIcs.SLATE.2016.12,
  author =	{Leal, Jos\'{e} Paulo and Correia, Helder and Paiva, Jos\'{e} Carlos},
  title =	{{Eshu: An Extensible Web Editor for Diagrammatic Languages}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{12:1--12:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.12},
  URN =		{urn:nbn:de:0030-drops-60178},
  doi =		{10.4230/OASIcs.SLATE.2016.12},
  annote =	{Keywords: Diagram assessment, language environments, automated assessment, e-learning}
}
Document
Sni'per: a Code Snippet RESTful API

Authors: Ricardo Queirós and Alberto Simões


Abstract
Today we use the Web for almost everything, even to program. Several specialized code editors gravitate to the Web, emulating most of the features inherited from traditional IDEs, such as syntax highlighting, code folding, autocompletion and even code refactoring. One technique to speed up code development is the use of snippets: predefined code blocks that can be automatically included in the code. Although several Web editors support this functionality, they come with a limited set of snippets and do not allow the contribution of new blocks of code. Even if that were possible, the new snippets would be available only to the code's owner or to the editor's users through a private cloud repository. This paper describes the design and implementation of Sni'per, a RESTful API that provides public access to multi-language programming code blocks, ordered by popularity. Besides being able to access code snippets from other users and score them, users can also contribute their own snippets, creating a global network of shared code. To make coding against this API easier, we created a client library that reduces the amount of code required and makes client code more robust.

Cite as

Ricardo Queirós and Alberto Simões. Sni'per: a Code Snippet RESTful API. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 13:1-13:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{queiros_et_al:OASIcs.SLATE.2016.13,
  author =	{Queir\'{o}s, Ricardo and Sim\~{o}es, Alberto},
  title =	{{Sni'per: a Code Snippet RESTful API}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{13:1--13:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.13},
  URN =		{urn:nbn:de:0030-drops-60180},
  doi =		{10.4230/OASIcs.SLATE.2016.13},
  annote =	{Keywords: Programming languages, interoperability, web services, code snippets}
}
Document
Building a Dictionary using XML Technology

Authors: Alberto Simões, José João Almeida, and Ana Salgado


Abstract
In this article we describe the workflow implemented to convert a dictionary saved as a PDF file into an XML document, import it into an XML-aware database, and then edit, add, and delete entries. The conversion process was challenging given the format of the PDF file and the fine-grained detail of the XML schema that was used; for that, an iterative filtering approach was adopted. To store the dictionary we chose an XML-aware database (eXist-DB) that stores each dictionary entry as a separate resource. It can be queried through a web interface developed using XQuery. The lexicographers can edit entries using the oXygen XML editor, reading and storing them directly in the database. To guarantee incremental backups, a mechanism was defined to store the XML database in a Git repository. Finally, a couple of programs were created to prepare regular reports on the dictionary revision process, as well as to back the dictionary up in a Git repository.
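The storage model, each dictionary entry as its own small XML resource, queried by headword, can be sketched as follows. The element names form a toy schema invented for this sketch, not the fine-grained schema the paper uses, and the lookup stands in for what the real system does with eXist-DB and XQuery.

```python
import xml.etree.ElementTree as ET


def make_entry(headword, pos, senses):
    """Build one dictionary entry as a standalone XML element."""
    entry = ET.Element("entry", id=headword)
    ET.SubElement(entry, "headword").text = headword
    ET.SubElement(entry, "pos").text = pos
    for text in senses:
        ET.SubElement(entry, "sense").text = text
    return entry


def lookup(entries, headword):
    """Mimic an XQuery lookup: return the entry with a given headword."""
    for entry in entries:
        if entry.findtext("headword") == headword:
            return entry
    return None


entries = [make_entry("casa", "noun", ["house", "home"])]
hit = lookup(entries, "casa")
print([s.text for s in hit.findall("sense")])  # → ['house', 'home']
```

Keeping one resource per entry is what makes both fine-grained editing (one lexicographer touches one entry) and incremental version-control backups practical.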

Cite as

Alberto Simões, José João Almeida, and Ana Salgado. Building a Dictionary using XML Technology. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 14:1-14:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{simoes_et_al:OASIcs.SLATE.2016.14,
  author =	{Sim\~{o}es, Alberto and Almeida, Jos\'{e} Jo\~{a}o and Salgado, Ana},
  title =	{{Building a Dictionary using XML Technology}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{14:1--14:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.14},
  URN =		{urn:nbn:de:0030-drops-60193},
  doi =		{10.4230/OASIcs.SLATE.2016.14},
  annote =	{Keywords: XML databases, dictionaries, XQuery, PDF files}
}
Document
Automata Serialization for Manipulation and Drawing

Authors: Miguel Ferreira, Nelma Moreira, and Rogério Reis


Abstract
GUItar is a GPL-licensed, cross-platform graphical user interface for automata drawing and manipulation, written in C++ and Qt5. The tool offers support for styling, automatic layouts, and several export formats, and it can interface with any foreign finite-automata manipulation library that can parse the serialized XML or JSON it produces. In this paper we describe a redesign of the GUItar framework and, especially, the method used to interface GUItar with automata manipulation libraries.
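The interfacing idea rests on a serialized automaton that both sides can parse. A minimal sketch of a JSON round trip for a DFA is shown below; the field names are illustrative, not GUItar's actual schema.

```python
import json


def to_json(states, alphabet, transitions, initial, finals):
    """Serialize a DFA as a JSON string a foreign library could parse."""
    return json.dumps({
        "states": sorted(states),
        "alphabet": sorted(alphabet),
        "transitions": [[src, sym, dst]
                        for (src, sym), dst in sorted(transitions.items())],
        "initial": initial,
        "final": sorted(finals),
    })


def from_json(text):
    """Rebuild the transition map from the serialized form."""
    data = json.loads(text)
    delta = {(src, sym): dst for src, sym, dst in data["transitions"]}
    return data["states"], delta, data["initial"], data["final"]


# DFA over {a, b} accepting strings that end in 'a'
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q0"}
blob = to_json({"q0", "q1"}, {"a", "b"}, delta, "q0", {"q1"})
states, delta2, initial, finals = from_json(blob)
print(delta2 == delta)  # → True: the round trip preserves transitions
```

Because the wire format is plain XML or JSON, the drawing tool and the manipulation library need share nothing beyond the schema.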

Cite as

Miguel Ferreira, Nelma Moreira, and Rogério Reis. Automata Serialization for Manipulation and Drawing. In 5th Symposium on Languages, Applications and Technologies (SLATE'16). Open Access Series in Informatics (OASIcs), Volume 51, pp. 15:1-15:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{ferreira_et_al:OASIcs.SLATE.2016.15,
  author =	{Ferreira, Miguel and Moreira, Nelma and Reis, Rog\'{e}rio},
  title =	{{Automata Serialization for Manipulation and Drawing}},
  booktitle =	{5th Symposium on Languages, Applications and Technologies (SLATE'16)},
  pages =	{15:1--15:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-006-4},
  ISSN =	{2190-6807},
  year =	{2016},
  volume =	{51},
  editor =	{Mernik, Marjan and Leal, Jos\'{e} Paulo and Gon\c{c}alo Oliveira, Hugo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2016.15},
  URN =		{urn:nbn:de:0030-drops-60209},
  doi =		{10.4230/OASIcs.SLATE.2016.15},
  annote =	{Keywords: automata, serialization, visualization}
}
