39 Search Results for "Martin, Jos"


Document
On the Use of Concept Maps to Improve Student Skills in an Introductory Object-Oriented Analysis and Design Course

Authors: José F. Vélez, A. Belén Moreno, Victoria Ruiz-Parrado, and Ángel Sánchez

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
This paper presents ongoing work on applying concept maps to teaching an introductory Object-Oriented Analysis and Design course for Computer Science students. Previous works have introduced concept maps in such object-oriented courses, but they usually do not go deeply enough into the transition from concept maps to static class diagrams. Although concept maps offer clear advantages when defining the abstractions present in object-oriented software modeling, drawbacks may also appear when the transformation from these maps to class diagrams is carried out through simplistic rules. In this paper we propose an approach, illustrated through a use case, to transition from a concept map to a static class diagram in a more realistic way.
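
As a rough illustration of the kind of simplistic transformation rules discussed above (our Python sketch, not the approach proposed in the paper), one could map every concept to a class and every labelled link to an association; what such naive rules ignore (attributes, multiplicities, generalisation) is exactly the drawback the abstract points out.

# Minimal sketch (not the paper's method): naive concept-map -> class-diagram rules.
# Every concept becomes a class and every labelled link becomes an association;
# attributes, multiplicities and generalisation are simply lost.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    source: str   # concept name
    label: str    # linking phrase, e.g. "teaches"
    target: str   # concept name

def concept_map_to_classes(links):
    classes = sorted({l.source for l in links} | {l.target for l in links})
    associations = [(l.source, l.label, l.target) for l in links]
    return classes, associations

cmap = [Link("Professor", "teaches", "Course"),
        Link("Student", "enrols in", "Course")]
classes, assocs = concept_map_to_classes(cmap)
print("classes:", classes)        # ['Course', 'Professor', 'Student']
print("associations:", assocs)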

Cite as

José F. Vélez, A. Belén Moreno, Victoria Ruiz-Parrado, and Ángel Sánchez. On the Use of Concept Maps to Improve Student Skills in an Introductory Object-Oriented Analysis and Design Course. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 3:1-3:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{velez_et_al:OASIcs.ICPEC.2025.3,
  author =	{V\'{e}lez, Jos\'{e} F. and Moreno, A. Bel\'{e}n and Ruiz-Parrado, Victoria and S\'{a}nchez, \'{A}ngel},
  title =	{{On the Use of Concept Maps to Improve Student Skills in an Introductory Object-Oriented Analysis and Design Course}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{3:1--3:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.3},
  URN =		{urn:nbn:de:0030-drops-240337},
  doi =		{10.4230/OASIcs.ICPEC.2025.3},
  annote =	{Keywords: Object-Oriented Programming, Concept Map, Abstraction, Unified Modeling Language (UML), Static Class Diagram}
}
Document
Enhancing Creative Thinking Through Gamification in LMS Environments

Authors: Maria João Varanda Pereira, Luís M. Alves, Adina Cocu, Sandra M. Dingli, Montse Meneses, and Ramon Vilanova

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
Gamification in an educational context involves applying game design elements and principles to enhance the learning experience. By incorporating the motivational features of games, it aims to engage students and support educational goals. The work presented in this article is part of the ThinkGame Erasmus+ project. The project’s goal is to encourage the use of Learning Management System (LMS) tools, such as lessons, wikis, and online tests, to create gamified experiences in programming classes. These innovative strategies are intended to boost student motivation and creativity by incorporating compelling narratives, adaptable challenges, collaborative tasks, and continuous feedback. Another important challenge addressed in the project was fostering creativity among teachers, encouraging them to transform conventional, non-gamified resources into engaging and thought-provoking activities for students. A case study comprising ten gamified experiences was developed at the Polytechnic Institute of Bragança, one of the project partners, over one semester in the Imperative Programming course.

Cite as

Maria João Varanda Pereira, Luís M. Alves, Adina Cocu, Sandra M. Dingli, Montse Meneses, and Ramon Vilanova. Enhancing Creative Thinking Through Gamification in LMS Environments. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{pereira_et_al:OASIcs.ICPEC.2025.6,
  author =	{Pereira, Maria Jo\~{a}o Varanda and Alves, Lu{\'\i}s M. and Cocu, Adina and Dingli, Sandra M. and Meneses, Montse and Vilanova, Ramon},
  title =	{{Enhancing Creative Thinking Through Gamification in LMS Environments}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{6:1--6:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.6},
  URN =		{urn:nbn:de:0030-drops-240369},
  doi =		{10.4230/OASIcs.ICPEC.2025.6},
  annote =	{Keywords: Creative Thinking, Gamification, LMS, Teaching Programming}
}
Document
The Complexity of Separability for Semilinear Sets and Parikh Automata

Authors: Elias Rojas Collins, Chris Köcher, and Georg Zetzsche

Published in: LIPIcs, Volume 345, 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)


Abstract
In a separability problem, we are given two sets K and L from a class 𝒞, and we want to decide whether there exists a set S from a class 𝒮 such that K ⊆ S and S ∩ L = ∅. In this case, we speak of separability of sets in 𝒞 by sets in 𝒮. We study two types of separability problems. First, we consider separability of semilinear sets (i.e. subsets of ℕ^d for some d) by sets definable by quantifier-free monadic Presburger formulas (or equivalently, the recognizable subsets of ℕ^d). Here, a formula is monadic if each atom uses at most one variable. Second, we consider separability of languages of Parikh automata by regular languages. A Parikh automaton is a machine with access to counters that can only be incremented and that have to meet a semilinear constraint at the end of the run. Both of these separability problems are known to be decidable with elementary complexity. Our main results are that both problems are coNP-complete. In the case of semilinear sets, coNP-completeness holds regardless of whether the input sets are specified by existential Presburger formulas, quantifier-free formulas, or semilinear representations. Our results imply that recognizable separability of rational subsets of Σ* × ℕ^d (shown decidable by Choffrut and Grigorieff) is coNP-complete as well. Another application is that regularity of deterministic Parikh automata (where the target set is specified using a quantifier-free Presburger formula) is coNP-complete as well.
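
For intuition, here is a toy instance of the first separability problem (our example, not taken from the paper): with d = 1, take K = {2n : n ∈ ℕ} and L = {2n + 1 : n ∈ ℕ}. The set S = {x ∈ ℕ : x ≡ 0 (mod 2)} is definable by a quantifier-free monadic Presburger formula, and K ⊆ S while S ∩ L = ∅, so S separates K from L. The snippet below merely checks this on a finite prefix of ℕ.

# Toy check of the separability example above on a finite prefix of ℕ
# (illustration only; the example is ours, not from the paper).
K = {2 * n for n in range(100)}             # even numbers
L = {2 * n + 1 for n in range(100)}         # odd numbers
S = {x for x in range(200) if x % 2 == 0}   # defined by the monadic formula x ≡ 0 (mod 2)
assert K <= S and not (S & L)               # K ⊆ S and S ∩ L = ∅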

Cite as

Elias Rojas Collins, Chris Köcher, and Georg Zetzsche. The Complexity of Separability for Semilinear Sets and Parikh Automata. In 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 345, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{collins_et_al:LIPIcs.MFCS.2025.38,
  author =	{Collins, Elias Rojas and K\"{o}cher, Chris and Zetzsche, Georg},
  title =	{{The Complexity of Separability for Semilinear Sets and Parikh Automata}},
  booktitle =	{50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-388-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{345},
  editor =	{Gawrychowski, Pawe{\l} and Mazowiecki, Filip and Skrzypczak, Micha{\l}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2025.38},
  URN =		{urn:nbn:de:0030-drops-241457},
  doi =		{10.4230/LIPIcs.MFCS.2025.38},
  annote =	{Keywords: Vector Addition System, Separability, Regular Language}
}
Document
Explainability is a Game for Probabilistic Bisimilarity Distances

Authors: Emily Vlasman, Anto Nanah Ji, James Worrell, and Franck van Breugel

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
We revisit a game from the literature that characterizes the probabilistic bisimilarity distances of a labelled Markov chain. We illustrate how an optimal policy of the game can explain these distances. Like the games that characterize bisimilarity and probabilistic bisimilarity, the game is played on pairs of states and matches transitions of those states. To obtain more convincing and interpretable explanations than those provided by generic optimal policies, we restrict to optimal policies that delay reaching observably inequivalent state pairs for as long as possible (called 1-maximal) while quickly reaching equivalent ones (called 0-minimal). We present iterative algorithms that compute 1-maximal and 0-minimal policies and prove an exponential lower bound for the number of iterations of the algorithm that computes 1-maximal policies.

Cite as

Emily Vlasman, Anto Nanah Ji, James Worrell, and Franck van Breugel. Explainability is a Game for Probabilistic Bisimilarity Distances. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 36:1-36:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{vlasman_et_al:LIPIcs.CONCUR.2025.36,
  author =	{Vlasman, Emily and Nanah Ji, Anto and Worrell, James and van Breugel, Franck},
  title =	{{Explainability is a Game for Probabilistic Bisimilarity Distances}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{36:1--36:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.36},
  URN =		{urn:nbn:de:0030-drops-239861},
  doi =		{10.4230/LIPIcs.CONCUR.2025.36},
  annote =	{Keywords: probabilistic bisimilarity distance, labelled Markov chain, game, policy, explainability}
}
Document
An Efficient Data Structure and Algorithm for Long-Match Query in Run-Length Compressed BWT

Authors: Ahsan Sanaullah, Degui Zhi, and Shaojie Zhang

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
String matching problems in bioinformatics typically concern finding exact substring matches between a query and a reference text. Previous formulations often focus on maximum exact matches (MEMs). However, multiple occurrences of substrings of the query in the text that are long enough but not maximal may not be captured by MEMs. Such long matches can be informative, especially when the text is a collection of similar sequences such as genomes. In this paper, we describe a new type of match between a pattern and a text: locally maximal exact matches (LEMs), which are not necessarily maximal in the query but still contain useful matching information. There is usually a large number of LEMs, so we only consider those above some length threshold ℒ. These are referred to as long LEMs. The purpose of long LEMs is to capture substring matches between a query and a text that are not necessarily maximal in the pattern but still long enough to be important. Therefore, efficient algorithms for finding long LEMs are desired for these datasets. However, these datasets are too large to query on traditional string indexes. Fortunately, these datasets are very repetitive. Recently, compressed string indexes that take advantage of the redundancy in the data but retain efficient querying capability have been proposed as a solution. We therefore give an efficient algorithm for computing all the long LEMs of a query and a text in a BWT runs compressed string index. We describe an O(m+occ) expected time algorithm that relies on an O(r) words space string index for outputting all long LEMs of a pattern with respect to a text, given the matching statistics of the pattern with respect to the text. Here m is the length of the query, occ is the number of long LEMs outputted, and r is the number of runs in the BWT of the text. The O(r) space string index we describe relies on an adaptation of the move data structure by Nishimoto and Tabei. We are able to support LCP[i] queries in constant time given SA[i]; in other words, we answer PLCP[i] queries in constant time. These PLCP queries enable the efficient long LEM query. Long LEMs may provide useful similarity information between a pattern and a text that MEMs may ignore. This information is particularly useful in pangenome and biobank scale haplotype panel contexts.
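
To make the long-LEM notion concrete, the following brute-force enumeration reflects the definition as stated above (an illustration only; it runs in quadratic time and does not attempt to reproduce the paper's O(m+occ)-time, O(r)-space index-based algorithm):

# Brute-force sketch of long LEMs: matches of query[i:i+k] at text position p
# that cannot be extended by one character to the left or right at that position,
# with length k >= L. O(|query| * |text|) time; for illustration only.
def long_lems(query, text, L):
    matches = []
    for i in range(len(query)):
        for p in range(len(text)):
            if query[i] != text[p]:
                continue
            # skip starts that can still be extended to the left
            if i > 0 and p > 0 and query[i - 1] == text[p - 1]:
                continue
            k = 0
            while i + k < len(query) and p + k < len(text) and query[i + k] == text[p + k]:
                k += 1
            if k >= L:
                matches.append((i, p, k))   # (query offset, text offset, length)
    return matches

print(long_lems("GATTACA", "ATTAGATTACAT", 4))   # [(0, 4, 7), (1, 0, 4)]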

Cite as

Ahsan Sanaullah, Degui Zhi, and Shaojie Zhang. An Efficient Data Structure and Algorithm for Long-Match Query in Run-Length Compressed BWT. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{sanaullah_et_al:LIPIcs.WABI.2025.17,
  author =	{Sanaullah, Ahsan and Zhi, Degui and Zhang, Shaojie},
  title =	{{An Efficient Data Structure and Algorithm for Long-Match Query in Run-Length Compressed BWT}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.17},
  URN =		{urn:nbn:de:0030-drops-239433},
  doi =		{10.4230/LIPIcs.WABI.2025.17},
  annote =	{Keywords: BWT, LEM, Long LEM, MEM, Run Length Compressed BWT, Move Data Structure, Pangenome}
}
Document
Elements for Weighted Answer-Set Programming

Authors: Francisco Coelho, Bruno Dinis, Dietmar Seipel, and Salvador Abreu

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
Logic programs, more specifically, answer-set programs, can be annotated with probabilities on facts to express uncertainty. We address the problem of propagating weight annotations on facts (e.g. probabilities) of an answer-set program to its stable models, and from there to events (defined as sets of atoms) in a dataset over the program’s domain. We propose a novel approach which is algebraic in the sense that it relies on an equivalence relation over the set of events. Uncertainty is then described as polynomial expressions over variables. We propagate the weight function in the space of models and events, rather than doing so within the syntax of the program. As evidence that our approach is sound, we show that certain facts behave as expected. Our approach allows us to investigate weight annotated programs and to determine how suitable a given one is for modeling a given dataset containing events. Its core is illustrated by a running example and the encoding of a Bayesian network.

Cite as

Francisco Coelho, Bruno Dinis, Dietmar Seipel, and Salvador Abreu. Elements for Weighted Answer-Set Programming. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{coelho_et_al:OASIcs.SLATE.2025.3,
  author =	{Coelho, Francisco and Dinis, Bruno and Seipel, Dietmar and Abreu, Salvador},
  title =	{{Elements for Weighted Answer-Set Programming}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{3:1--3:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.3},
  URN =		{urn:nbn:de:0030-drops-236836},
  doi =		{10.4230/OASIcs.SLATE.2025.3},
  annote =	{Keywords: Answer-Set Programming, Stable Models, Probabilistic Logic Programming}
}
Document
Beyond the Score: Exploring the Intersection Between Sociodemographics and Linguistic Features in English (L1) Writing Placement

Authors: Miguel Da Corte and Jorge Baptista

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
This study examines the intersection of sociodemographic characteristics, linguistic features, and writing placement outcomes at a community college in the United States of America. It focuses on 210 anonymized writing samples from native English speakers (L1) that were automatically classified by Accuplacer and independently assessed by two trained raters. Disparities across gender and race were analyzed using 40 top-ranked linguistic features selected from the Coh-Metrix, CTAP, and Developmental Education-Specific (DES) sets. Three statistical tests were used: one-way ANOVA, Tukey’s HSD, and Chi-square. ANOVA results showed racial differences in nine linguistic features, especially those tied to syntactic complexity, discourse markers, and lexical precision. Gender differences were more limited, with only one feature reaching significance (Positive Connectives, p = 0.007). Tukey’s HSD pairwise tests showed no significant gender group variation but revealed sensitivity in DES features when comparing racial groups. Chi-square analysis indicated no significant association between gender and placement outcomes but suggested a possible link between race and human-assigned levels (χ² = 9.588, p = 0.048). These findings suggest that while automated systems assess general writing skills, human-devised linguistic features and demographic insights can support more equitable placement practices for all students entering college-level programs.
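
For readers unfamiliar with the three tests named above, the snippet below shows how they are typically run in Python (toy data and hypothetical column names, not the authors' dataset or analysis pipeline):

# Minimal sketch of one-way ANOVA, Tukey's HSD and Chi-square with SciPy/statsmodels.
# The data frame and its column names are made up for illustration.
import pandas as pd
from scipy.stats import f_oneway, chi2_contingency
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "race":          ["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"],
    "gender":        ["F", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    "feature_score": [3.1, 2.8, 3.0, 4.0, 4.2, 4.1, 2.5, 2.7, 2.6, 2.9],
    "placement":     ["college", "dev-ed", "college", "college", "college",
                      "college", "dev-ed", "dev-ed", "dev-ed", "dev-ed"],
})

# One-way ANOVA: does the linguistic feature differ across racial groups?
groups = [g["feature_score"].to_numpy() for _, g in df.groupby("race")]
print(f_oneway(*groups))

# Tukey's HSD: which pairs of groups differ?
print(pairwise_tukeyhsd(df["feature_score"], df["race"]))

# Chi-square: is placement outcome associated with gender?
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["gender"], df["placement"]))
print(chi2, p)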

Cite as

Miguel Da Corte and Jorge Baptista. Beyond the Score: Exploring the Intersection Between Sociodemographics and Linguistic Features in English (L1) Writing Placement. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 6:1-6:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dacorte_et_al:OASIcs.SLATE.2025.6,
  author =	{Da Corte, Miguel and Baptista, Jorge},
  title =	{{Beyond the Score: Exploring the Intersection Between Sociodemographics and Linguistic Features in English (L1) Writing Placement}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{6:1--6:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.6},
  URN =		{urn:nbn:de:0030-drops-236861},
  doi =		{10.4230/OASIcs.SLATE.2025.6},
  annote =	{Keywords: Developmental Education (DevEd), sociolinguistic variation, text classification, Machine Learning, placement equity}
}
Document
Mining GitHub Software Repositories to Look for Programming Language Cocktails

Authors: João Loureiro, Alvaro Costa Neto, Maria João Varanda Pereira, and Pedro Rangel Henriques

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
In light of specific development needs, it is common to concurrently apply different technologies to build complex applications. Given that lowering risks, costs, and other negative factors, while improving their positive counterparts, is paramount to a better development environment, it becomes relevant to find out which technologies work best for each intended purpose in a project. To reach these findings, it is necessary to analyse and study the technologies applied in these projects and how they interconnect and relate to each other. The theory behind Programming Cocktails (meaning the set of programming technologies - Ingredients - that are used to develop complex systems) can support these analyses. However, due to the sheer amount of data that is required to construct and analyse these Cocktails, it becomes unsustainable to obtain them manually. From the desire to accelerate this process comes the need for a tool that automates data collection and its conversion into a format suitable for analysis. As such, the project proposed in this paper revolves around the development of a web-scraping application that can generate Cocktail Identity Cards (CICs) from source code repositories hosted on GitHub. These CICs contain the Ingredients (programming languages, libraries, and frameworks) used in the corresponding GitHub repository and follow the ontology previously established in a larger research project to model each Programming Cocktail. This paper presents a survey of current Source Version Control Systems (SVCSs) and web-scraping technologies, an overview of Programming Cocktails and their current foundations, and the design of a tool that can automate the gathering of CICs from GitHub repositories.
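
As a very small taste of the kind of data such a tool can collect, GitHub's public REST API already exposes a per-repository language breakdown (our sketch; it covers only the "programming languages" Ingredient and does not reproduce the paper's CIC ontology or scraper):

# Minimal sketch: language breakdown of a GitHub repository via the public REST
# API endpoint GET /repos/{owner}/{repo}/languages (bytes of code per language).
# Libraries and frameworks, the other Ingredients of a CIC, are not covered here.
import requests

def repo_languages(owner: str, repo: str) -> dict:
    url = f"https://api.github.com/repos/{owner}/{repo}/languages"
    resp = requests.get(url, headers={"Accept": "application/vnd.github+json"}, timeout=10)
    resp.raise_for_status()
    return resp.json()   # e.g. {"Python": 123456, "C": 7890}

print(repo_languages("python", "cpython"))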

Cite as

João Loureiro, Alvaro Costa Neto, Maria João Varanda Pereira, and Pedro Rangel Henriques. Mining GitHub Software Repositories to Look for Programming Language Cocktails. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{loureiro_et_al:OASIcs.SLATE.2025.13,
  author =	{Loureiro, Jo\~{a}o and Costa Neto, Alvaro and Pereira, Maria Jo\~{a}o Varanda and Henriques, Pedro Rangel},
  title =	{{Mining GitHub Software Repositories to Look for Programming Language Cocktails}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{13:1--13:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.13},
  URN =		{urn:nbn:de:0030-drops-236933},
  doi =		{10.4230/OASIcs.SLATE.2025.13},
  annote =	{Keywords: Software Repository Mining, Source Version Control, GitHub Scraping, Programming Cocktails}
}
Document
A DSL for Swarm Intelligence Algorithms

Authors: Kevin Martins and Rui Mendes

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
We propose a domain-specific language to simplify the expression of Swarm Intelligence algorithms. These algorithms are typically introduced through metaphors, requiring practitioners to manually translate them into low-level implementations. This process can obscure intent and hinder reproducibility. The proposed DSL bridges this gap by capturing algorithmic behavior at a higher level of abstraction. We demonstrate its expressiveness in a few lines of code and evaluate its feasibility through a reference implementation. We also present a discussion that includes empirical comparisons with traditional implementations and future directions for the proposed DSL.
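
To illustrate the kind of low-level, metaphor-driven code the proposed DSL is meant to abstract away, here is a textbook global-best particle swarm optimisation loop written the manual way (our sketch with conventional parameter values; it is unrelated to the paper's DSL or reference implementation):

# Plain-Python global-best PSO minimising the sphere function; parameters
# (w, c1, c2) are conventional textbook defaults, not values from the paper.
import random

def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=2, particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [list(x) for x in xs]        # personal best positions
    gbest = list(min(pbest, key=f))      # global best position
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = list(min(pbest, key=f))
    return gbest, f(gbest)

print(pso(sphere))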

Cite as

Kevin Martins and Rui Mendes. A DSL for Swarm Intelligence Algorithms. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{martins_et_al:OASIcs.SLATE.2025.2,
  author =	{Martins, Kevin and Mendes, Rui},
  title =	{{A DSL for Swarm Intelligence Algorithms}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{2:1--2:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.2},
  URN =		{urn:nbn:de:0030-drops-236826},
  doi =		{10.4230/OASIcs.SLATE.2025.2},
  annote =	{Keywords: Domain Specific Languages, Swarm Intelligence, Global Optimization}
}
Document
CVTool: Automating Content Variants of CVs

Authors: Julio Beites Gonçalves, Maria João Varanda Pereira, and Pedro Rangel Henriques

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
As academic professionals, we frequently need to create different versions of our CVs for project applications, career evaluations, or competitions. These versions may be chronologically structured or skill-oriented, covering specific periods and written in various languages. Even when using a LaTeX document as a base, numerous modifications are required each time an updated CV version is requested for a specific purpose. The primary objective of the project reported in this paper is to design and implement a web-based system (CVTool) that simplifies the management of LaTeX CV content while ensuring flexibility. CVTool is built on a domain-specific language that enables the creation of various filters, allowing for automatic content adjustment while preserving the original format. Information is extracted from the LaTeX document, and users can specify the sections, dates, and skills they want to highlight, as well as the language in which the CV should be generated. Since the approach relies on an internal data representation derived from the original LaTeX document, it offers users the flexibility to manage content efficiently and extract the necessary information with ease.

Cite as

Julio Beites Gonçalves, Maria João Varanda Pereira, and Pedro Rangel Henriques. CVTool: Automating Content Variants of CVs. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{goncalves_et_al:OASIcs.SLATE.2025.5,
  author =	{Gon\c{c}alves, Julio Beites and Pereira, Maria Jo\~{a}o Varanda and Henriques, Pedro Rangel},
  title =	{{CVTool: Automating Content Variants of CVs}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{5:1--5:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.5},
  URN =		{urn:nbn:de:0030-drops-236855},
  doi =		{10.4230/OASIcs.SLATE.2025.5},
  annote =	{Keywords: Latex CV, CV Versioning, DSL, CV Parsing, CV Templates}
}
Document
An Architecture for Composite Combinatorial Optimization Solvers

Authors: Khalil Chrit, Jean-François Baffier, Pedro Patinho, and Salvador Abreu

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
In this paper, we introduce elements for MoSCO, a framework for building hybrid metaheuristic-based solvers from a collection of reusable base components. The framework is implemented in Julia and provides a modular architecture for composing solvers through a pipeline-based approach. The modular design of MoSCO supports the creation of reusable components and adaptable solver strategies for various Constraint Satisfaction Problems (CSPs) and Constraint Optimization Problems (COPs). We validate MoSCO’s utility through practical examples, demonstrating its effectiveness in reconstructing established metaheuristics and enabling the creation of novel solver configurations. This work lays the foundation for future developments in automated solver construction and parameter optimization.

Cite as

Khalil Chrit, Jean-François Baffier, Pedro Patinho, and Salvador Abreu. An Architecture for Composite Combinatorial Optimization Solvers. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{chrit_et_al:OASIcs.SLATE.2025.8,
  author =	{Chrit, Khalil and Baffier, Jean-Fran\c{c}ois and Patinho, Pedro and Abreu, Salvador},
  title =	{{An Architecture for Composite Combinatorial Optimization Solvers}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{8:1--8:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.8},
  URN =		{urn:nbn:de:0030-drops-236885},
  doi =		{10.4230/OASIcs.SLATE.2025.8},
  annote =	{Keywords: Hybrid Metaheuristics, DSL}
}
Document
Invited Talk
Unsolvable Terms in Filter Models (Invited Talk)

Authors: Mariangiola Dezani-Ciancaglini, Paola Giannini, and Furio Honsell

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
Intersection type theories (itt’s) and filter models, i.e. λ-calculus models generated by itt’s, are reviewed in full generality. In this framework, which subsumes most λ-calculus models in the literature based on Scott-continuous functions, we discuss the interpretation of unsolvable terms. We give a necessary, but not sufficient, condition on an itt for the interpretation of some unsolvable term to be non-trivial in the filter model it generates. This result is obtained building on a type theoretic characterisation of the fine structure of unsolvables.

Cite as

Mariangiola Dezani-Ciancaglini, Paola Giannini, and Furio Honsell. Unsolvable Terms in Filter Models (Invited Talk). In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dezaniciancaglini_et_al:LIPIcs.FSCD.2025.3,
  author =	{Dezani-Ciancaglini, Mariangiola and Giannini, Paola and Honsell, Furio},
  title =	{{Unsolvable Terms in Filter Models}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{3:1--3:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.3},
  URN =		{urn:nbn:de:0030-drops-236181},
  doi =		{10.4230/LIPIcs.FSCD.2025.3},
  annote =	{Keywords: \lambda-calculus, Intersection Types, Unsolvable Terms, Filter Models}
}
Document
Quantitative Graded Semantics and Spectra of Behavioural Metrics

Authors: Jonas Forster, Lutz Schröder, Paul Wild, Harsh Beohar, Sebastian Gurke, Barbara König, and Karla Messing

Published in: LIPIcs, Volume 326, 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)


Abstract
Behavioural metrics provide a quantitative refinement of classical two-valued behavioural equivalences on systems with quantitative data, such as metric or probabilistic transition systems. In analogy to the linear-time/branching-time spectrum of two-valued behavioural equivalences on transition systems, behavioural metrics vary in granularity, and are often characterized by fragments of suitable modal logics. In the latter respect, the quantitative case is, however, more involved than the two-valued one; in fact, we show that probabilistic metric trace distance cannot be characterized by any compositionally defined modal logic with unary modalities. We go on to provide a unifying treatment of spectra of behavioural metrics in the emerging framework of graded monads, working in coalgebraic generality, that is, parametrically in the system type. In the ensuing development of quantitative graded semantics, we introduce algebraic presentations of graded monads on the category of metric spaces. Moreover, we provide a general criterion for a given real-valued modal logic to characterize a given behavioural distance. As a case study, we apply this criterion to obtain a new characteristic modal logic for trace distance in fuzzy metric transition systems.

Cite as

Jonas Forster, Lutz Schröder, Paul Wild, Harsh Beohar, Sebastian Gurke, Barbara König, and Karla Messing. Quantitative Graded Semantics and Spectra of Behavioural Metrics. In 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 326, pp. 33:1-33:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{forster_et_al:LIPIcs.CSL.2025.33,
  author =	{Forster, Jonas and Schr\"{o}der, Lutz and Wild, Paul and Beohar, Harsh and Gurke, Sebastian and K\"{o}nig, Barbara and Messing, Karla},
  title =	{{Quantitative Graded Semantics and Spectra of Behavioural Metrics}},
  booktitle =	{33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)},
  pages =	{33:1--33:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-362-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{326},
  editor =	{Endrullis, J\"{o}rg and Schmitz, Sylvain},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2025.33},
  URN =		{urn:nbn:de:0030-drops-227907},
  doi =		{10.4230/LIPIcs.CSL.2025.33},
  annote =	{Keywords: transition systems, modal logics, coalgebras, behavioural metrics}
}
Document
Resource Paper
NEOntometrics - A Public Endpoint for Calculating Ontology Metrics

Authors: Achim Reiz and Kurt Sandkuhl

Published in: TGDK, Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 2, Issue 2


Abstract
Ontologies are the cornerstone of the semantic web and knowledge graphs. They are available from various sources, come in many shapes and sizes, and differ widely in attributes like expressivity, degree of interconnection, or the number of individuals. As sharing knowledge and meaning across human and computational actors emphasizes the reuse of existing ontologies, how can we select the ontology that best fits the individual use case? How do we compare two ontologies or assess their different versions? Automatically calculated ontology metrics offer a starting point for an objective assessment. In the past years, a multitude of metrics have been proposed. However, metric implementations and validations for real-world data are scarce. For most of these proposed metrics, no software for their calculation is available (anymore). This work aims at solving this implementation gap. We present the emerging resource NEOntometrics, an open-source, flexible metric endpoint that offers (1.) an explorative help page that assists in understanding and selecting ontology metrics, (2.) a public metric calculation service that allows assessing ontologies from online resources, including GIT-based repositories for calculating evolutional data, with (3.) a scalable and adaptable architecture. In this paper, we first evaluate the state of the art, then show the software and its underlying architecture, followed by an evaluation. NEOntometrics is today the most extensive software for calculating ontology metrics.

Cite as

Achim Reiz and Kurt Sandkuhl. NEOntometrics - A Public Endpoint for Calculating Ontology Metrics. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 2:1-2:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{reiz_et_al:TGDK.2.2.2,
  author =	{Reiz, Achim and Sandkuhl, Kurt},
  title =	{{NEOntometrics - A Public Endpoint for Calculating Ontology Metrics}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:22},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.2},
  URN =		{urn:nbn:de:0030-drops-225869},
  doi =		{10.4230/TGDK.2.2.2},
  annote =	{Keywords: Ontology Metrics, Ontology Quality, Knowledge Graph Semantic Web, OWL, RDF}
}
Document
Survey
Towards Representing Processes and Reasoning with Process Descriptions on the Web

Authors: Andreas Harth, Tobias Käfer, Anisa Rula, Jean-Paul Calbimonte, Eduard Kamburjan, and Martin Giese

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
We work towards a vocabulary to represent processes and temporal logic specifications as graph-structured data. Different fields use incompatible terminologies for describing essentially the same process-related concepts. In addition, processes can be represented from different perspectives and levels of abstraction: both state-centric and event-centric perspectives offer distinct insights into the underlying processes. In this work, we strive to unify the representation of processes and related concepts by leveraging the power of knowledge graphs. We survey approaches to representing processes and reasoning with process descriptions from different fields and provide a selection of scenarios to help inform the scope of a unified representation of processes. We focus on processes that can be executed and observed via web interfaces. We propose to provide a representation designed to combine state-centric and event-centric perspectives while incorporating temporal querying and reasoning capabilities on temporal logic specifications. A standardised vocabulary and representation for processes and temporal specifications would contribute towards bridging the gap between the terminologies from different fields and fostering the broader application of methods involving temporal logics, such as formal verification and program synthesis.

Cite as

Andreas Harth, Tobias Käfer, Anisa Rula, Jean-Paul Calbimonte, Eduard Kamburjan, and Martin Giese. Towards Representing Processes and Reasoning with Process Descriptions on the Web. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 1:1-1:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{harth_et_al:TGDK.2.1.1,
  author =	{Harth, Andreas and K\"{a}fer, Tobias and Rula, Anisa and Calbimonte, Jean-Paul and Kamburjan, Eduard and Giese, Martin},
  title =	{{Towards Representing Processes and Reasoning with Process Descriptions on the Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:32},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.1},
  URN =		{urn:nbn:de:0030-drops-198583},
  doi =		{10.4230/TGDK.2.1.1},
  annote =	{Keywords: Process modelling, Process ontology, Temporal logic, Web services}
}
  • Refine by Type
  • 39 Document/PDF
  • 16 Document/HTML

  • Refine by Publication Year
  • 13 2025
  • 2 2024
  • 1 2023
  • 1 2022
  • 1 2020
  • Show More...

  • Refine by Author
  • 3 Meseguer, José
  • 3 Pereira, Maria João Varanda
  • 3 Wirsing, Martin
  • 2 Abreu, Salvador
  • 2 Ehrig, Hartmut
  • Show More...

  • Refine by Series/Journal
  • 13 LIPIcs
  • 8 OASIcs
  • 2 LITES
  • 3 TGDK
  • 1 DagRep
  • Show More...

  • Refine by Classification
  • 2 Computing methodologies → Ontology engineering
  • 2 Software and its engineering → Domain specific languages
  • 2 Theory of computation → Modal and temporal logics
  • 2 Theory of computation → Program semantics
  • 2 Theory of computation → Type theory
  • Show More...

  • Refine by Keyword
  • 2 Agda
  • 2 DSL
  • 1 Abstraction
  • 1 Anonymity
  • 1 Answer-Set Programming
  • Show More...
