12 Search Results for "Quick, David"


Document
Multidimensional Usability Assessment in Spaceflight Analog Missions

Authors: Shivang Shelat, Katherine E. Homer, John A. Karasinski, and Jessica J. Marquez

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
Software planning tools enable the self-scheduling of operational timelines during spaceflight, reducing reliance on ground support crews. Here, we assess analog crew perceptions of a planning tool’s usability across two space mission analogs with two validated questionnaires: the unidimensional System Usability Scale and the multidimensional User Experience Questionnaire. Critically, half the missions had assistive countermeasures integrated into the planning software interface whereas the other half did not. Correlation tests revealed high convergence between usability measures in the spaceflight analog setting. Group comparisons showed that the interface countermeasures enhanced several dimensions of usability, particularly for perceptions of the tool’s efficiency and dependability. These findings highlight the utility of a multidimensional approach to characterizing usability in order to capture fine-grained shifts in human-software dynamics, especially in spaceflight environments.
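
The System Usability Scale named above has a fixed, widely published scoring rule; for reference only (general background, not material or code from the paper), a minimal Python sketch of standard SUS scoring:

# Minimal sketch of standard SUS scoring (general background, not the paper's code).
# `responses` holds the ten 1-5 Likert answers in questionnaire order.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute r - 1,
        # even-numbered (negatively worded) items contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to the familiar 0-100 range

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0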

Cite as

Shivang Shelat, Katherine E. Homer, John A. Karasinski, and Jessica J. Marquez. Multidimensional Usability Assessment in Spaceflight Analog Missions. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 3:1-3:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{shelat_et_al:OASIcs.SpaceCHI.2025.3,
  author =	{Shelat, Shivang and Homer, Katherine E. and Karasinski, John A. and Marquez, Jessica J.},
  title =	{{Multidimensional Usability Assessment in Spaceflight Analog Missions}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{3:1--3:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.3},
  URN =		{urn:nbn:de:0030-drops-239934},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.3},
  annote =	{Keywords: space usability, crew autonomy, self-scheduling software}
}
Document
In-Browser C++ Interpreter for Lightweight Intelligent Programming Learning Environments

Authors: Tomas Blažauskas, Arnoldas Rauba, Jakub Swacha, Raffaele Montella, and Rytis Maskeliunas

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
The paper presents a browser-native C++ interpreter integrated into an AI-assisted educational platform designed to enhance programming learning in formal education. The interpreter leverages Parsing Expression Grammars (PEG) to generate Abstract Syntax Trees (AST) and executes C++ code using a TypeScript-based runtime. The system supports key C++ features, including pointer arithmetic, function overloading, and namespace resolution, and emulates memory management via reference-counted JavaScript objects. Integrated within a web-based learning environment, it provides automated feedback, error explanations, and code quality evaluations. The evaluation involved 4582 students across three difficulty levels and feedback from 14 teachers. The results include high System Usability Scale (SUS) scores (avg. 83.5) and WBLT learning effectiveness scores (avg. 4.58/5). Interpreter performance testing across 65 cases averaged under 10 ms per task, confirming its practical applicability to school curricula. The system supports SCORM and PWA deployment, enabling LMS-independent usage. The work introduces a technical innovation in browser-based C++ execution and a scalable framework for LLM-enhanced programming pedagogy.
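
The pipeline the abstract describes (parse source text, build an AST, interpret it in a managed runtime) can be illustrated in miniature. The following Python sketch handles only integer arithmetic and is a hypothetical toy, not the authors' TypeScript-based C++ interpreter:

# Toy illustration of a parse -> AST -> interpret pipeline for integer
# expressions only; a hypothetical sketch, not the authors' C++ runtime.
import re

TOKEN = re.compile(r"\s*(\d+|[()+\-*/])")

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"unexpected input at {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

def parse(tokens):
    """Recursive-descent parser producing a nested-tuple AST."""
    def expr(i):            # expr <- term (('+'|'-') term)*
        node, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op = tokens[i]
            rhs, i = term(i + 1)
            node = (op, node, rhs)
        return node, i
    def term(i):            # term <- atom (('*'|'/') atom)*
        node, i = atom(i)
        while i < len(tokens) and tokens[i] in "*/":
            op = tokens[i]
            rhs, i = atom(i + 1)
            node = (op, node, rhs)
        return node, i
    def atom(i):            # atom <- number | '(' expr ')'
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1          # skip the closing ')'
        return ("num", int(tokens[i])), i + 1
    node, _ = expr(0)
    return node

def evaluate(node):
    """Walk the AST and compute its value ('/' is integer division here)."""
    if node[0] == "num":
        return node[1]
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a // b}
    return ops[node[0]](evaluate(node[1]), evaluate(node[2]))

print(evaluate(parse(tokenize("(1 + 2) * 3"))))  # 9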

Cite as

Tomas Blažauskas, Arnoldas Rauba, Jakub Swacha, Raffaele Montella, and Rytis Maskeliunas. In-Browser C++ Interpreter for Lightweight Intelligent Programming Learning Environments. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{blazauskas_et_al:OASIcs.ICPEC.2025.14,
  author =	{Bla\v{z}auskas, Tomas and Rauba, Arnoldas and Swacha, Jakub and Montella, Raffaele and Maskeliunas, Rytis},
  title =	{{In-Browser C++ Interpreter for Lightweight Intelligent Programming Learning Environments}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{14:1--14:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.14},
  URN =		{urn:nbn:de:0030-drops-240449},
  doi =		{10.4230/OASIcs.ICPEC.2025.14},
  annote =	{Keywords: C++ interpreter, browser-based execution, programming education, LLM-assisted learning, PEG, AST, TypeScript runtime}
}
Document
APPROX
Dual Charging for Half-Integral TSP

Authors: Nathan Klein and Mehrshad Taziki

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
In this extended abstract, we show that the max entropy algorithm is a randomized 1.49776 approximation for half-integral TSP, improving upon the previous known bound of 1.49993 from Karlin et al. This also improves upon the best-known approximation for half-integral TSP due to Gupta et al. Our improvement results from using the dual, instead of the primal, to analyze the expected cost of the matching. We believe this method of analysis could lead to a simpler proof that max entropy is a better-than-3/2 approximation in the general case.
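
As background (a standard formulation, not restated from the paper), "half-integral TSP" refers to instances in which the given solution x to the subtour elimination LP takes values only in {0, 1/2, 1}:

\min \sum_{e \in E} c_e x_e
\quad \text{s.t.} \quad
\sum_{e \in \delta(v)} x_e = 2 \;\; \forall v \in V, \qquad
\sum_{e \in \delta(S)} x_e \ge 2 \;\; \forall\, \emptyset \neq S \subsetneq V, \qquad
0 \le x_e \le 1 \;\; \forall e \in E.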

Cite as

Nathan Klein and Mehrshad Taziki. Dual Charging for Half-Integral TSP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 21:1-21:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{klein_et_al:LIPIcs.APPROX/RANDOM.2025.21,
  author =	{Klein, Nathan and Taziki, Mehrshad},
  title =	{{Dual Charging for Half-Integral TSP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{21:1--21:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.21},
  URN =		{urn:nbn:de:0030-drops-243879},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.21},
  annote =	{Keywords: Approximation Algorithms, Graph Algorithms, Randomized Rounding, Linear Programming}
}
Document
A Chatbot to Help Promoting Financial Literacy

Authors: Davi Silva Eleuterio, Pedro Filipe Oliveira, Paulo Matos, and Jorge Manuel Afonso Alves

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
Currently, governments and many other institutions have been making significant efforts to promote financial literacy. However, a considerable portion of the population still lacks basic financial knowledge, highlighting the need for updated strategies to enhance financial education. In today’s digital world - where people often search for quick and convenient solutions - the development of a reliable and intelligent chatbot to answer questions related to financial concepts and decision-making can be very beneficial. This paper proposes the implementation of an automated web scraper to extract content from a trustworthy financial education website covering a broad range of financial concepts, using the collected data to develop a chatbot that provides accurate and helpful responses to users. The solution is built using technologies such as Streamlit, Langchain, and OpenAI.
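
As a rough illustration of the scrape-and-collect step only (the URL below is a placeholder, and this is not the authors' scraper or their Streamlit/Langchain/OpenAI pipeline), a Python sketch that gathers paragraph text for later indexing:

# Hypothetical sketch of collecting page text for later retrieval;
# assumes a simple static HTML source and a placeholder URL.
import requests
from bs4 import BeautifulSoup

def collect_paragraphs(url):
    """Download a page and return its paragraph texts as candidate chunks."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    texts = [p.get_text(strip=True) for p in soup.find_all("p")]
    # Keep only reasonably long paragraphs as retrieval chunks.
    return [t for t in texts if len(t) > 40]

if __name__ == "__main__":
    chunks = collect_paragraphs("https://example.org/financial-education")  # placeholder URL
    print(f"collected {len(chunks)} text chunks")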

Cite as

Davi Silva Eleuterio, Pedro Filipe Oliveira, Paulo Matos, and Jorge Manuel Afonso Alves. A Chatbot to Help Promoting Financial Literacy. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 7:1-7:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{eleuterio_et_al:OASIcs.SLATE.2025.7,
  author =	{Eleuterio, Davi Silva and Oliveira, Pedro Filipe and Matos, Paulo and Alves, Jorge Manuel Afonso},
  title =	{{A Chatbot to Help Promoting Financial Literacy}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{7:1--7:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.7},
  URN =		{urn:nbn:de:0030-drops-236870},
  doi =		{10.4230/OASIcs.SLATE.2025.7},
  annote =	{Keywords: chatbot, financial literacy, web scraper, LLM, RAG}
}
Document
Combining Generalization Algorithms in Regular Collapse-Free Theories

Authors: Mauricio Ayala-Rincón, David M. Cerna, Temur Kutsia, and Christophe Ringeissen

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
We look at the generalization problem modulo some equational theories. This problem is dual to the unification problem: given two input terms, we want to find a common term whose respective two instances are equivalent to the original terms modulo the theory. There exist algorithms for finding generalizations over various equational theories. We focus on modular construction of equational generalization algorithms for the union of signature-disjoint theories. Specifically, we consider the class of regular and collapse-free theories, showing how to combine existing generalization algorithms to produce specific solutions in these cases. Additionally, we identify a class of theories that admit a generalization algorithm based on the application of axioms to resolve the problem. To define this class, we rely on the notion of syntactic theories, a concept originally introduced to develop unification procedures similar to the one known for syntactic unification. We demonstrate that syntactic theories are also helpful in developing generalization procedures similar to those used for syntactic generalization.
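
For a concrete baseline, the following Python sketch shows only plain syntactic anti-unification (Plotkin-style least general generalization), not the paper's modular procedure for equational theories:

# Minimal sketch of syntactic least general generalization (lgg).
# Terms are tuples ("f", arg1, ...), constants are 0-ary tuples,
# and generalization variables are plain strings.
def lgg(s, t, store=None, counter=None):
    store = {} if store is None else store        # maps disagreement pairs to variables
    counter = counter if counter is not None else [0]
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same head symbol and arity: generalize argument-wise.
        return (s[0],) + tuple(lgg(a, b, store, counter) for a, b in zip(s[1:], t[1:]))
    # Disagreement: introduce (or reuse) a variable for this pair of subterms.
    if (s, t) not in store:
        counter[0] += 1
        store[(s, t)] = f"X{counter[0]}"
    return store[(s, t)]

# f(a, g(a)) and f(b, g(b)) generalize to f(X1, g(X1)).
print(lgg(("f", ("a",), ("g", ("a",))), ("f", ("b",), ("g", ("b",)))))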

Cite as

Mauricio Ayala-Rincón, David M. Cerna, Temur Kutsia, and Christophe Ringeissen. Combining Generalization Algorithms in Regular Collapse-Free Theories. In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ayalarincon_et_al:LIPIcs.FSCD.2025.7,
  author =	{Ayala-Rinc\'{o}n, Mauricio and Cerna, David M. and Kutsia, Temur and Ringeissen, Christophe},
  title =	{{Combining Generalization Algorithms in Regular Collapse-Free Theories}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.7},
  URN =		{urn:nbn:de:0030-drops-236228},
  doi =		{10.4230/LIPIcs.FSCD.2025.7},
  annote =	{Keywords: Generalization, Anti-unification, Equational theories, Combination}
}
Document
Track A: Algorithms, Complexity and Games
Robust-Sorting and Applications to Ulam-Median

Authors: Ragesh Jaiswal, Amit Kumar, and Jatin Yadav

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
Sorting is one of the most basic primitives in many algorithms and data analysis tasks. Comparison-based sorting algorithms, like quick-sort and merge-sort, are known to be optimal when the outcome of each comparison is error-free. However, many real-world sorting applications operate in scenarios where the outcome of each comparison can be noisy. In this work, we explore settings where a bounded number of comparisons are potentially corrupted by erroneous agents, resulting in arbitrary, adversarial outcomes. We model the sorting problem as a query-limited tournament graph where edges involving erroneous nodes may yield arbitrary results. Our primary contribution is a randomized algorithm inspired by quick-sort that, in expectation, produces an ordering close to the true total order while only querying Õ(n) edges. We achieve a distance from the target order π within (3 + ε)|B|, where B is the set of erroneous nodes, balancing the competing objectives of minimizing both query complexity and misalignment with π. Our algorithm must carefully balance two aspects: identifying a pivot that partitions the vertex set evenly and whose partition is "truthful", while querying as few "triangles" in the graph G as possible. Since the nodes in B can potentially hide in an intricate manner, our algorithm requires several technical steps that ensure that progress is made in each recursive step. Additionally, we demonstrate significant implications for the Ulam-k-Median problem. This is a classical clustering problem where the metric is defined on the set of permutations on a set of d elements. Chakraborty, Das, and Krauthgamer gave a (2-ε) FPT approximation algorithm for this problem, where the running time is super-linear in both n and d. We give the first (2-ε) FPT linear-time approximation algorithm for this problem. Our main technical result strengthens the results of Chakraborty et al. by showing that a good 1-median solution can be obtained from a constant-size random sample of the input. We use our robust sorting framework to find a good solution from such a random sample. We believe the notion of robust sorting should have applications in several such settings.
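
To make the query model concrete, here is a plain randomized quicksort written against a comparison oracle; the paper's robust algorithm, with its pivot validation via "triangle" queries and tolerance of erroneous nodes, is not reproduced here:

# Illustrative only: standard randomized quicksort driven by an oracle.
import random

def quicksort(items, compare):
    """Sort using only calls to compare(a, b) -> True iff a should precede b."""
    if len(items) <= 1:
        return list(items)
    i = random.randrange(len(items))
    pivot = items[i]
    rest = items[:i] + items[i + 1:]
    left = [x for x in rest if compare(x, pivot)]
    right = [x for x in rest if not compare(x, pivot)]
    return quicksort(left, compare) + [pivot] + quicksort(right, compare)

queries = 0
def noiseless(a, b):
    """Error-free oracle; a noisy or adversarial oracle would answer arbitrarily for some pairs."""
    global queries
    queries += 1
    return a < b

print(quicksort([5, 3, 8, 1, 4], noiseless), "sorted with", queries, "comparisons")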

Cite as

Ragesh Jaiswal, Amit Kumar, and Jatin Yadav. Robust-Sorting and Applications to Ulam-Median. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 100:1-100:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{jaiswal_et_al:LIPIcs.ICALP.2025.100,
  author =	{Jaiswal, Ragesh and Kumar, Amit and Yadav, Jatin},
  title =	{{Robust-Sorting and Applications to Ulam-Median}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{100:1--100:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.100},
  URN =		{urn:nbn:de:0030-drops-234774},
  doi =		{10.4230/LIPIcs.ICALP.2025.100},
  annote =	{Keywords: Sorting, clustering, query complexity}
}
Document
A Sparse Multicover Bifiltration of Linear Size

Authors: Ángel Javier Alonso

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
The k-cover of a point cloud X of ℝ^d at radius r is the set of all those points within distance r of at least k points of X. By varying r and k we obtain a two-parameter filtration known as the multicover bifiltration. This bifiltration has received attention recently due to being choice-free and robust to outliers. However, it is hard to compute: the smallest known equivalent simplicial bifiltration has O(|X|^{d+1}) simplices. In this paper we introduce a (1+ε)-approximation of the multicover bifiltration of linear size O(|X|), for fixed d and ε. The methods also apply to the subdivision Rips bifiltration on metric spaces of bounded doubling dimension yielding analogous results.
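
A brute-force check of the k-cover definition above, purely to make the definition concrete; it has nothing to do with the linear-size sparse approximation constructed in the paper:

# Naive k-cover membership test: y is covered iff at least k points of X
# lie within distance r of y.
import math

def in_k_cover(y, X, r, k):
    close = sum(1 for x in X if math.dist(y, x) <= r)
    return close >= k

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(in_k_cover((0.4, 0.4), X, r=1.0, k=2))   # True: three points of X are within radius 1
print(in_k_cover((0.4, 0.4), X, r=1.0, k=4))   # False: (5, 5) is too far away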

Cite as

Ángel Javier Alonso. A Sparse Multicover Bifiltration of Linear Size. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 6:1-6:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{alonso:LIPIcs.SoCG.2025.6,
  author =	{Alonso, \'{A}ngel Javier},
  title =	{{A Sparse Multicover Bifiltration of Linear Size}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{6:1--6:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.6},
  URN =		{urn:nbn:de:0030-drops-231587},
  doi =		{10.4230/LIPIcs.SoCG.2025.6},
  annote =	{Keywords: Multicover, Approximation, Sparsification, Multiparameter persistence}
}
Document
Resource
KG2Tables: A Domain-Specific Tabular Data Generator to Evaluate Semantic Table Interpretation Systems

Authors: Nora Abdelmageed, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, and Birgitta König-Ries

Published in: TGDK (Transactions on Graph Data and Knowledge), Volume 3, Issue 1 (2025)


Abstract
Tabular data, often in the form of CSV files, plays a pivotal role in data analytics pipelines. Understanding this data semantically, known as Semantic Table Interpretation (STI), is crucial but poses challenges due to several factors such as the ambiguity of labels. As a result, STI has gained increasing attention from the community in the past few years. Evaluating STI systems requires well-established benchmarks. Most of the existing large-scale benchmarks are derived from general domain sources and focus on ambiguity, while domain-specific benchmarks are relatively small in size. This paper introduces KG2Tables, a framework that can construct domain-specific large-scale benchmarks from a Knowledge Graph (KG). KG2Tables leverages the internal hierarchy of the relevant KG concepts and their properties. As a proof of concept, we have built large datasets in the food, biodiversity, and biomedical domains. The resulting datasets, tFood, tBiomed, and tBiodiv, have been made publicly available in the ISWC SemTab challenge (2023 and 2024 editions). We include the evaluation results of top-performing STI systems using tFood. Such results underscore its potential as a robust evaluation benchmark for challenging STI systems. We demonstrate the data quality of the generated benchmarks using a sample-based approach, including, for example, an assessment of how realistic the tables are. Finally, we provide an extensive discussion of KG2Tables, explaining how it could be used to create benchmarks for other domains of interest, and describe its key features and limitations along with suggestions to overcome them.
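
The basic "entities and their properties become table rows" idea can be sketched as follows; the toy entity dictionary is invented for illustration, whereas the actual framework draws concepts, hierarchies, and properties from a knowledge graph:

# Toy sketch only: serialize entity labels plus selected properties as a CSV table.
import csv, io

# Invented stand-in for entities and properties pulled from a KG.
entities = {
    "Apple":  {"type": "fruit", "origin": "Central Asia"},
    "Banana": {"type": "fruit", "origin": "New Guinea"},
}

def to_csv_table(entities, columns):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["label"] + columns)  # header row
    for label, props in entities.items():
        writer.writerow([label] + [props.get(c, "") for c in columns])
    return buf.getvalue()

print(to_csv_table(entities, ["type", "origin"]))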

Cite as

Nora Abdelmageed, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, and Birgitta König-Ries. KG2Tables: A Domain-Specific Tabular Data Generator to Evaluate Semantic Table Interpretation Systems. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 1:1-1:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{abdelmageed_et_al:TGDK.3.1.1,
  author =	{Abdelmageed, Nora and Jim\'{e}nez-Ruiz, Ernesto and Hassanzadeh, Oktie and K\"{o}nig-Ries, Birgitta},
  title =	{{KG2Tables: A Domain-Specific Tabular Data Generator to Evaluate Semantic Table Interpretation Systems}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:28},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.1},
  URN =		{urn:nbn:de:0030-drops-230104},
  doi =		{10.4230/TGDK.3.1.1},
  annote =	{Keywords: Semantic Table Interpretation (STI), Knowledge Graph (KG), STI Benchmark, Food, Biodiversity, Biomedical}
}
Document
Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines

Authors: Klaus Heeger and Hendrik Molter

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
In this work, we study the computational (parameterized) complexity of P∣ r_j, p_j = p ∣∑ w_j U_j. Here, we are given m identical parallel machines and n jobs with equal processing time, each characterized by a release date, a due date, and a weight. The task is to find a feasible schedule, that is, an assignment of the jobs to starting times on machines, such that no job starts before its release date and no machine processes several jobs at the same time, that minimizes the weighted number of tardy jobs. A job is considered tardy if it finishes after its due date. Our main contribution is showing that P∣r_j, p_j = p∣∑ U_j (the unweighted version of the problem) is NP-hard and W[2]-hard when parameterized by the number of machines. The former resolves an open problem in Note 2.1.19 by Kravchenko and Werner [Journal of Scheduling, 2011] and Open Problem 2 by Sgall [ESA, 2012], and the latter resolves Open Problem 7 by Mnich and van Bevern [Computers & Operations Research, 2018]. Furthermore, our result shows that the known XP-algorithm by Baptiste et al. [4OR, 2004] for P∣r_j, p_j = p∣∑ w_j U_j parameterized by the number of machines is optimal from a classification standpoint. On the algorithmic side, we provide alternative running time bounds for the above-mentioned known XP-algorithm. Our analysis shows that P∣r_j, p_j = p∣∑ w_j U_j is contained in XP when parameterized by the processing time, and that it is contained in FPT when parameterized by the combination of the number of machines and the processing time. Finally, we give an FPT-algorithm for P∣r_j, p_j = p∣∑ w_j U_j parameterized by the number of release dates or the number of due dates. With this work, we lay out the foundation for a systematic study of the parameterized complexity of P∣r_j, p_j = p∣∑ w_j U_j.
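
To make the problem data concrete, a simple earliest-due-date heuristic is sketched below, included only for illustration; it is not one of the exact or parameterized algorithms discussed in the paper:

# Heuristic illustration: schedule equal-length jobs by earliest due date on
# identical machines and count how many finish tardy under that schedule.
import heapq

def tardy_count(jobs, m, p):
    """jobs: (release, due) pairs; m identical machines; common processing time p."""
    machines = [0] * m                    # time at which each machine next becomes free
    tardy = 0
    for release, due in sorted(jobs, key=lambda j: j[1]):
        free = heapq.heappop(machines)
        finish = max(free, release) + p   # cannot start before release or while busy
        heapq.heappush(machines, finish)
        if finish > due:                  # job completes after its due date
            tardy += 1
    return tardy

# Three jobs released at 0 with due date 2: only two fit on two machines by then.
print(tardy_count([(0, 2), (0, 2), (0, 2)], m=2, p=2))  # 1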

Cite as

Klaus Heeger and Hendrik Molter. Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 47:1-47:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.STACS.2025.47,
  author =	{Heeger, Klaus and Molter, Hendrik},
  title =	{{Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{47:1--47:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.47},
  URN =		{urn:nbn:de:0030-drops-228736},
  doi =		{10.4230/LIPIcs.STACS.2025.47},
  annote =	{Keywords: Scheduling, Identical Parallel Machines, Weighted Number of Tardy Jobs, Uniform Processing Times, Release Dates, NP-hard Problems, Parameterized Complexity}
}
Document
Deterministic Approximation for the Volume of the Truncated Fractional Matching Polytope

Authors: Heng Guo and Vishvajeet N

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We give a deterministic polynomial-time approximation scheme (FPTAS) for the volume of the truncated fractional matching polytope for graphs of maximum degree Δ, where the truncation is by restricting each variable to the interval [0,(1+δ)/Δ], and δ ≤ C/Δ for some constant C > 0. We also generalise our result to the fractional matching polytope for hypergraphs of maximum degree Δ and maximum hyperedge size k, truncated by [0,(1+δ)/Δ] as well, where δ ≤ CΔ^{-(2k-3)/(k-1)}k^{-1} for some constant C > 0. The latter result generalises both the first result for graphs (when k = 2), and a result by Bencs and Regts (2024) for the truncated independence polytope (when Δ = 2). Our approach is based on the cluster expansion technique.
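
For reference, the truncated fractional matching polytope of a graph G = (V, E) with maximum degree Δ, as described in the abstract (a restatement of the standard definition, not text from the paper), is

P_{\delta}(G) \;=\; \Bigl\{\, x \in \mathbb{R}^{E} \;:\; \sum_{e \ni v} x_e \le 1 \ \ \forall v \in V,\quad 0 \le x_e \le \tfrac{1+\delta}{\Delta} \ \ \forall e \in E \,\Bigr\}, \qquad \delta \le \tfrac{C}{\Delta}.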

Cite as

Heng Guo and Vishvajeet N. Deterministic Approximation for the Volume of the Truncated Fractional Matching Polytope. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{guo_et_al:LIPIcs.ITCS.2025.57,
  author =	{Guo, Heng and N, Vishvajeet},
  title =	{{Deterministic Approximation for the Volume of the Truncated Fractional Matching Polytope}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.57},
  URN =		{urn:nbn:de:0030-drops-226858},
  doi =		{10.4230/LIPIcs.ITCS.2025.57},
  annote =	{Keywords: deterministic volume computation, cluster expansion, explicit polytopes}
}
Document
Survey
Towards Representing Processes and Reasoning with Process Descriptions on the Web

Authors: Andreas Harth, Tobias Käfer, Anisa Rula, Jean-Paul Calbimonte, Eduard Kamburjan, and Martin Giese

Published in: TGDK (Transactions on Graph Data and Knowledge), Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2


Abstract
We work towards a vocabulary to represent processes and temporal logic specifications as graph-structured data. Different fields use incompatible terminologies for describing essentially the same process-related concepts. In addition, processes can be represented from different perspectives and levels of abstraction: both state-centric and event-centric perspectives offer distinct insights into the underlying processes. In this work, we strive to unify the representation of processes and related concepts by leveraging the power of knowledge graphs. We survey approaches to representing processes and reasoning with process descriptions from different fields and provide a selection of scenarios to help inform the scope of a unified representation of processes. We focus on processes that can be executed and observed via web interfaces. We propose to provide a representation designed to combine state-centric and event-centric perspectives while incorporating temporal querying and reasoning capabilities on temporal logic specifications. A standardised vocabulary and representation for processes and temporal specifications would contribute towards bridging the gap between the terminologies from different fields and fostering the broader application of methods involving temporal logics, such as formal verification and program synthesis.

Cite as

Andreas Harth, Tobias Käfer, Anisa Rula, Jean-Paul Calbimonte, Eduard Kamburjan, and Martin Giese. Towards Representing Processes and Reasoning with Process Descriptions on the Web. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 1:1-1:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{harth_et_al:TGDK.2.1.1,
  author =	{Harth, Andreas and K\"{a}fer, Tobias and Rula, Anisa and Calbimonte, Jean-Paul and Kamburjan, Eduard and Giese, Martin},
  title =	{{Towards Representing Processes and Reasoning with Process Descriptions on the Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:32},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.1},
  URN =		{urn:nbn:de:0030-drops-198583},
  doi =		{10.4230/TGDK.2.1.1},
  annote =	{Keywords: Process modelling, Process ontology, Temporal logic, Web services}
}
Document
A First-order Logic for String Diagrams

Authors: Aleks Kissinger and David Quick

Published in: LIPIcs, Volume 35, 6th Conference on Algebra and Coalgebra in Computer Science (CALCO 2015)


Abstract
Equational reasoning with string diagrams provides an intuitive means of proving equations between morphisms in a symmetric monoidal category. This can be extended to proofs of infinite families of equations using a simple graphical syntax called !-box notation. While this does greatly increase the proving power of string diagrams, previous attempts to go beyond equational reasoning have been largely ad hoc, owing to the lack of a suitable logical framework for diagrammatic proofs involving !-boxes. In this paper, we extend equational reasoning with !-boxes to a fully-fledged first order logic with conjunction, implication, and universal quantification over !-boxes. This logic, called !L, is then rich enough to properly formalise an induction principle for !-boxes. We then build a standard model for !L and give an example proof of a theorem for non-commutative bialgebras using !L, which is unobtainable by equational reasoning alone.

Cite as

Aleks Kissinger and David Quick. A First-order Logic for String Diagrams. In 6th Conference on Algebra and Coalgebra in Computer Science (CALCO 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 35, pp. 171-189, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{kissinger_et_al:LIPIcs.CALCO.2015.171,
  author =	{Kissinger, Aleks and Quick, David},
  title =	{{A First-order Logic for String Diagrams}},
  booktitle =	{6th Conference on Algebra and Coalgebra in Computer Science (CALCO 2015)},
  pages =	{171--189},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-84-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{35},
  editor =	{Moss, Lawrence S. and Sobocinski, Pawel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CALCO.2015.171},
  URN =		{urn:nbn:de:0030-drops-55335},
  doi =		{10.4230/LIPIcs.CALCO.2015.171},
  annote =	{Keywords: string diagrams, compact closed monoidal categories, abstract tensor systems, first-order logic}
}
  • Refine by Type
  • 12 Document/PDF
  • 11 Document/HTML

  • Refine by Publication Year
  • 10 2025
  • 1 2024
  • 1 2015

  • Refine by Author
  • 1 Abdelmageed, Nora
  • 1 Alonso, Ángel Javier
  • 1 Alves, Jorge Manuel Afonso
  • 1 Ayala-Rincón, Mauricio
  • 1 Blažauskas, Tomas

  • Refine by Series/Journal
  • 7 LIPIcs
  • 3 OASIcs
  • 2 TGDK

  • Refine by Classification
  • 2 Theory of computation → Computational geometry
  • 1 Applied computing → Business process modeling
  • 1 Applied computing → Document analysis
  • 1 Applied computing → Event-driven architectures
  • 1 Computing methodologies → Information extraction

  • Refine by Keyword
  • 1 AST
  • 1 Anti-unification
  • 1 Approximation
  • 1 Approximation Algorithms
  • 1 Biodiversity
