14 Search Results for "Liu, Cong"


Document
Use of Programming Aids in Undergraduate Courses

Authors: Ana Rita Peixoto, André Glória, José Luís Silva, Maria Pinto-Albuquerque, Tomás Brandão, and Luís Nunes

Published in: OASIcs, Volume 122, 5th International Computer Programming Education Conference (ICPEC 2024)


Abstract
The use of external tips and applications by novice programmers to help with programming assignments is a double-edged sword: it can help by showing examples of problem-solving strategies, but it can also hinder learning, because recognizing a good solution is not the same skill as creating one. A study was conducted during the 2nd semester of 2023/24 in an Object-Oriented Programming course to help understand the impact of programming aids on learning. The main questions that drove this study were: Which type(s) of assistance do students use when learning to program? When and where do they use them? Do they affect grades? The results, even though based on a relatively small sample, seem to indicate that students who used aids perceive improved learning when using advice from colleagues, Copilot-style tools, and Large Language Models. Correlating average grades with tool usage suggests that experience in using these tools is key to their successful use but that, contrary to students' perceptions, the learning gains in the end result are marginal.

Cite as

Ana Rita Peixoto, André Glória, José Luís Silva, Maria Pinto-Albuquerque, Tomás Brandão, and Luís Nunes. Use of Programming Aids in Undergraduate Courses. In 5th International Computer Programming Education Conference (ICPEC 2024). Open Access Series in Informatics (OASIcs), Volume 122, pp. 20:1-20:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{peixoto_et_al:OASIcs.ICPEC.2024.20,
  author =	{Peixoto, Ana Rita and Gl\'{o}ria, Andr\'{e} and Silva, Jos\'{e} Lu{\'\i}s and Pinto-Albuquerque, Maria and Brand\~{a}o, Tom\'{a}s and Nunes, Lu{\'\i}s},
  title =	{{Use of Programming Aids in Undergraduate Courses}},
  booktitle =	{5th International Computer Programming Education Conference (ICPEC 2024)},
  pages =	{20:1--20:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-347-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{122},
  editor =	{Santos, Andr\'{e} L. and Pinto-Albuquerque, Maria},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2024.20},
  URN =		{urn:nbn:de:0030-drops-209894},
  doi =		{10.4230/OASIcs.ICPEC.2024.20},
  annote =	{Keywords: Teaching Programming, Programming aids}
}
Document
On Connections Between k-Coloring and Euclidean k-Means

Authors: Enver Aman, Karthik C. S., and Sharath Punna

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Euclidean k-means problem we are given as input a set P of n points in ℝ^d, and the goal is to find a set of k points C ⊆ ℝ^d so as to minimize the sum of the squared Euclidean distances from each point in P to its closest center in C. In this paper, we formally explore connections between the k-coloring problem on graphs and the Euclidean k-means problem. Our results are as follows:
- For all k ≥ 3, we provide a simple reduction from the k-coloring problem on regular graphs to the Euclidean k-means problem. Moreover, our technique extends to enable a reduction from a structured max-cut problem (which may be considered a partial 2-coloring problem) to the Euclidean 2-means problem. Thus, we obtain a simple and alternate proof of the NP-hardness of the Euclidean 2-means problem.
- In the other direction, we mimic the O(1.7297ⁿ)-time algorithm of Williams [TCS'05] for the max-cut problem on n vertices to obtain an algorithm for the Euclidean 2-means problem with the same runtime, improving on the naive exhaustive search running in 2ⁿ⋅poly(n,d) time.
- We prove similar results and connections as above for the Euclidean k-min-sum problem.
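
For concreteness, here is a minimal sketch (not from the paper; all names are hypothetical) of the k-means objective defined above, together with the naive 2ⁿ⋅poly(n,d) exhaustive search for k = 2 that the authors' O(1.7297ⁿ)-time algorithm improves on:

import numpy as np

def kmeans_cost(P: np.ndarray, centers: np.ndarray) -> float:
    """Sum of squared Euclidean distances from each point to its closest center."""
    d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # n x k
    return d2.min(axis=1).sum()

def exhaustive_2means(P: np.ndarray) -> float:
    """Try every bipartition of the n points; optimal centers are cluster means."""
    n = len(P)
    best = float("inf")
    for mask in range(1, 2 ** (n - 1)):  # fix the last point's side to halve the work
        side = np.array([(mask >> i) & 1 for i in range(n)], dtype=bool)
        centers = np.stack([P[side].mean(axis=0), P[~side].mean(axis=0)])
        best = min(best, kmeans_cost(P, centers))
    return best

rng = np.random.default_rng(0)
print(exhaustive_2means(rng.normal(size=(8, 2))))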

Cite as

Enver Aman, Karthik C. S., and Sharath Punna. On Connections Between k-Coloring and Euclidean k-Means. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{aman_et_al:LIPIcs.ESA.2024.9,
  author =	{Aman, Enver and Karthik C. S. and Punna, Sharath},
  title =	{{On Connections Between k-Coloring and Euclidean k-Means}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.9},
  URN =		{urn:nbn:de:0030-drops-210808},
  doi =		{10.4230/LIPIcs.ESA.2024.9},
  annote =	{Keywords: k-means, k-minsum, Euclidean space, fine-grained complexity}
}
Document
Indirection-Bounded Call Graph Analysis

Authors: Madhurima Chakraborty, Aakash Gnanakumar, Manu Sridharan, and Anders Møller

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Call graphs play a crucial role in analyzing the structure and behavior of programs. For JavaScript and other dynamically typed programming languages, static call graph analysis relies on approximating the possible flow of functions and objects, and producing usable call graphs for large, real-world programs remains challenging. In this paper, we propose a simple but effective technique that addresses performance issues encountered in call graph generation. We observe via a dynamic analysis that typical JavaScript program code exhibits low levels of indirection through object pointers and higher-order functions. We demonstrate that a widely used analysis algorithm, wave propagation, closely follows these levels of indirection, so that call edges discovered early are more likely to be true positives. By bounding the number of indirections covered, the analysis can in many cases find most true-positive call edges in less time. We also show that indirection-bounded analysis can similarly be incorporated into the field-based call graph analysis algorithm ACG. We have experimentally evaluated the modified wave propagation algorithm on 25 large Node.js-based JavaScript programs. Indirection-bounded analysis on average yields close to a 2X speed-up with only a 5% reduction in recall and almost identical precision relative to the baseline analysis, using dynamically generated call graphs for the recall and precision measurements. To demonstrate the robustness of the approach, we also evaluated the modified ACG algorithm on 10 web-based and 4 mobile-based medium-sized benchmarks, with similar results.
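
The bounded-propagation idea can be illustrated generically. A hedged sketch (not the paper's wave propagation implementation; the graph, seeds, and names are made up) of a worklist analysis that stops following a token once it has crossed a fixed number of indirection edges:

from collections import defaultdict, deque

def bounded_propagation(edges, seeds, bound):
    """edges: iterable of (src, dst) flow edges; seeds: {node: {token}}.
    Returns node -> set of tokens reachable within `bound` indirections."""
    succ = defaultdict(set)
    for s, d in edges:
        succ[s].add(d)
    reach = defaultdict(set)
    depth = {}                        # (node, token) -> indirection level (BFS: minimal)
    work = deque()
    for n, toks in seeds.items():
        for t in toks:
            reach[n].add(t)
            depth[(n, t)] = 0
            work.append((n, t))
    while work:
        n, t = work.popleft()
        if depth[(n, t)] >= bound:
            continue                  # bound reached: stop following this token
        for m in succ[n]:
            if t not in reach[m]:
                reach[m].add(t)
                depth[(m, t)] = depth[(n, t)] + 1
                work.append((m, t))
    return reach

edges = [("f", "x"), ("x", "y"), ("y", "z")]
print(dict(bounded_propagation(edges, {"f": {"funcA"}}, bound=2)))
# With bound=2, "funcA" reaches x and y but not z.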

Cite as

Madhurima Chakraborty, Aakash Gnanakumar, Manu Sridharan, and Anders Møller. Indirection-Bounded Call Graph Analysis. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 10:1-10:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.ECOOP.2024.10,
  author =	{Chakraborty, Madhurima and Gnanakumar, Aakash and Sridharan, Manu and M{\o}ller, Anders},
  title =	{{Indirection-Bounded Call Graph Analysis}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{10:1--10:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.10},
  URN =		{urn:nbn:de:0030-drops-208599},
  doi =		{10.4230/LIPIcs.ECOOP.2024.10},
  annote =	{Keywords: JavaScript, call graphs, points-to analysis}
}
Document
Compiling with Arrays

Authors: David Richter, Timon Böhler, Pascal Weisenburger, and Mira Mezini

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Linear algebra computations are foundational for neural networks and machine learning, often handled through arrays. While many functional programming languages feature lists and recursion, arrays in linear algebra demand constant-time access and bulk operations. To bridge this gap, some languages represent arrays as (eager) functions instead of lists. In this paper, we connect this idea to a formal logical foundation by interpreting functions as the usual negative types from polarized type theory, and arrays as the corresponding dual positive version of the function type. Positive types are defined to have a single elimination form whose computational interpretation is pattern matching. Just like (positive) product types bind two variables during pattern matching, (positive) array types bind variables with multiplicity during pattern matching. We follow a similar approach for Booleans by introducing conditionally-defined variables. The positive formulation for the array type enables us to combine typed partial evaluation and common subexpression elimination into an elegant algorithm whose result enjoys a property we call maximal fission, which we argue can be beneficial for further optimizations. For this purpose, we present the novel intermediate representation indexed administrative normal form (A_{i}NF), which relies on the formal logical foundation of the positive formulation for the array type to facilitate maximal loop fission and subsequent optimizations. A_{i}NF is normal with regard to commuting conversion for both let-bindings and for-loops, leading to flat and maximally fissioned terms. We mechanize the translation and normalization from a simple surface language to A_{i}NF, establishing that the process terminates, preserves types, and produces maximally fissioned terms.
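
The idea the abstract starts from, representing arrays as (eager) functions, can be made concrete in a few lines. A minimal sketch (illustrative only, not the paper's A_{i}NF machinery; all names are hypothetical): an array is a length plus an index function, so map and zip fuse by function composition instead of materializing intermediates.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Pull:
    size: int
    at: Callable[[int], float]       # index -> element

def map_(f, a: Pull) -> Pull:
    return Pull(a.size, lambda i: f(a.at(i)))

def zip_with(f, a: Pull, b: Pull) -> Pull:
    assert a.size == b.size
    return Pull(a.size, lambda i: f(a.at(i), b.at(i)))

def materialize(a: Pull) -> list:
    return [a.at(i) for i in range(a.size)]

xs = Pull(4, lambda i: float(i))
ys = zip_with(lambda u, v: u + v, map_(lambda u: u * u, xs), xs)
print(materialize(ys))   # [0.0, 2.0, 6.0, 12.0]; a single fused loop, no temporaries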

Cite as

David Richter, Timon Böhler, Pascal Weisenburger, and Mira Mezini. Compiling with Arrays. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 33:1-33:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{richter_et_al:LIPIcs.ECOOP.2024.33,
  author =	{Richter, David and B\"{o}hler, Timon and Weisenburger, Pascal and Mezini, Mira},
  title =	{{Compiling with Arrays}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{33:1--33:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.33},
  URN =		{urn:nbn:de:0030-drops-208823},
  doi =		{10.4230/LIPIcs.ECOOP.2024.33},
  annote =	{Keywords: array languages, functional programming, domain-specific languages, normalization by evaluation, common subexpression elimination, polarity, positive function type, intrinsic types}
}
Document
Semantic Perspectives on the Lake District Writing: Spatial Ontology Modeling and Relation Extraction for Deeper Insights

Authors: Erum Haris, Anthony G. Cohn, and John G. Stell

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
Extracting spatial details from historical texts can be difficult, hindering our understanding of past landscapes. This study addresses the challenge by analyzing the Corpus of the Lake District Writing, focusing on the English Lake District region. We systematically link theoretical notions from the core concepts of spatial information to provide a basis for the problem domain. The conceptual foundation is further complemented with a spatial ontology and a custom gazetteer, allowing a formal and insightful semantic exploration of the massive unstructured corpus. The contrasting side of the framework is the use of large language models (LLMs) for spatial relation extraction. We formulate prompts that leverage the LLMs' understanding of the intended task, curate a list of spatial relations representing the most recurrent proximity or vicinity terms, and extract semantic triples for the top five place names appearing in the corpus. We compare the extraction capabilities of three benchmark LLMs on a scholarly significant historical archive, demonstrating their potential in a challenging and interdisciplinary research problem. Finally, the network comprising the semantic triples is enhanced by incorporating a gazetteer-based classification of the objects involved, thus improving their spatial profiling.
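
As an illustration of the semantic-triple output the abstract describes, a minimal sketch using the rdflib library (assumed tooling, not the authors' pipeline; the namespace and example places are hypothetical):

from rdflib import Graph, Literal, Namespace

LAKE = Namespace("http://example.org/lakedistrict/")   # hypothetical namespace
g = Graph()
g.bind("lake", LAKE)

# One extracted triple: (Keswick, near, Derwentwater)
g.add((LAKE.Keswick, LAKE.near, LAKE.Derwentwater))
# Gazetteer-based classification of the objects involved
g.add((LAKE.Keswick, LAKE.placeType, Literal("town")))
g.add((LAKE.Derwentwater, LAKE.placeType, Literal("lake")))

for s, p, o in g.triples((None, LAKE.near, None)):
    print(s, "near", o)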

Cite as

Erum Haris, Anthony G. Cohn, and John G. Stell. Semantic Perspectives on the Lake District Writing: Spatial Ontology Modeling and Relation Extraction for Deeper Insights. In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{haris_et_al:LIPIcs.COSIT.2024.11,
  author =	{Haris, Erum and Cohn, Anthony G. and Stell, John G.},
  title =	{{Semantic Perspectives on the Lake District Writing: Spatial Ontology Modeling and Relation Extraction for Deeper Insights}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.11},
  URN =		{urn:nbn:de:0030-drops-208268},
  doi =		{10.4230/LIPIcs.COSIT.2024.11},
  annote =	{Keywords: spatial humanities, spatial narratives, ontology, large language models}
}
Document
Combining Constraint Programming Reasoning with Large Language Model Predictions

Authors: Florian Régin, Elisabetta De Maria, and Alexandre Bonlarron

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP’s struggle with implementing "meaning" and ML’s difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) using LLM-generated domains. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
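
A conceptual sketch of this division of labor (this is not GenCP itself; the LLM is stubbed out and all names are hypothetical): variable domains are generated on the fly by the language model, and structural constraints prune them before the search deepens.

def llm_next_words(prefix):
    """Stand-in for an LLM call: return candidate next words for a prefix."""
    table = {(): ["the"], ("the",): ["quick", "lazy"],
             ("the", "quick"): ["brown", "fox"],
             ("the", "lazy"): ["dog"]}
    return table.get(tuple(prefix), [])

def constraint_ok(words):
    """Structural constraints handled by CP: distinct words, each <= 5 letters."""
    return len(set(words)) == len(words) and all(len(w) <= 5 for w in words)

def solve(prefix, target_len):
    """On-the-fly search: expand with LLM-generated domains, prune with constraints."""
    if len(prefix) == target_len:
        return prefix
    for w in llm_next_words(prefix):
        if constraint_ok(prefix + [w]):
            found = solve(prefix + [w], target_len)
            if found:
                return found
    return None

print(solve([], 3))   # e.g. ['the', 'quick', 'brown']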

Cite as

Florian Régin, Elisabetta De Maria, and Alexandre Bonlarron. Combining Constraint Programming Reasoning with Large Language Model Predictions. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 25:1-25:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{regin_et_al:LIPIcs.CP.2024.25,
  author =	{R\'{e}gin, Florian and De Maria, Elisabetta and Bonlarron, Alexandre},
  title =	{{Combining Constraint Programming Reasoning with Large Language Model Predictions}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{25:1--25:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.25},
  URN =		{urn:nbn:de:0030-drops-207109},
  doi =		{10.4230/LIPIcs.CP.2024.25},
  annote =	{Keywords: Solver and Tools, ML-augmented CP, Constrained Text Generation, ML alongside CO}
}
Document
Quantum Circuit Mapping Based on Incremental and Parallel SAT Solving

Authors: Jiong Yang, Yaroslav A. Kharkov, Yunong Shi, Marijn J. H. Heule, and Bruno Dutertre

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Quantum Computing (QC) is a new computational paradigm that promises significant speedup over classical computing in various domains. However, near-term QC faces numerous challenges, including limited qubit connectivity and noisy quantum operations. To address the qubit connectivity constraint, circuit mapping is required for executing quantum circuits on quantum computers. This process involves performing initial qubit placement and using quantum SWAP operations to relocate non-adjacent qubits for nearest-neighbor interaction. Reducing the SWAP count in circuit mapping is essential for improving the success rate of quantum circuit execution, as SWAPs are costly and error-prone. In this work, we introduce a novel circuit mapping method that combines incremental and parallel solving for Boolean Satisfiability (SAT). We present an innovative SAT encoding for circuit mapping problems, which significantly improves solver-based mapping methods and provides a smooth trade-off between compilation quality and compilation time. Through comprehensive benchmarking of 78 instances covering 3 quantum algorithms on 2 distinct quantum computer topologies, we demonstrate that our method is 26× faster than state-of-the-art solver-based methods, reducing the compilation time from hours to minutes for important quantum applications. Our method also surpasses the existing heuristic algorithm by 26% in SWAP count.
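
The incremental-solving pattern the paper exploits can be shown with the python-sat package; the clauses below are toy placeholders, not a circuit-mapping encoding:

from pysat.solvers import Glucose3

solver = Glucose3()
solver.add_clause([1, 2])        # x1 or x2
solver.add_clause([-1, 3])       # x1 -> x3

# Incremental use: solve under assumptions, then strengthen with more clauses
# instead of rebuilding the solver from scratch.
print(solver.solve(assumptions=[1]))    # True: x1 with x3 satisfies everything
solver.add_clause([-3])                 # strengthen the formula
print(solver.solve(assumptions=[1]))    # False: x1 now forces a conflict
print(solver.solve(assumptions=[2]))    # True: x2 alone is still satisfiable
solver.delete()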

Cite as

Jiong Yang, Yaroslav A. Kharkov, Yunong Shi, Marijn J. H. Heule, and Bruno Dutertre. Quantum Circuit Mapping Based on Incremental and Parallel SAT Solving. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{yang_et_al:LIPIcs.SAT.2024.29,
  author =	{Yang, Jiong and Kharkov, Yaroslav A. and Shi, Yunong and Heule, Marijn J. H. and Dutertre, Bruno},
  title =	{{Quantum Circuit Mapping Based on Incremental and Parallel SAT Solving}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.29},
  URN =		{urn:nbn:de:0030-drops-205517},
  doi =		{10.4230/LIPIcs.SAT.2024.29},
  annote =	{Keywords: Quantum computing, Quantum compilation, SAT solving, Incremental solving, Parallel solving}
}
Document
Buffered Streaming Edge Partitioning

Authors: Adil Chhabra, Marcelo Fonseca Faraj, Christian Schulz, and Daniel Seemaier

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Addressing the challenges of processing massive graphs, which are prevalent in diverse fields such as social, biological, and technical networks, we introduce HeiStreamE and FreightE, two innovative (buffered) streaming algorithms designed for efficient edge partitioning of large-scale graphs. HeiStreamE utilizes an adapted Split-and-Connect graph model and a Fennel-based multilevel partitioning scheme, while FreightE partitions a hypergraph representation of the input graph. Besides ensuring superior solution quality, these approaches also overcome the limitations of existing algorithms by maintaining a linear dependency on the graph size in both time and memory complexity, with no dependence on the number of blocks of the partition. Our comprehensive experimental analysis demonstrates that HeiStreamE outperforms current streaming algorithms and the re-streaming algorithm 2PS in partitioning quality (replication factor), and is more memory-efficient for real-world networks where the number of edges is far greater than the number of vertices. Further, FreightE is shown to produce fast and efficient partitions, particularly for higher numbers of partition blocks.
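
The replication factor used as the quality measure above has a short standard definition: the average number of blocks in which a vertex appears (lower is better). A minimal sketch of that definition (standard formula, not HeiStreamE code):

from collections import defaultdict

def replication_factor(edge_blocks):
    """edge_blocks: iterable of ((u, v), block_id) edge-partition assignments."""
    blocks_of = defaultdict(set)
    for (u, v), b in edge_blocks:
        blocks_of[u].add(b)
        blocks_of[v].add(b)
    return sum(len(bs) for bs in blocks_of.values()) / len(blocks_of)

assignment = [((0, 1), 0), ((1, 2), 0), ((2, 3), 1), ((3, 0), 1)]
print(replication_factor(assignment))   # vertices 0,2 appear in 2 blocks; 1,3 in 1 -> 1.5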

Cite as

Adil Chhabra, Marcelo Fonseca Faraj, Christian Schulz, and Daniel Seemaier. Buffered Streaming Edge Partitioning. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chhabra_et_al:LIPIcs.SEA.2024.5,
  author =	{Chhabra, Adil and Fonseca Faraj, Marcelo and Schulz, Christian and Seemaier, Daniel},
  title =	{{Buffered Streaming Edge Partitioning}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.5},
  URN =		{urn:nbn:de:0030-drops-203701},
  doi =		{10.4230/LIPIcs.SEA.2024.5},
  annote =	{Keywords: graph partitioning, edge partitioning, streaming, online, buffered partitioning}
}
Document
Tighter Worst-Case Response Time Bounds for Jitter-Based Self-Suspension Analysis

Authors: Mario Günzel, Georg von der Brüggen, and Jian-Jia Chen

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Tasks are called self-suspending if they can yield their ready state (specifically, releasing the processor while having the highest priority) despite being incomplete, for instance, to offload computation to an external device or when waiting on access rights for shared resources or data. This self-suspending behavior requires special treatment when applying analytical results to compute worst-case response time bounds. One typical treatment is modeling self-suspension as release jitter in a so-called jitter-based analysis. The state of the art, when considering task-level fixed-priority scheduling, individually quantifies the jitter term of each higher-priority task by its worst-case response time minus its worst-case execution time. This work tightens the jitter term by taking the execution behavior of the other higher-priority tasks into account. Our improved jitter-based analysis analytically dominates the previous jitter-based analysis. Moreover, an evaluation on synthetically generated sporadic tasks demonstrates that this jitter term results in tighter worst-case response time bounds for self-suspending tasks. We observe an improvement for up to 55.89% of the tasksets compared to the previous jitter-based analysis.
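
For context, a sketch of the classical jitter-based recurrence that this work tightens (the baseline analysis described above, not the paper's improved jitter term): R_i = C_i + Σ_{j ∈ hp(i)} ⌈(R_i + J_j)/T_j⌉ ⋅ C_j, with J_j = R_j − C_j modeling task j's self-suspension as release jitter.

import math

def wcrt_with_jitter(C, T):
    """C, T: lists of WCETs and periods, tasks indexed by decreasing priority.
    Returns worst-case response time bounds, or None if the fixed point
    exceeds a task's period (implicit-deadline miss in this simple check)."""
    R, J = [], []
    for i in range(len(C)):
        r = C[i]
        while True:
            nxt = C[i] + sum(math.ceil((r + J[j]) / T[j]) * C[j]
                             for j in range(i))
            if nxt == r:
                break
            if nxt > T[i]:
                return None
            r = nxt
        R.append(r)
        J.append(r - C[i])   # jitter term fed into lower-priority analyses
    return R

print(wcrt_with_jitter(C=[1, 2, 3], T=[5, 10, 20]))   # [1, 3, 7]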

Cite as

Mario Günzel, Georg von der Brüggen, and Jian-Jia Chen. Tighter Worst-Case Response Time Bounds for Jitter-Based Self-Suspension Analysis. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 4:1-4:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gunzel_et_al:LIPIcs.ECRTS.2024.4,
  author =	{G\"{u}nzel, Mario and von der Br\"{u}ggen, Georg and Chen, Jian-Jia},
  title =	{{Tighter Worst-Case Response Time Bounds for Jitter-Based Self-Suspension Analysis}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{4:1--4:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.4},
  URN =		{urn:nbn:de:0030-drops-203074},
  doi =		{10.4230/LIPIcs.ECRTS.2024.4},
  annote =	{Keywords: Worst-Case Response Time, WCRT, Jitter, Self-Suspension, Analysis}
}
Document
Optimizing Per-Core Priorities to Minimize End-To-End Latencies

Authors: Francesco Paladino, Alessandro Biondi, Enrico Bini, and Paolo Pazzaglia

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Logical Execution Time (LET) allows decoupling the schedule of real-time periodic tasks from their communication, with the advantage of isolating the communication pattern from the variability of the schedule. However, when such tasks are organized in chains, the use of LET at the task level does not necessarily transfer the same LET properties to the chain level. In this paper, we extend a LET-like model from tasks to chains spanning multiple cores. We leverage the resulting constant-latency chains to optimize per-core priority assignment. Finally, we provide a set of heuristic algorithms that are compared in a large-scale experimental evaluation.
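
A small sketch of the standard LET chain semantics this builds on (assuming each task reads inputs at the start and publishes outputs at the end of its period; this is not the paper's extended chain model, and the periods are made-up numbers):

import math

def let_chain_latency(periods, t0):
    """End-to-end latency of data arriving at time t0 through a chain of LET tasks."""
    t = t0
    for T in periods:
        read = math.ceil(t / T) * T   # next read point of this task
        t = read + T                  # output published at the end of the period
    return t - t0

periods = [2, 5, 10]
worst = max(let_chain_latency(periods, t0 / 10) for t0 in range(0, 1000))
print(worst)   # empirical worst case over a sampled time window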

Cite as

Francesco Paladino, Alessandro Biondi, Enrico Bini, and Paolo Pazzaglia. Optimizing Per-Core Priorities to Minimize End-To-End Latencies. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 6:1-6:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{paladino_et_al:LIPIcs.ECRTS.2024.6,
  author =	{Paladino, Francesco and Biondi, Alessandro and Bini, Enrico and Pazzaglia, Paolo},
  title =	{{Optimizing Per-Core Priorities to Minimize End-To-End Latencies}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{6:1--6:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.6},
  URN =		{urn:nbn:de:0030-drops-203094},
  doi =		{10.4230/LIPIcs.ECRTS.2024.6},
  annote =	{Keywords: Cause-Effect Chains, Logical Execution Time, End-to-End Latency, Design Optimization, Task Priorities, Data Age, Reaction Time}
}
Document
DeepTrust^RT: Confidential Deep Neural Inference Meets Real-Time!

Authors: Mohammad Fakhruddin Babar and Monowar Hasan

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Deep Neural Networks (DNNs) are becoming common in "learning-enabled" time-critical applications such as autonomous driving and robotics. One approach to protect DNN inference from adversarial actions and preserve model privacy/confidentiality is to execute them within trusted enclaves available in modern processors. However, running DNN inference inside limited-capacity enclaves while ensuring timing guarantees is challenging due to (a) the large size of DNN workloads and (b) the extra switching between "normal" and "trusted" execution modes. This paper introduces new time-aware scheduling schemes - DeepTrust^RT - to securely execute deep neural inferences for learning-enabled real-time systems. We first propose a variant of EDF (called DeepTrust^RT-LW) that slices each DNN layer and runs the slices sequentially in the enclave. However, due to the extra context switch overheads of individual layer slices, we further introduce a novel layer fusion technique (named DeepTrust^RT-FUSION). Our proposed scheme provides hard real-time guarantees by fusing multiple layers of DNN workloads from multiple tasks, allowing them to fit and run concurrently within the enclaves while preserving those guarantees. We implemented and tested the DeepTrust^RT ideas on the Raspberry Pi platform running OP-TEE+DarkNet-TZ DNN APIs and three DNN workloads (AlexNet-squeezed, Tiny Darknet, YOLOv3-tiny). Compared to the layer-wise partitioning approach (DeepTrust^RT-LW), DeepTrust^RT-FUSION can schedule up to 3x more tasksets and reduce context switches by up to 11.12x. We further demonstrate the efficacy of DeepTrust^RT using a flight controller (ArduPilot) case study and find that DeepTrust^RT-FUSION retains real-time guarantees where DeepTrust^RT-LW becomes unschedulable.
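
The layer-fusion idea can be sketched as capacity-driven grouping (conceptual only; the paper's scheme also reasons about deadlines and multiple tasks, and the layer sizes and enclave capacity below are made-up numbers):

def fuse_layers(layer_sizes_kb, enclave_capacity_kb):
    """Greedily group consecutive DNN layers so each fused group fits the
    enclave; fewer groups means fewer normal/trusted context switches."""
    groups, current, used = [], [], 0
    for i, size in enumerate(layer_sizes_kb):
        if size > enclave_capacity_kb:
            raise ValueError(f"layer {i} alone exceeds enclave capacity")
        if used + size > enclave_capacity_kb:
            groups.append(current)
            current, used = [], 0
        current.append(i)
        used += size
    if current:
        groups.append(current)
    return groups

# 7 layers, 512 KB enclave: 3 fused groups instead of 7 enclave entries.
print(fuse_layers([200, 150, 180, 90, 300, 100, 60], 512))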

Cite as

Mohammad Fakhruddin Babar and Monowar Hasan. DeepTrust^RT: Confidential Deep Neural Inference Meets Real-Time!. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 13:1-13:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{babar_et_al:LIPIcs.ECRTS.2024.13,
  author =	{Babar, Mohammad Fakhruddin and Hasan, Monowar},
  title =	{{DeepTrust^RT: Confidential Deep Neural Inference Meets Real-Time!}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{13:1--13:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.13},
  URN =		{urn:nbn:de:0030-drops-203161},
  doi =		{10.4230/LIPIcs.ECRTS.2024.13},
  annote =	{Keywords: DNN, TrustZone, Real-Time Systems}
}
Document
GCAPS: GPU Context-Aware Preemptive Priority-Based Scheduling for Real-Time Tasks

Authors: Yidi Wang, Cong Liu, Daniel Wong, and Hyoseung Kim

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Scheduling real-time tasks that utilize GPUs with analyzable guarantees poses a significant challenge due to the intricate interaction between CPU and GPU resources, as well as the complex GPU hardware and software stack. While much research has been conducted in the real-time research community, several limitations persist, including the absence or limited availability of GPU-level preemption, extended blocking times, and/or the need for extensive modifications to program code. In this paper, we propose GCAPS, a GPU Context-Aware Preemptive Scheduling approach for real-time GPU tasks. Our approach exerts control over GPU context scheduling at the device driver level and enables preemption of GPU execution based on task priorities by simply adding one-line macros to GPU segment boundaries. In addition, we provide a comprehensive response time analysis of GPU-using tasks for both our proposed approach as well as the default Nvidia GPU driver scheduling that follows a work-conserving round-robin policy. Through empirical evaluations and case studies, we demonstrate the effectiveness of the proposed approaches in improving taskset schedulability and response time. The results highlight significant improvements over prior work as well as the default scheduling approach, with up to 40% higher schedulability, while also achieving predictable worst-case behavior on Nvidia Jetson embedded platforms.

Cite as

Yidi Wang, Cong Liu, Daniel Wong, and Hyoseung Kim. GCAPS: GPU Context-Aware Preemptive Priority-Based Scheduling for Real-Time Tasks. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{wang_et_al:LIPIcs.ECRTS.2024.14,
  author =	{Wang, Yidi and Liu, Cong and Wong, Daniel and Kim, Hyoseung},
  title =	{{GCAPS: GPU Context-Aware Preemptive Priority-Based Scheduling for Real-Time Tasks}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.14},
  URN =		{urn:nbn:de:0030-drops-203170},
  doi =		{10.4230/LIPIcs.ECRTS.2024.14},
  annote =	{Keywords: Real-time systems, GPU scheduling}
}
Document
Extended Abstract
Detecting and Quantifying Crypto Wash Trading (Extended Abstract)

Authors: Lin William Cong, Xi Li, Ke Tang, and Yang Yang

Published in: OASIcs, Volume 97, 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)


Abstract
We introduce systematic tests exploiting robust statistical and behavioral patterns in trading to detect fake transactions on 29 cryptocurrency exchanges. Regulated exchanges feature patterns consistently observed in financial markets and nature; abnormal first-significant-digit distributions, size rounding, and transaction tail distributions on unregulated exchanges reveal rampant manipulations unlikely driven by strategy or exchange heterogeneity. We quantify the wash trading on each unregulated exchange, which averaged over 70% of the reported volume. We further document how these fabricated volumes (trillions of dollars annually) improve exchange ranking, temporarily distort prices, and relate to exchange characteristics (e.g., age and userbase), market conditions, and regulation.
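
One of the statistical fingerprints mentioned above, the first-significant-digit distribution, is easy to operationalize. A sketch (illustrative, not the paper's exact test): under Benford's law a first digit d occurs with probability log10(1 + 1/d), and a chi-squared statistic over the observed digit counts flags large deviations; the synthetic data below are made up.

import math
import random
from collections import Counter

def first_digit(x: float) -> int:
    s = f"{abs(x):.15e}"          # scientific notation: first char is the digit
    return int(s[0])

def benford_chi2(values):
    """Pearson chi-squared statistic of first-digit counts vs. Benford's law."""
    counts = Counter(first_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2    # compare against a chi-squared critical value, 8 d.o.f.

random.seed(1)
genuine = [math.exp(random.uniform(0, 10)) for _ in range(5000)]  # ~Benford
faked = [random.uniform(1, 9.99) for _ in range(5000)]            # uniform digits
print(benford_chi2(genuine), benford_chi2(faked))                 # small vs. huge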

Cite as

Lin William Cong, Xi Li, Ke Tang, and Yang Yang. Detecting and Quantifying Crypto Wash Trading (Extended Abstract). In 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021). Open Access Series in Informatics (OASIcs), Volume 97, pp. 10:1-10:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cong_et_al:OASIcs.Tokenomics.2021.10,
  author =	{Cong, Lin William and Li, Xi and Tang, Ke and Yang, Yang},
  title =	{{Detecting and Quantifying Crypto Wash Trading}},
  booktitle =	{3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)},
  pages =	{10:1--10:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-220-4},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{97},
  editor =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Tokenomics.2021.10},
  URN =		{urn:nbn:de:0030-drops-159072},
  doi =		{10.4230/OASIcs.Tokenomics.2021.10},
  annote =	{Keywords: Bitcoin, Cryptocurrency, FinTech, Forensic Finance, Fraud Detection, Regulation}
}
Document
Optimal Dataflow Scheduling on a Heterogeneous Multiprocessor With Reduced Response Time Bounds

Authors: Zheng Dong, Cong Liu, Alan Gatherer, Lee McFearin, Peter Yan, and James H. Anderson

Published in: LIPIcs, Volume 76, 29th Euromicro Conference on Real-Time Systems (ECRTS 2017)


Abstract
Heterogeneous computing platforms with multiple types of computing resources have been widely used in many industrial systems to process dataflow tasks with pre-defined affinity of tasks to subgroups of resources. For many dataflow workloads with soft real-time requirements, guaranteeing fast and bounded response times is often the objective. This paper presents a new set of analysis techniques showing that a classical real-time scheduler, namely earliest-deadline-first (EDF), is able to support dataflow tasks scheduled on such heterogeneous platforms with provably bounded response times while incurring no resource capacity loss, thus proving EDF to be an optimal solution for this scheduling problem. Experiments using synthetic workloads with widely varied parameters also demonstrate that the magnitude of the response time bounds yielded under the proposed analysis is reasonably small under all scenarios. Compared to state-of-the-art soft real-time analysis techniques, our test yields a 68% reduction in response time bounds on average. This work demonstrates the potential of applying EDF in practical industrial systems containing dataflow-based workloads that require guaranteed bounded response times.
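
As a toy illustration of bounded response times under EDF (a discrete-time uniprocessor simulation, not the paper's heterogeneous dataflow analysis; the task parameters are made up):

import heapq

def simulate_edf(tasks, horizon):
    """tasks: list of (period, wcet) with implicit deadlines. Returns the
    maximum observed response time per task over `horizon` time units."""
    ready = []                         # (abs_deadline, task_id, release, remaining)
    worst = [0] * len(tasks)
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t % T == 0:
                heapq.heappush(ready, (t + T, i, t, C))
        if ready:
            dl, i, rel, rem = heapq.heappop(ready)
            rem -= 1                   # run the earliest-deadline job for one unit
            if rem == 0:
                worst[i] = max(worst[i], t + 1 - rel)
            else:
                heapq.heappush(ready, (dl, i, rel, rem))
    return worst

print(simulate_edf([(4, 2), (6, 3)], horizon=600))   # bounded even at utilization 1.0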

Cite as

Zheng Dong, Cong Liu, Alan Gatherer, Lee McFearin, Peter Yan, and James H. Anderson. Optimal Dataflow Scheduling on a Heterogeneous Multiprocessor With Reduced Response Time Bounds. In 29th Euromicro Conference on Real-Time Systems (ECRTS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 76, pp. 15:1-15:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{dong_et_al:LIPIcs.ECRTS.2017.15,
  author =	{Dong, Zheng and Liu, Cong and Gatherer, Alan and McFearin, Lee and Yan, Peter and Anderson, James H.},
  title =	{{Optimal Dataflow Scheduling on a Heterogeneous Multiprocessor With Reduced Response Time Bounds}},
  booktitle =	{29th Euromicro Conference on Real-Time Systems (ECRTS 2017)},
  pages =	{15:1--15:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-037-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{76},
  editor =	{Bertogna, Marko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2017.15},
  URN =		{urn:nbn:de:0030-drops-71565},
  doi =		{10.4230/LIPIcs.ECRTS.2017.15},
  annote =	{Keywords: Real-time Scheduling, schedulability, heterogeneous multiprocessor}
}
