11 Search Results for "Feng, Xin"


Document
Periodic Timetabling: Travel Time vs. Regenerative Energy

Authors: Sven Jäger, Sarah Roth, and Anita Schöbel

Published in: OASIcs, Volume 123, 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)


Abstract
While it is important to provide passengers with attractive public transportation that allows short travel times, reducing the amount of energy used by the public transport system should also be a major concern. Electric trains can regenerate energy when braking, which can be used by a nearby accelerating train. Therefore, apart from the minimization of travel times, the maximization of brake-traction overlaps of nearby trains is an important objective in periodic timetabling. Recently, this has been studied in a model allowing small modifications of a nominal timetable. We investigate the problem of finding periodic timetables that are globally good in both objective functions. We show that the general problem is NP-hard, even when restricted to a single transfer station and only travel time is to be minimized, and give an algorithm with an additive error bound for maximizing the brake-traction overlap on this small network. Moreover, we identify special cases in which the problem is solvable in polynomial time. Finally, we demonstrate the trade-off between the two objective functions in an experimental study.
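To make the brake-traction objective concrete, the following is a minimal sketch (not the paper's algorithm) of measuring how long one train's periodic braking phase coincides with another train's periodic acceleration phase modulo the period T; the two-phase setup, names, and parameters are illustrative assumptions.

def periodic_overlap(brake_start, brake_len, accel_start, accel_len, T):
    """Overlap time per period of a braking and an accelerating phase,
    both repeating with period T (phase lengths assumed to be at most T)."""
    b0 = brake_start % T
    a0 = accel_start % T
    overlap = 0.0
    # The braking interval [b0, b0 + brake_len) lies within [0, 2T); the
    # periodic copies of the acceleration interval shifted by -T, 0, +T
    # cover every possible wrap-around alignment exactly once.
    for shift in (-T, 0.0, T):
        lo = max(b0, a0 + shift)
        hi = min(b0 + brake_len, a0 + shift + accel_len)
        overlap += max(0.0, hi - lo)
    return overlap

# Example: with a 60 s period, braking during [50, 65) wraps around and
# overlaps an acceleration phase during [0, 10) for 5 s per period.
print(periodic_overlap(50, 15, 0, 10, 60))  # 5.0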

Cite as

Sven Jäger, Sarah Roth, and Anita Schöbel. Periodic Timetabling: Travel Time vs. Regenerative Energy. In 24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024). Open Access Series in Informatics (OASIcs), Volume 123, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{jager_et_al:OASIcs.ATMOS.2024.10,
  author =	{J\"{a}ger, Sven and Roth, Sarah and Sch\"{o}bel, Anita},
  title =	{{Periodic Timetabling: Travel Time vs. Regenerative Energy}},
  booktitle =	{24th Symposium on Algorithmic Approaches for Transportation Modelling, Optimization, and Systems (ATMOS 2024)},
  pages =	{10:1--10:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-350-8},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{123},
  editor =	{Bouman, Paul C. and Kontogiannis, Spyros C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2024.10},
  URN =		{urn:nbn:de:0030-drops-211983},
  doi =		{10.4230/OASIcs.ATMOS.2024.10},
  annote =	{Keywords: periodic timetabling, regenerative braking}
}
Document
Transaction Fee Mechanism Design in a Post-MEV World

Authors: Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase "maximal extractable value (MEV)." We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: With active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as order flow auctions, block producer competition, trusted hardware, or cryptographic techniques. We then consider a more fine-grained model of block production that more accurately reflects current practice, in which we distinguish the roles of "searchers" (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and "proposers" (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an "MEV oracle" for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a TFM that is inspired by how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the TFM design space with searchers more generally, and design a mechanism that circumvents our impossibility results for TFMs without searchers. Our mechanism (the "SAKA" mechanism) is incentive-compatible (for users, searchers, and the block producer), sybil-proof, and guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transaction sizes are small, no DSIC and sybil-proof deterministic TFM can guarantee more than 50% of the maximum-possible welfare.
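As a concrete reference point for the first-price baseline discussed in the abstract, here is a minimal, hypothetical sketch of selling each transaction to the highest-bidding searcher, who pays their bid. This is not the SAKA mechanism; all names and data structures are illustrative assumptions.

def first_price_per_tx(bids):
    """bids[tx][searcher] -> bid. Returns tx -> (winning searcher, payment)."""
    allocation = {}
    for tx, tx_bids in bids.items():
        if not tx_bids:
            continue  # no searcher valued this transaction
        winner = max(tx_bids, key=tx_bids.get)
        allocation[tx] = (winner, tx_bids[winner])  # first price: pay your bid
    return allocation

# Example: two searchers compete for the right to back-run one transaction.
print(first_price_per_tx({"tx1": {"searcherA": 0.8, "searcherB": 1.1}}))
# {'tx1': ('searcherB', 1.1)}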

Cite as

Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden. Transaction Fee Mechanism Design in a Post-MEV World. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 29:1-29:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bahrani_et_al:LIPIcs.AFT.2024.29,
  author =	{Bahrani, Maryam and Garimidi, Pranav and Roughgarden, Tim},
  title =	{{Transaction Fee Mechanism Design in a Post-MEV World}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{29:1--29:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.29},
  URN =		{urn:nbn:de:0030-drops-209658},
  doi =		{10.4230/LIPIcs.AFT.2024.29},
  annote =	{Keywords: MEV, Transaction Fee Mechanisms, Auctions}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets respectively.
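To illustrate the kind of structured prompt the pipeline describes, here is a hypothetical sketch that pairs a symbolic balance-change expression with a documentation claim; the prompt wording, the constant-product example, and the llm_complete() call are assumptions, not DeFiAligner's actual code.

def build_alignment_prompt(symbolic_expr, doc_passage):
    return (
        "You are auditing a DeFi smart contract.\n\n"
        "Symbolic expression for the token balance change:\n"
        f"{symbolic_expr}\n\n"
        "Documentation claim:\n"
        f"{doc_passage}\n\n"
        "Do the expression and the claim describe the same behaviour? "
        "Answer CONSISTENT or INCONSISTENT, then explain briefly."
    )

prompt = build_alignment_prompt(
    "amount_out = amount_in * 997 * reserve_out"
    " / (reserve_in * 1000 + amount_in * 997)",
    "The pool charges a 0.3% fee on every swap.",
)
# verdict = llm_complete(prompt)  # hypothetical LLM client call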

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
A CFL-Reachability Formulation of Callsite-Sensitive Pointer Analysis with Built-In On-The-Fly Call Graph Construction

Authors: Dongjie He, Jingbo Lu, and Jingling Xue

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
In object-oriented languages, the traditional CFL-reachability formulation for k-callsite-sensitive pointer analysis (kCFA) focuses on modeling field accesses and calling contexts, but it relies on a separate algorithm for call graph construction. This division can result in a loss of precision in kCFA, a problem that persists even when using the most precise call graphs, whether pre-constructed or generated on the fly. Moreover, pre-analyses based on this framework aiming to improve the efficiency of kCFA may inadvertently reduce its precision, due to the framework’s lack of native call graph construction, essential for precise analysis. Addressing this gap, this paper introduces a novel CFL-reachability formulation of kCFA for Java, uniquely integrating on-the-fly call graph construction. This advancement not only addresses the precision loss inherent in the traditional CFL-reachability-based approach but also enhances its overall applicability. In a significant secondary contribution, we present the first precision-preserving pre-analysis to accelerate kCFA. This pre-analysis leverages selective context sensitivity to improve the efficiency of kCFA without sacrificing its precision. Collectively, these contributions represent a substantial step forward in pointer analysis, offering both theoretical and practical advancements that could benefit future developments in the field.
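For readers unfamiliar with CFL-reachability, the following toy worklist closure computes Dyck (matched-parenthesis) reachability, the core mechanism that lets a load of field f match only stores to f. It is an illustrative sketch only, far simpler than the paper's kCFA formulation with built-in call graph construction.

from collections import deque

def dyck_reachability(nodes, edges):
    """Matched-parenthesis reachability. `edges` holds triples (u, label, v)
    where label is 'e' (plain edge), ('o', f) opening field f, or ('c', f)
    closing field f. Returns all node pairs joined by a matched path."""
    M = {(n, n) for n in nodes}                    # rule M -> epsilon
    work = deque(M)
    opens, closes = {}, {}
    for u, l, v in edges:
        if isinstance(l, tuple) and l[0] == 'o':
            opens.setdefault(v, []).append((u, l[1]))
        elif isinstance(l, tuple) and l[0] == 'c':
            closes.setdefault(u, []).append((v, l[1]))
    plain = [(u, v) for u, l, v in edges if l == 'e']
    while work:
        a, b = work.popleft()
        found = set()
        found |= {(a, d) for (c, d) in M if c == b}      # rule M -> M M
        found |= {(c, b) for (c, d) in M if d == a}
        found |= {(u, b) for (u, v) in plain if v == a}  # rule M -> e M
        found |= {(a, v) for (u, v) in plain if u == b}  # rule M -> M e
        for (u, f) in opens.get(a, []):                  # rule M -> (f M )f
            for (v, g) in closes.get(b, []):
                if f == g:
                    found.add((u, v))
        for e in found - M:
            M.add(e)
            work.append(e)
    return M

# Example: a store opens field f, a load closes it, so the object flows through.
edges = {('obj', ('o', 'f'), 'x_f'), ('x_f', 'e', 'w_f'), ('w_f', ('c', 'f'), 'z')}
nodes = {'obj', 'x_f', 'w_f', 'z'}
print(('obj', 'z') in dyck_reachability(nodes, edges))  # True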

Cite as

Dongjie He, Jingbo Lu, and Jingling Xue. A CFL-Reachability Formulation of Callsite-Sensitive Pointer Analysis with Built-In On-The-Fly Call Graph Construction. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 18:1-18:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{he_et_al:LIPIcs.ECOOP.2024.18,
  author =	{He, Dongjie and Lu, Jingbo and Xue, Jingling},
  title =	{{A CFL-Reachability Formulation of Callsite-Sensitive Pointer Analysis with Built-In On-The-Fly Call Graph Construction}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{18:1--18:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.18},
  URN =		{urn:nbn:de:0030-drops-208674},
  doi =		{10.4230/LIPIcs.ECOOP.2024.18},
  annote =	{Keywords: Pointer Analysis, CFL Reachability, Call Graph Construction}
}
Document
Java Bytecode Normalization for Code Similarity Analysis

Authors: Stefan Schott, Serena Elisa Ponta, Wolfram Fischer, Jonas Klauke, and Eric Bodden

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Analyzing the similarity of two code fragments has many applications, including code clone, vulnerability and plagiarism detection. Most existing approaches for similarity analysis work on source code. However, in scenarios like plagiarism detection, copyright violation detection, or Software Bill of Materials creation, source code is often not available, and thus similarity analysis has to be performed on binary formats. Java bytecode is a binary format executable by the Java Virtual Machine and obtained from the compilation of Java source code. Performing similarity detection on bytecode is challenging because different compilers can compile the same source code to syntactically vastly different bytecode. In this work we assess to what extent one can nonetheless enable similarity detection by bytecode normalization, a procedure to transform Java bytecode into a representation that is identical for the same original source code, irrespective of the Java compiler and Java version used during compilation. Our manual study revealed 16 classes of compilation differences that various compilation environments may induce. Based on these findings, we implemented bytecode normalization in a tool, jNorm. It uses Jimple as intermediate representation, applies common code optimizations and transforms all classes of compilation difference to a normalized form, thus achieving a representation of the bytecode that is identical despite different compilation environments. Our evaluation, performed on more than 300 popular Java projects, shows that solely the act of incrementing a compiler version may cause differences in 46% of all resulting bytecode files. By applying bytecode normalization, one can remove more than 99% of these differences, thus acting as a crucial enabler for subsequent applications of bytecode similarity analysis.
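To convey the flavour of normalization (the real jNorm works on the Jimple IR, not on text), here is an illustrative sketch that strips two compiler-dependent details, debug line numbers and local-variable slot numbers, from a toy textual instruction listing and fingerprints the result. The toy format and all regexes are assumptions.

import hashlib
import re

def normalize(listing):
    """Canonicalize a toy, textual bytecode listing."""
    out, slot_map = [], {}
    for line in listing.splitlines():
        line = re.sub(r'^\s*LINENUMBER \d+.*$', '', line)  # drop debug info
        # renumber local-variable slots in first-use order
        for slot in re.findall(r'\b[ail](?:load|store)_(\d+)', line):
            slot_map.setdefault(slot, str(len(slot_map)))
        line = re.sub(r'\b([ail](?:load|store)_)(\d+)',
                      lambda m: m.group(1) + slot_map[m.group(2)], line)
        if line.strip():
            out.append(line.strip())
    return '\n'.join(out)

def fingerprint(listing):
    return hashlib.sha256(normalize(listing).encode()).hexdigest()

# Two "compilations" that differ only in debug info and slot numbering:
a = "LINENUMBER 12\niload_2\nistore_3\n"
b = "iload_1\nistore_2\n"
print(fingerprint(a) == fingerprint(b))  # True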

Cite as

Stefan Schott, Serena Elisa Ponta, Wolfram Fischer, Jonas Klauke, and Eric Bodden. Java Bytecode Normalization for Code Similarity Analysis. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 37:1-37:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schott_et_al:LIPIcs.ECOOP.2024.37,
  author =	{Schott, Stefan and Ponta, Serena Elisa and Fischer, Wolfram and Klauke, Jonas and Bodden, Eric},
  title =	{{Java Bytecode Normalization for Code Similarity Analysis}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{37:1--37:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.37},
  URN =		{urn:nbn:de:0030-drops-208865},
  doi =		{10.4230/LIPIcs.ECOOP.2024.37},
  annote =	{Keywords: Bytecode, Java Compiler, Code Similarity Analysis}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on 2 datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
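A minimal sketch of the retrieval-augmented example selection idea, assuming some sentence embedder is available as a black box: pick the k library examples closest to the new problem description in embedding space and prepend them as in-context examples. The names and prompt layout are illustrative assumptions, not the paper's pipeline.

import numpy as np

def select_examples(query_vec, example_vecs, k=3):
    """Indices of the k examples most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    E = example_vecs / np.linalg.norm(example_vecs, axis=1, keepdims=True)
    return np.argsort(E @ q)[::-1][:k]

def build_prompt(problem, shots):
    """shots: list of (description, model) pairs retrieved above."""
    examples = "\n\n".join(f"Problem: {p}\nCP model:\n{m}" for p, m in shots)
    return f"{examples}\n\nProblem: {problem}\nCP model:\n"

# usage sketch, with embed() an assumed embedding function:
# vecs = embed(library_descriptions); idx = select_examples(embed(new_problem), vecs)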

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
Structure-Guided Local Improvement for Maximum Satisfiability

Authors: André Schidler and Stefan Szeider

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
The enhanced performance of today’s MaxSAT solvers has elevated their appeal for many large-scale applications, notably in software analysis and computer-aided design. Our research delves into refining anytime MaxSAT solving by repeatedly identifying smaller subinstances, chosen based on the graphical structure of the instance, and solving them with an exact solver. We investigate various strategies to pinpoint these subinstances. This structure-guided selection of subinstances provides an exact solver with a high potential for improving the current solution. Our exhaustive experimental analyses contrast our methodology, as instantiated in our tool MaxSLIM, with previous studies and benchmark it against leading-edge MaxSAT solvers.
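Schematically, the anytime loop the abstract describes looks as follows; select_subinstance() and exact_solve() stand in for the paper's structure-guided selection and an exact MaxSAT oracle, so this is a hedged outline rather than MaxSLIM itself.

import random

def lns_maxsat(instance, init_solution, cost,
               select_subinstance, exact_solve, iterations=100, seed=0):
    """Generic large-neighborhood search: carve out a subinstance around the
    incumbent, solve it exactly, keep the result if the total cost drops."""
    rng = random.Random(seed)
    best = init_solution
    for _ in range(iterations):
        sub = select_subinstance(instance, best, rng)  # structure-guided pick
        candidate = exact_solve(sub, fixed=best)       # rest of assignment fixed
        if cost(instance, candidate) < cost(instance, best):
            best = candidate                           # anytime improvement
    return best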

Cite as

André Schidler and Stefan Szeider. Structure-Guided Local Improvement for Maximum Satisfiability. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 26:1-26:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schidler_et_al:LIPIcs.CP.2024.26,
  author =	{Schidler, Andr\'{e} and Szeider, Stefan},
  title =	{{Structure-Guided Local Improvement for Maximum Satisfiability}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{26:1--26:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.26},
  URN =		{urn:nbn:de:0030-drops-207112},
  doi =		{10.4230/LIPIcs.CP.2024.26},
  annote =	{Keywords: maximum satisfiability, large neighborhood search (LNS), SAT-based local improvement (SLIM), incomplete MaxSAT, graphical structure, metaheuristic}
}
Document
Engineering A* Search for the Flip Distance of Plane Triangulations

Authors: Philip Mayer and Petra Mutzel

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
The flip distance for two triangulations of a point set is defined as the smallest number of edge flips needed to transform one triangulation into another, where an edge flip is the act of replacing an edge of a triangulation by a different edge such that the result remains a triangulation. We adapt and engineer a sophisticated A* search algorithm acting on the so-called flip graph. In particular, we prove that previously proposed lower bounds for the flip distance form consistent heuristics for A* and show that they can be computed efficiently using dynamic algorithms. As an alternative approach, we present an integer linear program (ILP) for the flip distance problem. We experimentally evaluate our approaches on a new real-world benchmark data set based on an application in geodesy, namely sea surface reconstruction. Our evaluation reveals that A* search consistently outperforms our ILP formulation as well as a naive baseline, which is bidirectional breadth-first search. In particular, the runtime of our approach improves upon the baseline by more than two orders of magnitude. Furthermore, our A* search successfully solves most of the considered sea surface instances with up to 41 points. This is a substantial improvement compared to the baseline, which struggles with subsets of the real-world data of size 25. Lastly, to allow the consideration of global sea level data, we developed a decomposition-based heuristic for the flip distance. In our experiments it yields optimal flip distance values for most of the considered sea level data and it can be applied to large data sets due to its fast runtime.
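For orientation, a generic A* skeleton of the kind the paper engineers: nodes are triangulations, neighbours(u) enumerates single edge flips, every flip costs 1, and h must be a consistent lower bound on the remaining flip distance. The placeholders are assumptions and do not reproduce the authors' engineered implementation.

import heapq
from itertools import count

def a_star(start, goal, neighbours, h):
    """Unit-cost A*; with a consistent h the returned value is optimal."""
    tie = count()                      # breaks ties without comparing states
    open_heap = [(h(start, goal), next(tie), 0, start)]
    g = {start: 0}
    while open_heap:
        _, _, g_u, u = heapq.heappop(open_heap)
        if u == goal:
            return g_u                 # flip distance reached
        if g_u > g[u]:
            continue                   # stale heap entry
        for v in neighbours(u):
            if g_u + 1 < g.get(v, float('inf')):
                g[v] = g_u + 1         # every edge flip costs exactly 1
                heapq.heappush(open_heap,
                               (g[v] + h(v, goal), next(tie), g[v], v))
    return None                        # goal not reachable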

Cite as

Philip Mayer and Petra Mutzel. Engineering A* Search for the Flip Distance of Plane Triangulations. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mayer_et_al:LIPIcs.SEA.2024.23,
  author =	{Mayer, Philip and Mutzel, Petra},
  title =	{{Engineering A* Search for the Flip Distance of Plane Triangulations}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.23},
  URN =		{urn:nbn:de:0030-drops-203887},
  doi =		{10.4230/LIPIcs.SEA.2024.23},
  annote =	{Keywords: Computational Geometry, Triangulations, Flip Distance, A-star Search, Integer Linear Programming}
}
Document
Improved Cut Strategy for Tensor Network Contraction Orders

Authors: Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, and Joachim Giesen

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
In the field of quantum computing, simulating quantum systems on classical computers is crucial. Tensor networks are fundamental in simulating quantum systems. A tensor network is a collection of tensors that need to be contracted into a result tensor. Tensor contraction is a generalization of matrix multiplication to higher-order tensors. The contractions can be performed in different orders, and the order has a significant impact on the number of floating point operations (flops) needed to get the result tensor. It is known that finding an optimal contraction order is NP-hard. The current state-of-the-art approach for finding efficient contraction orders is to combine graph partitioning with a greedy strategy. Although heavily used in practice, the current approach ignores so-called free indices, chooses node weights without regarding previous computations, and requires numerous hyperparameters that need to be tuned at runtime. In this paper, we address these shortcomings by developing a novel graph cut strategy. The proposed modifications yield contraction orders that significantly reduce the number of flops in the tensor contractions compared to the current state of the art. Moreover, by removing the need for hyperparameter tuning at runtime, our approach converges to an efficient solution faster, which reduces the required optimization time by at least an order of magnitude.
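A tiny worked example of why the order matters: under the standard cost model, a pairwise contraction costs (up to a constant) the product of the dimensions of all indices involved, and already on a three-tensor chain the two possible orders differ by orders of magnitude. Index names and sizes are illustrative; each shared index is assumed to appear in exactly two tensors.

from math import prod

def pair_cost(a, b, dims):
    """Flops (up to a constant) for contracting tensors with index sets a, b."""
    return prod(dims[i] for i in a | b)

def contract(a, b):
    """Shared indices are summed away; the rest survive."""
    return a ^ b  # symmetric difference

dims = {'i': 1000, 'j': 2, 'k': 1000, 'l': 2}
A, B, C = {'i', 'j'}, {'j', 'k'}, {'k', 'l'}

cost_left = pair_cost(A, B, dims) + pair_cost(contract(A, B), C, dims)
cost_right = pair_cost(B, C, dims) + pair_cost(A, contract(B, C), dims)
print(cost_left, cost_right)  # 4000000 vs 8000 flops for the same result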

Cite as

Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, and Joachim Giesen. Improved Cut Strategy for Tensor Network Contraction Orders. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{staudt_et_al:LIPIcs.SEA.2024.27,
  author =	{Staudt, Christoph and Blacher, Mark and Klaus, Julien and Lippmann, Farin and Giesen, Joachim},
  title =	{{Improved Cut Strategy for Tensor Network Contraction Orders}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.27},
  URN =		{urn:nbn:de:0030-drops-203924},
  doi =		{10.4230/LIPIcs.SEA.2024.27},
  annote =	{Keywords: tensor network, contraction order, graph partitioning, quantum simulation}
}
Document
Position
Grounding Stream Reasoning Research

Authors: Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2


Abstract
In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic. In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, considering either deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream. This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. It also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.

Cite as

Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer. Grounding Stream Reasoning Research. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 2:1-2:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{bonte_et_al:TGDK.2.1.2,
  author =	{Bonte, Pieter and Calbimonte, Jean-Paul and de Leng, Daniel and Dell'Aglio, Daniele and Della Valle, Emanuele and Eiter, Thomas and Giannini, Federico and Heintz, Fredrik and Schekotihin, Konstantin and Le-Phuoc, Danh and Mileo, Alessandra and Schneider, Patrik and Tommasini, Riccardo and Urbani, Jacopo and Ziffer, Giacomo},
  title =	{{Grounding Stream Reasoning Research}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:47},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.2},
  URN =		{urn:nbn:de:0030-drops-198597},
  doi =		{10.4230/TGDK.2.1.2},
  annote =	{Keywords: Stream Reasoning, Stream Processing, RDF streams, Streaming Linked Data, Continuous query processing, Temporal Logics, High-performance computing, Databases}
}
Document
Heterogeneous Skeleton for Summarizing Continuously Distributed Demand in a Region

Authors: Alan T. Murray, Xin Feng, and Ali Shokoufandeh

Published in: LIPIcs, Volume 114, 10th International Conference on Geographic Information Science (GIScience 2018)


Abstract
There has long been interest in the skeleton of a spatial object in GIScience. The reasons for this are many, as it has proven to be an extremely useful summary and explanatory representation of complex objects. While much research has focused on issues of computational complexity and efficiency in extracting the skeletal and medial axis representations as well as interpreting the final product, little attention has been paid to fundamental assumptions about the underlying object. This paper discusses the implied assumption of homogeneity associated with methods for deriving a skeleton. Further, it is demonstrated that addressing heterogeneity complicates both the interpretation and identification of a meaningful skeleton. The heterogeneous skeleton is introduced and formalized, along with a method for its identification. Application results are presented to illustrate the heterogeneous skeleton and provide a comparative contrast to homogeneity assumptions.
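As a point of reference for the homogeneity assumption the paper questions, here is a minimal sketch of the conventional skeleton: the medial axis of a binary region computed with scikit-image, which implicitly treats demand inside the region as uniform. The rectangular region is an illustrative assumption.

import numpy as np
from skimage.morphology import medial_axis

# Binary raster region; True marks cells inside the spatial object.
region = np.zeros((60, 100), dtype=bool)
region[10:50, 10:90] = True

# Classical (homogeneous) skeleton plus the distance transform it rides on.
skeleton, distance = medial_axis(region, return_distance=True)
print(skeleton.sum(), "skeleton cells; largest inscribed radius:", distance.max())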

Cite as

Alan T. Murray, Xin Feng, and Ali Shokoufandeh. Heterogeneous Skeleton for Summarizing Continuously Distributed Demand in a Region. In 10th International Conference on Geographic Information Science (GIScience 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 114, pp. 12:1-12:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{murray_et_al:LIPIcs.GISCIENCE.2018.12,
  author =	{Murray, Alan T. and Feng, Xin and Shokoufandeh, Ali},
  title =	{{Heterogeneous Skeleton for Summarizing Continuously Distributed Demand in a Region}},
  booktitle =	{10th International Conference on Geographic Information Science (GIScience 2018)},
  pages =	{12:1--12:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-083-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{114},
  editor =	{Winter, Stephan and Griffin, Amy and Sester, Monika},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GISCIENCE.2018.12},
  URN =		{urn:nbn:de:0030-drops-93400},
  doi =		{10.4230/LIPIcs.GISCIENCE.2018.12},
  annote =	{Keywords: Medial axis, Object center, Geographical summary, Spatial analytics}
}
  • Refine by Author
  • 1 Bahrani, Maryam
  • 1 Blacher, Mark
  • 1 Bodden, Eric
  • 1 Bonte, Pieter
  • 1 Calbimonte, Jean-Paul

  • Refine by Classification
  • 2 Security and privacy → Distributed systems security
  • 2 Theory of computation → Computational geometry
  • 2 Theory of computation → Constraint and logic programming
  • 1 Applied computing → Operations research
  • 1 Applied computing → Physics

  • Refine by Keyword
  • 2 Large Language Models
  • 1 A-star Search
  • 1 Auctions
  • 1 Bytecode
  • 1 CFL Reachability

  • Refine by Type
  • 11 document

  • Refine by Publication Year
  • 10 2024
  • 1 2018
