9 Search Results for "Zhang, Xue"


Document
Survey
Uncertainty Management in the Construction of Knowledge Graphs: A Survey

Authors: Lucas Jarnac, Yoan Chabot, and Miguel Couceiro

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
Knowledge Graphs (KGs) are a major asset for companies thanks to their great flexibility in data representation and their numerous applications, e.g., vocabulary sharing, Q&A, or recommendation systems. To build a KG, it is common practice to rely on automatic methods for extracting knowledge from various heterogeneous sources. However, in a noisy and uncertain world, knowledge may not be reliable, and conflicts between data sources may occur. Integrating unreliable data would directly impact the use of the KG; such conflicts must therefore be resolved. This can be done manually by selecting the best data to integrate, an approach that is highly accurate but costly and time-consuming. That is why recent efforts focus on automatic approaches, a challenging task since it requires handling the uncertainty of extracted knowledge throughout its integration into the KG. We survey state-of-the-art approaches in this direction and present constructions of both open and enterprise KGs. We then describe different knowledge extraction methods and discuss downstream tasks after knowledge acquisition, including KG completion using embedding models, knowledge alignment, and knowledge fusion, in order to address the problem of knowledge uncertainty in KG construction. We conclude with a discussion of the remaining challenges and perspectives for constructing a KG while taking uncertainty into account.
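One common instantiation of the knowledge-fusion step mentioned in this abstract is trust-weighted voting over conflicting triples. The sketch below illustrates the idea only; the triples, source names, and trust scores are hypothetical, and practical truth-discovery systems estimate source trust iteratively rather than fixing it by hand.

from collections import defaultdict

def resolve_conflicts(candidates, source_trust):
    # Accumulate trust-weighted votes for each candidate triple.
    votes = defaultdict(float)
    for (s, p, o), source in candidates:
        votes[(s, p, o)] += source_trust.get(source, 0.5)  # default trust for unknown sources
    # For each (subject, predicate), keep the object with the highest total weight.
    best = {}
    for (s, p, o), weight in votes.items():
        if weight > best.get((s, p), (None, 0.0))[1]:
            best[(s, p)] = (o, weight)
    return {(s, p): o for (s, p), (o, _) in best.items()}

# Hypothetical conflicting extractions about the same entity.
candidates = [
    (("Paris", "population", "2.1M"), "wiki_dump"),
    (("Paris", "population", "2.2M"), "web_table"),
    (("Paris", "population", "2.1M"), "internal_db"),
]
source_trust = {"wiki_dump": 0.9, "web_table": 0.4, "internal_db": 0.7}
print(resolve_conflicts(candidates, source_trust))
# -> {('Paris', 'population'): '2.1M'}  (total weight 1.6 beats 0.4)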

Cite as

Lucas Jarnac, Yoan Chabot, and Miguel Couceiro. Uncertainty Management in the Construction of Knowledge Graphs: A Survey. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 3:1-3:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{jarnac_et_al:TGDK.3.1.3,
  author =	{Jarnac, Lucas and Chabot, Yoan and Couceiro, Miguel},
  title =	{{Uncertainty Management in the Construction of Knowledge Graphs: A Survey}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:48},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.3},
  URN =		{urn:nbn:de:0030-drops-233733},
  doi =		{10.4230/TGDK.3.1.3},
  annote =	{Keywords: Knowledge reconciliation, Uncertainty, Heterogeneous sources, Knowledge graph construction}
}
Document
Talking Wikidata: Communication Patterns and Their Impact on Community Engagement in Collaborative Knowledge Graphs

Authors: Elisavet Koutsiana, Ioannis Reklos, Kholoud Saad Alghamdi, Nitisha Jain, Albert Meroño-Peñuela, and Elena Simperl

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
We study collaboration patterns in Wikidata, one of the world's largest open-source collaborative knowledge graph (KG) communities. Collaborative KG communities play a key role in structuring machine-readable knowledge to support AI systems such as conversational agents. However, these communities face challenges related to long-term member engagement, as a small subset of contributors is often responsible for the majority of contributions and decision-making. While prior research has explored contributors' roles and lifespans, discussions within collaborative KG communities remain understudied. To fill this gap, we investigated the behavioural patterns of contributors and the factors affecting their communication and participation. We analysed all the discussions on Wikidata using a mixed-methods approach, including statistical tests, network analysis, and text and graph embedding representations. Our findings reveal that the interactions between Wikidata editors form a small-world network that is resilient to dropouts and inclusive, and that both the network topology and the discussion content influence the continuity of conversations. Furthermore, the account age of Wikidata members and their conversations are significant factors in their long-term engagement with the project. Our observations and recommendations can benefit the Wikidata and Semantic Web communities, providing guidance on how to improve collaborative environments for sustainability, growth, and quality.
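For readers unfamiliar with the small-world property mentioned above, the usual check compares clustering and average path length against a size-matched random graph, as in the sketch below. The graph here is a stand-in example shipped with networkx, not the Wikidata interaction network analysed in the paper.

import networkx as nx

# Stand-in graph; the paper builds the real network from Wikidata discussions.
G = nx.karate_club_graph()

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Random baseline with the same number of nodes and edges.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
if nx.is_connected(R):
    C_rand = nx.average_clustering(R)
    L_rand = nx.average_shortest_path_length(R)
    # Small-world signature: clustering well above random, path length comparable.
    print(f"C = {C:.3f} vs random {C_rand:.3f}; L = {L:.2f} vs random {L_rand:.2f}")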

Cite as

Elisavet Koutsiana, Ioannis Reklos, Kholoud Saad Alghamdi, Nitisha Jain, Albert Meroño-Peñuela, and Elena Simperl. Talking Wikidata: Communication Patterns and Their Impact on Community Engagement in Collaborative Knowledge Graphs. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 2:1-2:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{koutsiana_et_al:TGDK.3.1.2,
  author =	{Koutsiana, Elisavet and Reklos, Ioannis and Alghamdi, Kholoud Saad and Jain, Nitisha and Mero\~{n}o-Pe\~{n}uela, Albert and Simperl, Elena},
  title =	{{Talking Wikidata: Communication Patterns and Their Impact on Community Engagement in Collaborative Knowledge Graphs}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:27},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.2},
  URN =		{urn:nbn:de:0030-drops-230114},
  doi =		{10.4230/TGDK.3.1.2},
  annote =	{Keywords: collaborative knowledge graph, network analysis, graph embeddings, text embeddings}
}
Document
Short Paper
FLEX: Fault Localization and Explanation Using Open-Source Large Language Models in Powertrain Systems (Short Paper)

Authors: Herbert Muehlburger and Franz Wotawa

Published in: OASIcs, Volume 125, 35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024)


Abstract
Cyber-physical systems (CPS) are critical to modern infrastructure, but are vulnerable to faults and anomalies that threaten their operational safety. In this work, we evaluate the use of open-source Large Language Models (LLMs), such as Mistral 7B, Llama3.1:8b-instruct-fp16, and others, to detect anomalies in two distinct datasets: battery management and powertrain systems. Our methodology utilises retrieval-augmented generation (RAG) techniques, incorporating a novel two-step process where LLMs first infer operational rules from normal behavior before applying these rules for fault detection. During the experiments, we found that the original prompt design yielded strong results for the battery dataset but required modification for the powertrain dataset to improve performance. The adjusted prompt, which emphasises rule inference, significantly improved anomaly detection for the powertrain dataset. Experimental results show that models like Mistral 7B achieved F1-scores up to 0.99, while Llama3.1:8b-instruct-fp16 and Gemma 2 reached perfect F1-scores of 1.0 in complex scenarios. These findings demonstrate the impact of effective prompt design and rule inference in improving LLM-based fault detection for CPS, contributing to increased operational resilience.
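The two-step process described above can be sketched as follows: one call infers rules from known-normal traces, and a second applies them to new readings. Here call_llm is a hypothetical placeholder for whatever client serves the model, and the prompts are illustrative rather than the paper's actual prompt design.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in a client for a local open-source model here.
    raise NotImplementedError

def infer_rules(normal_trace: str) -> str:
    # Step 1: derive operational rules from known-normal behavior.
    prompt = (
        "The following sensor readings show NORMAL powertrain behavior:\n"
        f"{normal_trace}\n"
        "Infer a concise list of operational rules that the signals obey."
    )
    return call_llm(prompt)

def classify_trace(rules: str, new_trace: str) -> str:
    # Step 2: apply the inferred rules to unseen readings.
    prompt = (
        f"Operational rules:\n{rules}\n\n"
        f"New sensor readings:\n{new_trace}\n"
        "Do the readings violate any rule? Answer 'normal' or 'anomaly' "
        "and name the violated rule, if any."
    )
    return call_llm(prompt)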

Cite as

Herbert Muehlburger and Franz Wotawa. FLEX: Fault Localization and Explanation Using Open-Source Large Language Models in Powertrain Systems (Short Paper). In 35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024). Open Access Series in Informatics (OASIcs), Volume 125, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{muehlburger_et_al:OASIcs.DX.2024.25,
  author =	{Muehlburger, Herbert and Wotawa, Franz},
  title =	{{FLEX: Fault Localization and Explanation Using Open-Source Large Language Models in Powertrain Systems}},
  booktitle =	{35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024)},
  pages =	{25:1--25:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-356-0},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{125},
  editor =	{Pill, Ingo and Natan, Avraham and Wotawa, Franz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2024.25},
  URN =		{urn:nbn:de:0030-drops-221170},
  doi =		{10.4230/OASIcs.DX.2024.25},
  annote =	{Keywords: Fault detection, anomaly detection, powertrain systems, large language models, open-source LLMs}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets, respectively.
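The alignment-evaluation step can be sketched as a structured prompt that pairs a symbolic expression with the matching documentation excerpt. call_llm and both example strings below are hypothetical; the paper's actual prompts and SEVM's output format are not reproduced here.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an LLM client.
    raise NotImplementedError

def check_alignment(symbolic_expr: str, doc_excerpt: str) -> str:
    # Structured prompt pairing a path's symbolic balance change with the docs.
    prompt = (
        "Documentation claims:\n"
        f"{doc_excerpt}\n\n"
        "Symbolic expression for the token balance change on this execution path:\n"
        f"{symbolic_expr}\n\n"
        "Is the deployed behavior consistent with the documentation? "
        "Answer CONSISTENT or INCONSISTENT with a one-line justification."
    )
    return call_llm(prompt)

# Hypothetical mismatch: the docs promise a 0.3% fee, the path charges 1%.
# check_alignment("amount_out = amount_in * 0.99",
#                 "Swaps incur a 0.3% protocol fee.")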

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on two datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
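A minimal sketch of the retrieval-augmented example selection investigated here: embed the new problem description, rank previously solved examples by cosine similarity, and prepend the top k to the prompt. The embed function below is a random-projection stand-in for a real sentence-embedding model, and the example pool is hypothetical.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a sentence-embedding model (deterministic within a run).
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=64)

def select_examples(query: str, pool: list, k: int = 2) -> list:
    # Rank solved (description, model) pairs by cosine similarity to the query.
    q = embed(query)
    def score(example):
        e = embed(example[0])
        return float(q @ e) / (np.linalg.norm(q) * np.linalg.norm(e))
    return sorted(pool, key=score, reverse=True)[:k]

pool = [
    ("Pack items to maximise value under a weight limit.",
     "# hypothetical CP model for knapsack"),
    ("Assign colours so that adjacent nodes differ.",
     "# hypothetical CP model for graph colouring"),
]
shots = select_examples("Choose projects to maximise profit within a budget.", pool)
prompt = "\n\n".join(f"Problem: {p}\nModel: {m}" for p, m in shots)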

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits

Authors: Jiachen Hu, Tongyang Li, Xinzhao Wang, Yecheng Xue, Chenyi Zhang, and Han Zhong

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
We systematically investigate quantum algorithms and lower bounds for mean estimation given query access to non-identically distributed samples. On the one hand, we give quantum mean estimators with quadratic quantum speed-up given samples from different bounded or sub-Gaussian random variables. On the other hand, we prove that, in general, it is impossible for any quantum algorithm to achieve quadratic speed-up over the number of classical samples needed to estimate the mean μ, where the samples come from different random variables with mean close to μ. Technically, our quantum algorithms reduce bounded and sub-Gaussian random variables to the Bernoulli case, and use an uncomputation trick to overcome the challenge that direct amplitude estimation does not work with non-identical query access. Our quantum query lower bounds are established by simulating non-identical oracles by parallel oracles, and also by an adversarial method with non-identical oracles. Both results pave the way for proving quantum query lower bounds with non-identical oracles in general, which may be of independent interest.
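For context, the quadratic speedup in question is the standard gap between classical and quantum mean estimation in the i.i.d. setting, which the paper generalises to non-identical samples:

% Classical vs. quantum query complexity for estimating the mean of a
% sub-Gaussian random variable (parameter \sigma) to additive error \epsilon,
% in the standard i.i.d. setting:
\[
  N_{\mathrm{classical}} = \Theta\!\left(\frac{\sigma^{2}}{\epsilon^{2}}\right),
  \qquad
  N_{\mathrm{quantum}} = \Theta\!\left(\frac{\sigma}{\epsilon}\right).
\]
% The paper asks when this quadratic gap survives if the samples come from
% different distributions whose means are merely close to \mu.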

Cite as

Jiachen Hu, Tongyang Li, Xinzhao Wang, Yecheng Xue, Chenyi Zhang, and Han Zhong. Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hu_et_al:LIPIcs.TQC.2024.9,
  author =	{Hu, Jiachen and Li, Tongyang and Wang, Xinzhao and Xue, Yecheng and Zhang, Chenyi and Zhong, Han},
  title =	{{Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.9},
  URN =		{urn:nbn:de:0030-drops-206791},
  doi =		{10.4230/LIPIcs.TQC.2024.9},
  annote =	{Keywords: Quantum algorithms, Mean estimation, Non-identical samples, Query complexity}
}
Document
Program Tailoring: Slicing by Sequential Criteria

Authors: Yue Li, Tian Tan, Yifei Zhang, and Jingling Xue

Published in: LIPIcs, Volume 56, 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Protocol and typestate analyses often report some sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses only on the behavior at P by computing a slice of the program affecting the values at P. In this paper, we propose to restrict our attention to the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in an SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements in all possible execution paths passing through at least one sequence in the SC in the given order. With a prototype implementation, Tailor, we show why tailoring is practically useful by conducting two case studies on seven large real-world Java applications. For program debugging and understanding, Tailor can complement program slicing by removing SC-irrelevant statements. For program analysis, Tailor can enable a pointer analysis that does not scale to a whole program to perform a more focused, and therefore potentially scalable, analysis of its specific parts containing hard language features such as reflection.
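The definition of a tailored program suggests a simple graph-level approximation, sketched below: keep exactly the statements that lie on some entry-to-exit path in the control-flow graph visiting the criterion's statements in the given order. The toy CFG is hypothetical, and this node-level sketch omits the interprocedural and reflection-handling machinery that Tailor needs for real Java programs.

import networkx as nx

def tailor(cfg: nx.DiGraph, entry, exit_, sc: list) -> set:
    # Keep nodes lying on some entry->exit path that visits sc in order.
    anchors = [entry, *sc, exit_]
    kept = set()
    for a, b in zip(anchors, anchors[1:]):
        # Nodes on some a->b path: reachable from a AND able to reach b.
        on_segment = (nx.descendants(cfg, a) | {a}) & (nx.ancestors(cfg, b) | {b})
        if a not in on_segment or b not in on_segment:
            return set()  # the ordered sequence is infeasible in this CFG
        kept |= on_segment
    return kept

# Toy CFG: node 2 is the criterion; the 1 -> 4 -> 3 branch bypasses it.
cfg = nx.DiGraph([(0, 1), (1, 2), (2, 3), (1, 4), (4, 3), (3, 5)])
print(tailor(cfg, entry=0, exit_=5, sc=[2]))  # {0, 1, 2, 3, 5}: node 4 is dropped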

Cite as

Yue Li, Tian Tan, Yifei Zhang, and Jingling Xue. Program Tailoring: Slicing by Sequential Criteria. In 30th European Conference on Object-Oriented Programming (ECOOP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 56, pp. 15:1-15:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{li_et_al:LIPIcs.ECOOP.2016.15,
  author =	{Li, Yue and Tan, Tian and Zhang, Yifei and Xue, Jingling},
  title =	{{Program Tailoring: Slicing by Sequential Criteria}},
  booktitle =	{30th European Conference on Object-Oriented Programming (ECOOP 2016)},
  pages =	{15:1--15:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-014-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{56},
  editor =	{Krishnamurthi, Shriram and Lerner, Benjamin S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2016.15},
  URN =		{urn:nbn:de:0030-drops-61092},
  doi =		{10.4230/LIPIcs.ECOOP.2016.15},
  annote =	{Keywords: Program Slicing, Program Analysis, API Protocol Analysis}
}
Document
Program Tailoring: Slicing by Sequential Criteria (Artifact)

Authors: Tian Tan, Yue Li, Yifei Zhang, and Jingling Xue

Published in: DARTS, Volume 2, Issue 1, Special Issue of the 30th European Conference on Object-Oriented Programming (ECOOP 2016)


Abstract
Protocol and typestate analyses often report some sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses only on the behavior at P by computing a slice of the program affecting the values at P. In our companion paper "Program Tailoring: Slicing by Sequential Criteria", we propose to focus on the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in an SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements in all possible execution paths passing through at least one sequence in the SC in the given order. This artifact is based on Tailor, a prototype implementation of program tailoring, and is designed to evaluate Tailor's usefulness in practice. The provided package supports repeatability of all the experiments in our companion paper: it allows users to reproduce the results for all three research questions addressed in its evaluation section. In addition, an extensive set of extra results not described in the companion paper is included to help users better understand this work.

Cite as

Tian Tan, Yue Li, Yifei Zhang, and Jingling Xue. Program Tailoring: Slicing by Sequential Criteria (Artifact). In Special Issue of the 30th European Conference on Object-Oriented Programming (ECOOP 2016). Dagstuhl Artifacts Series (DARTS), Volume 2, Issue 1, pp. 8:1-8:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{tan_et_al:DARTS.2.1.8,
  author =	{Tan, Tian and Li, Yue and Zhang, Yifei and Xue, Jingling},
  title =	{{Program Tailoring: Slicing by Sequential Criteria (Artifact)}},
  pages =	{8:1--8:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2016},
  volume =	{2},
  number =	{1},
  editor =	{Tan, Tian and Li, Yue and Zhang, Yifei and Xue, Jingling},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.2.1.8},
  URN =		{urn:nbn:de:0030-drops-61298},
  doi =		{10.4230/DARTS.2.1.8},
  annote =	{Keywords: Program Slicing, Program Analysis, API Protocol Specification}
}
Document
Complexity Classification of Two-Qubit Commuting Hamiltonians

Authors: Adam Bouland, Laura Mancinska, and Xue Zhang

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.

Cite as

Adam Bouland, Laura Mancinska, and Xue Zhang. Complexity Classification of Two-Qubit Commuting Hamiltonians. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 28:1-28:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bouland_et_al:LIPIcs.CCC.2016.28,
  author =	{Bouland, Adam and Mancinska, Laura and Zhang, Xue},
  title =	{{Complexity Classification of Two-Qubit Commuting Hamiltonians}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{28:1--28:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.28},
  URN =		{urn:nbn:de:0030-drops-58469},
  doi =		{10.4230/LIPIcs.CCC.2016.28},
  annote =	{Keywords: Quantum Computing, Sampling Problems, Commuting Hamiltonians, IQP, Gate Classification Theorems}
}
  • Refine by Author
  • 2 Li, Yue
  • 2 Tan, Tian
  • 2 Xue, Jingling
  • 2 Zhang, Yifei
  • 1 Alghamdi, Kholoud Saad

  • Refine by Keyword
  • 2 Large Language Models
  • 2 Program Analysis
  • 2 Program Slicing
  • 1 API Protocol Analysis
  • 1 API Protocol Specification

  • Refine by Type
  • 9 document

  • Refine by Publication Year
  • 4 2024
  • 3 2016
  • 2 2025
