51 Search Results for "Jones, Andrew V."


Volume

OASIcs, Volume 35

2013 Imperial College Computing Student Workshop

ICCSW 2013, September 26-27, 2013, London, United Kingdom

Editors: Andrew V. Jones and Nicholas Ng

Volume

OASIcs, Volume 28

2012 Imperial College Computing Student Workshop

ICCSW 2012, September 27-28, 2012, London, United Kingdom

Editor: Andrew V. Jones

Document
Complete Volume
OASIcs, Volume 35, ICCSW'13, Complete Volume

Authors: Andrew V. Jones and Nicholas Ng

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
OASIcs, Volume 35, ICCSW'13, Complete Volume

Cite as

2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@Proceedings{jones_et_al:OASIcs.ICCSW.2013,
  title =	{{OASIcs, Volume 35, ICCSW'13, Complete Volume}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013},
  URN =		{urn:nbn:de:0030-drops-43496},
  doi =		{10.4230/OASIcs.ICCSW.2013},
  annote =	{Keywords: Conference Proceedings}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization

Authors: Andrew V. Jones and Nicholas Ng

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization

Cite as

2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. i-xi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{jones_et_al:OASIcs.ICCSW.2013.i,
  author =	{Jones, Andrew V. and Ng, Nicholas},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{i--xi},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.i},
  URN =		{urn:nbn:de:0030-drops-42645},
  doi =		{10.4230/OASIcs.ICCSW.2013.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Laws of programming with concurrency (Invited Talk)

Authors: Tony Hoare

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
The algebraic laws for programming with concurrency are as simple as (and very similar to) the familiar laws of arithmetic. Yet they are stronger for reasoning about the properties of programs than the axioms of Hoare Logic and the rules of an operational semantics put together.
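For flavour, laws of this kind include the following standard identities from the algebra of programming and concurrent Kleene algebra (our selection, not quoted from the talk); here ; is sequential composition, \parallel is parallel composition, and \sqsubseteq is refinement:

```latex
p \mathbin{;} (q \mathbin{;} r) = (p \mathbin{;} q) \mathbin{;} r
  % sequential composition is associative
p \parallel q = q \parallel p
  % parallel composition is commutative
p \mathbin{;} \mathsf{skip} = p
  % skip is a unit of sequencing
(p \parallel q) \mathbin{;} (r \parallel s) \sqsubseteq (p \mathbin{;} r) \parallel (q \mathbin{;} s)
  % the exchange law relating the two operators
```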

Cite as

Tony Hoare. Laws of programming with concurrency (Invited Talk). In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{hoare:OASIcs.ICCSW.2013.1,
  author =	{Hoare, Tony},
  title =	{{Laws of programming with concurrency}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{1--1},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.1},
  URN =		{urn:nbn:de:0030-drops-42843},
  doi =		{10.4230/OASIcs.ICCSW.2013.1},
  annote =	{Keywords: concurrency}
}
Document
Invited Talk
Building Better Online Courses (Invited Talk)

Authors: Peter Norvig

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
We now have many choices in designing a course, whether it is in the classroom, online, or a hybrid. This talk will cover some of the mechanics of running an online course, including the factors involved in building a community. And we will discuss whether building a course is like building software: in the early days, software was crafted by individuals, but over time we established processes that enabled large groups to build much larger systems. Today, courses are still crafted by an individual teacher — if we want to build a larger class, serving more students, and more potential paths through the material, do we need a new set of course building processes? How can we ensure that our courses will continually improve in quality?

Cite as

Peter Norvig. Building Better Online Courses (Invited Talk). In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, p. 2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{norvig:OASIcs.ICCSW.2013.2,
  author =	{Norvig, Peter},
  title =	{{Building Better Online Courses}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{2--2},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.2},
  URN =		{urn:nbn:de:0030-drops-42859},
  doi =		{10.4230/OASIcs.ICCSW.2013.2},
  annote =	{Keywords: online courses}
}
Document
A swarm based heuristic for sparse image recovery

Authors: Theofanis Apostolopoulos

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
This paper discusses the Compressive Sampling framework as an application for sparse representation (factorization) and recovery of images over an over-complete basis (dictionary). Compressive Sampling is a novel area which asserts that one can recover images of interest, with far fewer measurements than were originally thought necessary, by searching for the sparsest representation of an image over an over-complete dictionary. This task is achieved by optimizing an objective function that includes two terms: one that measures the image reconstruction error and another that measures the sparsity level. We present and discuss a new swarm based heuristic for sparse image approximation using the Discrete Fourier Transform to enhance its level of sparsity. Our experimental results on reference images demonstrate the good performance of the proposed heuristic over other standard sparse recovery methods (L1-Magic and FOCUSS packages), in a noiseless environment using far fewer measurements. Finally, we discuss possible extensions of the heuristic to noisy environments and weakly sparse images as a realistic improvement with much wider applicability.
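The two-term objective mentioned above can be made concrete. The sketch below is ours, not the paper's heuristic: the penalty weight `lam` and the use of an L1 norm as the sparsity measure are illustrative assumptions.

```python
def objective(A, x, y, lam):
    """Reconstruction error ||A x - y||_2^2 plus sparsity penalty lam * ||x||_1."""
    residual = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) - y_i
                for row, y_i in zip(A, y)]
    error = sum(r * r for r in residual)
    sparsity = sum(abs(x_j) for x_j in x)
    return error + lam * sparsity

# Toy measurement matrix: 2 measurements of a length-3 signal.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
y = [1.0, 1.0]

# Both candidates reproduce the measurements exactly (zero error), but the
# sparse one scores lower once the sparsity term is weighed in.
dense  = [1.0, 1.0, 0.0]
sparse = [0.0, 0.0, 1.0]
print(objective(A, dense, y, lam=0.5))   # 1.0  (L1 = 2.0)
print(objective(A, sparse, y, lam=0.5))  # 0.5  (L1 = 1.0)
```

A recovery method then searches for the `x` minimising this objective; the paper's contribution is a swarm-based search for that minimum.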

Cite as

Theofanis Apostolopoulos. A swarm based heuristic for sparse image recovery. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 3-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{apostolopoulos:OASIcs.ICCSW.2013.3,
  author =	{Apostolopoulos, Theofanis},
  title =	{{A swarm based heuristic for sparse image recovery}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{3--10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.3},
  URN =		{urn:nbn:de:0030-drops-42655},
  doi =		{10.4230/OASIcs.ICCSW.2013.3},
  annote =	{Keywords: Compressive Sampling, sparse image recovery, non-linear programming, sparse representation, linear inverse problems}
}
Document
Scalable and Fault-tolerant Stateful Stream Processing

Authors: Raul Castro Fernandez, Matteo Migliavacca, Evangelia Kalyvianaki, and Peter Pietzuch

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
As users of "big data" applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the "pay-as-you-go" model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs—systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results. Our key idea is to expose internal operator state explicitly to the SPS through a set of state management primitives. Based on them, we describe an integrated approach for dynamic scale out and recovery of stateful operators. Externalised operator state is checkpointed periodically by the SPS and backed up to upstream VMs. The SPS identifies individual operator bottlenecks and automatically scales them out by allocating new VMs and partitioning the checkpointed state. At any point, failed operators are recovered by restoring checkpointed state on a new VM and replaying unprocessed tuples. We evaluate this approach with the Linear Road Benchmark on the Amazon EC2 cloud platform and show that it can scale automatically to a load factor of L=350 with 50 VMs, while recovering quickly from failures.
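The state management primitives described above can be illustrated with a minimal sketch. The class and method names below are our invention, not the system's actual API; the example shows externalised operator state being checkpointed, partitioned for scale-out, and restored for recovery.

```python
import copy

class StatefulOperator:
    """A keyed counting operator whose state is externalised to the system."""

    def __init__(self, state=None):
        self.state = dict(state or {})   # externalised operator state

    def process(self, key):
        self.state[key] = self.state.get(key, 0) + 1

    def checkpoint(self):
        # Periodic checkpoint: a deep copy the system can back up to upstream VMs.
        return copy.deepcopy(self.state)

    @staticmethod
    def partition(state, n):
        # Scale-out: split checkpointed state across n new operator instances.
        shards = [{} for _ in range(n)]
        for key, value in state.items():
            shards[hash(key) % n][key] = value
        return [StatefulOperator(s) for s in shards]

op = StatefulOperator()
for key in ["a", "b", "a", "c", "a"]:
    op.process(key)

snapshot = op.checkpoint()

# Failure recovery: restore the checkpoint on a fresh instance and replay
# the tuples that arrived after the checkpoint was taken.
recovered = StatefulOperator(snapshot)
recovered.process("b")               # replayed unprocessed tuple
print(recovered.state)               # {'a': 3, 'b': 2, 'c': 1}
```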

Cite as

Raul Castro Fernandez, Matteo Migliavacca, Evangelia Kalyvianaki, and Peter Pietzuch. Scalable and Fault-tolerant Stateful Stream Processing. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 11-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{castrofernandez_et_al:OASIcs.ICCSW.2013.11,
  author =	{Castro Fernandez, Raul and Migliavacca, Matteo and Kalyvianaki, Evangelia and Pietzuch, Peter},
  title =	{{Scalable and Fault-tolerant Stateful Stream Processing}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{11--18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.11},
  URN =		{urn:nbn:de:0030-drops-42669},
  doi =		{10.4230/OASIcs.ICCSW.2013.11},
  annote =	{Keywords: Stateful stream processing, scalability, fault tolerance}
}
Document
Generalizing Multi-Context Systems for Reactive Stream Reasoning Applications

Authors: Stefan Ellmauthaler

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
In the field of artificial intelligence (AI), the subdomain of knowledge representation (KR) has the aim to represent, integrate, and exchange knowledge in order to do some reasoning about the given information. During the last decades many different KR-languages were proposed for a variety of certain applications with specific needs. The concept of a managed Multi-Context System (mMCS) was introduced to provide adequate formal tools to interchange and integrate knowledge between different KR-approaches. Another arising field of interest in computer science is the design of online applications, which react directly to (possibly infinite) streams of information. This paper presents a genuine approach to generalize mMCS for online applications with continuous streams of information. Our major goal is to find a good tradeoff between expressiveness and computational complexity.

Cite as

Stefan Ellmauthaler. Generalizing Multi-Context Systems for Reactive Stream Reasoning Applications. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 19-26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{ellmauthaler:OASIcs.ICCSW.2013.19,
  author =	{Ellmauthaler, Stefan},
  title =	{{Generalizing Multi-Context Systems for Reactive Stream Reasoning Applications}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{19--26},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.19},
  URN =		{urn:nbn:de:0030-drops-42675},
  doi =		{10.4230/OASIcs.ICCSW.2013.19},
  annote =	{Keywords: Knowledge Representation, Artificial Intelligence}
}
Document
Conformal Prediction under Hypergraphical Models

Authors: Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir Vovk

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
Conformal predictors are usually defined and studied under the exchangeability assumption. However, their definition can be extended to a wide class of statistical models, called online compression models, while retaining their property of automatic validity. This paper is devoted to conformal prediction under hypergraphical models that are more specific than the exchangeability model. We define conformity measures for such hypergraphical models and study the corresponding conformal predictors empirically on benchmark LED data sets. Our experiments show that they are more efficient than conformal predictors that use only the exchangeability assumption.
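The baseline that the paper improves on, the conformal p-value under the exchangeability assumption, can be sketched in a few lines. This is a simplified illustration: the nonconformity measure (distance to the mean of the old examples) is our assumption, and full conformal prediction recomputes scores with the new example included in the bag.

```python
def conformal_p_value(old_examples, new_example):
    """p = fraction of scores (old plus new) at least as nonconforming as the new one."""
    mean = sum(old_examples) / len(old_examples)
    scores = [abs(x - mean) for x in old_examples]
    new_score = abs(new_example - mean)
    at_least = sum(1 for s in scores + [new_score] if s >= new_score)
    return at_least / (len(old_examples) + 1)

# A value near the training data gets a high p-value (looks conforming);
# an outlier gets a low one.
data = [1.0, 2.0, 3.0, 2.0]
print(conformal_p_value(data, 2.0))   # 1.0
print(conformal_p_value(data, 10.0))  # 0.2
```

The paper's point is that conformity measures tailored to a hypergraphical model, rather than this generic exchangeability-only construction, give more efficient predictors.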

Cite as

Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir Vovk. Conformal Prediction under Hypergraphical Models. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 27-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{fedorova_et_al:OASIcs.ICCSW.2013.27,
  author =	{Fedorova, Valentina and Gammerman, Alex and Nouretdinov, Ilia and Vovk, Vladimir},
  title =	{{Conformal Prediction under Hypergraphical Models}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{27--34},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.27},
  URN =		{urn:nbn:de:0030-drops-42685},
  doi =		{10.4230/OASIcs.ICCSW.2013.27},
  annote =	{Keywords: conformal prediction, hypergraphical models, conformity measure}
}
Document
Relational Knowledge Extraction from Attribute-Value Learners

Authors: Manoel V. M. França, Artur S. D. Garcez, and Gerson Zaverucha

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
Bottom Clause Propositionalization (BCP) is a recent propositionalization method which allows fast relational learning. Propositional learners can use BCP to obtain accuracy results comparable with Inductive Logic Programming (ILP) learners. However, unlike with ILP learners, what has been learned cannot normally be represented in first-order logic. In this paper, we propose an approach and introduce a novel algorithm for extraction of first-order rules from propositional rule learners, when dealing with data propositionalized with BCP. A theorem then shows that the extracted first-order rules are consistent with their propositional version. The algorithm was evaluated using the rule learner RIPPER, although it can be applied to any propositional rule learner. Initial results show that the accuracies of both RIPPER and the extracted first-order rules can be comparable to those obtained by Aleph (a traditional ILP system), but our approach is considerably faster (obtaining speed-ups of over an order of magnitude), generating a compact rule set with at least the same representation power as standard ILP learners.

Cite as

Manoel V. M. França, Artur S. D. Garcez, and Gerson Zaverucha. Relational Knowledge Extraction from Attribute-Value Learners. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 35-42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{franca_et_al:OASIcs.ICCSW.2013.35,
  author =	{Fran\c{c}a, Manoel V. M. and Garcez, Artur S. D. and Zaverucha, Gerson},
  title =	{{Relational Knowledge Extraction from Attribute-Value Learners}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{35--42},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.35},
  URN =		{urn:nbn:de:0030-drops-42690},
  doi =		{10.4230/OASIcs.ICCSW.2013.35},
  annote =	{Keywords: Relational Learning, Propositionalization, Knowledge Extraction}
}
Document
Tools for the implementation of argumentation models

Authors: Bas van Gijzel

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
The structured approach to argumentation has seen a surge of models, introducing a multitude of ways to deal with the formalisation of arguments. However, while the development of the mathematical models has flourished, the actual implementations, and methods for implementing these models, have been lagging behind. This paper attempts to alleviate this problem by providing methods that simplify implementation, i.e. we demonstrate how the functional programming language Haskell can naturally express mathematical definitions, and sketch how a theorem prover can verify this implementation. Furthermore, we provide methods to streamline the documentation of code, showing how literate programming allows the implementer to write the formal definition, implementation and documentation in one file. All code has been made publicly available and reusable.

Cite as

Bas van Gijzel. Tools for the implementation of argumentation models. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 43-48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{vangijzel:OASIcs.ICCSW.2013.43,
  author =	{van Gijzel, Bas},
  title =	{{Tools for the implementation of argumentation models}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{43--48},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.43},
  URN =		{urn:nbn:de:0030-drops-42706},
  doi =		{10.4230/OASIcs.ICCSW.2013.43},
  annote =	{Keywords: argumentation, implementation, functional programming, Haskell, Carneades}
}
Document
Towards the Development of a Hybrid Parser for Natural Languages

Authors: Sardar F. Jaf and Allan Ramsay

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
In order to understand natural languages, we have to be able to determine the relations between words, in other words we have to be able to 'parse' the input text. This is a difficult task, especially for Arabic, which has a number of properties that make it particularly difficult to handle. There are two approaches to parsing natural languages: grammar-driven and data-driven. Each of these approaches poses its own set of problems, which we discuss in this paper. The goal of our work is to produce a hybrid parser, which retains the advantages of the data-driven approach but is guided by grammar rules in order to produce more accurate output. This work consists of two stages: the first stage is to develop a baseline data-driven parser, which is guided by a machine learning algorithm for establishing dependency relations between words. The second stage is to integrate grammar rules into the baseline parser. In this paper, we describe the first stage of our work, which is now implemented, and a number of experiments that have been conducted on this parser. We also discuss the result of these experiments and highlight the different factors that are affecting parsing speed and the correctness of the parser results.
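The first, data-driven stage can be pictured as a transition-based dependency parser whose next action is chosen by a guide function. The transition system and the stub guide below are our illustration; in the paper the guide is a machine-learned classifier.

```python
def parse(words, choose_action):
    """Shift-reduce dependency parsing; returns (head, dependent) arcs."""
    stack, buffer, arcs = [], list(words), []
    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer)
        if action == "shift":
            stack.append(buffer.pop(0))
        elif action == "left-arc":       # stack[-2] depends on stack[-1]
            dependent = stack.pop(-2)
            arcs.append((stack[-1], dependent))
        else:                            # "right-arc": stack[-1] depends on stack[-2]
            dependent = stack.pop()
            arcs.append((stack[-1], dependent))
    return arcs

def chain_guide(stack, buffer):
    # Stub guide: shift everything, then attach each word to the one before it.
    # A learned guide would inspect the stack/buffer configuration instead.
    return "shift" if buffer else "right-arc"

print(parse(["the", "cat", "sat"], chain_guide))
# [('cat', 'sat'), ('the', 'cat')]
```

The hybrid second stage would constrain `choose_action` with grammar rules, vetoing transitions that the grammar rules out.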

Cite as

Sardar F. Jaf and Allan Ramsay. Towards the Development of a Hybrid Parser for Natural Languages. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 49-56, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{jaf_et_al:OASIcs.ICCSW.2013.49,
  author =	{Jaf, Sardar F. and Ramsay, Allan},
  title =	{{Towards the Development of a Hybrid Parser for Natural Languages}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{49--56},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.49},
  URN =		{urn:nbn:de:0030-drops-42710},
  doi =		{10.4230/OASIcs.ICCSW.2013.49},
  annote =	{Keywords: Hybrid Parsing, Arabic Parsing, Grammar-Driven Parser, Data-Driven Parser, Natural Language Processing}
}
Document
Improving the quality of APIs through the analysis of software crash reports

Authors: Maria Kechagia, Dimitris Mitropoulos, and Diomidis Spinellis

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
Modern programs depend on APIs to implement a significant part of their functionality. Apart from the way developers use APIs to build their software, the stability of these programs relies on the APIs' design and implementation. In this work, we evaluate the reliability of APIs by examining software telemetry data, in the form of stack traces, coming from Android application crashes. We obtained 4.9 GB of crash data that thousands of applications sent to a centralized crash report management service. We processed that data to extract approximately a million stack traces, stitching together parts of chained exceptions, and established heuristic rules to draw the border between applications and API calls. We examined 80% of the stack traces to map the space of the most common application failure reasons. Our findings show that the top ones can be attributed to memory exhaustion, race conditions or deadlocks, and missing or corrupt resources. At the same time, a significant number of our stack traces (over 10%) remains unclassified due to generic unchecked exceptions, which do not highlight the problems that lead to crashes. Finally, given the classes of crash causes we found, we argue that API design and implementation improvements, such as specific exceptions, non-blocking algorithms, and default resources, can eliminate common failures.
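The heuristic classification described above can be pictured as a small rule table mapping the top exception of a stack trace to a failure category. The categories follow the abstract; the trace format and matching rules are our illustration, not the paper's actual heuristics.

```python
# Rules map an exception-name marker to one of the failure categories
# reported in the abstract. The exception names are real Java/Android ones.
RULES = [
    ("OutOfMemoryError", "memory exhaustion"),
    ("ConcurrentModificationException", "race condition / deadlock"),
    ("Resources$NotFoundException", "missing or corrupt resource"),
    ("FileNotFoundException", "missing or corrupt resource"),
]

def classify(stack_trace):
    """Classify a crash by the exception named on the trace's top line."""
    top_exception = stack_trace.splitlines()[0]
    for marker, category in RULES:
        if marker in top_exception:
            return category
    # Generic unchecked exceptions stay unclassified, as the paper observes.
    return "unclassified"

trace = "java.lang.OutOfMemoryError: bitmap size exceeds VM budget\n  at ..."
print(classify(trace))                              # memory exhaustion
print(classify("java.lang.NullPointerException"))   # unclassified
```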

Cite as

Maria Kechagia, Dimitris Mitropoulos, and Diomidis Spinellis. Improving the quality of APIs through the analysis of software crash reports. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 57-64, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{kechagia_et_al:OASIcs.ICCSW.2013.57,
  author =	{Kechagia, Maria and Mitropoulos, Dimitris and Spinellis, Diomidis},
  title =	{{Improving the quality of APIs through the analysis of software crash reports}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{57--64},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.57},
  URN =		{urn:nbn:de:0030-drops-42721},
  doi =		{10.4230/OASIcs.ICCSW.2013.57},
  annote =	{Keywords: application programming interfaces, mobile applications, crash reports, stack traces}
}
Document
Fast Implementation of the Scalable Video Coding Extension of the H.264/AVC Standard

Authors: Xin Lu and Graham R. Martin

Published in: OASIcs, Volume 35, 2013 Imperial College Computing Student Workshop


Abstract
In order to improve coding efficiency in the scalable extension of H.264/AVC, an inter-layer prediction mechanism is incorporated. This exploits as much lower layer information as possible to inform the process of coding the enhancement layer(s). However it also greatly increases the computational complexity. In this paper, a fast mode decision algorithm for efficient implementation of the SVC encoder is described. The proposed algorithm not only considers inter-layer correlation but also fully exploits both spatial and temporal correlation as well as an assessment of macroblock texture. All of these factors are organised within a hierarchical structure in the mode decision process. At each level of the structure, different strategies are implemented to eliminate inappropriate candidate modes. Simulation results show that the proposed algorithm reduces encoding time by up to 85% compared with the JSVM 9.18 implementation. This is achieved without any noticeable degradation in rate distortion.
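The hierarchical structure of the mode decision can be sketched abstractly. The level tests and cost function below are placeholders of ours, not the paper's actual criteria; the point is that each level cheaply prunes candidate modes, terminating early when a single survivor remains, so the expensive rate-distortion cost is evaluated for fewer modes.

```python
def decide_mode(candidates, levels, cost):
    """Prune candidate modes level by level; full cost only for survivors."""
    for keep in levels:                 # e.g. inter-layer, spatial, temporal, texture tests
        pruned = [m for m in candidates if keep(m)]
        if pruned:                      # never prune to nothing
            candidates = pruned
        if len(candidates) == 1:
            return candidates[0]        # early termination
    return min(candidates, key=cost)    # expensive evaluation on survivors only

modes = ["SKIP", "16x16", "8x8", "4x4", "INTRA"]
levels = [
    lambda m: m != "INTRA",                # placeholder: texture test ruled out intra
    lambda m: m in ("SKIP", "16x16"),      # placeholder: neighbours suggest large partitions
]
print(decide_mode(modes, levels, cost=len))   # SKIP
```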

Cite as

Xin Lu and Graham R. Martin. Fast Implementation of the Scalable Video Coding Extension of the H.264/AVC Standard. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 65-72, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{lu_et_al:OASIcs.ICCSW.2013.65,
  author =	{Lu, Xin and Martin, Graham R.},
  title =	{{Fast Implementation of the Scalable Video Coding Extension of the H.264/AVC Standard}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{65--72},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.65},
  URN =		{urn:nbn:de:0030-drops-42737},
  doi =		{10.4230/OASIcs.ICCSW.2013.65},
  annote =	{Keywords: Fast mode selection, Inter-layer prediction, Scalable Video Coding (SVC), SVC extension of H.264/AVC.}
}