OASIcs, Volume 35

2013 Imperial College Computing Student Workshop



Event

ICCSW 2013, September 26-27, 2013, London, United Kingdom

Editors

Andrew V. Jones
Nicholas Ng

Publication Details

  • Published: 2013-10-14
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-63-7
  • DBLP: db/conf/iccsw/iccsw2013

Documents

Document
Complete Volume
OASIcs, Volume 35, ICCSW'13, Complete Volume

Authors: Andrew V. Jones and Nicholas Ng


Abstract
OASIcs, Volume 35, ICCSW'13, Complete Volume

Cite as

2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Proceedings{jones_et_al:OASIcs.ICCSW.2013,
  title =	{{OASIcs, Volume 35, ICCSW'13, Complete Volume}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013},
  URN =		{urn:nbn:de:0030-drops-43496},
  doi =		{10.4230/OASIcs.ICCSW.2013},
  annote =	{Keywords: Conference Proceedings}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization

Authors: Andrew V. Jones and Nicholas Ng


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization

Cite as

Andrew V. Jones and Nicholas Ng. Frontmatter, Table of Contents, Preface, Conference Organization. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. i-xi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{jones_et_al:OASIcs.ICCSW.2013.i,
  author =	{Jones, Andrew V. and Ng, Nicholas},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{i--xi},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.i},
  URN =		{urn:nbn:de:0030-drops-42645},
  doi =		{10.4230/OASIcs.ICCSW.2013.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Laws of programming with concurrency (Invited Talk)

Authors: Tony Hoare


Abstract
The algebraic laws for programming with concurrency are as simple as (and very similar to) the familiar laws of arithmetic. Yet they are stronger for reasoning about the properties of programs than the axioms of Hoare Logic and the rules of an operational semantics put together.

Cite as

Tony Hoare. Laws of programming with concurrency (Invited Talk). In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{hoare:OASIcs.ICCSW.2013.1,
  author =	{Hoare, Tony},
  title =	{{Laws of programming with concurrency}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{1--1},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.1},
  URN =		{urn:nbn:de:0030-drops-42843},
  doi =		{10.4230/OASIcs.ICCSW.2013.1},
  annote =	{Keywords: concurrency}
}
Document
Invited Talk
Building Better Online Courses (Invited Talk)

Authors: Peter Norvig


Abstract
We now have many choices in designing a course, whether it is in the classroom, online, or a hybrid. This talk will cover some of the mechanics of running an online course, including the factors involved in building a community. And we will discuss whether building a course is like building software: in the early days, software was crafted by individuals, but over time we established processes that enabled large groups to build much larger systems. Today, courses are still crafted by an individual teacher — if we want to build a larger class, serving more students, and more potential paths through the material, do we need a new set of course building processes? How can we assure that our courses will continually improve in quality?

Cite as

Peter Norvig. Building Better Online Courses (Invited Talk). In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, p. 2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{norvig:OASIcs.ICCSW.2013.2,
  author =	{Norvig, Peter},
  title =	{{Building Better Online Courses}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{2--2},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.2},
  URN =		{urn:nbn:de:0030-drops-42859},
  doi =		{10.4230/OASIcs.ICCSW.2013.2},
  annote =	{Keywords: online courses}
}
Document
A swarm based heuristic for sparse image recovery

Authors: Theofanis Apostolopoulos


Abstract
This paper discusses the Compressive Sampling framework as an application for sparse representation (factorization) and recovery of images over an over-complete basis (dictionary). Compressive Sampling is a novel area which asserts that one can recover images of interest, with far fewer measurements than were originally thought necessary, by searching for the sparsest representation of an image over an over-complete dictionary. This task is achieved by optimizing an objective function that includes two terms: one that measures the image reconstruction error and another that measures the sparsity level. We present and discuss a new swarm-based heuristic for sparse image approximation that uses the Discrete Fourier Transform to enhance the level of sparsity. Our experimental results on reference images demonstrate the good performance of the proposed heuristic over other standard sparse recovery methods (the L1-Magic and FOCUSS packages) in a noiseless environment, using far fewer measurements. Finally, we discuss possible extensions of the heuristic to noisy environments and weakly sparse images as a realistic improvement with much wider applicability.
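
For reference, the two-term objective mentioned above takes the standard compressive-sensing form (a generic formulation with generic symbols, not necessarily the authors' exact objective):

  \min_{x} \; \| y - \Phi x \|_2^2 + \lambda \| x \|_1

where y holds the measurements, \Phi is the over-complete dictionary, the first term is the reconstruction error, the second is the (convex-relaxed) sparsity level, and \lambda trades one off against the other.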

Cite as

Theofanis Apostolopoulos. A swarm based heuristic for sparse image recovery. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 3-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{apostolopoulos:OASIcs.ICCSW.2013.3,
  author =	{Apostolopoulos, Theofanis},
  title =	{{A swarm based heuristic for sparse image recovery}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{3--10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.3},
  URN =		{urn:nbn:de:0030-drops-42655},
  doi =		{10.4230/OASIcs.ICCSW.2013.3},
  annote =	{Keywords: Compressive Sampling, sparse image recovery, non-linear programming, sparse representation, linear inverse problems}
}
Document
Scalable and Fault-tolerant Stateful Stream Processing

Authors: Raul Castro Fernandez, Matteo Migliavacca, Evangelia Kalyvianaki, and Peter Pietzuch


Abstract
As users of "big data" applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the "pay-as-you-go" model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs—systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results. Our key idea is to expose internal operator state explicitly to the SPS through a set of state management primitives. Based on them, we describe an integrated approach for dynamic scale out and recovery of stateful operators. Externalised operator state is checkpointed periodically by the SPS and backed up to upstream VMs. The SPS identifies individual operator bottlenecks and automatically scales them out by allocating new VMs and partitioning the checkpointed state. At any point, failed operators are recovered by restoring checkpointed state on a new VM and replaying unprocessed tuples. We evaluate this approach with the Linear Road Benchmark on the Amazon EC2 cloud platform and show that it can scale automatically to a load factor of L=350 with 50 VMs, while recovering quickly from failures.
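
As a rough illustration of the state management primitives described above, here is a minimal Python sketch; the class, method names and hash partitioning are assumptions made for illustration, not the system's actual API:

class StatefulOperator:
    def __init__(self):
        self.state = {}                      # externalised, key-partitioned state

    def process(self, tup):
        key, value = tup                     # toy operator: per-key running sum
        self.state[key] = self.state.get(key, 0) + value

    def checkpoint(self):
        """Called periodically by the SPS; the copy is backed up upstream."""
        return dict(self.state)

    def restore(self, snapshot, unprocessed):
        """Recovery on a new VM: reload checkpointed state, replay tuples."""
        self.state = dict(snapshot)
        for tup in unprocessed:
            self.process(tup)

    @staticmethod
    def partition(snapshot, n):
        """Scale-out: split checkpointed state across n new instances."""
        parts = [{} for _ in range(n)]
        for key, value in snapshot.items():
            parts[hash(key) % n][key] = value
        return parts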

Cite as

Raul Castro Fernandez, Matteo Migliavacca, Evangelia Kalyvianaki, and Peter Pietzuch. Scalable and Fault-tolerant Stateful Stream Processing. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 11-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{castrofernandez_et_al:OASIcs.ICCSW.2013.11,
  author =	{Castro Fernandez, Raul and Migliavacca, Matteo and Kalyvianaki, Evangelia and Pietzuch, Peter},
  title =	{{Scalable and Fault-tolerant Stateful Stream Processing}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{11--18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.11},
  URN =		{urn:nbn:de:0030-drops-42669},
  doi =		{10.4230/OASIcs.ICCSW.2013.11},
  annote =	{Keywords: Stateful stream processing, scalability, fault tolerance}
}
Document
Generalizing Multi-Context Systems for Reactive Stream Reasoning Applications

Authors: Stefan Ellmauthaler


Abstract
In the field of artificial intelligence (AI), the subdomain of knowledge representation (KR) aims to represent, integrate, and exchange knowledge in order to reason about the given information. Over the last decades, many different KR-languages have been proposed for a variety of applications with specific needs. The concept of a managed Multi-Context System (mMCS) was introduced to provide adequate formal tools to interchange and integrate knowledge between different KR-approaches. Another emerging field of interest in computer science is the design of online applications, which react directly to (possibly infinite) streams of information. This paper presents an approach to generalize mMCS for online applications with continuous streams of information. Our major goal is to find a good tradeoff between expressiveness and computational complexity.

Cite as

Stefan Ellmauthaler. Generalizing Multi-Context Systems for Reactive Stream Reasoning Applications. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 19-26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{ellmauthaler:OASIcs.ICCSW.2013.19,
  author =	{Ellmauthaler, Stefan},
  title =	{{Generalizing Multi-Context Systems for Reactive Stream Reasoning Applications}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{19--26},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.19},
  URN =		{urn:nbn:de:0030-drops-42675},
  doi =		{10.4230/OASIcs.ICCSW.2013.19},
  annote =	{Keywords: Knowledge Representation, Artificial Intelligence}
}
Document
Conformal Prediction under Hypergraphical Models

Authors: Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir Vovk


Abstract
Conformal predictors are usually defined and studied under the exchangeability assumption. However, their definition can be extended to a wide class of statistical models, called online compression models, while retaining their property of automatic validity. This paper is devoted to conformal prediction under hypergraphical models that are more specific than the exchangeability model. We define conformity measures for such hypergraphical models and study the corresponding conformal predictors empirically on benchmark LED data sets. Our experiments show that they are more efficient than conformal predictors that use only the exchangeability assumption.
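
For intuition, here is a minimal Python sketch of conformal prediction under the plain exchangeability assumption, the baseline that the paper generalizes; the nearest-neighbour conformity measure is an illustrative choice, not the paper's hypergraphical one:

def conformal_p_value(train, z_new, conformity):
    """p-value of candidate example z_new against the bag `train`."""
    bag = train + [z_new]
    scores = [conformity(bag[:i] + bag[i + 1:], z) for i, z in enumerate(bag)]
    alpha_new = scores[-1]
    # fraction of examples that conform no better than the candidate
    return sum(1 for a in scores if a <= alpha_new) / len(bag)

def nn_conformity(others, z):
    """Higher is more conforming: negated distance to nearest same-label point."""
    x, y = z
    same = [abs(x - xo) for xo, yo in others if yo == y]
    return -min(same) if same else float("-inf")

train = [(0.1, "a"), (0.2, "a"), (0.9, "b")]
for label in ("a", "b"):
    print(label, conformal_p_value(train, (0.15, label), nn_conformity))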

Cite as

Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir Vovk. Conformal Prediction under Hypergraphical Models. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 27-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{fedorova_et_al:OASIcs.ICCSW.2013.27,
  author =	{Fedorova, Valentina and Gammerman, Alex and Nouretdinov, Ilia and Vovk, Vladimir},
  title =	{{Conformal Prediction under Hypergraphical Models}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{27--34},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.27},
  URN =		{urn:nbn:de:0030-drops-42685},
  doi =		{10.4230/OASIcs.ICCSW.2013.27},
  annote =	{Keywords: conformal prediction, hypergraphical models, conformity measure}
}
Document
Relational Knowledge Extraction from Attribute-Value Learners

Authors: Manoel V. M. França, Artur S. D. Garcez, and Gerson Zaverucha


Abstract
Bottom Clause Propositionalization (BCP) is a recent propositionalization method that enables fast relational learning. Propositional learners can use BCP to obtain accuracy results comparable with those of Inductive Logic Programming (ILP) learners. However, unlike with ILP learners, what has been learned cannot normally be represented in first-order logic. In this paper, we propose an approach and introduce a novel algorithm for the extraction of first-order rules from propositional rule learners when dealing with data propositionalized with BCP. A theorem then shows that the extracted first-order rules are consistent with their propositional version. The algorithm was evaluated using the rule learner RIPPER, although it can be applied to any propositional rule learner. Initial results show that the accuracies of both RIPPER and the extracted first-order rules can be comparable to those obtained by Aleph (a traditional ILP system), while our approach is considerably faster (obtaining speed-ups of over an order of magnitude) and generates a compact rule set with at least the same representation power as standard ILP learners.

Cite as

Manoel V. M. França, Artur S. D. Garcez, and Gerson Zaverucha. Relational Knowledge Extraction from Attribute-Value Learners. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 35-42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{franca_et_al:OASIcs.ICCSW.2013.35,
  author =	{Fran\c{c}a, Manoel V. M. and Garcez, Artur S. D. and Zaverucha, Gerson},
  title =	{{Relational Knowledge Extraction from Attribute-Value Learners}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{35--42},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.35},
  URN =		{urn:nbn:de:0030-drops-42690},
  doi =		{10.4230/OASIcs.ICCSW.2013.35},
  annote =	{Keywords: Relational Learning, Propositionalization, Knowledge Extraction}
}
Document
Tools for the implementation of argumentation models

Authors: Bas van Gijzel


Abstract
The structured approach to argumentation has seen a surge of models, introducing a multitude of ways to deal with the formalisation of arguments. However, while the development of the mathematical models has flourished, actual implementations, and methods for implementing these models, have been lagging behind. This paper attempts to alleviate this problem by providing methods that simplify implementation: we demonstrate how the functional programming language Haskell can naturally express mathematical definitions, and sketch how a theorem prover can verify this implementation. Furthermore, we provide methods to streamline the documentation of code, showing how literate programming allows the implementer to write formal definitions, implementation and documentation in one file. All code has been made publicly available and reusable.
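
The paper's Haskell code is not reproduced here; as a language-neutral sketch of how directly such definitions translate into a functional style, the following computes Dung's grounded extension by iterating the characteristic function (a textbook construction, applied to a made-up framework):

def defends(attacks, s, a):
    """s defends a iff every attacker of a is attacked by some member of s."""
    return all(any((d, b) in attacks for d in s)
               for (b, c) in attacks if c == a)

def grounded(args, attacks):
    """Least fixed point of the characteristic function, starting from {}."""
    s = set()
    while True:
        t = {a for a in args if defends(attacks, s, a)}
        if t == s:
            return s
        s = t

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(grounded(args, attacks))   # {'a', 'c'}: a is unattacked and defends c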

Cite as

Bas van Gijzel. Tools for the implementation of argumentation models. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 43-48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{vangijzel:OASIcs.ICCSW.2013.43,
  author =	{van Gijzel, Bas},
  title =	{{Tools for the implementation of argumentation models}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{43--48},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.43},
  URN =		{urn:nbn:de:0030-drops-42706},
  doi =		{10.4230/OASIcs.ICCSW.2013.43},
  annote =	{Keywords: argumentation, implementation, functional programming, Haskell, Carneades}
}
Document
Towards the Development of a Hybrid Parser for Natural Languages

Authors: Sardar F. Jaf and Allan Ramsay


Abstract
In order to understand natural languages, we have to be able to determine the relations between words; in other words, we have to be able to 'parse' the input text. This is a difficult task, especially for Arabic, which has a number of properties that make it particularly difficult to handle. There are two approaches to parsing natural languages: grammar-driven and data-driven. Each of these approaches poses its own set of problems, which we discuss in this paper. The goal of our work is to produce a hybrid parser, which retains the advantages of the data-driven approach but is guided by grammar rules in order to produce more accurate output. This work consists of two stages: the first stage is to develop a baseline data-driven parser, guided by a machine learning algorithm for establishing dependency relations between words. The second stage is to integrate grammar rules into the baseline parser. In this paper, we describe the first stage of our work, which is now implemented, together with a number of experiments that have been conducted on this parser. We also discuss the results of these experiments and highlight the factors that affect parsing speed and the correctness of the parser's results.
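
As a sketch of the data-driven half, the following shows a bare arc-standard transition loop with the machine-learned guide stubbed out as `best_action`; this is a textbook skeleton, not the authors' parser:

def parse(words, best_action):
    """Return dependency arcs (head, dependent) over word indices."""
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        action = best_action(stack, buffer)   # a trained classifier in practice
        if action == "shift" and buffer:
            stack.append(buffer.pop(0))
        elif action == "left-arc" and len(stack) >= 2:
            dep = stack.pop(-2)               # second-from-top depends on top
            arcs.append((stack[-1], dep))
        elif action == "right-arc" and len(stack) >= 2:
            dep = stack.pop()                 # top depends on second-from-top
            arcs.append((stack[-1], dep))
        else:
            break                             # no applicable action
    return arcs

guide = lambda st, buf: "shift" if buf else "right-arc"   # trivial stand-in
print(parse(["the", "cat", "sat"], guide))    # [(1, 2), (0, 1)]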

Cite as

Sardar F. Jaf and Allan Ramsay. Towards the Development of a Hybrid Parser for Natural Languages. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 49-56, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{jaf_et_al:OASIcs.ICCSW.2013.49,
  author =	{Jaf, Sardar F. and Ramsay, Allan},
  title =	{{Towards the Development of a Hybrid Parser for Natural Languages}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{49--56},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.49},
  URN =		{urn:nbn:de:0030-drops-42710},
  doi =		{10.4230/OASIcs.ICCSW.2013.49},
  annote =	{Keywords: Hybrid Parsing, Arabic Parsing, Grammar-Driven Parser, Data-Driven Parser, Natural Language Processing}
}
Document
Improving the quality of APIs through the analysis of software crash reports

Authors: Maria Kechagia, Dimitris Mitropoulos, and Diomidis Spinellis


Abstract
Modern programs depend on APIs to implement a significant part of their functionality. Apart from the way developers use APIs to build their software, the stability of these programs relies on the APIs' design and implementation. In this work, we evaluate the reliability of APIs by examining software telemetry data, in the form of stack traces, coming from Android application crashes. We obtained 4.9 GB of crash data that thousands of applications send to a centralized crash report management service. We processed that data to extract approximately a million stack traces, stitching together parts of chained exceptions, and established heuristic rules to draw the border between applications and API calls. We examined 80% of the stack traces to map the space of the most common application failure reasons. Our findings show that the top ones can be attributed to memory exhaustion, race conditions or deadlocks, and missing or corrupt resources. At the same time, a significant number of our stack traces (over 10%) remains unclassified due to generic unchecked exceptions, which do not highlight the problems that lead to crashes. Finally, given the classes of crash causes we found, we argue that API design and implementation improvements, such as specific exceptions, non-blocking algorithms, and default resources, can eliminate common failures.
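
A minimal sketch of this kind of pipeline: parse a stack trace, stitch chained exceptions, and bucket the crash by its root exception. The regular expressions and the cause table below are illustrative stand-ins, not the paper's heuristic rules:

import re

CAUSES = {
    "OutOfMemoryError": "memory exhaustion",
    "Resources$NotFoundException": "missing or corrupt resource",
}

def root_exception(trace):
    """Innermost exception of a chained stack trace."""
    chained = re.findall(r"Caused by:\s*([\w.$]+)", trace)
    if chained:
        return chained[-1]
    first = re.match(r"([\w.$]+)", trace.strip())
    return first.group(1) if first else None

def classify(trace):
    exc = root_exception(trace) or ""
    for name, cause in CAUSES.items():
        if exc.endswith(name):
            return cause
    return "unclassified"      # e.g. generic unchecked exceptions

trace = ("java.lang.RuntimeException: ...\n"
         "Caused by: android.content.res.Resources$NotFoundException: ...")
print(classify(trace))         # missing or corrupt resource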

Cite as

Maria Kechagia, Dimitris Mitropoulos, and Diomidis Spinellis. Improving the quality of APIs through the analysis of software crash reports. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 57-64, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{kechagia_et_al:OASIcs.ICCSW.2013.57,
  author =	{Kechagia, Maria and Mitropoulos, Dimitris and Spinellis, Diomidis},
  title =	{{Improving the quality of APIs through the analysis of software crash reports}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{57--64},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.57},
  URN =		{urn:nbn:de:0030-drops-42721},
  doi =		{10.4230/OASIcs.ICCSW.2013.57},
  annote =	{Keywords: application programming interfaces, mobile applications, crash reports, stack traces}
}
Document
Fast Implementation of the Scalable Video Coding Extension of the H.264/AVC Standard

Authors: Xin Lu and Graham R. Martin


Abstract
In order to improve coding efficiency in the scalable extension of H.264/AVC, an inter-layer prediction mechanism is incorporated. This exploits as much lower-layer information as possible to inform the process of coding the enhancement layer(s). However, it also greatly increases the computational complexity. In this paper, a fast mode decision algorithm for efficient implementation of the SVC encoder is described. The proposed algorithm not only considers inter-layer correlation but also fully exploits both spatial and temporal correlation, as well as an assessment of macroblock texture. All of these factors are organised within a hierarchical structure in the mode decision process. At each level of the structure, different strategies are implemented to eliminate inappropriate candidate modes. Simulation results show that the proposed algorithm reduces encoding time by up to 85% compared with the JSVM 9.18 implementation. This is achieved without any noticeable degradation in rate-distortion performance.

Cite as

Xin Lu and Graham R. Martin. Fast Implementation of the Scalable Video Coding Extension of the H.264/AVC Standard. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 65-72, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{lu_et_al:OASIcs.ICCSW.2013.65,
  author =	{Lu, Xin and Martin, Graham R.},
  title =	{{Fast Implementation of the Scalable Video Coding Extension of the H.264/AVC Standard}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{65--72},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.65},
  URN =		{urn:nbn:de:0030-drops-42737},
  doi =		{10.4230/OASIcs.ICCSW.2013.65},
  annote =	{Keywords: Fast mode selection, Inter-layer prediction, Scalable Video Coding (SVC), SVC extension of H.264/AVC.}
}
Document
Improved Rate Control Algorithm for Scalable Video Coding

Authors: Xin Lu and Graham R. Martin


Abstract
In the Scalable Video Coding (SVC) standard, a multi-layer structure is utilised to support scalability. However, in the latest Joint Scalable Video Model (JSVM) reference software, the rate control algorithm is implemented only in the base layer, and the enhancement layers are not equipped with a rate control scheme. In this work, a novel rate control algorithm is proposed for when inter-layer prediction is employed. Firstly, a Rate-Quantisation (R-Q) model, which considers the coding properties of different prediction modes, is described. Secondly, an improved Mean Absolute Difference (MAD) prediction model for the spatial enhancement layers is proposed, in which the encoding results from the base layer are used to assist the linear MAD prediction in the spatial/CGS enhancement layers. Simulation results show that, on average, rate control accuracy is maintained to within 0.07%. Compared with the default JVT-G012 rate control scheme employed in SVC, the proposed rate control algorithm achieves higher coding efficiency, namely an improvement of up to 0.26 dB in PSNR and a saving of 4.66% in bitrate.
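
For context, JVT-G012-style rate control is typically built around a quadratic R-Q model and a linear MAD predictor; in generic form (symbols are generic, not the paper's exact layer-aware models):

  R = c_1 \frac{\mathrm{MAD}}{Q} + c_2 \frac{\mathrm{MAD}}{Q^2},
  \qquad
  \widehat{\mathrm{MAD}}_i = a_1 \, \mathrm{MAD}_{i-1} + a_2

where R is the texture bit budget, Q the quantisation step, and the coefficients are updated by regression during encoding; the improvement described above seeds the enhancement-layer MAD prediction with the co-located base-layer results instead of relying on the linear model alone.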

Cite as

Xin Lu and Graham R. Martin. Improved Rate Control Algorithm for Scalable Video Coding. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 73-80, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{lu_et_al:OASIcs.ICCSW.2013.73,
  author =	{Lu, Xin and Martin, Graham R.},
  title =	{{Improved Rate Control Algorithm for Scalable Video Coding}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{73--80},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.73},
  URN =		{urn:nbn:de:0030-drops-42744},
  doi =		{10.4230/OASIcs.ICCSW.2013.73},
  annote =	{Keywords: Inter-layer prediction, MAD prediction, Rate control, Scalable Video Coding (SVC), SVC extension of H.264/AVC}
}
Document
An Optimal Real-time Pricing Algorithm for the Smart Grid: A Bi-level Programming Approach

Authors: Fan-Lin Meng and Xiao-Jun Zeng


Abstract
This paper proposes an improved approach to our previous work [meng2012stackelberg], which uses a Stackelberg game to model the interactions between an electricity retailer and its customers, and genetic algorithms to obtain the Stackelberg Equilibrium (SE). In this paper, we propose a bi-level programming model that considers the benefits of both the electricity retailer (utility company) and its customers. In the upper-level model, the electricity retailer determines the real-time retail prices with the aim of maximizing its profit. In the lower-level model, the customers react to the prices announced by the retailer, aiming to minimize their electricity bills. In order to make the problem more tractable, we convert the hierarchical bi-level programming problem into a single-level problem by replacing the lower-level problem with its Karush–Kuhn–Tucker (KKT) conditions. A branch and bound algorithm is chosen to solve the resulting single-level problem. Experimental results show that both the bi-level programming model and the solution method are feasible. Compared with the genetic algorithm approach proposed in [meng2012stackelberg], the branch and bound algorithm used in this paper is more efficient at finding the optimal solution.
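
Schematically, the bi-level structure described above can be written as follows (generic symbols, not the paper's notation):

  \max_{p} \; \Pi\bigl(p, x^{*}(p)\bigr)
  \quad \text{subject to} \quad
  x^{*}(p) \in \arg\min_{x \in X} \, B(p, x)

where p denotes the real-time retail prices, \Pi the retailer's profit, and B(p, x) the customers' bill for consumption schedule x; replacing the inner minimisation by its KKT conditions collapses the hierarchy into a single-level problem amenable to branch and bound.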

Cite as

Fan-Lin Meng and Xiao-Jun Zeng. An Optimal Real-time Pricing Algorithm for the Smart Grid: A Bi-level Programming Approach. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 81-88, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{meng_et_al:OASIcs.ICCSW.2013.81,
  author =	{Meng, Fan-Lin and Zeng, Xiao-Jun},
  title =	{{An Optimal Real-time Pricing Algorithm for the Smart Grid: A Bi-level Programming Approach}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{81--88},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.81},
  URN =		{urn:nbn:de:0030-drops-42752},
  doi =		{10.4230/OASIcs.ICCSW.2013.81},
  annote =	{Keywords: Real-time Pricing, Demand Response, Smart Grid, Bi-level Programming, Branch and Bound Algorithm}
}
Document
Dreaming Machines: On multimodal fusion and information retrieval using neural-symbolic cognitive agents

Authors: Leo de Penning, Artur D'Avila Garcez, and John-Jules C. Meyer


Abstract
Deep Boltzmann Machines (DBM) have been used as a computational cognitive model in various AI-related research and applications, notably in computational vision and multimodal fusion. Being regarded as a biologically plausible model of the human brain, the DBM is also becoming a popular instrument for investigating various cortical processes in neuroscience. In this paper, we describe how a multimodal DBM is implemented as part of a Neural-Symbolic Cognitive Agent (NSCA) for real-time multimodal fusion and inference over streaming audio and video data. We describe how this agent can be used to simulate certain neurological mechanisms related to hallucinations and dreaming, and how these mechanisms are beneficial to the integrity of the DBM. Finally, we explain how the NSCA is used to extract multimodal information from the DBM and provide a compact and practical iconographic temporal logic formula for complex relations between visual and auditory patterns.

Cite as

Leo de Penning, Artur D'Avila Garcez, and John-Jules C. Meyer. Dreaming Machines: On multimodal fusion and information retrieval using neural-symbolic cognitive agents. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 89-94, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{depenning_et_al:OASIcs.ICCSW.2013.89,
  author =	{de Penning, Leo and D'Avila Garcez, Artur and Meyer, John-Jules C.},
  title =	{{Dreaming Machines: On multimodal fusion and information retrieval using neural-symbolic cognitive agents}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{89--94},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.89},
  URN =		{urn:nbn:de:0030-drops-42765},
  doi =		{10.4230/OASIcs.ICCSW.2013.89},
  annote =	{Keywords: Multimodal fusion, Deep Boltzmann Machine, Neural-Symbolic Cognitive Agent, Dreaming, Hallucinations}
}
Document
Self-composition by Symbolic Execution

Authors: Quoc-Sang Phan


Abstract
Self-composition is a logical formulation of non-interference, a high-level security property that guarantees the absence of illicit information leakages through executing programs. In order to capture program executions, self-composition has been expressed in Hoare or modal logic, and has been proved (or refuted) by using theorem provers. These approaches require considerable user interaction, and verification expertise. This paper presents an automated technique to prove self-composition. We reformulate the idea of self-composition into comparing pairs of symbolic paths of the same program; the symbolic paths are given by Symbolic Execution. The result of our analysis is a logical formula expressing self-composition in first-order theories, which can be solved by off-the-shelf Satisfiability Modulo Theories solvers.
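
For reference, self-composition reduces non-interference to a property of one program run alongside a renamed copy of itself; schematically, for low inputs l, high inputs h and low outputs o:

  \{\, l_1 = l_2 \,\} \;\; P(l_1, h_1);\; P'(l_2, h_2) \;\; \{\, o_1 = o_2 \,\}

where P' is P with all variables renamed apart; the technique described above checks the analogous condition over pairs of symbolic paths and hands the resulting first-order formula to an off-the-shelf SMT solver.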

Cite as

Quoc-Sang Phan. Self-composition by Symbolic Execution. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 95-102, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{phan:OASIcs.ICCSW.2013.95,
  author =	{Phan, Quoc-Sang},
  title =	{{Self-composition by Symbolic Execution}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{95--102},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.95},
  URN =		{urn:nbn:de:0030-drops-42770},
  doi =		{10.4230/OASIcs.ICCSW.2013.95},
  annote =	{Keywords: Information Flow, Symbolic Execution, Satisfiability Modulo Theories}
}
Document
Evaluation of Social Personalized Adaptive E-Learning Environments: End-User Point of View

Authors: Lei Shi, Malik Shahzad Awan, and Alexandra I. Cristea


Abstract
The use of adaptations, along with the social affordances of collaboration and networking, carries great potential for improving e-learning experiences. However, a review of previous work indicates that current e-learning systems have only marginally explored the integration of social features and adaptation techniques. The overall aim of this research, therefore, is to address this gap by evaluating a system developed to foster social personalized adaptive e-learning experiences. We have developed our first prototype system, Topolor, based on the concepts of Adaptive Educational Hypermedia and Social E-Learning. We have also conducted an experimental case study to evaluate the prototype system from different perspectives. The results show considerably high satisfaction among end users. This paper reports the evaluation results from the end-user point of view, and generalizes our method into a component-based evaluation framework.

Cite as

Lei Shi, Malik Shahzad Awan, and Alexandra I. Cristea. Evaluation of Social Personalized Adaptive E-Learning Environments: End-User Point of View. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 103-110, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{shi_et_al:OASIcs.ICCSW.2013.103,
  author =	{Shi, Lei and Awan, Malik Shahzad and Cristea, Alexandra I.},
  title =	{{Evaluation of Social Personalized Adaptive E-Learning Environments: End-User Point of View}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{103--110},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.103},
  URN =		{urn:nbn:de:0030-drops-42789},
  doi =		{10.4230/OASIcs.ICCSW.2013.103},
  annote =	{Keywords: adaptive educational hypermedia, social e-learning, evaluation}
}
Document
Logical Foundations of Services

Authors: Ionut Tutu


Abstract
In this paper we consider a logical system of networks of processes that interact in an asynchronous manner by exchanging messages through communication channels. This provides a foundational algebraic framework for service-oriented computing that constitutes a primary factor in defining logical specifications of services, the way models of these specifications capture service orchestrations, and how properties of interaction-points, i.e. points through which such networks connect to one another, can be expressed. We formalise the resulting logic as a parameterised institution, which promotes the development of both declarative and operational semantics of services in a heterogeneous setting by means of logic-programming concepts.

Cite as

Ionut Tutu. Logical Foundations of Services. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 111-118, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{tutu:OASIcs.ICCSW.2013.111,
  author =	{Tutu, Ionut},
  title =	{{Logical Foundations of Services}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{111--118},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.111},
  URN =		{urn:nbn:de:0030-drops-42793},
  doi =		{10.4230/OASIcs.ICCSW.2013.111},
  annote =	{Keywords: Formal methods, Service-oriented computing, Institution theory}
}
Document
Refactoring Boundary

Authors: Tim Wood and Sophia Drossopoulou


Abstract
We argue that the limit of the propagation of the heap effects of a source code modification is determined by the aliasing structure of method parameters in a trace of the method calls that cross a boundary which partitions the heap, and further that this aliasing structure is sufficient to uniquely determine the state of the part of the heap which has not been affected. We also give a definition of what it means for a part of the heap to be unaffected by a source code modification. This can be used to determine the correctness of a refactoring.

Cite as

Tim Wood and Sophia Drossopoulou. Refactoring Boundary. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 119-127, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{wood_et_al:OASIcs.ICCSW.2013.119,
  author =	{Wood, Tim and Drossopoulou, Sophia},
  title =	{{Refactoring Boundary}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{119--127},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.119},
  URN =		{urn:nbn:de:0030-drops-42809},
  doi =		{10.4230/OASIcs.ICCSW.2013.119},
  annote =	{Keywords: Refactoring, Object Oriented}
}
Document
Using Self-learning and Automatic Tuning to Improve the Performance of Sexual Genetic Algorithms for Constraint Satisfaction Problems

Authors: Hu Xu, Karen Petrie, and Iain Murray


Abstract
Currently, the parameters of a constraint solver are often selected by hand by experts in the field; these parameters might include the level of preprocessing to be used and the variable ordering heuristic. The efficient and automatic choice of a preprocessing level for a constraint solver is a step towards making constraint programming a more widely accessible technology. Self-learning sexual genetic algorithms are a new approach combining a self-learning mechanism with sexual genetic algorithms in order to suggest or predict a suitable solver configuration for large-scale problems by learning from the same class of small-scale problems. In this paper, self-learning sexual genetic algorithms are applied to create an automatic solver configuration mechanism for solving various constraint problems. The starting population of the self-learning sexual genetic algorithms is trained through experience on small instances. The experiments in this paper are a proof of concept for the idea of combining sexual genetic algorithms with a self-learning strategy to aid parameter selection for constraint programming.
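
A bare-bones sketch of GA-based solver-configuration tuning follows; the sexual-selection operators and the self-learning of the starting population are elided, and the configuration space, genetic operators and fitness interface are invented for illustration:

import random

SPACE = {"preprocessing": [0, 1, 2], "var_order": ["dom", "deg", "dom/deg"]}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(mum, dad):
    return {k: random.choice([mum[k], dad[k]]) for k in SPACE}

def mutate(cfg, rate=0.1):
    return {k: (random.choice(v) if random.random() < rate else cfg[k])
            for k, v in SPACE.items()}

def evolve(fitness, pop_size=20, generations=30, seed_pop=None):
    """`fitness` runs the solver and returns e.g. its runtime (lower is better);
    `seed_pop` is where a population trained on small instances plugs in."""
    pop = list(seed_pop or [])[:pop_size]
    pop += [random_config() for _ in range(pop_size - len(pop))]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)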

Cite as

Hu Xu, Karen Petrie, and Iain Murray. Using Self-learning and Automatic Tuning to Improve the Performance of Sexual Genetic Algorithms for Constraint Satisfaction Problems. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 128-135, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{xu_et_al:OASIcs.ICCSW.2013.128,
  author =	{Xu, Hu and Petrie, Karen and Murray, Iain},
  title =	{{Using Self-learning and Automatic Tuning to Improve the Performance of Sexual Genetic Algorithms for Constraint Satisfaction Problems}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{128--135},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.128},
  URN =		{urn:nbn:de:0030-drops-42811},
  doi =		{10.4230/OASIcs.ICCSW.2013.128},
  annote =	{Keywords: Self-learning Genetic Algorithm, Sexual Genetic algorithm, Constraint Programming, Parameter Tuning}
}
Document
Achieving Superscalar Performance without Superscalar Overheads - A Dataflow Compiler IR for Custom Computing

Authors: Ali Mustafa Zaidi and David J. Greaves


Abstract
The difficulty of effectively parallelizing code for multicore processors, combined with the end of threshold voltage scaling has resulted in the problem of 'Dark Silicon', severely limiting performance scaling despite Moore's Law. To address dark silicon, not only must we drastically improve the energy efficiency of computation, but due to Amdahl's Law, we must do so without compromising sequential performance. Designers increasingly utilize custom hardware to dramatically improve both efficiency and performance in increasingly heterogeneous architectures. Unfortunately, while it efficiently accelerates numeric, data-parallel applications, custom hardware often exhibits poor performance on sequential code, so complex, power-hungry superscalar processors must still be utilized. This paper addresses the problem of improving sequential performance in custom hardware by (a) switching from a statically scheduled to a dynamically scheduled (dataflow) execution model, and (b) developing a new compiler IR for high-level synthesis that enables aggressive exposition of ILP even in the presence of complex control flow. This new IR is directly implemented as a static dataflow graph in hardware by our high-level synthesis tool-chain, and shows an average speedup of 1.13 times over equivalent hardware generated using LegUp, an existing HLS tool. In addition, our new IR allows us to further trade area & energy for performance, increasing the average speedup to 1.55 times, through loop unrolling, with a peak speedup of 4.05 times. Our custom hardware is able to approach the sequential cycle-counts of an Intel Nehalem Core i7 superscalar processor, while consuming on average only 0.25 times the energy of an in-order Altera Nios IIf processor.

Cite as

Ali Mustafa Zaidi and David J. Greaves. Achieving Superscalar Performance without Superscalar Overheads - A Dataflow Compiler IR for Custom Computing. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 136-143, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{zaidi_et_al:OASIcs.ICCSW.2013.136,
  author =	{Zaidi, Ali Mustafa and Greaves, David J.},
  title =	{{Achieving Superscalar Performance without Superscalar Overheads - A Dataflow Compiler IR for Custom Computing}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{136--143},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.136},
  URN =		{urn:nbn:de:0030-drops-42825},
  doi =		{10.4230/OASIcs.ICCSW.2013.136},
  annote =	{Keywords: High-level Synthesis, Instruction Level Parallelism, Custom Computing, Compilers, Dark Silicon}
}
Document
A Graph based approach for Co-scheduling jobs on Multi-core computers

Authors: Huanzhou Zhu and Ligang He


Abstract
In a multicore processor system, running multiple applications on different cores of the same chip can cause resource contention, which leads to performance degradation. Recent studies have shown that job co-scheduling can effectively reduce this contention. However, most existing co-schedulers do not aim to find the optimal co-scheduling solution. It is very useful to know the optimal co-scheduling performance, so that system and scheduler designers can know how much room there is for further performance improvement. Moreover, most co-schedulers only consider serial jobs and do not take parallel jobs into account. This paper aims to tackle the above issues. We first present a new approach to modelling the problem of co-scheduling both parallel and serial jobs. Further, a method is developed to find the optimal co-scheduling solutions. The simulation results show that, compared with the method that only considers serial jobs, our method for co-scheduling parallel jobs improves performance by 31% on average.
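
To make the objective concrete for the simplest case (serial jobs on dual-core chips), optimal co-scheduling amounts to a minimum-weight perfect matching over pairwise co-run degradations; below is a brute-force Python sketch with toy numbers, not the paper's graph model for parallel jobs:

def pairings(jobs):
    """Enumerate all ways to split an even-length job list into pairs."""
    if not jobs:
        yield []
        return
    first, rest = jobs[0], jobs[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def best_coschedule(jobs, degradation):
    """Pairing with the least total co-run degradation."""
    return min(pairings(jobs),
               key=lambda p: sum(degradation[frozenset(e)] for e in p))

jobs = ["a", "b", "c", "d"]
degradation = {frozenset(e): w for e, w in
               [(("a", "b"), 0.9), (("a", "c"), 0.2), (("a", "d"), 0.4),
                (("b", "c"), 0.3), (("b", "d"), 0.1), (("c", "d"), 0.8)]}
print(best_coschedule(jobs, degradation))   # [('a', 'c'), ('b', 'd')]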

Cite as

Huanzhou Zhu and Ligang He. A Graph based approach for Co-scheduling jobs on Multi-core computers. In 2013 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 35, pp. 144-151, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{zhu_et_al:OASIcs.ICCSW.2013.144,
  author =	{Zhu, Huanzhou and He, Ligang},
  title =	{{A Graph based approach for Co-scheduling jobs on Multi-core computers}},
  booktitle =	{2013 Imperial College Computing Student Workshop},
  pages =	{144--151},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-63-7},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{35},
  editor =	{Jones, Andrew V. and Ng, Nicholas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2013.144},
  URN =		{urn:nbn:de:0030-drops-42837},
  doi =		{10.4230/OASIcs.ICCSW.2013.144},
  annote =	{Keywords: Co-scheduling algorithm, Multicore processor, Cache interference, Parallel Job}
}
