OASIcs, Volume 28

2012 Imperial College Computing Student Workshop




Publication Details

  • published at: 2012-11-09
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-48-4
  • DBLP: db/conf/iccsw/iccsw2012


Documents

Complete Volume
OASIcs, Volume 28, ICCSW'12, Complete Volume

Authors: Andrew V. Jones


Abstract
OASIcs, Volume 28, ICCSW'12, Complete Volume

Cite as

2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@Proceedings{jones:OASIcs.ICCSW.2012,
  title =	{{OASIcs, Volume 28, ICCSW'12, Complete Volume}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2013},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012},
  URN =		{urn:nbn:de:0030-drops-40799},
  doi =		{10.4230/OASIcs.ICCSW.2012},
  annote =	{Keywords: Conference Proceedings}
}
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organisation, Supporters and Sponsors

Authors: Andrew V. Jones


Abstract
Frontmatter, Table of Contents, Preface, Conference Organisation, Supporters and Sponsors

Cite as

Andrew V. Jones. Frontmatter, Table of Contents, Preface, Conference Organisation, Supporters and Sponsors. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. i-xi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{jones:OASIcs.ICCSW.2012.i,
  author =	{Jones, Andrew V.},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organisation, Supporters and Sponsors}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{i--xi},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.i},
  URN =		{urn:nbn:de:0030-drops-37567},
  doi =		{10.4230/OASIcs.ICCSW.2012.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organisation, Supporters and Sponsors}
}
Knowledge Transformation using a Hypergraph Data Model

Authors: Lama Al Khuzayem and Peter McBrien


Abstract
In the Semantic Web, knowledge integration is frequently performed between heterogeneous knowledge bases. Such knowledge integration often requires that the schema expressed in one knowledge modelling language be translated into an equivalent schema in another knowledge modelling language. This paper defines how schemas expressed in OWL-DL (the Web Ontology Language using Description Logic) can be translated into equivalent schemas in the Hypergraph Data Model (HDM). The HDM is used in the AutoMed data integration (DI) system. It allows constraints found in data modelling languages to be represented by a small set of primitive constraint operators. By mapping into the AutoMed HDM language, we are then able to further map the OWL-DL schemas into any of the existing modelling languages supported by AutoMed. We show how previously defined transformation rules between relational and HDM schemas, and our newly defined rules between OWL-DL and HDM schemas, can be composed to give a bidirectional mapping between OWL-DL and relational schemas through the both-as-view (BAV) approach in AutoMed.

Cite as

Lama Al Khuzayem and Peter McBrien. Knowledge Transformation using a Hypergraph Data Model. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 1-7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{alkhuzayem_et_al:OASIcs.ICCSW.2012.1,
  author =	{Al Khuzayem, Lama and McBrien, Peter},
  title =	{{Knowledge Transformation using a Hypergraph Data Model}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{1--7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.1},
  URN =		{urn:nbn:de:0030-drops-37570},
  doi =		{10.4230/OASIcs.ICCSW.2012.1},
  annote =	{Keywords: Knowledge Transformation, Hypergraph Data Model, BAV Mappings}
}
A heuristic for sparse signal reconstruction

Authors: Theofanis Apostolopoulos


Abstract
Compressive Sampling (CS) is a new method of signal acquisition and reconstruction from frequency data that does not follow the basic principle of the Nyquist-Shannon sampling theorem. This new method allows reconstruction of the signal from substantially fewer measurements than conventional sampling methods require. We present and discuss a new, swarm-based technique for representing and reconstructing real-valued signals in a noiseless environment. The method solves an approximation of the l_0-norm based problem, posed as a combinatorial optimization problem, for signal reconstruction. We also present and discuss experimental results comparing the accuracy and running time of our heuristic to the IHT and IRLS methods.
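As background for the IHT baseline mentioned in the abstract, the following is a minimal sketch of iterative hard thresholding, not the authors' swarm-based heuristic; the measurement matrix, dimensions, sparsity level and support are illustrative assumptions.

```python
import numpy as np

def iht(A, y, k, iters=1000):
    """Iterative Hard Thresholding: after each gradient step on
    ||y - Ax||^2 / 2, keep only the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x)            # gradient step
        small = np.argsort(np.abs(x))[:-k]   # indices of all but the k largest
        x[small] = 0.0                       # hard threshold to sparsity k
    return x

rng = np.random.default_rng(0)
m, n, k = 45, 50, 3                          # fewer measurements than unknowns
# Random matrix with orthonormal rows, so the unit step size is stable.
A = np.linalg.qr(rng.standard_normal((n, m)))[0].T
x_true = np.zeros(n)
x_true[[4, 17, 40]] = [2.0, -3.0, 1.5]       # k-sparse ground truth
y = A @ x_true                               # noiseless measurements
x_hat = iht(A, y, k)
```

Each iteration takes a gradient step on the least-squares objective and then keeps only the k largest-magnitude coefficients, so the iterate is k-sparse by construction.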

Cite as

Theofanis Apostolopoulos. A heuristic for sparse signal reconstruction. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 8-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{apostolopoulos:OASIcs.ICCSW.2012.8,
  author =	{Apostolopoulos, Theofanis},
  title =	{{A heuristic for sparse signal reconstruction}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{8--14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.8},
  URN =		{urn:nbn:de:0030-drops-37589},
  doi =		{10.4230/OASIcs.ICCSW.2012.8},
  annote =	{Keywords: Compressive Sampling, sparse signal representation, l\underline0 minimisation, non-linear programming, signal recovery}
}
Predicate Invention in Inductive Logic Programming

Authors: Duangtida Athakravi, Krysia Broda, and Alessandra Russo


Abstract
The ability to recognise new concepts and incorporate them into our knowledge is an essential part of learning. From new scientific concepts to the words used in everyday conversation, all must at some point in the past have been invented and their definitions fixed. In this position paper, we discuss how a general framework for predicate invention could be constructed by reasoning about the problem at the meta-level, using an appropriate notion of top theory in inductive logic programming.

Cite as

Duangtida Athakravi, Krysia Broda, and Alessandra Russo. Predicate Invention in Inductive Logic Programming. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 15-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{athakravi_et_al:OASIcs.ICCSW.2012.15,
  author =	{Athakravi, Duangtida and Broda, Krysia and Russo, Alessandra},
  title =	{{Predicate Invention in Inductive Logic Programming}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{15--21},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.15},
  URN =		{urn:nbn:de:0030-drops-37596},
  doi =		{10.4230/OASIcs.ICCSW.2012.15},
  annote =	{Keywords: Predicate invention, Inductive logic programming, Machine learning}
}
Targeting a Practical Approach for Robot Vision with Ensembles of Visual Features

Authors: Emanuela Boros


Abstract
We approach the task of topological localization in mobile robotics without exploiting temporal continuity between successive images. The information about the environment is contained in images taken with a perspective colour camera mounted on a robot platform. The main contributions of this work are quantitative evaluations of a wide variety of global and local invariant features and of different distance measures, with the aim of finding the optimal set of features. The characteristics of the different features were analysed using widely known dissimilarity measures and graphical views of the overall performance. The quality of the resulting configurations is also tested in the localization stage by means of location recognition in the Robot Vision task, through participation in the ImageCLEF International Evaluation Campaign. The long-term goal of this project is to develop integrated, stand-alone capabilities for real-time topological localization under varying illumination conditions and over longer routes.

Cite as

Emanuela Boros. Targeting a Practical Approach for Robot Vision with Ensembles of Visual Features. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 22-28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{boros:OASIcs.ICCSW.2012.22,
  author =	{Boros, Emanuela},
  title =	{{Targeting a Practical Approach for Robot Vision with Ensembles of Visual Features}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{22--28},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.22},
  URN =		{urn:nbn:de:0030-drops-37602},
  doi =		{10.4230/OASIcs.ICCSW.2012.22},
  annote =	{Keywords: Visual Place Classification, Robot Topological Localization, Visual Feature Detectors, Visual Feature Descriptors}
}
Incremental HMM with an improved Baum-Welch Algorithm

Authors: Tiberiu S. Chis and Peter G. Harrison


Abstract
There is an increasing demand for systems which handle higher density, additional loads as seen in storage workload modelling, where workloads can be characterized on-line. This paper aims to find a workload model which processes incoming data and then updates its parameters "on-the-fly." Essentially, this will be an incremental hidden Markov model (IncHMM) with an improved Baum-Welch algorithm. Thus, the benefit will be obtaining a parsimonious model which updates its encoded information whenever more real time workload data becomes available. To achieve this model, two new approximations of the Baum-Welch algorithm are defined, followed by training our model using discrete time series. This time series is transformed from a large network trace made up of I/O commands, into a partitioned binned trace, and then filtered through a K-means clustering algorithm to obtain an observation trace. The IncHMM, together with the observation trace, produces the required parameters to form a discrete Markov arrival process (MAP). Finally, we generate our own data trace (using the IncHMM parameters and a random distribution) and statistically compare it to the raw I/O trace, thus validating our model.
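As background, the forward and backward recursions below compute the standard HMM quantities on which Baum-Welch re-estimation is built; this is a generic sketch, not the paper's incremental variant, and the toy parameters are illustrative assumptions.

```python
import numpy as np

# Toy discrete HMM: 2 hidden states, 3 observation symbols.
pi = np.array([0.6, 0.4])              # initial state distribution
A = np.array([[0.7, 0.3],              # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],         # emission probabilities B[state, symbol]
              [0.1, 0.3, 0.6]])
obs = [0, 2, 1, 1, 2]                  # observed symbol sequence

def forward(obs):
    """alpha[i] = P(o_1..o_T, q_T = i) after the final step."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha

def backward(obs):
    """beta[i] = P(o_2..o_T | q_1 = i) after running back to t = 1."""
    beta = np.ones(len(pi))
    for o in reversed(obs[1:]):
        beta = A @ (B[:, o] * beta)
    return beta

# Both recursions yield the same sequence likelihood P(o_1..o_T).
lik_fwd = forward(obs).sum()
lik_bwd = (pi * B[:, obs[0]] * backward(obs)).sum()
```

Baum-Welch combines these alpha and beta quantities to re-estimate pi, A and B; an incremental variant such as the paper's IncHMM updates them as new observations arrive instead of re-running the full recursion.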

Cite as

Tiberiu S. Chis and Peter G. Harrison. Incremental HMM with an improved Baum-Welch Algorithm. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 29-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{chis_et_al:OASIcs.ICCSW.2012.29,
  author =	{Chis, Tiberiu S. and Harrison, Peter G.},
  title =	{{Incremental HMM with an improved Baum-Welch Algorithm}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{29--34},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.29},
  URN =		{urn:nbn:de:0030-drops-37613},
  doi =		{10.4230/OASIcs.ICCSW.2012.29},
  annote =	{Keywords: hidden Markov model, Baum-Welch algorithm, Backward algorithm, discrete Markov arrival process, incremental workload model}
}
Device specialization in heterogeneous multi-GPU environments

Authors: Gabriele Cocco and Antonio Cisternino


Abstract
In the last few years there have been many efforts to couple CPUs and GPUs in order to get the most from CPU-GPU heterogeneous systems. One of the main problems preventing these systems from being exploited in a device-aware manner is the CPU-GPU communication bottleneck, which often makes it impossible to produce code more efficient than the GPU-only and CPU-only counterparts. As a consequence, most heterogeneous scheduling systems treat CPUs and GPUs as homogeneous nodes, electing map-like data partitioning to employ both processing resources. We propose to study how the radical change in the connection between GPU, CPU and memory that characterizes APUs (Accelerated Processing Units) affects the architecture of a compiler, and whether it is possible to use all these computing resources in a device-aware manner. We investigate a methodology for analyzing the devices that populate heterogeneous multi-GPU systems and for classifying general-purpose algorithms, in order to perform near-optimal control-flow and data partitioning.

Cite as

Gabriele Cocco and Antonio Cisternino. Device specialization in heterogeneous multi-GPU environments. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 35-41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{cocco_et_al:OASIcs.ICCSW.2012.35,
  author =	{Cocco, Gabriele and Cisternino, Antonio},
  title =	{{Device specialization in heterogeneous multi-GPU environments}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{35--41},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.35},
  URN =		{urn:nbn:de:0030-drops-37623},
  doi =		{10.4230/OASIcs.ICCSW.2012.35},
  annote =	{Keywords: HPC APU GPU GPGPU Heterogeneous-computing Parallel-computing Task-scheduling}
}
Abstracting Continuous Nonpolynomial Dynamical Systems

Authors: William Denman


Abstract
The reachability problem, whether some unsafe state can be reached, is known to be undecidable for nonlinear dynamical systems. However, finite-state abstractions have successfully been used for safety verification. This paper presents a method for automatically abstracting nonpolynomial systems that do not have analytical or closed form solutions. The abstraction is constructed by splitting up the state-space using nonpolynomial Lyapunov functions. These functions place guarantees on the behaviour of the system without requiring the explicit calculation of trajectories. MetiTarski, an automated theorem prover for special functions (sin, cos, sqrt, exp) is used to identify possible transitions between the abstract states. The resulting finite-state system is perfectly suited for verification by a model checker.

Cite as

William Denman. Abstracting Continuous Nonpolynomial Dynamical Systems. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 42-48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{denman:OASIcs.ICCSW.2012.42,
  author =	{Denman, William},
  title =	{{Abstracting Continuous Nonpolynomial Dynamical Systems}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{42--48},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.42},
  URN =		{urn:nbn:de:0030-drops-37638},
  doi =		{10.4230/OASIcs.ICCSW.2012.42},
  annote =	{Keywords: Formal Verification, Automated Theorem Proving, Abstraction, Nonpolynomial System, MetiTarski}
}
Improving the Quality of Distributed Composite Service Applications

Authors: Dionysios Efstathiou, Peter McBurney, Noël Plouzeau, and Steffen Zschaler


Abstract
Dynamic service composition promotes the on-the-fly creation of value-added applications by combining services. Large-scale, dynamic distributed applications, like those in the pervasive computing domain, pose many obstacles to service composition, such as mobility and resource availability. In such environments, a huge number of possible composition configurations may provide the same functionality, but only some of them may exhibit the desired non-functional qualities (e.g. low battery consumption and response time) or satisfy users' preferences and constraints. The goal of a service composition optimiser is to scan the possible composition plans and detect those that are optimal in some sense (e.g. maximising availability or minimising data latency) with acceptable performance (e.g. relatively fast for the application domain). However, the majority of the proposed optimisation approaches for finding optimal composition plans examine only the Quality of Service of each participating service in isolation, without studying how the services are combined within the composition. We argue that considering multiple factors when searching for optimal composition plans, such as which services are selected to participate in the composition and how these services are coordinated, communicate and interact within it, may improve the end-to-end quality of composite applications.
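The plan-scanning idea in the abstract can be illustrated with a deliberately naive exhaustive search; the tasks, candidate services and quality figures below are hypothetical, and real optimisers replace the enumeration with smarter search.

```python
from itertools import product

# Hypothetical toy: each abstract task has several candidate services,
# each with an availability and a latency.  Scan all composition plans,
# keep those within a latency budget, and maximise end-to-end availability.
candidates = {
    "task1": [("s1a", 0.90, 100), ("s1b", 0.99, 300)],  # (name, availability, latency ms)
    "task2": [("s2a", 0.95, 200), ("s2b", 0.80, 50)],
}
LATENCY_BUDGET = 400

best_plan, best_avail = None, 0.0
for plan in product(*candidates.values()):
    latency = sum(lat for _, _, lat in plan)     # sequential chain: latencies add
    avail = 1.0
    for _, a, _ in plan:
        avail *= a                               # chain works only if every service does
    if latency <= LATENCY_BUDGET and avail > best_avail:
        best_plan, best_avail = [name for name, _, _ in plan], avail
```

The number of plans grows as the product of the candidate counts, which is why the abstract's point about scanning plans with acceptable performance matters in practice.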

Cite as

Dionysios Efstathiou, Peter McBurney, Noël Plouzeau, and Steffen Zschaler. Improving the Quality of Distributed Composite Service Applications. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 49-55, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{efstathiou_et_al:OASIcs.ICCSW.2012.49,
  author =	{Efstathiou, Dionysios and McBurney, Peter and Plouzeau, No\"{e}l and Zschaler, Steffen},
  title =	{{Improving the Quality of Distributed Composite Service Applications}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{49--55},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.49},
  URN =		{urn:nbn:de:0030-drops-37649},
  doi =		{10.4230/OASIcs.ICCSW.2012.49},
  annote =	{Keywords: Service Composition, Optimisation, Dynamism, Evolution}
}
Fine-Grained Opinion Mining as a Relation Classification Problem

Authors: Alexandru Lucian Ginsca


Abstract
The main focus of this paper is to investigate methods for opinion extraction at a more detailed level of granularity, retrieving not only the opinionated portion of text but also the target of the expressed opinion. We describe a novel approach to fine-grained opinion mining that, after an initial lexicon-based processing step, treats the problem of finding the opinion expressed towards an entity as a relation classification task. We detail a classification workflow that combines the initial lexicon-based module with a broader classification part involving two different models, one for relation classification and the other for sentiment polarity shift identification. We provide detailed descriptions of a series of classification experiments in which we use an original proximity-based bag-of-words model. We also introduce a new use of syntactic features, combined with a tree kernel, for both the relation and sentiment polarity shift classification tasks.

Cite as

Alexandru Lucian Ginsca. Fine-Grained Opinion Mining as a Relation Classification Problem. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 56-61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{ginsca:OASIcs.ICCSW.2012.56,
  author =	{Ginsca, Alexandru Lucian},
  title =	{{Fine-Grained Opinion Mining as a Relation Classification Problem}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{56--61},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.56},
  URN =		{urn:nbn:de:0030-drops-37653},
  doi =		{10.4230/OASIcs.ICCSW.2012.56},
  annote =	{Keywords: Opinion Mining, Opinion Target Identification, Syntactic Features}
}
Mechanisms for Opponent Modelling

Authors: Christos Hadjinikolis, Sanjay Modgil, Elizabeth Black, and Peter McBurney


Abstract
In various competitive game contexts, gathering information about one's opponent and relying on it to plan a strategy has been the dominant approach for numerous researchers who deal with what in game-theoretic terms is known as the best response problem. This approach is known as opponent modelling. The general idea is, given a model of one's adversary, to rely on it to simulate the possible ways in which a game may evolve, so as to then choose, out of a number of response options, the most suitable in relation to one's goals. Similarly, many approaches concerned with strategising in the context of dialogue games rely on such models for implementing and employing strategies. In most cases, though, the methodologies and formal procedures by which an opponent model may be built and updated receive little attention, as they are usually left implicit. In this paper we assume a general framework for argumentation-based persuasion dialogue, and we rely on a logical conception of arguments, based on the recent ASPIC^+ model for argumentation, to formally define a number of mechanisms by which an opponent model may be built, updated, and augmented.

Cite as

Christos Hadjinikolis, Sanjay Modgil, Elizabeth Black, and Peter McBurney. Mechanisms for Opponent Modelling. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 62-68, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{hadjinikolis_et_al:OASIcs.ICCSW.2012.62,
  author =	{Hadjinikolis, Christos and Modgil, Sanjay and Black, Elizabeth and McBurney, Peter},
  title =	{{Mechanisms for Opponent Modelling}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{62--68},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.62},
  URN =		{urn:nbn:de:0030-drops-37663},
  doi =		{10.4230/OASIcs.ICCSW.2012.62},
  annote =	{Keywords: dialogue, strategies, argumentation, opponent model}
}
4D Cardiac Volume Reconstruction from Free-Breathing 2D Real-Time Image Acquisitions using Iterative Motion Correction

Authors: Martin Jantsch, Daniel Rueckert, and Jo Hajnal


Abstract
For the diagnosis, treatment and study of various cardiac diseases directly affecting the functionality and morphology of the heart, physicians rely more and more on MR imaging techniques. MRI has good tissue contrast and can achieve high spatial and temporal resolutions. However, it requires a relatively long time to obtain enough data to reconstruct useful images. Additionally, when imaging the heart, the occurring motions, breathing and heart beat, have to be taken into account. While the cardiac motion still has to be observed correctly to assess functionality, the respiratory motion has to be removed to avoid serious motion artefacts. We present initial results for a reconstruction pipeline that takes multiple stacks of 2D slices, calculates the deformations for both cardiac and respiratory motion, and reconstructs a coherent 4D volume of the beating heart. The 2D slices are acquired during free breathing over the whole respiratory cycle, using a fast real-time technique. For motion estimation, two different transformation models were used: a cyclic 4D B-spline free-form deformation model for the cardiac motion and a 1D B-spline affine model for the respiratory motion. Both transformations and the common reference frame needed for the registration are optimized in an interleaved, iterative scheme.

Cite as

Martin Jantsch, Daniel Rueckert, and Jo Hajnal. 4D Cardiac Volume Reconstruction from Free-Breathing 2D Real-Time Image Acquisitions using Iterative Motion Correction. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 69-74, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{jantsch_et_al:OASIcs.ICCSW.2012.69,
  author =	{Jantsch, Martin and Rueckert, Daniel and Hajnal, Jo},
  title =	{{4D Cardiac Volume Reconstruction from Free-Breathing 2D Real-Time Image Acquisitions using Iterative Motion Correction}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{69--74},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.69},
  URN =		{urn:nbn:de:0030-drops-37676},
  doi =		{10.4230/OASIcs.ICCSW.2012.69},
  annote =	{Keywords: MRI, Cardiac, Registration}
}
Collecting battery data with Open Battery

Authors: Gareth L. Jones and Peter G. Harrison


Abstract
In this paper we present Open Battery, a tool for collecting data on mobile phone battery usage, describe the data we have collected so far and make some observations. We then introduce the fluid queue model which we hope may prove a useful tool in future work to describe mobile phone battery traces.

Cite as

Gareth L. Jones and Peter G. Harrison. Collecting battery data with Open Battery. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 75-80, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{jones_et_al:OASIcs.ICCSW.2012.75,
  author =	{Jones, Gareth L. and Harrison, Peter G.},
  title =	{{Collecting battery data with Open Battery}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{75--80},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.75},
  URN =		{urn:nbn:de:0030-drops-37683},
  doi =		{10.4230/OASIcs.ICCSW.2012.75},
  annote =	{Keywords: battery model, battery data}
}
Informing Coalition Structure Generation in Multi-Agent Systems Through Emotion Modelling

Authors: Martyn Lloyd-Kelly and Luke Riley


Abstract
We propose a hybrid coalition formation method for multi-agent systems that combines a rational mechanism and an emotionally-inspired mechanism to reduce the associated computational cost. To initialise coalition formation, the rational mechanism is used and in subsequent iterations, the emotional mechanism (that forms coalitions resulting from emotional reactions to aspects of interactions between agents) is used. The emotions of anger and gratitude are modelled and used as a basis to model trust which is in turn used to restrict the coalition state-space. We offer some discussion as to how this hybrid method offers an improvement over using a method that only considers payoff maximisation and we propose some direction for future work.

Cite as

Martyn Lloyd-Kelly and Luke Riley. Informing Coalition Structure Generation in Multi-Agent Systems Through Emotion Modelling. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 81-87, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{lloydkelly_et_al:OASIcs.ICCSW.2012.81,
  author =	{Lloyd-Kelly, Martyn and Riley, Luke},
  title =	{{Informing Coalition Structure Generation in Multi-Agent Systems Through Emotion Modelling}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{81--87},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.81},
  URN =		{urn:nbn:de:0030-drops-37699},
  doi =		{10.4230/OASIcs.ICCSW.2012.81},
  annote =	{Keywords: Multi-Agent Systems, Coalition Formation, Emotion}
}
Bounded Model Checking for Linear Time Temporal-Epistemic Logic

Authors: Artur Meski, Wojciech Penczek, and Maciej Szreter


Abstract
We present a novel approach to the verification of multi-agent systems using bounded model checking for specifications in LTLK, a linear time temporal-epistemic logic. The method is based on binary decision diagrams rather than the standard conversion to Boolean satisfiability. We apply the approach to two classes of interpreted systems: the standard, synchronous semantics and the interleaved semantics. We provide a symbolic algorithm for the verification of LTLK over models of multi-agent systems and evaluate its implementation against MCK, a competing model checker for knowledge. Our evaluation indicates that the interleaved semantics can often be preferable in the verification of LTLK.

Cite as

Artur Meski, Wojciech Penczek, and Maciej Szreter. Bounded Model Checking for Linear Time Temporal-Epistemic Logic. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 88-94, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{meski_et_al:OASIcs.ICCSW.2012.88,
  author =	{Meski, Artur and Penczek, Wojciech and Szreter, Maciej},
  title =	{{Bounded Model Checking for Linear Time Temporal-Epistemic Logic}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{88--94},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.88},
  URN =		{urn:nbn:de:0030-drops-37705},
  doi =		{10.4230/OASIcs.ICCSW.2012.88},
  annote =	{Keywords: model checking, multi-agent systems, temporal-epistemic logic, verification, interpreted systems}
}
Document
A compositional model to characterize software and hardware from their resource usage

Authors: Davide Morelli and Antonio Cisternino


Abstract
Since the introduction of laptops and mobile devices, there has been a strong research focus on the energy efficiency of hardware. Many papers, from both academia and industrial research labs, focus on methods and ideas to lower power consumption in order to lengthen the battery life of portable device components. Much less effort has been spent on defining the responsibility of software in the overall computational system's energy consumption. Some attempts have been made to describe the energy behaviour of software, but none of them abstract from the physical machine where the measurements were taken. In our opinion this is a strong drawback, because the results cannot be generalized. We propose a measuring method and a set of algebraic tools that can be applied to resource usage measurements. These tools are expressive and offer insights into how the hardware consumes energy (or other resources), but are equally able to describe how efficiently the software exploits hardware characteristics. The method is based on the idea of decomposing arbitrary programs into linear combinations of benchmarks from a test-bed, without the need to analyse a program's source code: a black-box approach that measures only resource usage.
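The decomposition idea in the abstract, expressing a program's measured resource usage as a linear combination of benchmark measurements, can be illustrated with a toy two-benchmark, two-counter system. The counter names and numbers below are invented for illustration; this is not the authors' method or test-bed.

```python
# Illustrative sketch only: solve a*b1 + c*b2 = target for the mixing
# weights (a, c) by Cramer's rule. Vectors are (energy J, cache misses M);
# both the counters and the values are hypothetical.
bench_cpu = (10.0, 2.0)   # a CPU-bound benchmark's measurements
bench_mem = (4.0, 9.0)    # a memory-bound benchmark's measurements
program   = (18.0, 13.0)  # black-box measurement of the program under study

def decompose(b1, b2, target):
    """Return weights (a, c) such that a*b1 + c*b2 == target."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    a = (target[0] * b2[1] - b2[0] * target[1]) / det
    c = (b1[0] * target[1] - target[0] * b1[1]) / det
    return a, c

a, c = decompose(bench_cpu, bench_mem, program)
```

With more benchmarks than counters the exact solve would become a least-squares fit, but the two-by-two case already shows the point: the weights characterise the program relative to the test-bed, not to one physical machine.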

Cite as

Davide Morelli and Antonio Cisternino. A compositional model to characterize software and hardware from their resource usage. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 95-101, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{morelli_et_al:OASIcs.ICCSW.2012.95,
  author =	{Morelli, Davide and Cisternino, Antonio},
  title =	{{A compositional model to characterize software and hardware from their resource usage}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{95--101},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.95},
  URN =		{urn:nbn:de:0030-drops-37714},
  doi =		{10.4230/OASIcs.ICCSW.2012.95},
  annote =	{Keywords: Performance, Metrics, Energy consumption}
}
Document
Integration of Temporal Abstraction and Dynamic Bayesian Networks in Clinical Systems. A preliminary approach

Authors: Kalia Orphanou, Elpida Keravnou, and Joseph Moutiris


Abstract
Abstraction of temporal data (TA) aims to abstract time-points into higher-level interval concepts and to detect significant trends in both low-level data and abstract concepts. TA methods are used for summarizing and interpreting clinical data. Dynamic Bayesian Networks (DBNs) are temporal probabilistic graphical models that can represent knowledge about uncertain temporal relationships between events and state changes over time. In clinical systems, they were introduced to encode and use the domain knowledge acquired from human experts to perform decision support. The hypothesis that this study plans to investigate is whether temporal abstraction methods can be effectively integrated with DBNs in the context of medical decision-support systems. A preliminary approach is presented in which a DBN model is constructed for prognosis of the risk of coronary artery disease (CAD), based on its risk factors and using as a test bed a dataset collected by monitoring patients with a positive history of cardiovascular disease. The technical objectives of this study are to examine how DBNs will represent the abstracted data in order to construct the prognostic model, and whether the rules retrieved from the model can be used to generate more complex abstractions.

Cite as

Kalia Orphanou, Elpida Keravnou, and Joseph Moutiris. Integration of Temporal Abstraction and Dynamic Bayesian Networks in Clinical Systems. A preliminary approach. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 102-108, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{orphanou_et_al:OASIcs.ICCSW.2012.102,
  author =	{Orphanou, Kalia and Keravnou, Elpida and Moutiris, Joseph},
  title =	{{Integration of Temporal Abstraction and Dynamic Bayesian Networks in Clinical Systems. A preliminary approach}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{102--108},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.102},
  URN =		{urn:nbn:de:0030-drops-37727},
  doi =		{10.4230/OASIcs.ICCSW.2012.102},
  annote =	{Keywords: temporal abstraction, medical prognostic models, dynamic Bayesian network, coronary artery disease}
}
Document
Get started imminently: Using tutorials to accelerate learning in automated static analysis

Authors: Jan-Peter Ostberg and Stefan Wagner


Abstract
Static analysis can be a valuable quality assurance technique, as it can find problems by analysing the source code of a system without executing it. Getting used to a static analysis tool, however, can easily take several hours or even days. In particular, understanding the warnings issued by the tool and rooting out the false positives is time-consuming. This lowers the benefits of static analysis and demotivates developers from using it. Games solve this problem by offering a tutorial. Such tutorials are integrated into the setting of the game and teach its basic mechanics; often it is possible to repeat them or to pick topics of interest. We transfer this pattern to static analysis, lowering the initial barrier to using it and spreading an understanding of software quality to more people. In this paper we propose a research strategy starting with a piloting period in which we will gather information about the questions static analysis users have, as well as hone our answers to these questions. These results will be integrated into the prototype. We will then evaluate our work by comparing the fix times of users of the original tool against those of users of our tool.

Cite as

Jan-Peter Ostberg and Stefan Wagner. Get started imminently: Using tutorials to accelerate learning in automated static analysis. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 109-115, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{ostberg_et_al:OASIcs.ICCSW.2012.109,
  author =	{Ostberg, Jan-Peter and Wagner, Stefan},
  title =	{{Get started imminently: Using tutorials to accelerate learning in automated static analysis}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{109--115},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.109},
  URN =		{urn:nbn:de:0030-drops-37739},
  doi =		{10.4230/OASIcs.ICCSW.2012.109},
  annote =	{Keywords: static analysis, motivation, usability, empirical research, gamification}
}
Document
A Quantitative Study of Social Organisation in Open Source Software Communities

Authors: Marcelo Serrano Zanetti, Emre Sarigöl, Ingo Scholtes, Claudio Juan Tessone, and Frank Schweitzer


Abstract
The success of open source projects crucially depends on the voluntary contributions of a sufficiently large community of users. Apart from the mere size of the community, interesting questions arise when looking at the evolution of structural features of collaborations between community members. In this article, we discuss several network analytic proxies that can be used to quantify different aspects of the social organisation in social collaboration networks. We particularly focus on measures that can be related to the cohesiveness of the communities, the distribution of responsibilities and the resilience against turnover of community members. We present a comparative analysis on a large-scale dataset that covers the full history of collaborations between users of 14 major open source software communities. Our analysis covers both aggregate and time-evolving measures and highlights differences in the social organisation across communities. We argue that our results are a promising step towards the definition of suitable, potentially multi-dimensional, resilience and risk indicators for open source software communities.

Cite as

Marcelo Serrano Zanetti, Emre Sarigöl, Ingo Scholtes, Claudio Juan Tessone, and Frank Schweitzer. A Quantitative Study of Social Organisation in Open Source Software Communities. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 116-122, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{serranozanetti_et_al:OASIcs.ICCSW.2012.116,
  author =	{Serrano Zanetti, Marcelo and Sarig\"{o}l, Emre and Scholtes, Ingo and Tessone, Claudio Juan and Schweitzer, Frank},
  title =	{{A Quantitative Study of Social Organisation in Open Source Software Communities}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{116--122},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.116},
  URN =		{urn:nbn:de:0030-drops-37748},
  doi =		{10.4230/OASIcs.ICCSW.2012.116},
  annote =	{Keywords: open source software, mining software repositories, social networks}
}
Document
Apply the We!Design Methodology in E-learning 2.0 System Design: A Pilot Study

Authors: Lei Shi, Dana Al Qudah, and Alexandra I. Cristea


Abstract
With the emergence of Web 2.0, the methodologies and technologies of E-learning have developed into a new era, E-learning 2.0, which emphasises social learning and the use of social interaction tools. Students are the main end-users of E-learning 2.0 systems, so it is essential to take their opinions into consideration during the design process of such systems. The We!Design participatory design methodology is proposed for incorporating undergraduate students in the development of educational systems. This pilot study aims to investigate how the We!Design methodology works and what its results might suggest, as well as to gather initial preferences and to improve the quality and efficiency of larger-scale studies in the future.

Cite as

Lei Shi, Dana Al Qudah, and Alexandra I. Cristea. Apply the We!Design Methodology in E-learning 2.0 System Design: A Pilot Study. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 123-128, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{shi_et_al:OASIcs.ICCSW.2012.123,
  author =	{Shi, Lei and Al Qudah, Dana and Cristea, Alexandra I.},
  title =	{{Apply the We!Design Methodology in E-learning 2.0 System Design: A Pilot Study}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{123--128},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.123},
  URN =		{urn:nbn:de:0030-drops-37750},
  doi =		{10.4230/OASIcs.ICCSW.2012.123},
  annote =	{Keywords: Participatory design, Requirement analysis, E-learning 2.0, Web 2.0}
}
Document
An Implementation Model of a Declarative Framework for Automated Negotiation

Authors: Laura Surcel


Abstract
The subject of automated negotiation has received a lot of attention in the Multi-Agent Systems (MAS) research community. Most work in this field focuses on the auction design space, on its parametrization and on mechanisms for specific types of auctions. One problem that has recently been addressed is the development of a generic negotiation protocol (GNP) capable of governing the interaction between agents that participate in any type of auction. Though much has been said on this matter, the current results stop at the XML representation of specific negotiation mechanisms. In this paper we propose a declarative approach to specifying a generic auction protocol, using Belief-Desire-Intention (BDI) agents and the Jason programming language to represent the entities that communicate in an auction. To validate the claim of generality of the proposed approach, we have used the GNP to model two negotiation mechanisms: one for the English auction and one for the Dutch auction.
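To illustrate the claim that one generic protocol can host both mechanisms, here is a hypothetical Python sketch (not the paper's Jason/BDI implementation) in which the English and Dutch auctions differ only in the direction the price moves; function names and the price-step parameters are our own assumptions.

```python
# Toy models of the two mechanisms the paper instantiates via its GNP.
# Bidders are reduced to private valuations; all names are illustrative.

def english_auction(valuations, start=0, increment=1):
    """Ascending auction: the price rises until only one bidder remains willing."""
    price = start
    while sum(v > price + increment for v in valuations.values()) > 1:
        price += increment
    winner = max(valuations, key=valuations.get)
    return winner, price + increment   # roughly the second-highest valuation

def dutch_auction(valuations, start=100, decrement=1):
    """Descending auction: the price falls until some bidder accepts."""
    price = start
    while all(v < price for v in valuations.values()):
        price -= decrement
    winner = max(valuations, key=valuations.get)
    return winner, price               # the highest valuation

bids = {"x": 10, "y": 7, "z": 3}
print(english_auction(bids))  # → ('x', 7)
print(dutch_auction(bids))    # → ('x', 10)
```

The shared shape of the two functions (a price loop plus an acceptance condition) is the intuition behind a single generic protocol parametrised by the mechanism.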

Cite as

Laura Surcel. An Implementation Model of a Declarative Framework for Automated Negotiation. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 129-134, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{surcel:OASIcs.ICCSW.2012.129,
  author =	{Surcel, Laura},
  title =	{{An Implementation Model of a Declarative Framework for Automated Negotiation}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{129--134},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.129},
  URN =		{urn:nbn:de:0030-drops-37769},
  doi =		{10.4230/OASIcs.ICCSW.2012.129},
  annote =	{Keywords: Auctions, automated negotiations, multi-agent systems, Jason, generic negotiation protocol}
}
Document
Blurring the Computation-Communication Divide: Extraneous Memory Accesses and their Effects on MPI Intranode Communications

Authors: Wilson M. Tan and Stephen A. Jarvis


Abstract
Modern MPI simulator frameworks assume the existence of a Computation-Communication Divide: thus, they model and simulate the computation and communication sections of an MPI program separately. The assumption is sound for MPI processes that are situated in different nodes and communicate through a network medium such as Ethernet or InfiniBand. For processes within a node, however, the validity of the assumption is limited, since the processes communicate using shared memory, which also figures in computation by storing the application and its associated data structures. In this work, the limits of this assumption's validity were tested, and it is shown that Extraneous Memory Accesses (EMAs) by a compute section can significantly slow down the communication operations following it. Two general observations were made in the course of this work: first, more EMAs cause greater slowdown; and second, EMAs coming from the compute section of the process containing the MPI_Recv are more detrimental to communication performance than those coming from the process containing the MPI_Send.

Cite as

Wilson M. Tan and Stephen A. Jarvis. Blurring the Computation-Communication Divide: Extraneous Memory Accesses and their Effects on MPI Intranode Communications. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 135-141, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{tan_et_al:OASIcs.ICCSW.2012.135,
  author =	{Tan, Wilson M. and Jarvis, Stephen A.},
  title =	{{Blurring the Computation-Communication Divide: Extraneous Memory Accesses and their Effects on MPI Intranode Communications}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{135--141},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.135},
  URN =		{urn:nbn:de:0030-drops-37771},
  doi =		{10.4230/OASIcs.ICCSW.2012.135},
  annote =	{Keywords: High performance computing, Message passing, Multicore processing, Computer simulation, Computer networks, Parallel programming, Parallel processing}
}
Document
Search-Based Ambiguity Detection in Context-Free Grammars

Authors: Naveneetha Vasudevan and Laurence Tratt


Abstract
Context-Free Grammars (CFGs) can be ambiguous, allowing inputs to be parsed in more than one way, something that is undesirable for uses such as programming languages. However, statically detecting ambiguity is undecidable. Though approximation techniques have had some success in uncovering ambiguity, they can struggle when the ambiguous subset of the grammar is large. In this paper, we describe a simple search-based technique which appears to have a better success rate in such cases.
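A brute-force version of the underlying idea, searching the grammar's sentences and flagging any that admit more than one parse, can be shown on a classically ambiguous grammar. This toy sketch is our own illustration and far simpler than the paper's technique: it enumerates bounded leftmost derivations, and a sentence produced by more than one derivation is ambiguous.

```python
# Toy ambiguity search, not the paper's algorithm. For a CFG, distinct
# leftmost derivations correspond to distinct parse trees, so counting
# derivations per sentence detects ambiguity within the depth bound.
from collections import Counter

# Classic ambiguous grammar: E -> E + E | a  ("a+a+a" has two parse trees)
grammar = {"E": [("E", "+", "E"), ("a",)]}

def sentences(symbols, depth):
    """Yield one string per leftmost derivation, bounded by `depth` expansions."""
    if all(s not in grammar for s in symbols):
        yield "".join(symbols)
        return
    if depth == 0:
        return
    i = next(j for j, s in enumerate(symbols) if s in grammar)  # leftmost nonterminal
    for rhs in grammar[symbols[i]]:
        yield from sentences(symbols[:i] + list(rhs) + symbols[i + 1:], depth - 1)

counts = Counter(sentences(["E"], 6))
ambiguous = {s for s, n in counts.items() if n > 1}
print(ambiguous)  # → {'a+a+a'}
```

Exhaustive enumeration explodes on real grammars, which is precisely why the paper replaces it with a guided search.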

Cite as

Naveneetha Vasudevan and Laurence Tratt. Search-Based Ambiguity Detection in Context-Free Grammars. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 142-148, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{vasudevan_et_al:OASIcs.ICCSW.2012.142,
  author =	{Vasudevan, Naveneetha and Tratt, Laurence},
  title =	{{Search-Based Ambiguity Detection in Context-Free Grammars}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{142--148},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.142},
  URN =		{urn:nbn:de:0030-drops-37788},
  doi =		{10.4230/OASIcs.ICCSW.2012.142},
  annote =	{Keywords: Ambiguity, Parsing}
}
Document
Introduction to Team Disruption Mechanisms

Authors: Andrada Voinitchi, Elizabeth Black, and Michael Luck


Abstract
This paper discusses how teams can be disrupted. More specifically, it discusses the steps that need to be taken in order to fully understand team disruption and to design efficient mechanisms to disrupt teams. To answer the high-level question of how to disrupt teams, a few other questions need to be tackled first: what is a disrupted team? What are the crucial elements that make a collection of agents function as a team? Can norms, incentives or other mechanisms be used to disrupt these elements? How would we evaluate their efficiency? We first present the ideas of team and team disruption and motivate the need for these concepts to be properly defined. Second, we introduce an idea of a team-disruption mechanism that we will investigate further. Lastly, we provide a long-term perspective and identify contributions that our research will make to the multi-agent systems field.

Cite as

Andrada Voinitchi, Elizabeth Black, and Michael Luck. Introduction to Team Disruption Mechanisms. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 149-155, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{voinitchi_et_al:OASIcs.ICCSW.2012.149,
  author =	{Voinitchi, Andrada and Black, Elizabeth and Luck, Michael},
  title =	{{Introduction to Team Disruption Mechanisms}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{149--155},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.149},
  URN =		{urn:nbn:de:0030-drops-37792},
  doi =		{10.4230/OASIcs.ICCSW.2012.149},
  annote =	{Keywords: Team disruption, multi-agent systems, organisations, teams, goals}
}
Document
Self-Learning Genetic Algorithm For Constrains Satisfaction Problems

Authors: Hu Xu and Karen Petrie


Abstract
The efficient choice of a preprocessing level can reduce the search time of a constraint solver to find a solution to a constraint problem. Currently, the parameters in a constraint solver are often picked by hand by experts in the field. Genetic algorithms are a robust machine learning technique for problem optimization, such as function optimization. A self-learning genetic algorithm is a strategy that suggests or predicts the suitable preprocessing method for large-scale problems by learning from the same class of small-scale problems. In this paper, self-learning genetic algorithms are used to create an automatic preprocessing selection mechanism for solving various constraint problems. The experiments in the paper are a proof of concept for the idea of combining the self-learning ability of genetic algorithms with constraint programming to aid in the parameter selection issue.
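As a rough illustration of the general approach (not the authors' implementation), the following minimal genetic algorithm evolves a single integer "preprocessing level" against a stand-in fitness function; in the paper's setting, fitness would instead come from measured solve times on small problem instances, and all parameters below are our own assumptions.

```python
# Minimal GA sketch for picking one solver parameter. The quadratic fitness
# with an assumed optimum at level 7 is a placeholder for real solve-time
# measurements; population size, rates, and bounds are illustrative.
import random

def fitness(level):
    return -(level - 7) ** 2   # higher is better; stands in for -solve_time

def evolve(pop_size=20, generations=30, lo=0, hi=31, seed=42):
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = (a + b) // 2                     # arithmetic crossover
            if rng.random() < 0.2:                   # occasional mutation
                child = min(hi, max(lo, child + rng.choice([-2, -1, 1, 2])))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

The "self-learning" step in the paper amounts to running such an optimisation on small instances of a problem class and reusing the winning parameter on large instances of the same class.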

Cite as

Hu Xu and Karen Petrie. Self-Learning Genetic Algorithm For Constrains Satisfaction Problems. In 2012 Imperial College Computing Student Workshop. Open Access Series in Informatics (OASIcs), Volume 28, pp. 156-162, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{xu_et_al:OASIcs.ICCSW.2012.156,
  author =	{Xu, Hu and Petrie, Karen},
  title =	{{Self-Learning Genetic Algorithm For Constrains Satisfaction Problems}},
  booktitle =	{2012 Imperial College Computing Student Workshop},
  pages =	{156--162},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-48-4},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{28},
  editor =	{Jones, Andrew V.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2012.156},
  URN =		{urn:nbn:de:0030-drops-37808},
  doi =		{10.4230/OASIcs.ICCSW.2012.156},
  annote =	{Keywords: Self-learning Genetic Algorithm, Constraint Programming, Parameter Tuning}
}
