Leibniz Transactions on Embedded Systems, Volume 8, Issue 1

Special Issue

Special Issue on Embedded Systems for Computer Vision

Editors

Samarjit Chakraborty
  • The University of North Carolina (UNC) at Chapel Hill, US
Qing Rao
  • Momenta Europe GmbH, Stuttgart, Germany

Documents
Document
Complete Issue
LITES, Volume 8, Issue 1

Cite as

LITES, Volume 8, Issue 1: Special Issue on Embedded Systems for Computer Vision, pp. 1-91, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{LITES-v008-i001,
  title =	{{LITES, Volume 8, Issue 1}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{1--91},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES-v008-i001},
  doi =		{10.4230/LITES-v008-i001},
  annote =	{Keywords: LITES, Volume 8, Issue 1}
}
Document
Introduction
Introduction to the Special Issue on Embedded Systems for Computer Vision

Authors: Samarjit Chakraborty and Qing Rao


Abstract
We provide a broad overview of some of the current research directions at the intersection of embedded systems and computer vision, in addition to introducing the papers appearing in this special issue. Work at this intersection is steadily growing in importance, especially in the context of autonomous and cyber-physical systems design. Vision-based perception is an almost mandatory component of any autonomous system, but it also raises myriad challenges, such as how to efficiently implement vision-processing algorithms on resource-constrained embedded architectures and how to verify the functional and timing correctness of these algorithms. Computer vision is also crucial for implementing various kinds of smart functionality, such as security via facial recognition, or the monitoring of events and traffic patterns. Some of these applications are reviewed in this introductory article. The remaining articles in this special issue explore a few of them in greater depth.

Cite as

Samarjit Chakraborty and Qing Rao. Introduction to the Special Issue on Embedded Systems for Computer Vision. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 0:i-0:viii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{chakraborty_et_al:LITES.8.1.0,
  author =	{Chakraborty, Samarjit and Rao, Qing},
  title =	{{Introduction to the Special Issue on Embedded Systems for Computer Vision}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{00:1--00:8},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.0},
  doi =		{10.4230/LITES.8.1.0},
  annote =	{Keywords: Embedded systems, Computer vision, Cyber-physical systems, Computer architecture}
}
Document
Susceptibility to Image Resolution in Face Recognition and Training Strategies to Enhance Robustness

Authors: Martin Knoche, Stefan Hörmann, and Gerhard Rigoll


Abstract
Many face recognition approaches expect the input images to have similar resolutions. In real-world applications, however, the image resolution varies due to different image capture mechanisms or sources, affecting the performance of face recognition systems. This work first analyzes the susceptibility of modern face recognition to image resolution. Face verification accuracy on the popular LFW dataset drops from 99.23% to almost 55% when the dimensions of both images are reduced to an arguably very poor resolution. With cross-resolution image pairs (one high-resolution and one low-resolution image), face verification accuracy is even worse. This behavior is investigated in more depth by analyzing the feature distances used for face verification. To increase robustness, we propose two training strategies applied to a state-of-the-art face recognition model: 1) training with 50% low-resolution images within each batch, and 2) using a cosine distance loss between high- and low-resolution features in a siamese network structure. Both methods significantly boost face verification accuracy when training and testing image resolutions match. Training a network with several resolutions simultaneously, instead of adding only one specific low resolution, yielded improvements across all resolutions and made a single model applicable to unknown resolutions. However, models trained for one particular low resolution perform better when exactly that resolution is used for testing. We improve face verification accuracy from 96.86% to 97.72% on the popular LFW database with image dimensions uniformly distributed between 112 × 112 px and 5 × 5 px. Our approaches improve face verification accuracy even more, from 77.56% to 87.17%, for distributions focusing on lower image resolutions. Lastly, we propose specific image dimension sets focusing on high, mid, and low resolutions for five well-known datasets to benchmark face verification accuracy in cross-resolution scenarios.
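
To make the second training strategy concrete, the following is a minimal sketch, assuming PyTorch, of a siamese setup in which the same backbone embeds a face image and a downscaled copy of it, and a cosine distance term pulls the two embeddings together. The toy backbone, batch shapes, and resolutions are illustrative assumptions, not the authors' architecture; in the paper, this term complements the identity loss of a state-of-the-art recognition model.

import torch
import torch.nn as nn
import torch.nn.functional as F

def cosine_distance_loss(feat_hr, feat_lr):
    # 1 - cosine similarity between paired embeddings, averaged over the batch
    return (1.0 - F.cosine_similarity(feat_hr, feat_lr, dim=1)).mean()

# Stand-in for a face recognition CNN producing 128-d embeddings.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))

imgs_hr = torch.randn(8, 3, 112, 112)              # batch of high-resolution faces
imgs_lr = F.interpolate(imgs_hr, size=(14, 14))    # simulate a low-resolution capture
imgs_lr = F.interpolate(imgs_lr, size=(112, 112))  # upscale back to the input size

feat_hr, feat_lr = backbone(imgs_hr), backbone(imgs_lr)
loss = cosine_distance_loss(feat_hr, feat_lr)      # added to the identity loss during training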

Cite as

Martin Knoche, Stefan Hörmann, and Gerhard Rigoll. Susceptibility to Image Resolution in Face Recognition and Training Strategies to Enhance Robustness. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 01:1-01:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{knoche_et_al:LITES.8.1.1,
  author =	{Knoche, Martin and H\"{o}rmann, Stefan and Rigoll, Gerhard},
  title =	{{Susceptibility to Image Resolution in Face Recognition and Training Strategies to Enhance Robustness}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{01:1--01:20},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.1},
  doi =		{10.4230/LITES.8.1.1},
  annote =	{Keywords: recognition, resolution, cross, face, identification}
}
Document
Micro- and Macroscopic Road Traffic Analysis using Drone Image Data

Authors: Friedrich Kruber, Eduardo Sánchez Morales, Robin Egolf, Jonas Wurst, Samarjit Chakraborty, and Michael Botsch


Abstract
Current developments in drone technology, together with machine-learning-based image processing, open up new possibilities for various applications, and the market volume is therefore expected to grow rapidly over the coming years. The goal of this paper is to demonstrate the capabilities and limitations of drone-based image data processing for the purpose of road traffic analysis. In the first part, a method for generating microscopic traffic data is proposed; more precisely, the states of vehicles and the resulting trajectories are estimated. The method is validated through experiments with reference sensors and proves to achieve precise vehicle state estimates. It is also shown how the computational effort can be reduced by incorporating the tracking information into a neural network. A discussion of current limitations supplements the findings. By collecting a large number of vehicle trajectories, macroscopic statistics such as traffic flow and density can be obtained from the data. In the second part, a publicly available drone-based data set is analyzed to evaluate its suitability for macroscopic traffic modeling. The results show that the method is well suited for gaining detailed information about macroscopic statistics, such as flow-dependent time headways or lane-change occurrences. In conclusion, this paper presents methods to exploit the remarkable opportunities of drone-based image processing for joint macro- and microscopic traffic analysis.
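
As an illustration of how macroscopic statistics can be derived from the extracted trajectories, the self-contained sketch below computes traffic flow and density over a space-time region using Edie's generalized definitions: flow is the total distance travelled divided by the area of the region, and density is the total time spent divided by the same area. This is a standard formulation rather than necessarily the exact estimator used in the paper, and the input format is an assumption for illustration.

def edie_flow_density(trajectories, road_length_m, duration_s):
    """trajectories: one list of (t_s, x_m) samples per vehicle, restricted
    to the observed road segment and time window."""
    total_distance = 0.0  # vehicle-metres travelled inside the region
    total_time = 0.0      # vehicle-seconds spent inside the region
    for traj in trajectories:
        for (t0, x0), (t1, x1) in zip(traj, traj[1:]):
            total_distance += abs(x1 - x0)
        total_time += traj[-1][0] - traj[0][0]
    area = road_length_m * duration_s   # area of the space-time region
    flow = total_distance / area        # veh/s; multiply by 3600 for veh/h
    density = total_time / area         # veh/m; multiply by 1000 for veh/km
    return flow, density

# Example: two vehicles observed on a 200 m segment over a 60 s window.
# flow, density = edie_flow_density(
#     [[(0.0, 0.0), (10.0, 150.0)], [(5.0, 0.0), (20.0, 200.0)]], 200.0, 60.0)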

Cite as

Friedrich Kruber, Eduardo Sánchez Morales, Robin Egolf, Jonas Wurst, Samarjit Chakraborty, and Michael Botsch. Micro- and Macroscopic Road Traffic Analysis using Drone Image Data. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 02:1-02:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{kruber_et_al:LITES.8.1.2,
  author =	{Kruber, Friedrich and S\'{a}nchez Morales, Eduardo and Egolf, Robin and Wurst, Jonas and Chakraborty, Samarjit and Botsch, Michael},
  title =	{{Micro- and Macroscopic Road Traffic Analysis using Drone Image Data}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{02:1--02:27},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.2},
  doi =		{10.4230/LITES.8.1.2},
  annote =	{Keywords: traffic data analysis, trajectory data, drone image data}
}
Document
HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology

Authors: Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele


Abstract
Convolutional neural networks (CNNs) have achieved unprecedented accuracy on many computer vision problems in the recent past. On power- and compute-constrained embedded platforms, however, deploying modern CNNs can present many challenges. Most CNN architectures do not run in real time due to the high number of computational operations involved in the inference phase. This emphasizes the role of CNN optimization techniques in early design space exploration. To estimate their efficacy in satisfying the target constraints, existing techniques are either hardware (HW) agnostic, pseudo-HW-aware by considering parameter and operation counts, or HW-aware through inflexible hardware-in-the-loop (HIL) setups. In this work, we introduce HW-Flow, a framework for optimizing and exploring CNN models based on three levels of hardware abstraction: Coarse, Mid, and Fine. Through these levels, CNN design and optimization can be iteratively refined towards efficient execution on the target hardware platform. We present HW-Flow in the context of CNN pruning by augmenting a reinforcement learning agent with key metrics to understand the influence of its pruning actions on the inference hardware. With a 2× reduction in energy and latency, we prune ResNet56, ResNet50, and DeepLabv3 with minimal accuracy degradation on the CIFAR-10, ImageNet, and CityScapes datasets, respectively.
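
To convey the idea behind hardware-aware pruning exploration, the sketch below scores candidate per-layer pruning ratios against latency and energy estimates from an abstract hardware model and rejects candidates that exceed the budgets. HW-Flow drives this search with a reinforcement learning agent and calibrated abstraction-level hardware models; here a simple random search and a toy linear cost model stand in, and all names and numbers are illustrative assumptions rather than the framework's API.

import random

def estimate_hw(ratios, base_latency_ms=40.0, base_energy_mj=25.0):
    # Toy hardware model: cost shrinks roughly with the fraction of weights kept.
    kept = sum(1.0 - r for r in ratios) / len(ratios)
    return base_latency_ms * kept, base_energy_mj * kept

def proxy_accuracy(ratios, base_acc=0.93):
    # Toy sensitivity model: accuracy degrades as more weights are pruned.
    return base_acc - 0.1 * (sum(ratios) / len(ratios)) ** 2

def prune_search(num_layers=56, lat_budget_ms=22.0, en_budget_mj=14.0, trials=1000):
    best_ratios, best_acc = None, -1.0
    for _ in range(trials):
        ratios = [random.uniform(0.0, 0.9) for _ in range(num_layers)]
        lat, en = estimate_hw(ratios)
        if lat > lat_budget_ms or en > en_budget_mj:
            continue  # infeasible on the modeled hardware; discard
        acc = proxy_accuracy(ratios)
        if acc > best_acc:
            best_ratios, best_acc = ratios, acc
    return best_ratios, best_acc  # best feasible pruning plan found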

Cite as

Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele. HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 03:1-03:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{vemparala_et_al:LITES.8.1.3,
  author =	{Vemparala, Manoj-Rohit and Fasfous, Nael and Frickenstein, Alexander and Valpreda, Emanuele and Camalleri, Manfredi and Zhao, Qi and Unger, Christian and Nagaraja, Naveen-Shankar and Martina, Maurizio and Stechele, Walter},
  title =	{{HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{03:1--03:30},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.3},
  doi =		{10.4230/LITES.8.1.3},
  annote =	{Keywords: Convolutional Neural Networks, Optimization, Hardware Modeling, Pruning}
}
