11.1 Special Day Session on Designing Autonomous Systems: Smart Vision Systems


Date: Thursday 22 March 2018
Time: 14:00 - 15:30
Location / Room: Saal 2

Chair:
Bernhard Rinner, Alpen-Adria-Universität Klagenfurt, AT

Smart vision systems that capture data in both private and public environments are now ubiquitous, with applications in security, disaster response, robotics and smart environments, among others. Processing this data manually is an immensely tedious, and for some applications infeasible, task; an enhanced level of automation and self-awareness in the overall system is therefore key to overcoming the design challenges. This special session addresses design aspects of smart vision systems realized at different levels: the image sensor, the camera node and the system level.

Time  Label  Presentation Title / Authors
14:00  11.1.1  THE CAMEL APPROACH TO STACKED SENSOR SMART CAMERAS
Speaker:
Marilyn Wolf, Georgia Institute of Technology, US
Authors:
Saibal Mukhopadhyay1, Marilyn Wolf1 and Evan Gebhardt2
1Georgia Institute of Technology, US; 2School of ECE, Georgia Institute of Technology, US
Abstract
Stacked image sensor systems combine an image sensor, memory, and processors using 3D technology. Stacking camera components that have traditionally been packaged separately provides several benefits: very high bandwidth out of the image sensor, allowing for higher frame rates; very low latency, providing opportunities for image processing and computer vision algorithms which can adapt at very high rates; and lower power consumption. This paper will describe the characteristics of stacked image sensor systems and novel algorithmic and systems concepts that are made possible by these stacked sensors.
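
As a rough illustration of the bandwidth argument, the sketch below computes raw readout rates at several frame rates; the resolution, bit depth and link-speed figures are our own assumptions, not numbers from the paper.

```python
# Back-of-the-envelope readout bandwidth for an image sensor.
# Resolution, bit depth, and frame rates are illustrative assumptions,
# not figures from the paper.

WIDTH, HEIGHT = 1920, 1080   # assumed pixel array size
BITS_PER_PIXEL = 12          # assumed raw ADC resolution

def readout_gbps(frame_rate_hz: float) -> float:
    """Raw readout bandwidth in Gbit/s at the given frame rate."""
    return WIDTH * HEIGHT * BITS_PER_PIXEL * frame_rate_hz / 1e9

for fps in (60, 1_000, 10_000):
    print(f"{fps:>6} fps -> {readout_gbps(fps):7.1f} Gbit/s")

# 60 fps needs ~1.5 Gbit/s, which a conventional off-chip interface
# handles easily; 10,000 fps needs ~249 Gbit/s, far beyond typical
# camera links but plausible across a dense 3D-stacked die boundary.
```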

14:18  11.1.2  A DESIGN TOOL FOR HIGH PERFORMANCE IMAGE PROCESSING ON MULTICORE PLATFORMS
Speaker:
Shuvra Bhattacharyya, University of Maryland, College Park, US and Tampere University of Technology, FI
Authors:
Jiahao Wu1, Timothy Blattner2, Walid Keyrouz2 and Shuvra S. Bhattacharyya1
1University of Maryland, US; 2National Institute of Standards and Technology, US
Abstract
Design and implementation of smart vision systems often involve the mapping of complex image processing algorithms into efficient, real-time implementations on multicore platforms. In this paper, we describe a novel design tool that is developed to address this important challenge. A key component of the tool is a new approach to hierarchical dataflow scheduling that integrates a global scheduler and multiple local schedulers. The local schedulers are lightweight modules that work independently. The global scheduler interacts with the local schedulers to optimize overall memory usage and execution time. The proposed design tool is demonstrated through a case study involving an image stitching application for large scale microscopy images.
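
A minimal sketch of the global/local split described in the abstract, assuming a greedy memory-budget policy; all class and method names are our own illustration, not the tool's actual API.

```python
# Hypothetical sketch of hierarchical dataflow scheduling: lightweight
# local schedulers fire their own ready actors independently, while a
# global scheduler decides who may fire next under a shared memory
# budget. Names and the greedy policy are our own illustration.
from collections import deque

class Actor:
    """A dataflow actor: firing it allocates a known amount of memory."""
    def __init__(self, name, mem_bytes):
        self.name, self.mem_bytes = name, mem_bytes

class LocalScheduler:
    """Independent lightweight module that fires its own ready actors."""
    def __init__(self, actors):
        self.ready = deque(actors)

    def next_mem(self):
        return self.ready[0].mem_bytes if self.ready else None

    def fire(self):
        return self.ready.popleft()  # firing would run the actor's work

class GlobalScheduler:
    """Coordinates the local schedulers against one shared memory budget."""
    def __init__(self, local_scheds, budget_bytes):
        self.local_scheds, self.budget = local_scheds, budget_bytes

    def run(self):
        in_use, progress = 0, True
        while progress:
            progress = False
            for ls in self.local_scheds:
                need = ls.next_mem()
                if need is not None and in_use + need <= self.budget:
                    actor = ls.fire()
                    in_use += actor.mem_bytes  # buffer stays live (simplified)
                    progress = True
                    print(f"fired {actor.name:8s} memory in use: {in_use}")

# Two pipeline branches, each under its own local scheduler.
GlobalScheduler(
    [LocalScheduler([Actor("readA", 64), Actor("filterA", 128)]),
     LocalScheduler([Actor("readB", 64), Actor("stitch", 256)])],
    budget_bytes=512).run()
```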

14:36  11.1.3  QUASAR, A HIGH-LEVEL PROGRAMMING LANGUAGE AND DEVELOPMENT ENVIRONMENT FOR DESIGNING SMART VISION SYSTEMS ON EMBEDDED PLATFORMS
Speaker:
Bart Goossens, Ghent University - imec, BE
Authors:
Bart Goossens, Hiêp Luong, Jan Aelterman and Wilfried Philips, Ghent University, Dept. of Telecommunications and Information Processing, BE
Abstract
We present Quasar, a new programming framework that handles many complex aspects of designing smart vision systems on embedded platforms, such as parallelization, data flow management, scheduling and load balancing. As a high-level programming language, Quasar is nearly hardware-agnostic and has a low barrier to entry, making it well suited for algorithm design and rapid prototyping. Through several benchmarks and application use cases we demonstrate that programs written in Quasar perform on a par with (or better than) hand-tuned CUDA and OpenACC code, while requiring much less development time and remaining future-proof.
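
To give a flavor of the productivity argument in a familiar notation, the sketch below uses Python/NumPy as a stand-in; it is our own illustration and reproduces no actual Quasar syntax.

```python
# One whole-array expression, in the spirit of the abstract's argument:
# the algorithm is stated declaratively, and parallelizing it is left to
# the compiler/runtime (Quasar's, in the paper; plain NumPy here).
# This is our own illustration, not Quasar code.
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    """Per-pixel power-law tone mapping with no explicit loops."""
    return np.clip(img, 0.0, 1.0) ** gamma

img = np.random.rand(1080, 1920, 3).astype(np.float32)
out = gamma_correct(img, 1.0 / 2.2)

# A hand-tuned CUDA version of the same operation would add kernel
# launch configuration, host/device transfers, and per-device tuning;
# a high-level framework hides all of that behind the same expression.
```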

14:54  11.1.4  CONCURRENT FOCAL-PLANE GENERATION OF COMPRESSED SAMPLES FROM TIME-ENCODED PIXEL VALUES
Speaker:
Ricardo Carmona-Galán, Instituto de Microelectrónica de Sevilla (CSIC-Univ. de Sevilla), ES
Authors:
Marco Trevisi1, Héctor C Bandala2, Jorge Fernández-Berni1, Ricardo Carmona-Galán1 and Ángel Rodríguez-Vázquez1
1Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC-Universidad de Sevilla, ES; 2Dept. Electronics, Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), MX
Abstract
Compressive sampling exploits the sparsity of natural images to wrap the relevant content of an image in a reduced set of data. This principle can be employed to deliver images over a network under a restricted data rate and still receive enough meaningful information. An efficient implementation of this principle lies in generating the compressed samples right at the imager. Otherwise, i.e., if the complete image is digitized and the compressed samples are then composed in the digital plane, the required memory and processing resources can seriously compromise the budget of an autonomous camera node. In this paper we present the design of a pixel architecture that encodes light intensity into time, followed by a global strategy to pseudo-randomly combine pixel values and generate the compressed samples on-chip and online.
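
The measurement principle can be sketched as a small simulation; the array sizes and the ±1 Bernoulli matrix below are assumptions, and on the chip the combinations are formed from time-encoded pixel outputs rather than digitized values.

```python
# Toy simulation of the measurement step: each compressed sample is a
# pseudo-random +/-1 combination of pixel values, y = Phi @ x. Sizes and
# the Bernoulli matrix Phi are assumptions; on the chip the combinations
# are built from time-encoded pixel outputs, not digitized values.
import numpy as np

rng = np.random.default_rng(0)
n = 64 * 64          # pixels in the flattened image (assumed)
m = n // 8           # compressed samples: 8x fewer words to transmit

x = rng.random(n)                            # scene pixel intensities
phi = rng.choice([-1.0, 1.0], size=(m, n))   # pseudo-random combinations
y = phi @ x                                  # compressed samples

print(f"{n} pixel values -> {m} compressed samples "
      f"({n // m}x reduction before any off-chip transfer)")

# A sparse-recovery solver (e.g., basis pursuit) reconstructs the image
# from y and phi at the receiver, exploiting natural-image sparsity.
```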

15:12  11.1.5  CONTACTLESS FINGER AND FACE CAPTURING ON A SECURE HANDHELD EMBEDDED DEVICE
Speaker:
Axel Weissenfeld, AIT Austrian Institute of Technology GmbH, AT
Authors:
Axel Weissenfeld and Bernhard Strobl, Austrian Institute of Technology, AT
Abstract
Traveler flows and crossings at the external borders of the EU are increasing and are expected to grow further in the future, a trend that poses great challenges for travelers, border guards and the border infrastructure. In this paper we present a new handheld device that enables border control authorities to check European, visa-holding and frequent third-country travelers in a comfortable, fast and secure way. The mobile solution incorporates new multimodal biometric capturing and matching units for face and 4-finger authentication. The focus is on the capturing unit and on fingerprint verification, which is evaluated in detail. In addition, use in border control requires strong security measures and trustworthy handling of credentials, which are also presented. Tests of the handheld device at a land border indicate great acceptance by travelers and border guards.

15:30  End of session
Coffee Break in Exhibition Area



Coffee Breaks in the Exhibition Area

On all conference days (Tuesday to Thursday), coffee and tea will be served during the coffee breaks in the exhibition area (Terrace Level of the ICCD) at the times listed below.

Lunch Breaks (Großer Saal + Saal 1)

On all conference days (Tuesday to Thursday), a seated lunch (lunch buffet) will be offered in the rooms "Großer Saal" and "Saal 1" (Saal Level of the ICCD) to fully registered conference delegates only. There will be badge control at the entrance to the lunch break area.

Tuesday, March 20, 2018

  • Coffee Break 10:30 - 11:30
  • Lunch Break 13:00 - 14:30
  • Awards Presentation and Keynote Lecture in "Saal 2" 13:50 - 14:20
  • Coffee Break 16:00 - 17:00

Wednesday, March 21, 2018

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:30
  • Awards Presentation and Keynote Lecture in "Saal 2" 13:30 - 14:20
  • Coffee Break 16:00 - 17:00

Thursday, March 22, 2018

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:00
  • Keynote Lecture in "Saal 2" 13:20 - 13:50
  • Coffee Break 15:30 - 16:00