8.4 Advanced systems for healthcare and assistive technologies

Date: Wednesday 29 March 2017
Time: 17:00 - 18:30
Location / Room: 3A

Chair:
Ruben Braojos, EPFL, CH

Co-Chair:
Luca Fanucci, University of Pisa, IT

This session focuses on embedded systems for human activity recognition and control. These systems combine flexible, dynamic hardware architectures with novel signal processing techniques for activity recognition, myoelectric prosthesis control, motor intention decoding and brain-computer interfaces. Finally, we will have two interactive presentations focused on embedded systems for diagnosis.

Time | Label | Presentation Title / Authors
17:00 | 8.4.1 | (Best Paper Award Candidate)
ADAPTIVE COMPRESSED SENSING AT THE FINGERTIP OF INTERNET-OF-THINGS SENSORS: AN ULTRA-LOW POWER ACTIVITY RECOGNITION
Speaker:
Josué Pagan Ortiz, UCM, ES
Authors:
Ramin Fallahzadeh1, Josué Pagán2 and Hassan Ghasemzadeh3
1School of Electrical Engineering and Computer Science, Washington State University, US; 2Complutense University of Madrid, ES; 3Washington State University, US
Abstract
With the proliferation of wearable devices in Internet-of-Things applications, the need emerges for highly power-efficient solutions that can operate these technologies continuously in life-critical settings. We propose a novel ultra-low power framework for adaptive compressed sensing in activity recognition. The proposed design uses a coarse-grained activity recognition module to adaptively tune the compressed sensing module, minimizing sensing and transmission costs. We pose an optimization problem that minimizes activity-specific sensing rates and introduce a polynomial-time approximation algorithm using a novel heuristic dynamic optimization tree. Our evaluations on real-world data show that the proposed autonomous framework generates feedback with over 80% confidence and doubles the power reduction achieved by the state-of-the-art approach.
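
The control idea above — a cheap, coarse-grained recognizer steering how aggressively the sensing front-end subsamples — can be sketched compactly. Below is a minimal Python/NumPy illustration; the energy-threshold recognizer, the per-activity rate table, and the random subsampling are placeholder assumptions, not the paper's actual optimization or heuristic tree.

```python
import numpy as np

# Hypothetical per-activity sensing rates (fraction of samples kept).
# In the paper these come from solving a sensing-rate optimization
# problem; here they are illustrative values only.
ACTIVITY_RATES = {"rest": 0.05, "walk": 0.25, "run": 0.50}

def coarse_activity(window):
    """Toy coarse-grained recognizer: thresholds on signal energy."""
    energy = np.mean(window ** 2)
    if energy < 0.1:
        return "rest"
    return "walk" if energy < 1.0 else "run"

def adaptive_compress(window, rng):
    """Randomly subsample the window at the activity-specific rate."""
    rate = ACTIVITY_RATES[coarse_activity(window)]
    n_keep = max(1, int(rate * len(window)))
    idx = np.sort(rng.choice(len(window), n_keep, replace=False))
    return idx, window[idx]          # indices + compressed measurements

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) * 2.0   # fake "run" data
idx, measurements = adaptive_compress(signal, rng)
print(f"kept {len(measurements)}/{len(signal)} samples")
```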

Download Paper (PDF; Only available from the DATE venue WiFi)
17:30 | 8.4.2 | A ZYNQ-BASED DYNAMICALLY RECONFIGURABLE HIGH DENSITY MYOELECTRIC PROSTHESIS CONTROLLER
Speaker:
Linus Witschen, Paderborn University, DE
Authors:
Alexander Boschmann1, Georg Thombansen1, Linus Witschen1, Alex Wiens1 and Marco Platzner2
1Paderborn University, DE; 2University of Paderborn, DE
Abstract
The combination of high-density electromyographic (HD EMG) sensor technology and modern machine learning algorithms allows for intuitive and robust prosthesis control over multiple degrees of freedom. However, real-time processing of HD EMG poses a challenge for the microprocessors common in embedded systems. Aiming at an autonomous prosthesis capable of both training on and classifying an amputee's HD EMG signals, this paper focuses on accelerating the computationally expensive parts of the embedded signal processing chain: feature extraction and classification. Using the Xilinx Zynq as a low-cost off-the-shelf platform, we present a solution capable of processing 192 HD EMG channels with controller delays below 120 milliseconds, suitable for highly responsive real-world prosthesis control, and achieving speed-ups of up to 2.8× compared to a software-only solution. Using dynamic FPGA reconfiguration, the system can trade increased controller delay for improved classification accuracy when signal quality degrades due to noisy channels. Offloading feature extraction and classification to the FPGA also reduces the system's power consumption, making it better suited to a battery-powered setup. The system was validated in real-time experiments, using online HD EMG data from an amputee to control a state-of-the-art prosthesis.
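
As a rough illustration of the kind of per-channel work being offloaded to the FPGA fabric, here is a software sketch of a typical HD EMG processing step: classic time-domain features followed by a linear classifier. The specific feature set, the 8-class setup, and the random weights are assumptions for illustration; the abstract does not specify the paper's actual features or classifier.

```python
import numpy as np

def td_features(window):
    """Classic time-domain EMG features for one channel window:
    mean absolute value, waveform length, zero-crossing count."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.signbit(window)) != 0)
    return np.array([mav, wl, zc], dtype=float)

def extract(frame):
    """frame: (n_channels, n_samples) -> flat feature vector."""
    return np.concatenate([td_features(ch) for ch in frame])

# Toy linear classifier standing in for the FPGA-accelerated one:
# scores = W @ features + b, prediction = argmax over motion classes.
rng = np.random.default_rng(1)
n_channels, n_samples, n_classes = 192, 50, 8
W = rng.standard_normal((n_classes, 3 * n_channels))
b = np.zeros(n_classes)

frame = rng.standard_normal((n_channels, n_samples))  # fake EMG frame
feats = extract(frame)
pred = int(np.argmax(W @ feats + b))
print("predicted motion class:", pred)
```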

Download Paper (PDF; Only available from the DATE venue WiFi)
18:00 | 8.4.3 | MICROWATT END-TO-END DIGITAL NEURAL SIGNAL PROCESSING SYSTEMS FOR MOTOR INTENTION DECODING
Speaker:
Zhewei Jiang, Columbia University, US
Authors:
Zhewei Jiang1, Chisung Bae2, Joonseong Kang2, Sang Joon Kim2 and Mingoo Seok1
1Columbia University, US; 2Samsung Electronics, KR
Abstract
This paper presents microwatt end-to-end digital signal processing (DSP) systems for deployment-stage, real-time upper-limb movement intent prediction. These brain-computer interface (BCI) DSP systems feature intercellular spike detection, sorting, and decoding for a 96-channel prosthetic implant. We design the algorithms for these operations to achieve minimal computational complexity while matching or advancing the accuracy of state-of-the-art BCI sorting and movement decoding. Based on these algorithms, we architect the DSP hardware with a focus on hardware reuse and event-driven operation. In post-layout simulation, a VLSI implementation of the proposed architecture in a 65-nm high-VTH process achieves 7.7 µW at a supply voltage of 300 mV, with an area of 0.16 mm².
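
The first stage of such a chain, spike detection, is commonly done with an amplitude threshold derived from a robust noise estimate. A sketch of that standard technique follows; the threshold factor, refractory window, and synthetic data are illustrative, and the paper's actual detector may differ.

```python
import numpy as np

def detect_spikes(x, k=4.0, refractory=30):
    """Amplitude-threshold spike detection, a standard first stage in
    neural DSP chains. The noise level is estimated with the median
    absolute deviation, which is robust to the spikes themselves."""
    sigma = np.median(np.abs(x)) / 0.6745      # robust noise estimate
    thresh = k * sigma
    spikes, last = [], -refractory
    for i, v in enumerate(x):
        if abs(v) > thresh and i - last >= refractory:
            spikes.append(i)                   # spike onset sample
            last = i
    return spikes

rng = np.random.default_rng(2)
trace = rng.standard_normal(3000) * 0.2        # synthetic noise floor
trace[[500, 1500, 2500]] += 3.0                # injected "spikes"
print("detected at:", detect_spikes(trace))
```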

Download Paper (PDF; Only available from the DATE venue WiFi)
18:15 | 8.4.4 | AN EMBEDDED SYSTEM REMOTELY DRIVING MECHANICAL DEVICES BY P300 BRAIN ACTIVITY
Speaker:
Daniela De Venuto, Politecnico di Bari, IT
Authors:
Valerio F. Annese1, Giovanni Mezzina2 and Daniela De Venuto2
1Politecnico di Bari, IT; 2Dept. of Electrical and Information Engineering, Politecnico di Bari, IT
Abstract
In this paper we present a P300-based brain-computer interface (BCI) for the remote control of a mechatronic actuator, such as a wheelchair or even a car, driven by EEG signals and intended for tetraplegic and paralytic users, or simply for safer driving in the case of a car. The P300 signal, an event-related potential (ERP) associated with cognitive brain activity, is deliberately induced by visual stimulation. The EEG data are collected by 6 smart wireless electrodes over the parietal cortex and classified online by a linear threshold classifier, built on a preceding machine learning (ML) stage. The ML stage is implemented on a µPC dedicated to the system, which also performs data acquisition and processing. The main improvement in EEG-based remote driving concerns the approach used for intention recognition: in this work, classification is based on the P300 itself rather than on the average of several poorly identified potentials. This approach reduces the number of electrodes on the EEG helmet. The ML stage is based on a custom algorithm (t-RIDE) that tunes the subsequent classification stage to the user's "cognitive chronometry". The ML algorithm starts with a fast calibration phase (only ~190 s for the first learning). Furthermore, the BCI features a functional approach to time-domain feature extraction, which reduces the amount of data to be analyzed and thus the system's response time. In this paper, a proof of concept of the proposed BCI is demonstrated on a prototype car, tested on 5 subjects (aged 26 ± 3). The experimental results show that the novel ML approach achieves a complete P300 spatio-temporal characterization in 1.95 s using 38 target brain visual stimuli (for each direction of the car path). In free-drive mode, the BCI reaches 80.5 ± 4.1% single-trial detection accuracy, with a worst-case computation time of 19.65 ms ± 10.1 ms. The BCI system described here can also be used with different mechatronic actuators, such as robots.
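
The decision step described — a linear threshold classifier applied to a single post-stimulus EEG epoch — can be sketched directly. In the sketch below the weight map and threshold are placeholders; the paper learns its parameters with t-RIDE, which is not reproduced here. Only the 6-electrode parietal setup is taken from the abstract.

```python
import numpy as np

def p300_score(epoch, w, b):
    """Linear score for one post-stimulus EEG epoch.
    epoch: (n_electrodes, n_samples); w is a weight map of the same
    shape, learned offline (here just a placeholder array)."""
    return float(np.sum(w * epoch) + b)

def classify(epoch, w, b, threshold=0.0):
    """Single-trial decision: True means 'P300 present'."""
    return p300_score(epoch, w, b) > threshold

rng = np.random.default_rng(3)
n_electrodes, n_samples = 6, 200   # 6 parietal electrodes, per the abstract
w = rng.standard_normal((n_electrodes, n_samples)) * 0.01
epoch = rng.standard_normal((n_electrodes, n_samples))  # fake EEG epoch
print("target stimulus attended:", classify(epoch, w, b=0.0))
```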

Download Paper (PDF; Only available from the DATE venue WiFi)
18:31 | IP4-1 | 1024-CHANNEL 3D ULTRASOUND DIGITAL BEAMFORMER IN A SINGLE 5W FPGA
Speaker:
Aya Ibrahim, EPFL, CH
Authors:
Federico Angiolini1, Aya Ibrahim1, William Simon1, Ahmet Caner Yüzügüler1, Marcel Arditi1, Jean-Philippe Thiran1 and Giovanni De Micheli2
1EPFL, CH; 2École Polytechnique Fédérale de Lausanne (EPFL), CH
Abstract
3D ultrasound, an emerging medical imaging technique that is presently only used in hospitals, has the potential to enable breakthrough telemedicine applications, provided that its cost and power dissipation can be minimized. In this paper, we present an FPGA architecture suitable for a portable medical 3D ultrasound device. We show an optimized design for the digital part of the imager, including the delay calculation block, which is its most critical part. Our computationally efficient approach requires a single FPGA for 3D imaging, which is unprecedented. The design is scalable; a configuration supporting a 32×32-channel probe, which enables high-quality imaging, consumes only about 5 W.
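
For a delay-and-sum beamformer, the delay calculation singled out as the most critical block reduces to per-channel time-of-flight arithmetic. Below is a reference-style sketch under simple assumptions (planar 32×32 probe, transmit from the array origin, 1540 m/s sound speed); the paper's optimized FPGA implementation necessarily differs.

```python
import numpy as np

SPEED_OF_SOUND = 1540.0   # m/s, typical value for soft tissue

def rx_delays(elements, focus):
    """Receive delays for delay-and-sum beamforming: time of flight
    from the transmit origin to the focal point plus the return path
    to each element, relative to the earliest-arriving channel."""
    tof_tx = np.linalg.norm(focus) / SPEED_OF_SOUND        # tx from origin (assumption)
    tof_rx = np.linalg.norm(elements - focus, axis=1) / SPEED_OF_SOUND
    delays = tof_tx + tof_rx
    return delays - delays.min()

# 32x32 grid of elements with 300 µm pitch, centered at the origin.
pitch = 300e-6
coords = (np.arange(32) - 15.5) * pitch
xx, yy = np.meshgrid(coords, coords)
elements = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(32 * 32)])

focus = np.array([0.0, 0.0, 0.03])    # focal point 3 cm deep
print("max relative delay (ns):", rx_delays(elements, focus).max() * 1e9)
```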

Download Paper (PDF; Only available from the DATE venue WiFi)
18:30 End of session