AMSER: Adaptive Multimodal Sensing for Energy Efficient and Resilient eHealth Systems

Emad Kasaeyan Naeini1,a, Sina Shahhosseini1,b, Anil Kanduri2,c, Pasi Liljeberg2,d, Amir M. Rahmani1,e and Nikil Dutt1,f
1Dept. of CS, University of California, Irvine, USA
2Dept. of Computing, University of Turku, Finland
a) ekasaeya@uci.edu
b) sshahos@uci.edu
c) spakan@utu.fi
d) pakrili@utu.fi
e) a.rahmani@uci.edu
f) dutt@uci.edu

ABSTRACT

eHealth systems deliver critical digital healthcare and wellness services by continuously monitoring users' physiological and contextual data. eHealth applications use multimodal machine learning kernels to analyze data from different sensor modalities and automate decision-making. Noisy inputs and motion artifacts during sensory data acquisition degrade i) the prediction accuracy and resilience of eHealth services and ii) energy efficiency, since processing garbage data wastes energy. Monitoring raw sensory inputs to identify and drop data and features from noisy modalities can improve both prediction accuracy and energy efficiency. We propose AMSER, a closed-loop monitoring and control framework for multimodal eHealth applications that mitigates garbage-in garbage-out processing by i) monitoring input modalities, ii) analyzing raw inputs to selectively drop noisy data and features, and iii) choosing appropriate machine learning models that fit the configured data and feature vector, thereby improving prediction accuracy and energy efficiency. We evaluate AMSER on multimodal eHealth applications for pain assessment and stress monitoring, under different levels and types of noise injected into different sensor modalities. Our approach achieves up to 22% improvement in prediction accuracy and a 5.6x reduction in sensing-phase energy consumption compared with a state-of-the-art multimodal monitoring application.
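The closed-loop idea in the abstract (monitor raw inputs, drop noisy modalities, then pick a model matching the surviving modality set) can be illustrated with a minimal sketch. This is not the authors' implementation: the quality heuristic (low-frequency energy fraction), the threshold, and all names (`signal_quality`, `select_modalities`, `model_bank`) are illustrative assumptions.

```python
import numpy as np

def signal_quality(x):
    """Crude quality score: fraction of spectral energy at low frequencies.
    Clean physiological signals (e.g., ECG, PPG) concentrate energy at low
    frequencies, while motion artifacts and sensor noise spread it out."""
    spectrum = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    cutoff = max(1, len(spectrum) // 8)  # "low frequency" = bottom 1/8 of bins
    total = spectrum.sum()
    return float(spectrum[:cutoff].sum() / total) if total > 0 else 0.0

def select_modalities(signals, threshold=0.5):
    """Step i+ii: monitor each modality and drop those that look like noise."""
    return {name: x for name, x in signals.items()
            if signal_quality(x) >= threshold}

def select_model(model_bank, modalities):
    """Step iii: pick the model trained on exactly the surviving modalities."""
    return model_bank[frozenset(modalities)]

# Toy demo: a clean 1 Hz "ECG-like" sine vs. a white-noise-corrupted channel.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
signals = {
    "ecg": np.sin(2 * np.pi * 1.0 * t),
    "eda": rng.normal(size=1000),  # stands in for a fully corrupted modality
}
model_bank = {
    frozenset({"ecg"}): "ecg_only_model",
    frozenset({"ecg", "eda"}): "fusion_model",
}
kept = select_modalities(signals)
print(sorted(kept), select_model(model_bank, kept))  # → ['ecg'] ecg_only_model
```

In this toy run the noisy channel is dropped, and a smaller single-modality model replaces the fusion model, mirroring how skipping garbage data can save both accuracy loss and sensing/compute energy.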
