ManiHD: Efficient Hyper-Dimensional Learning Using Manifold Trainable Encoder

Zhuowen Zou1, Yeseong Kim2, M. Hassan Najafi3 and Mohsen Imani4,a
1University of California San Diego
2DGIST
3University of Louisiana
4University of California Irvine
am.imani@uci.edu

ABSTRACT

Hyper-Dimensional (HD) computing emulates the human short-term memory functionality by computing with hypervectors as an alternative to computing with numbers. The main goal of HD computing is to map data points into a sparse high-dimensional space where the learning task can be performed in a linear and hardware-friendly way. Existing HD computing algorithms use a static, non-trainable encoder; thus, they require very high dimensionality to provide acceptable accuracy. However, this high dimensionality results in high computational cost, especially on realistic learning problems. In this paper, we propose ManiHD, which supports an adaptive and trainable encoder for efficient learning in high-dimensional space. ManiHD explicitly considers non-linear interactions between features during encoding, enabling it to provide maximum learning accuracy at much lower dimensionality. ManiHD not only enhances learning accuracy but also significantly improves learning efficiency during both the training and inference phases. ManiHD also enables online learning by sampling data points and capturing the essential features in an unsupervised manner. We also propose a quantization method that trades off accuracy and efficiency to find the optimal configuration. Our evaluation on a wide range of classification tasks shows that ManiHD provides 4.8% higher accuracy than state-of-the-art HD algorithms. In addition, ManiHD provides, on average, 12.3× (3.2×) faster and 19.3× (6.3×) more energy-efficient training (inference) compared to state-of-the-art learning algorithms.
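
To make the core idea concrete, the sketch below illustrates an HD-style classifier whose encoder is trainable rather than a fixed random projection: samples are mapped non-linearly into a high-dimensional space and classified by similarity to per-class hypervectors, with both parts trained jointly. This is a minimal illustration, not the authors' ManiHD implementation; the tanh non-linearity, the 1,000-dimensional space, the cosine-similarity classifier, and all names (TrainableHDClassifier, class_hvs) are assumptions made for the example.

```python
# Minimal sketch of a trainable HD encoder (illustrative only; not the
# authors' ManiHD implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableHDClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, dim: int = 1000):
        super().__init__()
        # Trainable projection, replacing the static random encoder of classic HD.
        self.encoder = nn.Linear(n_features, dim)
        # One prototype hypervector per class.
        self.class_hvs = nn.Parameter(torch.randn(n_classes, dim))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Non-linear mapping of features into the high-dimensional space.
        return torch.tanh(self.encoder(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between encoded samples and class hypervectors.
        h = F.normalize(self.encode(x), dim=-1)
        c = F.normalize(self.class_hvs, dim=-1)
        return h @ c.T

# Usage: train the encoder and class hypervectors jointly with cross-entropy.
model = TrainableHDClassifier(n_features=64, n_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x) * 10.0, y)  # scaled similarities as logits
opt.zero_grad(); loss.backward(); opt.step()
```

Because classification in the encoded space reduces to similarity checks against a handful of class hypervectors, accuracy gained from training the encoder can be traded for a smaller dimensionality, which is the efficiency argument made in the abstract.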
