PinT: Polynomial in Temperature Decode Weights in a Neuromorphic Architecture

Scott Reid, Antonio Montoya and Kwabena Boahen
Stanford University

ABSTRACT


We present Polynomial in Temperature (PinT) decode weights, a novel approach to approximating functions with an ensemble of silicon neurons that increases thermal robustness. In mixed-signal neuromorphics, computing accurately across a wide range of temperatures is challenging because individual silicon neurons are thermally sensitive. To compensate for the resulting changes in the neurons' tuning curves, PinT weights vary continuously as a polynomial function of temperature. We validate PinT across a 38°C range by applying it to tuning curves measured for ensembles of 64 to 1936 neurons on Braindrop, a mixed-signal neuromorphic chip fabricated in 28-nm FDSOI CMOS. LinT, the Linear in Temperature version of PinT, reduces error by a small margin on test data relative to an ensemble with temperature-independent weights. LinT and higher-order models show much greater promise on training data, suggesting that performance can be further improved. When implemented on-chip, LinT performs very similarly to temperature-independent decode weights. SpLinT and SpLSAT, the sparse variants of LinT and LSAT, are promising avenues for efficiently reducing error: in the SpLSAT model, up to 90% of the neurons on chip can be deactivated while maintaining the same function-approximation error.
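The core idea above — decode weights that vary as a polynomial in temperature, fit jointly across tuning curves measured at several temperatures — can be sketched as a single least-squares problem. The following is a minimal illustrative sketch, not the authors' implementation: the tuning curves here are synthetic random data with an artificial temperature drift, and all sizes, temperatures, and the quadratic target function are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N neurons, tuning curves sampled at P input points,
# measured at several temperatures (values illustrative, not from the paper).
N, P = 64, 100
temps = np.array([15.0, 25.0, 35.0, 45.0])  # degrees C
x = np.linspace(-1.0, 1.0, P)
target = x**2                               # function to approximate

order = 1  # polynomial order in T; order 1 corresponds to LinT

# Synthetic temperature-dependent tuning curves A(T), each of shape (P, N):
base = rng.standard_normal((P, N))
drift = 0.01 * rng.standard_normal((P, N))
A = {T: base + drift * (T - 25.0) for T in temps}

# PinT: d(T) = sum_k c_k * T**k. Stack one block row per temperature so that
#   A(T) @ d(T) ~= target  for all measured T, and solve for the c_k jointly.
M = np.vstack([np.hstack([A[T] * T**k for k in range(order + 1)])
               for T in temps])             # (len(temps)*P, (order+1)*N)
y = np.tile(target, len(temps))
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
c = coef.reshape(order + 1, N)              # c[k] holds the T**k coefficients

def decode_weights(T):
    """Evaluate the polynomial-in-temperature decode weights at T."""
    return sum(c[k] * T**k for k in range(order + 1))

# Baseline: temperature-independent weights (the c_1 = 0 special case).
M0 = np.vstack([A[T] for T in temps])
d0, *_ = np.linalg.lstsq(M0, y, rcond=None)
res_const = np.linalg.norm(M0 @ d0 - y)
res_lint = np.linalg.norm(M @ coef - y)
# res_lint <= res_const, since constant weights are a special case of LinT.
```

Because the temperature-independent fit is the c_1 = 0 restriction of the LinT fit, the joint residual of LinT can never exceed the constant-weight residual — mirroring the abstract's observation that LinT helps most on the data it was fit to (training data).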

Keywords: Mixed-signal neuromorphics, Thermal robustness.



Full Text (PDF)