Conference paper, 2021

Power-Efficient Deep Neural Networks with Noisy Memristor Implementation

Abstract

This paper considers Deep Neural Network (DNN) linear-nonlinear computations implemented on memristor crossbar substrates. To address the case where the true memristor conductance values may differ from their target values, it introduces a theoretical framework that characterizes the effect of conductance variations on the final inference computation. Using only second-order moment assumptions, theoretical results are given for tracking the mean, variance, and covariance of the layer-by-layer noisy computations. By allowing certain signals within the DNN to be amplified, power consumption is characterized and then optimized via KKT conditions. Simulation results verify the accuracy of the proposed analysis and demonstrate the significant power efficiency gains that are possible via optimization for a target mean squared error.
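As a rough illustration of the setting the abstract describes, the sketch below simulates a single DNN linear layer realized on a memristor crossbar whose programmed conductances deviate from their targets, and compares the empirical output-error statistics to a second-order-moment prediction. The multiplicative noise model, the `rel_sigma` parameter, and the function names are assumptions made for illustration only; the paper itself only requires second-order moment assumptions on the conductance deviations and covers the full layer-by-layer analysis and power optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_crossbar_matvec(W, x, rel_sigma=0.05, rng=rng):
    """Matrix-vector product with per-weight conductance variation.

    Illustrative noise model (assumption): each stored weight is perturbed
    by zero-mean multiplicative Gaussian noise of relative std rel_sigma.
    """
    W_noisy = W * (1.0 + rel_sigma * rng.standard_normal(W.shape))
    return W_noisy @ x

# Target weights and input for one layer (hypothetical sizes)
W = rng.standard_normal((64, 128)) / np.sqrt(128)
x = rng.standard_normal(128)

# Empirical error statistics over many noise realizations
errs = np.stack([noisy_crossbar_matvec(W, x) - W @ x for _ in range(10_000)])
print("empirical error mean:", errs.mean(axis=0)[:3])
print("empirical error var :", errs.var(axis=0)[:3])

# Second-order prediction under the assumed model:
# E[err_i] = 0,  Var[err_i] = rel_sigma^2 * sum_j W_ij^2 x_j^2
pred_var = 0.05**2 * (W**2 @ x**2)
print("predicted error var :", pred_var[:3])
```

In a multi-layer network, the same kind of moment tracking would be applied layer by layer through the nonlinearities, which is the regime the paper's analysis addresses.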

Dates and versions

hal-03337122 , version 1 (07-09-2021)


Cite

Elsa Dupraz, Lav R. Varshney, François Leduc-Primeau. Power-Efficient Deep Neural Networks with Noisy Memristor Implementation. ITW 2021: IEEE Information Theory Workshop, Oct 2021, Kanazawa, Japan. ⟨10.1109/ITW48936.2021.9611431⟩. ⟨hal-03337122⟩