Paper Title

Training generalizable quantized deep neural nets

Authors

Charles Hernandez
Bijan Taslimi
Hung Yi
Hongcheng Liu

Article Type

Research Article

Issue

Volume: 213 | Issue: Part B | Article No.: 118736

Published On

April 2023

Abstract

While a number of practical methods for training quantized deep learning (DL) models have been presented in the literature, there is a critical gap in the theoretical generalizability results for such approaches. Although empirical evidence often suggests that DL architectures tolerate variations in training procedures well, existing theoretical generalization analyses are often contingent on the specific design of the training algorithm, e.g., stochastic gradient descent (SGD). This specialization makes such generalizability results inapplicable to quantized DL models. In view of this gap, this paper provides several almost-algorithm-independent results that ensure the generalizability of a quantized neural network at different levels of optimality. These results include the characterization of a computable, quantized local solution that ensures generalization performance, and an algorithm that is provably convergent to such a local solution.
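The paper's own algorithm and its characterization of a quantized local solution are not reproduced on this page. For context only, the sketch below illustrates one generic way quantized DL models are trained in practice: quantization-aware training with a straight-through estimator (STE). Everything in it, including the `STEQuantize` function, the k-bit uniform quantizer, and the `QuantLinear` layer, is an illustrative assumption and not the authors' method.

```python
# Illustrative sketch only: generic quantization-aware training with a
# straight-through estimator (STE). This is NOT the paper's provably
# convergent algorithm; it is a common baseline for quantized training.
import torch
import torch.nn as nn

class STEQuantize(torch.autograd.Function):
    """k-bit uniform weight quantization; identity gradient (STE)."""
    @staticmethod
    def forward(ctx, w, num_bits=4):
        qmax = 2 ** (num_bits - 1) - 1          # e.g., 7 for 4 bits
        scale = w.abs().max().clamp(min=1e-8) / qmax
        # Round to the k-bit integer grid, then rescale back.
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass gradients as if quantization were identity.
        return grad_output, None

class QuantLinear(nn.Linear):
    """Linear layer whose weights are quantized in the forward pass."""
    def forward(self, x):
        return nn.functional.linear(x, STEQuantize.apply(self.weight), self.bias)

# Toy usage: plain SGD on a small quantized network.
model = nn.Sequential(QuantLinear(16, 32), nn.ReLU(), QuantLinear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 16), torch.randn(64, 1)
for _ in range(10):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the STE gradient is a heuristic surrogate rather than the true gradient of the quantized loss, which is one reason generalization guarantees tied to the specifics of SGD do not transfer directly to quantized training, the motivation the abstract describes.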
