Document Type

Technical Report

Publication Date

1989-06-01

Filename

WUCS-89-25.pdf

DOI

10.7936/K7M61HMJ

Technical Report Number

WUCS-89-25

Abstract

The present work investigates the significance of arithmetic precision in neural network simulation. Noting that a biological brain consists of a large number of cells of low precision, we try to answer the question: with a fixed amount of memory and CPU cycles available for simulation, does a larger network with lower precision perform better than a smaller one with higher precision? We evaluate the merits and demerits of using low-precision integer arithmetic in simulating backpropagation networks. Two identical backpropagation simulators, ibp and fbp, were constructed on a Mac II: ibp with 16-bit integer representations of network parameters such as activation values, back-errors, and weights, and fbp with 96-bit floating-point representations of the same parameters. The performance of the two simulators is compared on the same Boolean mapping problem, the sine transfer function with eight binary inputs and one analog output. The speed-up ratio from fbp to ibp for a single training cycle is approximately 7.3 for smaller networks and 4.2 for larger ones. However, for the total time needed to obtain a solution network, the speed-up ratio is 163 for the smaller nets and 121 for the larger nets, because ibp requires far fewer training cycles than fbp. We also found that networks trained with integer arithmetic generalize better than those trained with floating-point arithmetic. At present we have no explanation for this.
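The abstract states that ibp stores activations, back-errors, and weights in 16-bit integers, but does not specify the fixed-point format here. The sketch below is a minimal illustration, assuming a hypothetical Q8.8 layout (8 fractional bits), of how such 16-bit integer arithmetic can stand in for floating point in a neuron's weighted-sum computation; it is not the report's actual implementation.

```c
/* Minimal sketch: 16-bit fixed-point weighted sum for one neuron.
 * Assumes a Q8.8 format (8 fractional bits); the report's actual
 * representation may differ. */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 8                                 /* assumed Q8.8 format */
#define TO_FIXED(x)  ((int16_t)((x) * (1 << FRAC_BITS)))
#define TO_FLOAT(x)  ((double)(x) / (1 << FRAC_BITS))

int main(void)
{
    /* Weighted sum over three inputs, all stored as 16-bit integers. */
    int16_t w[3] = { TO_FIXED(0.50), TO_FIXED(-0.25), TO_FIXED(0.75) };
    int16_t x[3] = { TO_FIXED(1.00), TO_FIXED(0.50),  TO_FIXED(0.25) };
    int32_t acc = 0;

    for (int i = 0; i < 3; i++)
        acc += (int32_t)w[i] * (int32_t)x[i];       /* widen to avoid overflow */

    int16_t net = (int16_t)(acc >> FRAC_BITS);      /* rescale back to Q8.8 */
    printf("net input = %f\n", TO_FLOAT(net));      /* prints 0.562500 */
    return 0;
}
```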

Comments

Permanent URL: http://dx.doi.org/10.7936/K7M61HMJ
