Mixing floating- and fixed-point formats for neural network learning on neuroprocessors
| Published in | Microprocessing and microprogramming Vol. 41; no. 10; pp. 757 - 769 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.06.1996 |
| ISSN | 0165-6074 |
| DOI | 10.1016/0165-6074(96)00012-9 |
| Summary: | We examine the efficient implementation of back-propagation (BP) type algorithms on T0 [3], a vector processor with a fixed-point engine, designed for neural network simulation. Using Matrix Back Propagation (MBP) [2] we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line BP algorithm. We use a mixture of fixed- and floating-point operations in order to guarantee both high efficiency and fast convergence. Though the most expensive computations are implemented in fixed point, we achieve a rate of convergence comparable to that of the floating-point version. The time taken for conversion between the fixed- and floating-point formats is also shown to be reasonably low. |
|---|---|
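The scheme the abstract describes lends itself to a compact illustration. The NumPy sketch below mimics the mixed-precision idea on a single linear layer trained by batch (matrix) back-propagation: the two expensive matrix products of the forward and backward phases run on fixed-point operands, while the nonlinearity and the weight update stay in floating point. This is a minimal sketch, not the paper's T0 implementation; the 16-bit word length, the `FRAC_BITS` scaling, the layer sizes, and all identifiers are illustrative assumptions.

```python
import numpy as np

FRAC_BITS = 12           # assumed Q3.12-style scaling; the paper's actual
SCALE = 1 << FRAC_BITS   # word length and scaling on T0 may differ

def to_fixed(x):
    """Quantize a float array to 16-bit fixed point (round, then saturate)."""
    return np.clip(np.rint(x * SCALE), -(1 << 15), (1 << 15) - 1).astype(np.int32)

def from_fixed(x, frac_bits=FRAC_BITS):
    """Convert a fixed-point array back to floating point."""
    return x.astype(np.float64) / (1 << frac_bits)

def fixed_matmul(a_fx, b_fx):
    """Integer matrix product; the result carries 2 * FRAC_BITS fractional bits."""
    return a_fx.astype(np.int64) @ b_fx.astype(np.int64)

rng = np.random.default_rng(0)
P, n_in, n_out = 32, 8, 4                  # batch size and layer sizes (illustrative)
X = rng.uniform(-1.0, 1.0, (P, n_in))      # one input pattern per row
T = rng.uniform(-0.9, 0.9, (P, n_out))     # target patterns
W = rng.uniform(-0.5, 0.5, (n_in, n_out))  # weights kept in floating point
eta = 0.1

for epoch in range(101):
    # Forward phase: the expensive pattern-by-weight product runs in fixed point.
    Y = np.tanh(from_fixed(fixed_matmul(to_fixed(X), to_fixed(W)), 2 * FRAC_BITS))
    # Backward phase: the X^T @ delta gradient product also runs in fixed point.
    delta = (T - Y) * (1.0 - Y ** 2)
    grad = from_fixed(fixed_matmul(to_fixed(X.T), to_fixed(delta)), 2 * FRAC_BITS)
    # Weight update stays in floating point to preserve convergence.
    W += (eta / P) * grad
    if epoch % 20 == 0:
        print(f"epoch {epoch:3d}  MSE = {np.mean((T - Y) ** 2):.5f}")
```

Rounding and saturation in `to_fixed` stand in for the quantization a real fixed-point engine performs, and accumulating products in a wider integer type before rescaling mirrors the double-width accumulators such engines typically provide; the batched matrix formulation is what lets a vector unit reach high utilization, as the abstract notes the on-line BP algorithm cannot.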