Wavelet neural networks are asymptotically optimal approximators for functions of one variable
| Published in | IEEE International Conference on Neural Networks, 1994; Vol. 1, pp. 299-304 |
|---|---|
| Main Authors | , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 1994 |
| ISBN | 078031901X 9780780319011 |
| DOI | 10.1109/ICNN.1994.374179 |
| Summary: | Neural networks are universal approximators. For example, it has been proved (K. Hornik et al., 1989) that for every ε > 0, an arbitrary continuous function on a compact set can be ε-approximated by a 3-layer neural network. This and other results prove that, in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? An arbitrary continuous function can also be approximated by polynomials, etc. What is so special about neural networks that makes them preferable approximators? To compare different approximators, one can compare the number of bits that must be stored in order to reconstruct a function with a given precision ε. For neural networks, we must store weights and thresholds; for polynomials, we must store coefficients, etc. We consider functions of one variable and show that for some special neurons (corresponding to wavelets), neural networks are optimal approximators in the sense that they require (asymptotically) the smallest possible number of bits. |
|---|---|
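The summary's comparison criterion (the number of bits one must store to reconstruct a one-variable function to precision ε) can be made concrete with a small sketch. The Python below is illustrative only, not the paper's construction: it assumes a Mexican-hat wavelet as the neuron, places dilations and translations on a fixed dyadic grid, and fits only the output weights by least squares, so the stored data are those weights plus the grid indices.

```python
import numpy as np

def mexican_hat(x):
    """Mexican-hat (Ricker) wavelet, an assumed choice of wavelet neuron."""
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def wavelet_design_matrix(x, levels):
    """Columns phi((x - b) / a) on a dyadic grid: a = 2**-j, b = k * 2**-j."""
    cols = []
    for j in range(levels):
        a = 2.0 ** (-j)
        for k in range(int(1 / a) + 1):
            cols.append(mexican_hat((x - k * a) / a))
    return np.column_stack(cols)

# Target: a continuous function on a compact set, here f on [0, 1].
x = np.linspace(0.0, 1.0, 400)
f = np.sin(2 * np.pi * x) + 0.5 * x

Phi = wavelet_design_matrix(x, levels=4)
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)  # fit the output weights

eps = np.max(np.abs(Phi @ w - f))            # sup-norm error on the samples
print(f"{Phi.shape[1]} wavelet neurons, sup error ~ {eps:.3e}")
```

In this setup the "bits to store" are the quantized output weights (plus the dilation/translation indices); refining the grid and re-fitting shows how the parameter count grows as the achievable ε shrinks, which is the quantity the paper's optimality claim is about.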