I have a question that seems more difficult to answer succinctly the more I
think about it.
While it can be shown that a perceptron will eventually converge to a correct
linear model of a linearly separable system, provided the necessary resources
exist, is that expectation realized in the following case?
Suppose a non-linear system for which a backprop network finds a solution that
is so nearly linear that it adequately maps inputs to outputs within the error
tolerance. Such a solution may be unique, or it may be one of a whole class of
acceptable approximators; I have not done that research.
Assuming, though, that the linear approximator is unique, does the same
"guarantee" that applies to perceptron learning on linear systems carry over to
nonlinear systems with good linear approximators? That is, would a simple perceptron network find the same solution as the more complex backprop network?