# Questions and Answers from the Introductory Neural Networks page

I propose to place here (good) questions and my responses to them, with a view to making my introductory page more useful.

Is each iteration of backpropagation guaranteed to bring the neural net closer to learning what it is supposed to learn, or could it just as easily cause it to regress? In other words, could a neural net take one step forward and two steps back in its learning process, making it theoretically possible that it will never learn what you are trying to get it to learn?

Well... yes and no. Each weight update moves each weight in a direction that reduces the overall error. If all the moves were infinitesimally small, that would guarantee the overall error decreased - but only infinitesimally slowly. Given the impatience of people (like, they don't want to wait 10,000,000 years...) they use larger weight movements. If the error surface (which is a mapping from the weight space to the positive real numbers: the weight space is probably of very high dimension) is complicated, with deep ravines etc., then there is certainly the possibility that a weight movement will overshoot, and not decrease the error. Momentum is often used to help with this (time precludes a detailed explanation of how and why: try googling momentum and backprop), but it can still be a problem. Alternatives include attempting to use steepest descent (rather than simple gradient descent in error space), but this usually involves inverting large matrices, and is certainly a non-local learning technique.

In conclusion: yes, the network might never learn even though it is such that it could learn. Usually, one gets round this by (i) starting from a number of different points in weight space, and (ii) using a variety of learning rates and momentum rates (or even reducing them if it looks as though the error sum is oscillating).
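The overshoot and momentum behaviour described above can be seen even on a toy one-dimensional error surface. The sketch below (not from the original page; the function names and learning rates are illustrative choices) runs plain gradient descent and momentum-based descent on E(w) = w², whose gradient is 2w: a tiny learning rate shrinks the error slowly, a too-large one makes the error grow at every step, and momentum speeds up the small-step case.

```python
# Minimal sketch: weight updates on the 1-D error surface E(w) = w^2,
# whose gradient is dE/dw = 2*w. All names and rates here are
# illustrative, not taken from the original page.
def descend(lr, momentum=0.0, steps=20, w=1.0):
    """Return the error E(w) after `steps` weight updates."""
    v = 0.0                       # velocity term used by momentum
    for _ in range(steps):
        grad = 2.0 * w            # gradient of E at the current weight
        v = momentum * v - lr * grad
        w = w + v                 # the weight movement
    return w * w                  # final error

slow = descend(lr=0.01)                  # tiny steps: error falls, but slowly
diverged = descend(lr=1.1)               # overshoot: every step increases the error
helped = descend(lr=0.01, momentum=0.9)  # momentum: lower error in the same steps
```

With `lr=1.1` each update flips the weight past the minimum to a point of higher error, so learning never happens at all; momentum accumulates the consistent downhill direction and reaches a lower error than plain descent in the same number of steps.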

If I know what the output of a neural net is, and I know all the weights, and assuming 0 error, is it possible to find out what the input was?

In a word, no. The output of the neural network does not in general define a unique input. (Or to put it mathematically, the function embodied in the network is not (uniquely) invertible.) There's no reason why it should be - often we really do want different inputs to map to the same output!
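A tiny example makes the non-invertibility concrete. The sketch below (illustrative, not from the original page) is a single sigmoid unit with equal weights and no bias: the inputs (1, 0) and (0, 1) produce exactly the same output, so knowing the output and the weights cannot tell you which input was presented.

```python
import math

# Toy single-neuron network with weights w = (1.0, 1.0) and no bias.
# The weights and inputs are illustrative choices, not from the page.
def neuron(x1, x2, w1=1.0, w2=1.0):
    """Sigmoid unit: output = 1 / (1 + exp(-(w1*x1 + w2*x2)))."""
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2)))

a = neuron(1.0, 0.0)   # input (1, 0)
b = neuron(0.0, 1.0)   # input (0, 1) - a different input...
# ...yet a == b: both inputs give the same weighted sum, hence the
# same output, so the mapping cannot be uniquely inverted.
```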

Can you suggest a location to look at for further information on Fuzzy Logic and Neural Technologies?