PyMLPΒΆ

Building one's own multilayer perceptron (MLP) seems to be a rite of passage in understanding neural networks, so this is my implementation. In contrast to many other compact MLP implementations, I did not restrict myself to a single file.

Hence, I separated the basic MLP weights and the forward and backward functions into the FFNetwork class and put the trainer-specific parts into their own class named SupervisedTrainer. This trainer uses separate datasets for training and testing and can report to its client the mean error on the test dataset over time. The transfer functions are also extracted into their own module.
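A minimal sketch of how these pieces might fit together; only the names FFNetwork, SupervisedTrainer, and the idea of a separate transfer-function module come from the description above, while every method name and signature here is an assumption, not the project's actual API:

```python
import numpy as np


# --- transfer functions, as they might live in their own module ---
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def sigmoid_prime(y):
    """Derivative expressed in terms of the sigmoid output y."""
    return y * (1.0 - y)


class FFNetwork:
    """Holds the MLP weights plus the forward and backward passes."""

    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix (with a trailing bias column) per layer transition.
        self.weights = [rng.normal(0.0, 0.1, (n_out, n_in + 1))
                        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def forward(self, x):
        """Propagate an input through all layers, keeping activations."""
        self.activations = [np.asarray(x, dtype=float)]
        for w in self.weights:
            a = np.append(self.activations[-1], 1.0)  # bias input
            self.activations.append(sigmoid(w @ a))
        return self.activations[-1]

    def backward(self, target, lr=0.1):
        """One step of plain backpropagation toward `target`."""
        delta = (self.activations[-1] - target) * sigmoid_prime(self.activations[-1])
        for i in reversed(range(len(self.weights))):
            a = np.append(self.activations[i], 1.0)
            grad = np.outer(delta, a)
            # Propagate the error with the old weights before updating them.
            delta = (self.weights[i][:, :-1].T @ delta) * sigmoid_prime(self.activations[i])
            self.weights[i] -= lr * grad


class SupervisedTrainer:
    """Trains on one dataset and reports the mean error on another."""

    def __init__(self, network, train_set, test_set):
        self.network = network
        self.train_set = train_set  # iterable of (input, target) pairs
        self.test_set = test_set

    def train_epoch(self):
        for x, t in self.train_set:
            self.network.forward(x)
            self.network.backward(t)
        return self.mean_test_error()

    def mean_test_error(self):
        errors = [np.mean((self.network.forward(x) - t) ** 2)
                  for x, t in self.test_set]
        return float(np.mean(errors))
```

With a split like this, swapping in a different transfer function or training strategy only touches one module, which is the point of separating the network from the trainer.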

The NeuralNetworkAdapter is intended as the standard interface to the MLP. It contains the necessary factory and (input, target)-based learning methods, and it also allows an external MLP to be used in place of the bundled one.
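A rough sketch of what such an adapter could look like, reusing the hypothetical FFNetwork above; the method names `create`, `learn`, and `predict` are illustrative assumptions:

```python
class NeuralNetworkAdapter:
    """Standard interface: a factory plus (input, target) learning."""

    def __init__(self, network):
        # `network` may be the bundled FFNetwork or any external MLP
        # exposing compatible forward/backward methods.
        self.network = network

    @classmethod
    def create(cls, layer_sizes):
        """Factory method building the default FFNetwork."""
        return cls(FFNetwork(layer_sizes))

    def learn(self, input_vector, target_vector):
        """One (input, target)-based learning step."""
        self.network.forward(input_vector)
        self.network.backward(target_vector)

    def predict(self, input_vector):
        return self.network.forward(input_vector)
```

Because the adapter only talks to the network through forward and backward calls, any external MLP that exposes the same two operations can be dropped in without changing client code.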