# pymlp.mlp package

## pymlp.mlp.FFNetwork module

Multilayer Perceptron implementation.

class pymlp.mlp.FFNetwork.FFNetwork(shape, transferFunctions, alpha)[source]

Simple feed forward Multilayer Perceptron (MLP) implementation.

clone()[source]

Returns a clone of this network.

feedForward(x)[source]

Runs the input forward through the network and caches each layer's output in the list A.

Parameters:
x – the input value to run through the network
Returns:
the network's outputs
load(files)[source]

Loads the weights from the given files.

Parameters:
files – a list of files to read the weights from
propagateBack(error)[source]

Implementation according to (Rojas, 1996, p. 170 ff.).

Parameters:
error – the error to propagate back.
propagateBackBatch(error, O)[source]

Implements batch learning. However, it must be called for every data point separately. Based on (Rojas, 1996, p. 170 ff.).

Parameters:
error – the error to propagate back
O – the outputs of every layer of the MLP
save(filetemplate)[source]

Saves the weights into files based on the filetemplate.

Parameters:
filetemplate – a file name template into which the layer number is inserted before saving
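The feedForward/propagateBack pair above follows the standard backpropagation scheme from Rojas (1996, ch. 7). Since pymlp's internals are not reproduced in this reference, the following is a self-contained numpy sketch of what the two methods conceptually compute; the class and method names here are stand-ins, not pymlp's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """Stand-in for FFNetwork: shape=(2, 3, 1) means 2 inputs,
    a hidden layer of 3 units, and 1 output; alpha is the
    learning rate."""

    def __init__(self, shape, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.alpha = alpha
        # one weight matrix per layer transition, with an extra bias row
        self.W = [rng.normal(0.0, 0.5, (m + 1, n))
                  for m, n in zip(shape[:-1], shape[1:])]

    def feed_forward(self, x):
        # cache every layer's output in the list A, as the
        # FFNetwork docstring describes
        self.A = [np.asarray(x, dtype=float)]
        for W in self.W:
            a = np.append(self.A[-1], 1.0)   # append the bias input
            self.A.append(sigmoid(a @ W))
        return self.A[-1]

    def propagate_back(self, error):
        # delta for the output layer, then propagate it down
        delta = error * self.A[-1] * (1.0 - self.A[-1])
        for i in reversed(range(len(self.W))):
            a = np.append(self.A[i], 1.0)
            grad = np.outer(a, delta)
            # error signal for the layer below (drop the bias row)
            delta = (self.W[i][:-1] @ delta) * self.A[i] * (1.0 - self.A[i])
            self.W[i] -= self.alpha * grad

net = TinyMLP(shape=(2, 3, 1), alpha=1.0)
x, target = np.array([0.0, 1.0]), np.array([1.0])
before = float((net.feed_forward(x) - target) ** 2)
for _ in range(200):
    error = net.feed_forward(x) - target
    net.propagate_back(error)
after = float((net.feed_forward(x) - target) ** 2)
```

Repeatedly feeding the same point forward and propagating the error back drives the squared error down, which is exactly the loop SupervisedTrainer runs over a whole dataset.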

## pymlp.mlp.NetworkVisualizer module

class pymlp.mlp.NetworkVisualizer.NetworkVisualizer(rows, columns, colormap=<matplotlib.colors.LinearSegmentedColormap instance>)[source]

Visualizes the weights of a network.

removeUnnecessaryPlots()[source]

Removes the unused subfigures so there are no empty graphs. Helpful for modular networks.

setData(weights, column, row)[source]
visualize()[source]
pymlp.mlp.NetworkVisualizer.visualize()[source]

Sample function showing how the visualizer is used.
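A weight visualizer of this kind is essentially a grid of subplots, one per weight matrix. The following is a hypothetical matplotlib sketch of that idea, not pymlp's actual NetworkVisualizer code; the grid layout and layer titles are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen, no display needed
import matplotlib.pyplot as plt

# one subplot per weight matrix, arranged in a rows x columns grid
rows, columns = 1, 2
fig, axes = plt.subplots(rows, columns, squeeze=False)

weights = [np.random.default_rng(0).normal(size=(3, 3)),
           np.random.default_rng(1).normal(size=(4, 1))]
for idx, W in enumerate(weights):
    ax = axes[idx // columns][idx % columns]
    ax.imshow(W, cmap="viridis")   # pymlp defaults to a LinearSegmentedColormap
    ax.set_title("layer %d" % idx)

fig.savefig("weights.png")
```

removeUnnecessaryPlots then corresponds to deleting the axes of any grid cells that received no weight matrix.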

## pymlp.mlp.NeuralNetworkAdapter module

class pymlp.mlp.NeuralNetworkAdapter.FFNetworkAdapter(layout, transferFunctions, learningRate)[source]

Adapter for the self implemented feed-forward neural network.

clone()[source]
forward(inpt)[source]
saveCurrentNetwork(template)[source]

Saves the network at the current state.

saveInitialNetwork(template)[source]

Saves the network at the state after construction.

train(inpt, target, numberOfIterations, batch=False)[source]
class pymlp.mlp.NeuralNetworkAdapter.FFNetworkFactory(folder)[source]

Reads the initial weights from the storage files and returns an FFNetworkAdapter based on them.

createNetwork(name, layout, transferFunctions, learningRate, replicationNumber)[source]
saveNetwork(network, name, folder, uniqueIdentifier)[source]

Saves the network with the given name in the given folder.

Parameters:
network – the network to save
name – the name of the network
folder – folder in which to store the network
uniqueIdentifier – an identifier for this particular network, e.g. the replication number
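The adapter/factory split above separates the concrete network from the interface the trainer expects. This is a self-contained sketch of that pattern with stand-in classes; the real FFNetworkAdapter wraps pymlp's FFNetwork, and all names and file-name templates below are illustrative assumptions.

```python
class DummyNetwork:
    """Stand-in for FFNetwork: just tracks its layout and weights."""
    def __init__(self, layout):
        self.layout = layout
        self.weights = [0.0] * (len(layout) - 1)   # one matrix per transition

class NetworkAdapter:
    """Adapts a concrete network to the interface the trainer uses."""
    def __init__(self, network):
        self.network = network

    def forward(self, inpt):
        # a real adapter would delegate to FFNetwork.feedForward
        return inpt

    def save_current_network(self, template):
        # the template carries a placeholder for the layer number,
        # mirroring FFNetwork.save(filetemplate)
        return [template % i for i in range(len(self.network.weights))]

class NetworkFactory:
    """Creates adapters; the real factory also restores stored weights."""
    def create_network(self, name, layout):
        return NetworkAdapter(DummyNetwork(layout))

factory = NetworkFactory()
adapter = factory.create_network("xor", layout=(2, 3, 1))
files = adapter.save_current_network("xor_layer%d.txt")
```

Keeping creation in a factory means the trainer never needs to know whether the weights were freshly initialized or restored from disk.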

## pymlp.mlp.SupervisedTrainer module

Trains a neural network in a supervised fashion.

class pymlp.mlp.SupervisedTrainer.DummyListener[source]

Does nothing, but makes the post-processing step optional.

postProcess(network)[source]

Called after each training epoch.

class pymlp.mlp.SupervisedTrainer.SupervisedTrainer(network, trainingData, trainingTargets, testingData, testingTargets, folder, postProcessListener=None)[source]

Trains two neural networks, one with stochastic and one with non-stochastic sampling.

clone()[source]

Clones this SupervisedTrainer.

collectErrors(testingData, testingTargets)[source]

Collects the square error between the network's actual outputs and the target values.

Parameters:
testingData – the test data points
testingTargets – the target values for the testingData
Returns:
the square error between the actual results and the target values
errorAfterTraining(network, trainingData, trainingTargets, testingData, testingTargets)[source]

Calculates the mean error after a training epoch.

Parameters:
network – the network
trainingData – the dataset for training
trainingTargets – the targets for training
testingData – the dataset for testing
testingTargets – the targets for the testing data
Returns:
the mean square error for the testing datasets
folderTemplate = '%s_%s/'
numberOfEpochs = 500
trainNetwork()[source]

Uses normal online learning to train the network.

trainNetworkBatch()[source]

Uses batch-learning to train the network.

trainNetworkStochastic()[source]

Trains the network stochastically. It shuffles the order in which the training datapoints are presented to the network.
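The only difference between trainNetwork and trainNetworkStochastic is the order in which the training data points are presented. This sketch isolates that difference with a stand-in "network" that just records the presented points; the function name and signature are assumptions, not pymlp's API.

```python
import random

def train_epoch(data, targets, present, stochastic=False, seed=1):
    """Present every (data point, target) pair once per epoch.

    stochastic=False mirrors trainNetwork (fixed order);
    stochastic=True mirrors trainNetworkStochastic (shuffled order).
    """
    order = list(range(len(data)))
    if stochastic:
        random.Random(seed).shuffle(order)   # shuffle presentation order
    for i in order:
        present(data[i], targets[i])

# fixed order: points arrive exactly as stored
seen = []
train_epoch([10, 20, 30], [1, 2, 3],
            present=lambda x, t: seen.append((x, t)))

# stochastic order: same points, shuffled presentation
shuffled = []
train_epoch([10, 20, 30], [1, 2, 3],
            present=lambda x, t: shuffled.append(x), stochastic=True)
```

In the batch variant (trainNetworkBatch), present would accumulate gradients across the epoch and apply a single weight update at the end instead of updating after every point.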

## pymlp.mlp.TransferFunctions module

class pymlp.mlp.TransferFunctions.Linear[source]

Linear function and its derivative.

df(activations)[source]
f(activations)[source]
class pymlp.mlp.TransferFunctions.Logistic(beta=10)[source]

The logistic function and its derivative.

df(activations)[source]

The derivative of the activation function f, according to (Rojas, 1996).

f(activations)[source]

The activation function.

class pymlp.mlp.TransferFunctions.TanH[source]

The TanH function and its derivative.

df(activations)[source]
f(activations)[source]
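Each transfer-function class pairs f with its derivative df, evaluated element-wise on the activations, so backpropagation can ask for both. This is a self-contained numpy sketch of the three classes listed above; the exact bodies are assumptions based on the standard definitions, with beta=10 mirroring Logistic's documented default.

```python
import numpy as np

class Linear:
    """f(x) = x, so the derivative is constant 1."""
    def f(self, activations):
        return np.asarray(activations, dtype=float)
    def df(self, activations):
        return np.ones_like(np.asarray(activations, dtype=float))

class Logistic:
    """f(x) = 1 / (1 + exp(-beta * x))."""
    def __init__(self, beta=10):
        self.beta = beta
    def f(self, activations):
        return 1.0 / (1.0 + np.exp(-self.beta * np.asarray(activations)))
    def df(self, activations):
        # s'(x) = beta * s(x) * (1 - s(x))  (Rojas, 1996)
        s = self.f(activations)
        return self.beta * s * (1.0 - s)

class TanH:
    """f(x) = tanh(x), with derivative 1 - tanh(x)^2."""
    def f(self, activations):
        return np.tanh(activations)
    def df(self, activations):
        return 1.0 - np.tanh(activations) ** 2

logistic = Logistic(beta=1)
mid = logistic.f(np.array([0.0]))      # sigmoid(0) = 0.5
slope = logistic.df(np.array([0.0]))   # 1 * 0.5 * (1 - 0.5) = 0.25
```

Expressing df in terms of f (rather than recomputing the exponential) is the usual choice, since backpropagation already has the layer outputs cached.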