# pyrltr.analyzer package

## pyrltr.analyzer.DataAnalyzer module

class pyrltr.analyzer.DataAnalyzer.DataAnalyzer(name, steplength=50, groupsize=50)[source]
computeGroupSums(x)[source]
computeMeanStdOfFiles(dataSets, computeSum=False, computeGroupSums=False)[source]

Computes the mean and standard deviation of the values in the given files.

expandData(data, computeSum=False)[source]

Expands the shorter datasets with NaNs so that all datasets have a uniform length. data must have the form [dataset0, dataset1, ...], where each dataset is a 1-D array or list, possibly of a different length.
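The padding described above can be sketched as follows. This is a hypothetical re-implementation for illustration, not the package's actual code; the function name `expand_data` is an assumption.

```python
import numpy as np

def expand_data(data):
    """Pad shorter datasets with NaNs so all rows have equal length.

    Illustrative sketch only; the real expandData may differ in details
    (e.g. its computeSum option is not reproduced here).
    """
    max_len = max(len(d) for d in data)
    # Start from an all-NaN matrix and copy each dataset into its row.
    padded = np.full((len(data), max_len), np.nan)
    for i, d in enumerate(data):
        padded[i, :len(d)] = d
    return padded
```

Padding with NaN rather than zero keeps later mean/std computations honest, since NaN-aware reductions (`np.nanmean`, `np.nanstd`) simply ignore the filler values.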

filterFiles(f)[source]
filterNAN(datasets)[source]
isDataFile(filename)[source]
loadData(filename)[source]

## pyrltr.analyzer.DataContainer module

class pyrltr.analyzer.DataContainer.DataContainer(folder, prefix)[source]

Contains all the data logged while running the simulations.

Attributes:

- rewards – the received rewards
- lengths – the lengths of each episode
- startPositions – the start positions of each epoch
- goalPositions – the goal positions of each epoch
- actions – the actions taken during a simulation
- undos – the count of undos for every step
addAction(action)[source]

Adds an action to the current episode.

addFinalPosition(position)[source]
addGoalPosition(goalPosition)[source]

Adds the goal position to the current epoch.

addLength(length)[source]

Adds the length of the last episode to the current epoch.

addReward(reward)[source]

Adds the sum of rewards for the last episode.

addStartPosition(startPosition)[source]

Adds the start position to the current epoch.

addUndo(undos)[source]

Adds the number of undos for the last action.

prepareNewEpisode()[source]

Prepares a new episode. Adds all necessary lists so that the next episode can be logged properly.

prepareNewEpoch()[source]

Prepares a new epoch. Adds all necessary lists so that the next epoch can be logged properly.

writeData(index)[source]

Writes the data to the hard drive.
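The methods above suggest a nested bookkeeping pattern: each epoch opens fresh per-episode lists, and the add* calls append to the most recent one. A minimal stand-in illustrating that pattern (the class name `EpochLogger` is hypothetical; this is not the actual DataContainer implementation):

```python
class EpochLogger:
    """Stand-in for the epoch/episode logging pattern, for illustration."""

    def __init__(self):
        self.rewards = []  # one inner list of episode rewards per epoch
        self.lengths = []  # one inner list of episode lengths per epoch

    def prepareNewEpoch(self):
        # Open a fresh inner list for the upcoming epoch.
        self.rewards.append([])
        self.lengths.append([])

    def addReward(self, reward):
        # Append to the current (most recent) epoch.
        self.rewards[-1].append(reward)

    def addLength(self, length):
        self.lengths[-1].append(length)

log = EpochLogger()
log.prepareNewEpoch()
log.addReward(1.5)
log.addLength(42)
```

Calling prepareNewEpoch before the add* methods matters: without an open inner list, an append to `rewards[-1]` would fail on an empty outer list.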

## pyrltr.analyzer.Metrics module

class pyrltr.analyzer.Metrics.Metrics(dataset, normalizer)[source]
calculateKSData(firstDataset, secondDataset)[source]
calculateLearningSpeed()[source]
calculateNormalizedLearningSpeed()[source]
learnedBehaviour(numberOfSteps)[source]
learnedNormalizedBehaviour(numberOfSteps)[source]
meanVar(data)[source]
meanVarConfidence(data, confidenceLevel)[source]
normalizeData()[source]
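From its signature, meanVarConfidence(data, confidenceLevel) plausibly returns a mean, a variance, and a confidence bound. A hedged sketch of one common way to compute those quantities, using a normal approximation (the function name and return convention here are assumptions, not the package's documented behaviour):

```python
import math
import statistics

def mean_var_confidence(data, confidence_level=0.95):
    """Mean, sample variance, and normal-approximation confidence half-width.

    Hypothetical sketch; the package's meanVarConfidence may use a
    different estimator (e.g. a t-distribution for small samples).
    """
    mean = statistics.fmean(data)
    var = statistics.variance(data)  # sample variance (n - 1 denominator)
    # Two-sided z-value via the inverse CDF of the standard normal.
    z = statistics.NormalDist().inv_cdf(0.5 + confidence_level / 2)
    half_width = z * math.sqrt(var / len(data))
    return mean, var, half_width
```

The interval is then mean ± half_width; for small samples a t-quantile would widen it appropriately.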