CERRLA (Cross-Entropy Relational Reinforcement Learning Algorithm) is the name given to the algorithm developed throughout my PhD research between 2009 and 2013. This page serves as a portal for various CERRLA-related links and data.
My thesis: ‘Policy Search Based Relational Reinforcement Learning using the Cross-Entropy Method’
After much work, I have completed and submitted my PhD thesis. You can find it (along with an alternative location for CERRLA-related files) here.
CERRLA Source Code
The source code for the CERRLA agent can be found here. All code is in Java (or JESS) and is fairly well documented. Note that JESS.jar and other JAR files are required to run CERRLA; academic JESS licenses can be obtained from here.
Outputs of the experiments performed and presented in my thesis include the agent observations (319 KB) and the raw experimental outputs (warning: 130 MB file). Both archives are compressed with 7-Zip.
The agent observations consist of two files per environment (action conditions and condition relations) and two additional goal-related files per goal (only relevant for BlocksWorld goals).
The raw experiment files include the files produced by CERRLA for every experiment presented in the thesis (except one: the individual experiment files for the Ms. Pac-Man 10-levels experiment were lost, though the combined averaged files survive). The raw experiment files may be a little messy, but they were designed to be readable, so with some effort they can be parsed and the relevant information extracted. When viewing the slots, only slots with mu(S) > 0.001 are shown, and only the five most probable rules are listed per slot. Even so, they give a decent snapshot of CERRLA's state, recorded every 300 episodes.
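To illustrate the filtering described above, here is a minimal, hypothetical Java sketch of how a snapshot viewer might apply those two cut-offs: discard slots whose selection probability mu(S) is at or below 0.001, and list only the five most probable rules in each remaining slot. The Slot record and rule-probability map here are illustrative stand-ins, not CERRLA's actual classes or file format.

```java
import java.util.*;
import java.util.stream.*;

public class SnapshotFilter {
    // Thresholds matching the snapshot description: mu(S) > 0.001, top 5 rules.
    static final double MU_THRESHOLD = 0.001;
    static final int TOP_RULES = 5;

    /** Hypothetical slot: a selection probability mu and a rule -> probability map. */
    record Slot(String name, double mu, Map<String, Double> ruleProbs) {}

    /** Keep only slots whose selection probability exceeds the display threshold. */
    static List<Slot> visibleSlots(List<Slot> slots) {
        return slots.stream()
                .filter(s -> s.mu() > MU_THRESHOLD)
                .collect(Collectors.toList());
    }

    /** Return a slot's k most probable rules, most probable first. */
    static List<String> topRules(Slot slot, int k) {
        return slot.ruleProbs().entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Slot> slots = List.of(
                new Slot("moveSlot", 0.7, Map.of("r1", 0.4, "r2", 0.3, "r3", 0.1,
                        "r4", 0.08, "r5", 0.07, "r6", 0.05)),
                new Slot("rareSlot", 0.0005, Map.of("r7", 1.0)));

        // Only moveSlot passes the mu threshold; r6 is dropped as sixth-ranked.
        for (Slot s : visibleSlots(slots)) {
            System.out.println(s.name() + " mu=" + s.mu()
                    + " rules=" + topRules(s, TOP_RULES));
        }
    }
}
```

The same two-stage filter (probability cut-off, then top-k truncation) is what keeps the printed snapshots compact while still showing the dominant rules in each slot.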
Videos of CERRLA in Action!
Check out videos of CERRLA playing Ms. Pac-Man, Mario and Carcassonne on my YouTube channel.