PhD Progress: Formalising Mario into a Relational Environment

One of the main problems currently facing me in implementing Mario is how to represent it. There are several tracks available to me, and I don’t yet know which would be best.

The first and probably easiest track is to do the same with Mario as I did with Ms. PacMan – collapse possible groups of actions into single actions (toDot, etc.). In Mario, these would be actions like jumpOnGoomba and collectCoin. The problem with this is that it practically defines the environment for a baby to learn in.
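As a minimal sketch of this first track (all names here are hypothetical, not the actual implementation), each macro-action bundles a whole scripted behaviour, and the agent only ever chooses among these, never among keystrokes:

```python
from typing import Callable, Dict

# Each macro-action maps directly onto a scripted behaviour applied
# to a named game object. The agent's action space is just the keys
# of this dictionary.
MACRO_ACTIONS: Dict[str, Callable[[str], str]] = {
    "jumpOnGoomba": lambda target: f"scripted jump onto {target}",
    "collectCoin": lambda target: f"scripted path to {target}",
}

def act(action: str, target: str) -> str:
    """Dispatch a macro-action against a named game object."""
    return MACRO_ACTIONS[action](target)
```

The point of the sketch is that all the hard behavioural knowledge lives in the scripted lambdas, which is exactly why this track teaches the agent so little.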

The second step up the generalisation ladder is to use only general actions that are still clear in their intent: jumpOver, jumpOnto, shoot. These actions are enough to play Mario passably (there are still the problems of obstacles, knowing exactly when to run, grabbing, hitting boxes, etc., but they should be enough to evidence some form of play). They can (theoretically) be achieved by implementing a slot-splitting mechanic, which splits a slot into slots of the identical action but with different type predicates in their rules. So there will be a jumpOnto(coin) slot and a jumpOnto(enemy) slot. The problem with this is type hierarchies: all things in Mario are objects, some of which are coins and some enemies. Some enemies are also non-jumpable, which could be an observation the environment provides. Who knows… perhaps the agent can learn which enemies are not jump-ontoable.
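The slot-splitting mechanic described above can be sketched roughly as follows (class and method names are my own invention for illustration): a general slot such as jumpOnto is split into child slots whose rules carry different type predicates, giving jumpOnto(coin) and jumpOnto(enemy).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Slot:
    """A slot for one action, optionally narrowed by a type predicate."""
    action: str                       # e.g. "jumpOnto"
    type_pred: Optional[str] = None   # e.g. "coin"; None means any object

    def split(self, observed_types: List[str]) -> List["Slot"]:
        """Split this slot into one child slot per observed object type,
        each carrying the same action but a different type predicate."""
        return [Slot(self.action, t) for t in observed_types]

general = Slot("jumpOnto")
children = general.split(["coin", "enemy"])
# children now represent jumpOnto(coin) and jumpOnto(enemy)
```

Note the sketch sidesteps the type-hierarchy problem: it assumes the environment hands over flat type labels like "coin" and "enemy", rather than a hierarchy rooted at "object".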

Note that this system was once theorised for Ms. PacMan too, and may have to be implemented there before Mario. The only question is how much the slot-splitting mechanic will slow or degrade learning.

The third step is at the keystroke level. There are numerous problems with this, chiefly that the keys are not relationally linked to the objects in the game, save Mario himself. Rules could be created for such things, but not with the rules as currently formulated, which use action objects for specialising the rules.
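To make the mismatch concrete, here is a toy comparison (the rule representation is hypothetical, chosen only to illustrate the point): a relational rule specialises on its action's object argument, but a keystroke action has no object argument to bind conditions to.

```python
# A relational rule: the action takes an object argument X, and the
# rule's conditions specialise on X (its type, its relation to Mario).
relational_rule = {
    "action": "jumpOnto",
    "args": ["X"],                                # object to specialise on
    "conditions": ["enemy(X)", "near(mario, X)"],
}

# A keystroke rule: pressing A involves no game object, so the only
# conditions available are ones about Mario himself.
keystroke_rule = {
    "action": "pressA",
    "args": [],                                   # nothing to specialise on
    "conditions": ["onGround(mario)"],
}
```

This is why keystroke-level rules would need a different specialisation mechanism from the action-object one currently in use.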