# PhD Progress: Differing pre-goal actions

Working on forming a general description of the pre-goal state, and it is raising issues. In Blocks World it is all well and good because, for each of the three main goals, the final action is the same, so there is no problem in generalising the pre-goal to the action (simply swap in variables for the terms present in the action). Constants are another issue, but they shouldn't be conflicting: it is simply a matter of holding onto constants in the pre-goal unless forced to generalise.
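As a sketch of that generalisation step, assuming pre-goal facts are represented as (predicate, args) tuples (the representation and the `?X0`-style variable naming are my own, hypothetical choices):

```python
def generalise_to_action(pre_goal, action_args):
    """Swap every term that appears as an action argument for a
    variable; other constants are held onto unless later forced
    to generalise."""
    var_for = {}  # action argument -> variable name

    def term(t):
        if t in action_args:
            var_for.setdefault(t, f"?X{len(var_for)}")
            return var_for[t]
        return t  # keep constants as-is

    return [(pred, tuple(term(t) for t in args))
            for pred, args in pre_goal]

# Hypothetical onAB-style pre-goal, where the last action was move(a, b):
pre_goal = [("clear", ("a",)), ("clear", ("b",)), ("on", ("a", "c"))]
print(generalise_to_action(pre_goal, ("a", "b")))
# [('clear', ('?X0',)), ('clear', ('?X1',)), ('on', ('?X0', 'c'))]
```

Note that `c`, not being an action argument, survives as a constant.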

The problem lies in the clear(a) case. In the clear(a) pre-goal, the state is on(X,a), clear(X), clear(Y), etc. However, the problem can be solved by either move(X,Y) or moveFloor(X); the fact that X happens to be the same block in both actions is simply a circumstance of how the actions are defined.

The only way I can see this being solved is to keep a separate pre-goal for each action predicate. Hmm, this actually makes sense anyway, as each action needs to be mutated towards its particular pre-goal (if one exists). This also restricts the mutation operator, by only allowing mutations for actions that have a pre-goal. Unfortunately, restricting mutation could be a problem: while an action may be ideal for the final step, the intermediate steps may require other actions.
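One way to realise this restriction is to key the pre-goals by action predicate, so mutation is only permitted where a pre-goal has been recorded; a minimal sketch (the dictionary contents and names here are hypothetical):

```python
# Pre-goals recorded separately per action predicate (contents hypothetical).
pre_goals = {
    "move": [("clear", ("?X",)), ("clear", ("?Y",))],
    "moveFloor": [("on", ("?X", "a")), ("clear", ("?X",))],
}

def mutable(action_pred):
    """A rule may only be mutated towards a pre-goal that exists
    for its own action predicate."""
    return action_pred in pre_goals

print(mutable("move"))    # True
print(mutable("pickup"))  # False: no pre-goal recorded, so no mutation
```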

I have a feeling this is where modularisation fits in well. While Blocks World is just a toy example, it may do for now. The onAB problem has opportunities for modularisation, as does the 'final step' rule, which states 'if a and b are clear, move a to b'. Dropping into modularisation for clear(a), the pre-goal for this state will look essentially the same for either action, but will behave differently.

So, clear(a). The general rules for each action are:

(move X Y) <= (clear X) (clear Y)
(moveFloor X) <= (clear X)

Each of these needs mutations to become an optimal rule for clear(a):

(move X Y) <= (on X a) (clear X) (clear Y)
(moveFloor X) <= (on X a) (clear X)

Apart from the move case, which isn't always guaranteed to apply (clear(Y) need not hold), these should always work, and their pre-goal states will lead towards these mutations as well. Going back to onAB, the last action will always be move, so the moveFloor rule will simply remain in its general form (and likely be ignored through iterations).
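The mutation itself can be sketched as adding pre-goal conditions that the general rule lacks (one simple flavour of mutation; matching conditions by exact equality is an assumption of the sketch):

```python
def mutate_towards(rule_conds, pre_goal_conds):
    """Specialise a general rule by appending pre-goal conditions
    it does not already contain."""
    return rule_conds + [c for c in pre_goal_conds if c not in rule_conds]

# General move rule, mutated towards a clear(a) pre-goal:
general_move = [("clear", ("?X",)), ("clear", ("?Y",))]
clear_a_pg = [("on", ("?X", "a")), ("clear", ("?X",)), ("clear", ("?Y",))]
print(mutate_towards(general_move, clear_a_pg))
# [('clear', ('?X',)), ('clear', ('?Y',)), ('on', ('?X', 'a'))]
```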

So, the slightly modified strategy now uses a separate pre-goal for each action.

However, looking at the Ms PacMan case, things may be more difficult. Assuming there are only 2 actions (moveTowards and moveFrom), the pre-goals for each of these will likely be in an extremely general form. Typically, the final action will be moving towards a dot, but Ms PacMan may simply happen across the final dot while running from a ghost, or moving towards a safe junction. Each case will look something like this:
moveTowards(X) Sp: dot(X), ghost(?), junction(?), thing(X)

moveTowards(X) Sp: dot(?), ghost(?), junction(X), thing(X)

Union Sp: dot(?), ghost(?), junction(?), thing(X)
This union is rather useless. Perhaps the modified Ms PacMan world (still in progress) will be more helpful. Or perhaps the actions need to be dropped down a level, to moveToDot(X), moveFromGhost(X)…
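The union step can be sketched as fact-by-fact generalisation, collapsing any disagreeing terms to the anonymous '?' term (assuming, for simplicity, at most one fact per predicate):

```python
ANON = "?"

def union_pre_goals(pg1, pg2):
    """Generalise two pre-goal states: any argument position whose
    terms disagree collapses to the anonymous term."""
    other = dict(pg2)
    return [(pred, tuple(a if a == b else ANON
                         for a, b in zip(args, other[pred])))
            for pred, args in pg1 if pred in other]

# The two Ms PacMan final-step cases from above:
towards_dot      = [("dot", ("X",)), ("ghost", ("?",)),
                    ("junction", ("?",)), ("thing", ("X",))]
towards_junction = [("dot", ("?",)), ("ghost", ("?",)),
                    ("junction", ("X",)), ("thing", ("X",))]
print(union_pre_goals(towards_dot, towards_junction))
# only thing(X) keeps its binding; everything else becomes anonymous
```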

I think I’ll continue trying to solve the Blocks World as currently defined before trying to tackle Ms PacMan/StarCraft.