Judging by the progress being made on automated modular learning, the algorithm could readily be applied to continuous domains. In fact, it already learns each module in what is essentially a continuous domain, because the agent ignores the external reward received from the environment; it only tries to achieve its own internal goals.
Furthermore, the agent currently learns only a single module at a time, but this can (and probably will) be extended to allow learning over multiple modules concurrently, or at least to learning pre-goal states for other modules. This is possible because the agent can learn over pseudo-goals: goals it creates on-the-fly. For instance, when the agent moves a block to the floor, the block underneath becomes clear, allowing the agent to declare “I was meant to do that!” and initiate the pre-goal unification process for the newly cleared block.
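The pseudo-goal idea above can be sketched concretely. The following is a minimal illustration, assuming states are represented as sets of ground predicates such as ("clear", "B"); the function name, state encoding, and example are my own illustrative assumptions, not the system's actual implementation.

```python
# Illustrative sketch: detect predicates that became true as side
# effects of an action. State encoding and names are assumptions.

def pseudo_goals(state_before, state_after, achieved_goal):
    """Return predicates newly true after an action, other than the
    goal the agent was deliberately pursuing. Each is a candidate
    pseudo-goal on which pre-goal unification could be initiated."""
    newly_true = state_after - state_before
    return {p for p in newly_true if p != achieved_goal}


# Example: moving block A from atop B to the floor also clears B.
before = {("on", "A", "B"), ("clear", "A")}
after = {("on-floor", "A"), ("clear", "A"), ("clear", "B")}

print(pseudo_goals(before, after, ("on-floor", "A")))
# ("clear", "B") survives as a pseudo-goal for the clear module
```

Here the agent's deliberate goal was ("on-floor", "A"), but the side effect ("clear", "B") is exactly the kind of serendipitous achievement the agent can claim as its own and learn from.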
Again, predicates or terms which are unique (a multiple-sized blocks world, perhaps?) could cause problems here, because the agent simply latches onto any matching clear predicate, and partial observability (POMDPs) will likely interfere with its progress. Still, this is good news, and probably my future work.