PhD Progress: Truly Dynamic Gamma Size

Gamma is the size of the elite set. If gamma is too small, the probabilities for the select few rules that make it into the top samples shoot up; too big, and the process takes too long. Previously I have set it to the square of the largest slot size (or perhaps the average slot size). The problem is that in the onAB task, the size of a slot can quickly explode. Currently, 3.7% of the way into an onAB task, there are 58 rules in a slot. That's a (proposed) gamma of 58² = 3364! That will take quite some time to gather. Well, maybe not HEAPS of time, but some.
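For reference, here is a minimal sketch of the slot-size-squared rule as described above. The slot representation and function name are just illustrative, not the actual implementation:

```python
def proposed_gamma(slots, use_average=False):
    """Compute the elite set size (gamma) from the current slot sizes.

    Uses the square of the largest slot size, or optionally the square
    of the average slot size. Slots here are simply lists of rules.
    """
    sizes = [len(slot) for slot in slots]
    if not sizes:
        return 0
    base = sum(sizes) / len(sizes) if use_average else max(sizes)
    return int(base ** 2)


# Example: one slot with 58 rules dominates, giving gamma = 58^2 = 3364.
slots = [["rule"] * 58, ["rule"] * 12, ["rule"] * 5]
print(proposed_gamma(slots))        # 3364
print(proposed_gamma(slots, True))  # 625 (average slot size 25, squared)
```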

This may not be as large a problem once slot splitting is implemented, but it still seems wrong. In a sense, because the set of samples is floating and the population N no longer matters in Cross-Entrobeam learning, the exact value of gamma doesn't particularly matter. But the values will likely take quite some time to converge.

The current experiment is also testing the use of restricted specialisations, which seem to be slowing specialisation down. But I failed to take pruning into account. If a rule's parent is pruned, then obviously the parent is bad, yet the rule itself will likely have a higher probability than the parent (because new rules are introduced with the average probability). Maybe further restrictions are required: a rule must have an average or better probability before it is specialised. And when new rules are created, maybe they should take the same probability as their parent rule.
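A rough sketch of those two tweaks, assuming each rule carries a selection probability within its slot and specialisation produces child rules (the names here are illustrative, not the actual system):

```python
def can_specialise(rule_prob, slot_probs):
    """Proposed restriction: only specialise rules at or above the slot's average probability."""
    average = sum(slot_probs) / len(slot_probs)
    return rule_prob >= average


def initial_child_probability(parent_prob, slot_probs, inherit_from_parent=True):
    """Probability assigned to a newly created (specialised) rule.

    Currently new rules get the slot average; the proposed change is to
    inherit the parent's probability instead, so a child of a weak
    (soon-to-be-pruned) parent doesn't start off looking better than it is.
    """
    if inherit_from_parent:
        return parent_prob
    return sum(slot_probs) / len(slot_probs)
```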

The system needs a little more tweaking to deal with pruned parents and low-probability rules.