One of my ideas for my PhD was to utilise Cyc, the common-sense ontology. It already contains a large body of background knowledge and a fully functional first-order logic representation, complete with a powerful inference engine.
So, a thought occurred to me while at Dal's parents' place, which I immediately wrote down so I could post it here for future reference.
A common issue in general-purpose AI is having too many facts for the agent to consider. I'm sure the book has several good ways of addressing this issue, but I haven't got to it yet. For now, I have an idea of utilising Cyc's microtheory structure to contain relevant information in specific areas. For instance, if an agent is solving a problem in a building, it will have these MTs: ProblemMt, for the facts relating to the problem itself; RoomMt, for the facts relating to the room it is currently inside; and BuildingMt, for the facts relating to the building.
This allows the agent to focus on problems at the appropriate scale, and still know where it stands in the grand scheme of things. Similar MTs could be grouped together for generalisation purposes as well, such as grouping all rooms together.
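The nesting above could be sketched roughly like this. This is a toy Python model, not CycL: the `Microtheory` class and `genl_mts` inheritance link are my own stand-ins for Cyc's microtheories and its genlMt generalisation relation, and the fact strings are made up for illustration.

```python
# A toy sketch of microtheory-style scoping, assuming a simple fact store.
# In Cyc, genlMt links a microtheory to more general ones; here genl_mts
# plays that role so queries in a narrow MT also see inherited facts.

class Microtheory:
    def __init__(self, name, genl_mts=()):
        self.name = name
        self.genl_mts = list(genl_mts)  # more general MTs this one inherits from
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)

    def visible_facts(self):
        """Facts asserted here plus everything inherited from general MTs."""
        seen = set(self.facts)
        for mt in self.genl_mts:
            seen |= mt.visible_facts()
        return seen

# The three scales from the example above:
building = Microtheory("BuildingMt")
room = Microtheory("RoomMt", genl_mts=[building])
problem = Microtheory("ProblemMt", genl_mts=[room])

building.assert_fact("exit-on-ground-floor")
room.assert_fact("door-is-locked")
problem.assert_fact("need-to-leave-room")

# Reasoning in ProblemMt sees all three facts, at the appropriate scale;
# reasoning in BuildingMt sees only the building-level fact.
print(sorted(problem.visible_facts()))
print(sorted(building.visible_facts()))
```

The point of the structure is that an inference step scoped to ProblemMt only has to consider the facts visible from there, rather than everything the agent knows, while the genlMt-style links keep the wider context reachable when it is needed.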