DagSemProc.10081.14.pdf
Robot learning is usually done by trial-and-error or by learning from examples. Neither method takes advantage of prior knowledge or of any ability to reason about actions. We describe two learning systems. In the first, we learn a model of a robot's actions and use it in simulation to search for a sequence of actions that achieves the goal of traversing rough terrain. Further learning compresses the results of this search into a set of situation-action rules. In the second system, we assume the robot has some knowledge of the effects of its actions and can use it to plan a sequence of actions. The qualitative states that the plan passes through are then used as constraints on trial-and-error learning, which greatly reduces the number of trials the learner requires. The method is demonstrated on the problem of a bipedal robot learning to walk.
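The first system's pipeline (learn a forward model, search it in simulation, compress the solution into situation-action rules) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy state space, the logged transitions, and all function names are assumptions, and a tabular model with breadth-first search stands in for whatever model class and search the authors actually used.

```python
# Hypothetical sketch: model learning, simulated search, rule compression.
# The 1-D "terrain" states and transitions are invented for illustration.
from collections import deque

# Transitions logged from exploratory trials: (state, action, next state).
LOGGED = [
    (0, "step", 1), (1, "step", 2), (2, "climb", 3),
    (3, "step", 4), (1, "climb", 1), (2, "step", 2),
]

def learn_model(transitions):
    """Tabular forward model: (state, action) -> predicted next state."""
    return {(s, a): s2 for s, a, s2 in transitions}

def search_plan(model, start, goal):
    """Breadth-first search in the learned model for an action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for (s, a), s2 in model.items():
            if s == state and s2 not in seen:
                seen.add(s2)
                frontier.append((s2, plan + [a]))
    return None

def compress_to_rules(model, start, plan):
    """Replay the plan, recording one situation -> action rule per step."""
    rules, state = {}, start
    for action in plan:
        rules[state] = action
        state = model[(state, action)]
    return rules

model = learn_model(LOGGED)
plan = search_plan(model, start=0, goal=4)          # sequence of actions
rules = compress_to_rules(model, 0, plan)           # reactive rule set
```

The compressed rules act as a reactive controller: at run time the robot looks up its current situation and executes the associated action, with no further search needed.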