
Journal of Information Science and Engineering, Vol. 27, No. 3, pp. 1123-1136 (May 2011)

Intention Learning From Human Demonstration*

HOA-YU CHAN, KUU-YOUNG YOUNG+ AND HSIN-CHIA FU
Department of Computer Science
+Department of Electrical Engineering
+Vision Research Center
National Chiao Tung University
Hsinchu, 300 Taiwan

Equipped with better sensing and learning capabilities, robots nowadays are meant to perform versatile tasks. To relieve the engineer of detailed analysis and programming, it has been proposed that the robot learn how to execute a task by itself from human demonstration. Following this idea, in this paper we propose an approach for the robot to learn the intention of the demonstrator from the trajectory produced during task execution. The proposed approach identifies the portions of the trajectory that correspond to delicate and skillful maneuvering. Those portions, referred to as motion features, may indicate the intention of the demonstrator. Because the trajectory may result from many possible intentions, finding the correct ones is a severe challenge. We first formulate the problem in a realizable mathematical form and then employ dynamic programming for the search. Experiments based on the pouring and fruit-jam tasks are performed to demonstrate the proposed approach, in which the derived intention is used to execute the same task under different experimental settings.
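The exact formulation appears in the full paper; as a rough illustration only, the following minimal Python sketch shows how a dynamic-programming search can split a demonstrated trajectory into contiguous segments, so that smoothly executed portions (candidate motion features) emerge as distinct segments. The cost function, segment limit, and toy trajectory are all hypothetical assumptions, not the authors' actual formulation.

```python
import numpy as np

def segment_trajectory(traj, max_segments, feature_cost):
    """Split a sampled trajectory into contiguous segments via dynamic
    programming, minimizing the total cost that `feature_cost` assigns
    to each candidate segment.

    traj         : (N, d) array of trajectory samples
    max_segments : maximum number of segments allowed
    feature_cost : callable(segment) -> float; low for segments that look
                   like deliberate, skillful maneuvering (hypothetical)
    """
    n = len(traj)
    INF = float("inf")
    # dp[k][j] = minimal cost of covering traj[:j] with exactly k segments
    dp = np.full((max_segments + 1, n + 1), INF)
    dp[0][0] = 0.0
    back = np.zeros((max_segments + 1, n + 1), dtype=int)

    for k in range(1, max_segments + 1):
        for j in range(1, n + 1):
            for i in range(k - 1, j):          # previous segment ends at sample i
                if dp[k - 1][i] == INF:
                    continue
                cost = dp[k - 1][i] + feature_cost(traj[i:j])
                if cost < dp[k][j]:
                    dp[k][j] = cost
                    back[k][j] = i

    # choose the best segment count, then recover boundaries by backtracking
    best_k = int(np.argmin(dp[1:, n])) + 1
    bounds, j = [], n
    for k in range(best_k, 0, -1):
        i = back[k][j]
        bounds.append((i, j))
        j = i
    return list(reversed(bounds))


def smoothness_cost(segment):
    """Hypothetical per-segment cost: penalize jerky motion so that smooth,
    carefully executed portions stand out as separate low-cost segments."""
    if len(segment) < 3:
        return 1.0
    accel = np.diff(segment, n=2, axis=0)      # discrete second difference
    return float(np.mean(np.sum(accel ** 2, axis=1)))


if __name__ == "__main__":
    # toy demonstration: a slow, careful wiggle in the middle of a fast motion
    t = np.linspace(0, 1, 200)
    y = np.where((t > 0.4) & (t < 0.6), 0.1 * np.sin(40 * t), t)
    traj = np.stack([t, y], axis=1)
    print(segment_trajectory(traj, max_segments=4, feature_cost=smoothness_cost))
```

In this sketch the search is exhaustive over all segment boundaries (O(K·N²) for K segments and N samples), which is tractable for demonstration-length trajectories; the paper's own formulation and cost terms should be consulted for the actual method.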

Keywords: intention learning, human demonstration, motion feature, robot imitation, skill transfer

Full Text: Retrieve PDF document (201105_20.pdf)

Received July 23, 2009; revised January 20 & March 12, 2010; accepted March 24, 2010.
Communicated by Pau-Choo Chung.
* Part of this paper was presented at the National Symposium on System Science and Engineering, Taiwan, 2008. This work was supported in part by the National Science Council of Taiwan, R.O.C., under grant No. NSC 96-2628-E-009-164-MY3, and by the Department of Industrial Technology under grant No. 97-EC-17-A-02-S1-032.