Lessons from human vision for robotic design
Melvyn A. Goodale
The visual guidance of goal-directed movements requires transformations of incoming visual information that are different from those required for visual perception. For us to grasp an object successfully, our brain must use just-in-time computations of the object’s real-world size and shape, and its orientation and disposition with respect to our hand. These requirements have led to the emergence of dedicated visuomotor modules in the posterior parietal cortex of the human brain (the dorsal visual stream) that are functionally distinct from networks in the occipito-temporal cortex (the ventral visual stream) that mediate our conscious perception of the world. Although the identification and selection of goal objects and an appropriate course of action depend on the perceptual machinery of the ventral stream and associated cognitive modules, the execution of the subsequent goal-directed action is mediated by dedicated online control systems in the dorsal stream and associated motor areas. The dorsal stream allows an observer to reach out and grasp objects with exquisite ease, but, by itself, it deals only with objects that are visible at the moment the action is being programmed. The ventral stream, however, allows an observer to escape the present and bring to bear information from the past – including information about the function of objects, their intrinsic properties, and their location with reference to other objects in the world. Ultimately, then, both streams contribute to the production of goal-directed actions. The principles underlying this division of labour between the dorsal and ventral streams are relevant to the design and implementation of autonomous robotic systems.
Keywords: Perception vs. action / Dorsal visual stream / Ventral visual stream / Tele-assistance / Grasping
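The division of labour described in the abstract maps naturally onto a tele-assistance-style control architecture: a deliberative module that identifies the goal object and selects an appropriate action from stored knowledge (the "ventral" role), and an online module that computes metric grasp parameters from the current view just before execution (the "dorsal" role). The Python sketch below is one way such a separation might look in a robotic grasping pipeline; it is not taken from the paper, and every name in it (SceneObject, select_goal, compute_grasp, the aperture margin) is an illustrative assumption.

```python
"""Minimal sketch of a two-stream ("ventral"/"dorsal") grasping
pipeline, loosely modelled on the division of labour described in
the abstract. All names and parameters are illustrative assumptions."""

from dataclasses import dataclass


@dataclass
class SceneObject:
    """Sensor-level description of an object visible right now."""
    label: str               # category label from a recognizer
    size_mm: float           # metric size, valid only at viewing time
    orientation_deg: float   # in-plane orientation in the gripper frame
    position_xyz: tuple      # position relative to the gripper


# --- "Ventral" module: identification and goal selection. ------------
# Draws on stored knowledge (object function, past experience) to decide
# WHAT to act on and WHICH action is appropriate. It does not need
# metrically precise, up-to-the-moment geometry.
OBJECT_KNOWLEDGE = {
    "mug":   {"action": "grasp_handle", "fragile": False},
    "glass": {"action": "grasp_rim",    "fragile": True},
}


def select_goal(scene: list, task: str) -> tuple:
    """Pick a goal object and an action appropriate to the task."""
    for obj in scene:
        info = OBJECT_KNOWLEDGE.get(obj.label)
        if info is not None and task == "drink":
            return obj, info["action"]
    raise ValueError("no suitable goal object for task")


# --- "Dorsal" module: just-in-time visuomotor control. ---------------
# Computes metrically accurate grasp parameters from the CURRENT view,
# immediately before execution; it keeps no memory of absent objects.
def compute_grasp(obj: SceneObject) -> dict:
    """Derive online grasp parameters from the current view of the goal."""
    return {
        "aperture_mm": obj.size_mm * 1.2,    # open slightly wider than object
        "wrist_deg":   obj.orientation_deg,  # align gripper with object
        "target_xyz":  obj.position_xyz,
    }


if __name__ == "__main__":
    scene = [SceneObject("mug", 85.0, 30.0, (0.42, 0.10, 0.05))]
    goal, action = select_goal(scene, task="drink")  # ventral: what / which
    params = compute_grasp(goal)                     # dorsal: how, online
    print(action, params)
```

The point the sketch tries to mirror is that select_goal may consult stored knowledge and task context, whereas compute_grasp deliberately takes only the current SceneObject as input, reflecting the dorsal stream's restriction to objects visible at the moment the action is programmed.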