A reachability-expressive motion planning algorithm to enhance human-robot collaboration

A team of researchers at the University of California, Los Angeles (UCLA) Center for Vision, Cognition, Learning, and Autonomy (VCLA), led by Prof. Song-Chun Zhu, recently developed an approach that could help align a human user's assessment of what a robot can do with the robot's true capabilities. The approach, presented in a paper published in IEEE Robotics and Automation Letters, is based on a new algorithm that simultaneously optimizes the physical cost and the expressiveness of a robot's motion, where expressiveness measures how well human observers can estimate the robot's reachable workspace.