Archive 04.03.2022


How to help humans understand robots

Researchers from MIT and Harvard suggest that applying theories from cognitive science and educational psychology to the area of human-robot interaction can help humans build more accurate mental models of their robot collaborators, which could boost performance and improve safety in cooperative workspaces. Image: MIT News, iStockphoto

By Adam Zewe | MIT News Office

Scientists who study human-robot interaction often focus on understanding human intentions from a robot’s perspective, so the robot learns to cooperate with people more effectively. But human-robot interaction is a two-way street, and the human also needs to learn how the robot behaves.

Thanks to decades of cognitive science and educational psychology research, scientists have a pretty good handle on how humans learn new concepts. So, researchers at MIT and Harvard University collaborated to apply well-established theories of human concept learning to challenges in human-robot interaction.

They examined past studies that focused on humans trying to teach robots new behaviors. The researchers identified opportunities where these studies could have incorporated elements from two complementary cognitive science theories into their methodologies. They used examples from these works to show how the theories can help humans form conceptual models of robots more quickly, accurately, and flexibly, which could improve their understanding of a robot’s behavior.

Humans who build more accurate mental models of a robot are often better collaborators, which is especially important when humans and robots work together in high-stakes situations like manufacturing and health care, says Serena Booth, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and lead author of the paper.

“Whether or not we try to help people build conceptual models of robots, they will build them anyway. And those conceptual models could be wrong. This can put people in serious danger. It is important that we use everything we can to give that person the best mental model they can build,” says Booth.

Booth and her advisor, Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group, co-authored this paper in collaboration with researchers from Harvard. Elena Glassman ’08, MNG ’11, PhD ’16, an assistant professor of computer science at Harvard’s John A. Paulson School of Engineering and Applied Sciences, with expertise in theories of learning and human-computer interaction, was the primary advisor on the project. Harvard co-authors also include graduate student Sanjana Sharma and research assistant Sarah Chung. The research will be presented at the IEEE Conference on Human-Robot Interaction.

A theoretical approach

The researchers analyzed 35 research papers on human-robot teaching using two key theories. The “analogical transfer theory” suggests that humans learn by analogy. When a human interacts with a new domain or concept, they implicitly look for something familiar they can use to understand the new entity.

The “variation theory of learning” argues that strategic variation can reveal concepts that might be difficult for a person to discern otherwise. It suggests that humans go through a four-step process when they interact with a new concept: repetition, contrast, generalization, and variation.

While many research papers incorporated partial elements of one theory, this was most likely due to happenstance, Booth says. Had the researchers consulted these theories at the outset of their work, they might have been able to design more effective experiments.

For instance, when teaching humans to interact with a robot, researchers often show people many examples of the robot performing the same task. But for people to build an accurate mental model of that robot, variation theory suggests that they need to see an array of examples of the robot performing the task in different environments, and they also need to see it make mistakes.

“It is very rare in the human-robot interaction literature because it is counterintuitive, but people also need to see negative examples to understand what the robot is not,” Booth says.

These cognitive science theories could also improve physical robot design. If a robotic arm resembles a human arm but moves in ways that are different from human motion, people will struggle to build accurate mental models of the robot, Booth explains. As suggested by analogical transfer theory, because people map what they know — a human arm — to the robotic arm, if the movement doesn’t match, people can be confused and have difficulty learning to interact with the robot.

Enhancing explanations

Booth and her collaborators also studied how theories of human-concept learning could improve the explanations that seek to help people build trust in unfamiliar, new robots.

“In explainability, we have a really big problem of confirmation bias. There are not usually standards around what an explanation is and how a person should use it. As researchers, we often design an explanation method, it looks good to us, and we ship it,” she says.

Instead, they suggest that researchers use theories from human concept learning to think about how people will use explanations, which are often generated by robots to clearly communicate the policies they use to make decisions. If users are given a curriculum that helps them understand what an explanation method means, when to use it, and also where it does not apply, they will develop a stronger understanding of a robot’s behavior, Booth says.

Based on their analysis, they make a number of recommendations about how research on human-robot teaching can be improved. For one, they suggest that researchers incorporate analogical transfer theory by guiding people to make appropriate comparisons when they learn to work with a new robot. Providing guidance can ensure that people use fitting analogies so they aren’t surprised or confused by the robot’s actions, Booth says.

They also suggest that including positive and negative examples of robot behavior, and exposing users to how strategic variations of parameters in a robot’s “policy” affect its behavior, eventually across strategically varied environments, can help humans learn better and faster. The robot’s policy is a mathematical function that assigns probabilities to each action the robot can take.
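As a rough illustration of that definition (a sketch in Python with invented names and numbers, not code from the study), a stochastic policy can be written as a function that scores each available action from state features and normalizes the scores into probabilities:

```python
import numpy as np

def policy(state_features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy stochastic policy: assign a probability to each action.

    `weights` has one row per action; a softmax turns the per-action
    scores into probabilities that sum to 1. All names and shapes are
    illustrative, not taken from the paper.
    """
    scores = weights @ state_features   # one preference score per action
    scores -= scores.max()              # shift for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# Example: 3 possible actions, 2 state features
rng = np.random.default_rng(0)
print(policy(rng.standard_normal(2), rng.standard_normal((3, 2))))
```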

“We’ve been running user studies for years, but we’ve been shooting from the hip in terms of our own intuition as far as what would or would not be helpful to show the human. The next step would be to be more rigorous about grounding this work in theories of human cognition,” Glassman says.

Now that this initial literature review using cognitive science theories is complete, Booth plans to test their recommendations by rebuilding some of the experiments she studied and seeing if the theories actually improve human learning.

This work is supported, in part, by the National Science Foundation.

Temperature variation could help new touchscreen technology simulate virtual shapes

High-fidelity touch has the potential to significantly expand the scope of what we expect from computing devices, making new remote sensory experiences possible. The research on these advancements, led by a pair of researchers from the J. Mike Walker Department of Mechanical Engineering at Texas A&M University, could help touchscreens simulate virtual shapes.

Researchers develop robotic hand with adaptive grip, complex in-hand manipulation

When we pick something up, we'll often jostle it around a bit, searching to get the best grip. A team of researchers have now developed a robotic hand that does something similar—a breakthrough that could advance the field of assistive robots.

New implant offers promise for the paralyzed

Michel Roccati stands up and walks in Lausanne. © EPFL / Alain Herzog 2021

The images made headlines around the world in late 2018. David Mzee, who had been left paralyzed by a partial spinal cord injury suffered in a sports accident, got up from his wheelchair and began to walk with the help of a walker. This was the first proof that Courtine and Bloch’s system – which uses electrical stimulation to reactivate spinal neurons – could work effectively in patients.

Fast forward three years, and a new milestone has just been reached. The research team led by both Courtine, a professor at EPFL and member of NCCR Robotics, and Bloch, a professor and neurosurgeon at CHUV, has enhanced their system with more sophisticated implants controlled by artificial-intelligence software. These implants can stimulate the region of the spinal cord that activates the trunk and leg muscles. Thanks to this new technology, three patients with complete spinal cord injury were able to walk again outside the lab. “Our stimulation algorithms are still based on imitating nature,” says Courtine. “And our new, soft implanted leads are designed to be placed underneath the vertebrae, directly on the spinal cord. They can modulate the neurons regulating specific muscle groups. By controlling these implants, we can activate the spinal cord like the brain would do naturally to have the patient stand, walk, swim or ride a bike, for example.”

Patient with complete spinal cord injury (left) and incomplete spinal cord injury (right) walking in Lausanne, Switzerland. ©NeuroRestore – Jimmy Ravier

The new system is described in an article appearing in Nature Medicine that was also co-authored by Silvestro Micera, who leads the NCCR Robotics Wearable Robotics Grand Challenge. “Our breakthrough here is the longer, wider implanted leads with electrodes arranged in a way that corresponds exactly to the spinal nerve roots,” says Bloch. “That gives us precise control over the neurons regulating specific muscles.” Ultimately, it allows for greater selectivity and accuracy in controlling the motor sequences for a given activity.

Extensive training is obviously necessary for patients to get comfortable using the device. But the pace and scope of rehabilitation are amazing. “All three patients were able to stand, walk, pedal, swim and control their torso movements in just one day, after their implants were activated!” says Courtine. “That’s thanks to the specific stimulation programs we wrote for each type of activity. Patients can select the desired activity on the tablet, and the corresponding protocols are relayed to the pacemaker in the abdomen.”

Read the full story on the EPFL website.


Hands on ground robot & drone design series part I: mechanical & wheels

This is a new series looking at the detailed design of various robots. To start, we will look at the design of two different robots that were used for the DARPA Subterranean (SubT) Challenge. Both robots were designed for operating in complex subterranean environments, including caves, mines, and urban environments. Both of the robots presented are from the Carnegie Mellon University Explorer team. While I am the one writing these posts, this was a team effort that required many people to be successful. (If anyone on Team Explorer is reading this, thank you for everything, you are all awesome.)

These posts are skipping the system requirements step of the design process. See here for more details on defining system requirements.

Team Explorer R1 Ground Robot and DS Drone [Source]

R# ground robots (UGV)

For the SubT Challenge, three ground vehicles were developed, all of a similar design. The ground robots were known by the moniker R#, where # is the order in which we built them. The primary differences between the three versions are:

R1 – Static chassis, so the chassis has minimal ground compliance when driving over obstacles and uneven surfaces. R1 was initially supposed to have a differencing mechanism for compliance; however, due to time constraints it was left out of this first version. R1 is pictured above.

R2 – Has the differencing mechanism and was designed as initially planned.

R3 – Almost identical to R2, but smaller. This robot was built for navigating smaller areas and for climbing up and down steps. It also uses different motors for driving the wheels.

DS drone

The original drone design used by Team Explorer called the drones D1, D2, etc. This let a combination of UGV + drone go by joint designations such as R2D2. Early on, the team switched to a smaller drone design referred to as DS1, DS2, etc., where DS is short for Drone Small.

The drone design posts are split into two sections: the first covers the actual drone platform, and the second covers the payload that sat on top of the drone.

Mechanical & wheels

Robot size decision

Once we have the list of system requirements, we can start with the design of the robot’s mechanical structure. In this case we decided that a wheeled robot would be best. We wanted the largest wheels possible to help climb over obstacles; however, we also needed to keep our sensors at the top of the vehicle, above the wheels, and be able to fit through 1 x 1 meter openings. These requirements set the maximum size of the robot as well as the maximum size of the wheels.

The final dimensions of the first two vehicles (R1 and R2) were around 1.2 x 0.8 x 0.8 meters (L x W x H), or 3.9 x 2.6 x 2.6 ft. The third, smaller vehicle was around 1 x 0.6 m (3.2 x 1.9 ft) and designed to fit through 0.7 x 0.7 m openings.
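As a quick sanity check of those numbers, here is a minimal sketch; the 0.05 m clearance margin is an assumption for illustration, not a stated Team Explorer requirement:

```python
def fits_opening(width_m: float, height_m: float, opening_m: float,
                 margin_m: float = 0.05) -> bool:
    """Check a robot cross-section against a square opening,
    leaving an assumed clearance margin on each dimension."""
    return (width_m + margin_m <= opening_m
            and height_m + margin_m <= opening_m)

print(fits_opening(0.8, 0.8, 1.0))  # R1/R2 cross-section vs. 1 x 1 m opening -> True
print(fits_opening(0.6, 0.6, 0.7))  # R3 (height assumed ~0.6 m) vs. 0.7 x 0.7 m -> True
```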

Steering approach

Early on we also needed to determine the method of driving. Do we want wheels or tracks? Do we want to steer with Ackermann steering, rocker-bogie, skid steer, etc.?

See here for more details on steering selection.

We chose a four-wheel skid-steer approach for its simplicity of control and its ability to turn in place (point turns). At the start of the competition we were not focused on stair climbing, which might otherwise have changed some of our design decisions.
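The kinematics behind point turns are simple. Below is a sketch of standard skid-steer velocity mixing; the track width and wheel radius are taken from the dimensions in this post, but the code is illustrative and ignores the wheel slip that real skid steering induces:

```python
def skid_steer_wheel_speeds(v: float, omega: float,
                            track_width: float, wheel_radius: float):
    """Map a body command (v m/s forward, omega rad/s yaw) to per-side
    wheel angular speeds in rad/s. Ignores wheel slip during turns."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left / wheel_radius, v_right / wheel_radius

# Point turn: zero forward velocity, pure yaw -> sides spin in opposite directions
print(skid_steer_wheel_speeds(v=0.0, omega=1.0,
                              track_width=0.8, wheel_radius=0.275))
```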

Suspension

The next step was to determine the suspension type. A suspension is needed so that all four wheels make contact with the ground. If the robot had a static fixed frame, only three of the wheels might make contact with the ground on uneven surfaces, which would reduce our stability and traction.

We decided early on that we wanted a passive suspension, for the simplicity of not having active components. With a passive suspension we were looking at different types of body averaging. We had roughly two choices: front-pivot or side-to-side.

Left image shows a front-pivot approach. Right image shows a side-to-side differencing method.

We chose the front-pivot method, but decided to make the pivot roughly centered in the vehicle. This allowed us to put all of the electronics in the front and the batteries in the rear. We felt the front-pivot method would be better for climbing up stairs and for climbing over obstacles on level-ish terrain. Importantly, this approach also made it easier to carry a drone on the ground vehicle.

Chassis design

At this point we started designing the chassis. This was an important step so that we could estimate the total weight in order to spec the drive-train. Ideas for the chassis ranged from building with 80/20 extrusion, to populating an aluminum frame with components, to a solid welded chassis. We selected a welded steel tube chassis for its strength; we needed a robot that could survive anything we did to it. This proved to be a wise decision when the robot crashed or fell over cliffs. The downside of the steel was increased mass.

For the pivot we found a large crossed-roller bearing that we were able to use to attach the two steel boxes together. The large bore in the middle was useful for passing wires and cables through for batteries, motors, etc.

Part of the chassis design was also determining where all of the components should mount. Having the batteries (the green boxes in the image above) in the rear helps us climb obstacles. Other goals were to keep the ground clearance as high as possible while keeping the center of gravity (CG) as low as possible. Since those are competing goals, part of the design process was finding a happy medium.
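The CG side of that trade-off can be quantified with a simple rigid-body estimate. As a sketch (the numbers below are assumed for illustration, not measured values from the robots), the static side-slope tip-over angle falls as the CG rises:

```python
import math

def static_tipover_angle_deg(half_track_m: float, cg_height_m: float) -> float:
    """Side-slope angle at which the CG passes over the wheel contact line:
    tan(theta) = half_track / cg_height. A rigid-body estimate only."""
    return math.degrees(math.atan2(half_track_m, cg_height_m))

print(static_tipover_angle_deg(0.4, 0.30))  # lower CG  -> ~53 degrees
print(static_tipover_angle_deg(0.4, 0.45))  # higher CG -> ~42 degrees
```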

In order to maintain modularity for service, each wheel module combined the motor controller, motor, gearbox, and bearing block into a single unit that could be swapped between robots if there were any issues. This also allowed most of the wiring to be part of that block. The only cables that needed to be connected to each module from the robot were power, CAN communications, and the emergency-stop line, all of which were connectorized.
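To illustrate what per-module CAN cabling implies for software, here is a sketch of commanding a wheel module using the python-can library. The arbitration IDs and payload layout are invented for the example; the actual motor controllers define their own protocol:

```python
import struct
import can  # python-can

def send_wheel_velocity(bus: can.BusABC, node_id: int, rad_per_s: float) -> None:
    """Send a velocity setpoint to one wheel module.

    Hypothetical framing: arbitration ID 0x100 + node_id and a
    little-endian float payload. Real drives define their own layout.
    """
    msg = can.Message(arbitration_id=0x100 + node_id,
                      data=struct.pack("<f", rad_per_s),
                      is_extended_id=False)
    bus.send(msg)

bus = can.interface.Bus(channel="can0", interface="socketcan")  # Linux SocketCAN
for node in range(4):  # four wheel modules share the one bus
    send_wheel_velocity(bus, node, 2.0)
```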

For the electronics on R1 and R2 we built an electronics box that was separate from the robot and could be removed as needed. On R3 we built the electronics into the robot itself. The modular approach was very useful when we had to do some welding on the chassis post-build for modifications. Its downsides were that working in the electronics box was more difficult than in the open R3, and that fabricating and wiring the R1/R2 electronics boxes took considerably more time than the open R3 electronics. We also had several failures during testing related to the connectors on the electronics boxes.

Wheel design

We debated a lot about what type of wheel to use; ultimately we chose motorcycle wheels because they were simple to obtain and mount, and the wheel diameter we desired lined up very well with motorcycle wheels. We also liked the wider tires for better traction and the ability to climb over obstacles.

R1 and R2 had a wheel diameter of 0.55m, R3 had a wheel diameter of 0.38m. This gave R1 and R2 a ground clearance of 0.2m, and R3 a ground clearance of 0.12m.

The wheel hubs ended up being a different story. We found solid metal rims and had to machine large amounts of metal out of them to balance strength and weight.

The R1 and R2 robots were around 180 kg (400 lb)*, while the wheels were rated for a significantly heavier vehicle. As such, we put just enough pressure in the tires to keep them from coming off, keeping it low to increase the ground compliance of the wheels. This added only a very small amount of compliance. We tried removing some of the rubber from the sidewalls, but were not able to find a happy medium between limiting wheel deformation during point turns and increasing ground compliance.

We were also concerned about how the motorcycle tires would handle point turns, and whether we would rip the tires from the rims. To counter this we installed a beadlock system in each of the wheels. The beadlock was a curved segment installed in multiple places to sandwich the tire to the rim. We never had a tire separate from the rim, so our approach definitely worked; however, it was a pain to install.

*R3 was around 90 kg (200 lb). We tried using different wheels and tracks to get R3 to climb stairs well. However, that story is for another post…

The black rims were solid metal into which we machined the wedges in order to lighten them. The three metal posts in those wedges are the beadlock tensioning bolts. You can also see the castle nut and pin that hold the wheel to the axle. This image is from R2; you can see the gap between the front and rear sections of the robot where the pivot is.

Drive-train selection

Now that we had a mass estimate and system requirements for speed and obstacle clearance, we could start to spec the drive-train. The other piece of information we needed, and had to discuss with the electrical team, was the battery voltage: different bus voltages greatly affect which motors are available for a given speed and torque. We decided on a 51.2 V nominal bus voltage. This presented a problem, since it was very hard to find the speed/torque combinations we wanted at that voltage. We ended up selecting a 400 W (1/2 hp) motor from Oriental Motor with a parallel 100:1 gearbox, which allows us to drive at a maximum speed of 2.5 m/s.
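A back-of-envelope check of that sizing, using the mass, wheel diameter, and top speed from this post (the 20-degree worst-case grade is my assumption, not a stated requirement), shows how the wheel-side numbers land near the chosen motor power:

```python
import math

mass_kg = 180.0       # R1/R2 estimate from this post
wheel_dia_m = 0.55
v_max = 2.5           # m/s target top speed
slope_deg = 20.0      # assumed worst-case grade (illustrative)

wheel_rpm = v_max / (math.pi * wheel_dia_m) * 60.0
grade_force_n = mass_kg * 9.81 * math.sin(math.radians(slope_deg))
torque_per_wheel = grade_force_n * (wheel_dia_m / 2.0) / 4.0
tractive_power_w = grade_force_n * v_max  # total, at top speed on the grade

print(f"wheel speed:    {wheel_rpm:.0f} rpm")         # ~87 rpm
print(f"torque/wheel:   {torque_per_wheel:.1f} N*m")  # ~42 N*m
print(f"tractive power: {tractive_power_w:.0f} W "    # ~1510 W total,
      f"(~{tractive_power_w / 4:.0f} W per motor)")   # ~380 W per motor
```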

The part numbers of the motors and gearbox on R1 and R2 were BLVM640N-GFS + GFS6G100FR.

The part numbers of the motors and gearbox on the smaller R3 were Maxon EC 90 Flat + GP81A.

Next steps

Now that we know the mechanics of the robot, we can start building it. In the next post we will start looking at the electronics and motor controls. While the nature of the blog makes this design seem like a serial process, in reality lots of things are happening in parallel: while the mechanical team is designing the chassis, the electrical team is finding the electrical components needed so that the mechanical designer knows what needs to be mounted.

It is also important to work with the electrical team to figure out wire routing while the chassis is being developed.


Editor’s note: This post has been merged from the posts “Hands On Ground Robot & Drone Design Series” and “Mechanical & Wheels – Hands On Ground Robot Design”.
