The mussel and scallop industry could be revolutionized by a new autonomous underwater drone.
A team of researchers at Istituto Italiano di Tecnologia's Bioinspired Soft Robotics Laboratory has developed a new pleat-based soft robotic actuator that can be used in a variety of sizes, down to just 1 centimeter. In their paper published in the journal Science Robotics, the group describes the technology behind their new actuator and how well it worked when they tested it under varied circumstances.
Humans behave and act in a way that other humans can recognize as human-like. If humanness has specific features, is it possible to replicate these features on a machine like a robot? Researchers at IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) tried to answer that question by implementing a non-verbal Turing test in a human-robot interaction task. They involved human participants and the humanoid robot iCub in a joint action experiment. What they found is that specific features of human behavior, namely response timing, can be translated into the robot in a way that humans cannot distinguish whether they are interacting with a person or a machine.
Training robots to complete tasks in the real world can be a very time-consuming process, which involves building a fast and efficient simulator, performing numerous trials on it, and then transferring the behaviors learned during these trials to the real world. In many cases, however, the performance achieved in simulation does not match that attained in the real world, due to unpredictable changes in the environment or task.
More than a decade ago, Ted Adelson set out to create tactile sensors for robots that would give them a sense of touch. The result? A handheld imaging system powerful enough to visualize the raised print on a dollar bill. The technology was spun off into GelSight, answering an industry need for low-cost, high-resolution imaging.
Disturbing footage emerged this week of a chess-playing robot breaking the finger of a seven-year-old child during a tournament in Russia.
What factors influence the embodiment felt towards parts of our bodies controlled by others? Using a new "joint avatar" whose left and right limbs are controlled by two people simultaneously, researchers have revealed that the visual information necessary to predict the partner's intentions behind limb movements can significantly enhance the sense of embodiment towards partner-controlled limbs during virtual co-embodiment. This finding may contribute to enhancing the sense of embodiment towards autonomous prosthetic limbs.
Researchers at Ulm University in Germany have recently developed a new framework that could help to make self-driving cars safer in urban and highly dynamic environments. This framework, presented in a paper pre-published on arXiv, is designed to identify potential threats around the vehicle in real time.
A chess-playing robot grabbed the finger of its 7-year-old opponent and broke it during last week's Moscow Chess Open tournament, Russian media reported Monday.
Teams of mobile robots could be highly effective in helping humans to complete strenuous manual tasks, such as manufacturing processes or the transportation of heavy objects. In recent years, some of these robots have already been tested and introduced in real-world settings, attaining very promising results.
As the underwater robot OceanOneK carefully navigated toward the upper deck railing of the sunken Italian steamship Le Francesco Crispi, about 500 m (roughly a third of a mile) below the Mediterranean's surface this month, Stanford University roboticist Oussama Khatib felt as though he himself were there.
The robot watched as Shikhar Bahl opened the refrigerator door. It recorded his movements, the swing of the door, the location of the fridge and more, analyzing this data and readying itself to mimic what Bahl had done.
When communication lines are open, individual agents such as robots or drones can work together to collaborate and complete a task. But what if they aren't equipped with the right hardware or the signals are blocked, making communication impossible? University of Illinois Urbana-Champaign researchers started with this more difficult challenge. They developed a method to train multiple agents to work together using multi-agent reinforcement learning, a type of artificial intelligence.
Penn State agricultural engineers have developed a prototype "end-effector" capable of deftly removing unwanted apples from trees, a first step toward robotic green-fruit thinning.
Researchers from Carnegie Mellon University took an all-terrain vehicle on wild rides through tall grass, loose gravel and mud to gather data about how the ATV interacted with a challenging off-road environment.