Archive 11.09.2020


Experiments reveal why human-like robots elicit uncanny feelings


Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines—but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite "right." The feeling of affinity can plunge into one of repulsion as a robot's human likeness increases, a zone known as "the uncanny valley."

Psychologists at Emory University have published new insights into the cognitive mechanisms underlying this phenomenon in the journal Perception.

Since the uncanny valley was first described, a common hypothesis has developed to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically add a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.

"We found that the opposite is true," says Wang Shensheng, first author of the new study, who did the work as a graduate student at Emory and recently received his Ph.D. in psychology. "It's not the first step of attributing a mind to an  but the next step of 'dehumanizing' it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it's a dynamic one."

The findings have implications for both the design of robots and for understanding how we perceive one another as humans.

"Robots are increasingly entering the social domain for everything from education to healthcare," Wang says. "How we perceive them and relate to them is important both from the standpoint of engineers and psychologists."

"At the core of this research is the question of what we perceive when we look at a face," adds Philippe Rochat, Emory professor of psychology and senior author of the study. "It's probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships. "

The research may help in unraveling the mechanisms involved in mind-blindness—the inability to distinguish between humans and machines—such as in cases of extreme autism or some psychotic disorders, Rochat says.

Co-authors of the study include Yuk Fai Cheong and Daniel Dilks, both associate professors of psychology at Emory.

Anthropomorphizing, or projecting human qualities onto objects, is common. "We often see faces in a cloud for instance," Wang says. "We also sometimes anthropomorphize machines that we're trying to understand, like our cars or a computer."

Naming one's car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.

To tease apart the potential roles of mind perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images—human faces, mechanical-looking robot faces and android faces that closely resembled humans—and asked to rate each for perceived animacy or "aliveness." The exposure times of the images were systematically manipulated, within milliseconds, as the participants rated their animacy.

The results showed that perceived animacy decreased significantly as a function of exposure time for android faces but not for mechanical-looking robot or human faces. For android faces, the perceived animacy dropped between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.

A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.

"The whole process is complicated but it happens within the blink of an eye," Wang says. "Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling."

How the 5G network will affect AI: the short, no-buzzword version

I hear a lot about how the 5G mobile network technology will change the world and especially be a big enabler for applied AI. But is that really true? Will 5G be an AI gamechanger? 

The short answer is no. 5G is nice. It's a convenient technology that will help AI, but it is in no way a big deal. The best comparison I can make is that 5G is like new roads. Asking the people behind AI solutions whether 5G is a gamechanger is like asking a shopkeeper the same question about new roads, in a location that already has decent roads. The shopkeeper would probably answer something along the lines of: "It'll be nice. It'll be a little easier for my suppliers to bring goods and a little easier for my customers to reach me. That might be especially true for shops in the city, where the new roads are prioritized because they are used more. But it won't be something that immediately shows up in my store's profits."

So what does that mean? It means that better infrastructure is always nice, but if the existing infrastructure is already decent, as 4G is, the difference isn't huge. 5G is an incremental improvement to the infrastructure surrounding AI. Since infrastructure is just one of many parameters that affect AI solutions and grocery stores alike, it's nice, but no big deal.

How 5G works

So why is it just an incremental improvement? Isn't 5G groundbreaking technology?

Well, no. 5G, like 4G, is radio waves transmitting data. 5G radio waves differ from 4G waves only by having a shorter wavelength, which lets them carry far more data. And shorter wavelengths are not groundbreaking; we have actually been able to produce short waves for a very long time.

So why haven't we done this before? Well, there are a lot of practical problems with shorter wavelengths. They reach a shorter distance, and they have trouble passing through walls and other materials. To counter that problem you have to put more towers closer together. More towers mean more costs for setting up and maintaining them. Furthermore, having towers closer together brings another problem: radio waves interfering with and disturbing each other, making the data transfer unstable.
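
To make the range problem concrete, here is a minimal sketch (my own illustration, with example numbers, not anything from a 5G spec) that uses the standard Friis free-space path-loss formula to compare a typical 4G carrier around 2 GHz with a millimeter-wave 5G carrier around 28 GHz:

import math

C = 299_792_458  # speed of light, in m/s

def fspl_db(distance_m, freq_hz):
    # Friis free-space path loss in dB:
    # FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

for freq_hz in (2e9, 28e9):  # a 4G-style carrier vs. a mmWave 5G-style carrier
    wavelength_cm = C / freq_hz * 100
    loss_db = fspl_db(500, freq_hz)  # loss over 500 m of open air
    print(f"{freq_hz / 1e9:>4.0f} GHz: wavelength {wavelength_cm:.1f} cm, "
          f"path loss over 500 m = {loss_db:.1f} dB")

The 28 GHz signal loses about 23 dB more over the same 500 m (that's 20*log10(28/2)), before walls are even counted. That is the whole reason mmWave 5G needs many more towers placed closer together.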

So why now?

Finally, 5G is here, and that's for two reasons. I might sound like I've been putting 5G down as an unimpressive technology that deserves no credit, but 5G is impressive; just not in its immediate effect on society. 5G is impressive the way building a big bridge is impressive. When you build a big bridge, you have to go through far more trouble figuring out how massive amounts of concrete will hold up under the given conditions. That's impressive engineering compared to a small bridge.

And that is one of the reasons we are getting 5G now. Some really impressive engineering has gone into getting around the problems with shorter wavelengths. Making the short waves cover enough ground to make 5G practically viable, without too much interference, is really impressive.

The other reason we get 5G now is simply that there are so many people using so much mobile data that we can share the costs of all those towers and make a good business case.

That's it, plain and simple. 5G is nice and an improvement, but it's no crazy invention that will change the world tomorrow.


The AI hoax: The genius algorithm

Sometimes a very impressive algorithmic achievement comes along, and it should be celebrated. GPT-3 is a great example: it is amazing engineering and data science, and it very deservedly gets a lot of media attention. But for every GPT-3 there are hundreds of thousands of AI solutions that are based on standard algorithms: not a genius achievement, but a schoolbook approach.

It might sound like I'm having a go at many of the AI solutions out there, but in fact it's the other way around. Going for a groundbreaking, genius solution is not the right way to go for the vast majority of AI cases. The standard algorithms are in most cases easily sufficient for the task at hand, and everything beyond that effort is usually bad business.

Beware when someone claims a genius or even special algorithm

Given all this, I still hear a lot about the “unique”, “genius” or “special” algorithm that some company has developed to solve a problem. I often hear terms like this in the media, and the fact that they are so popular makes a lot of sense: when you have a business to market, such claims help you sell your product at a high price. They also help scare competition away, since competitors will think the entry barrier for building a certain product or solution is very high. But that is what the genius algorithm is, 99 times out of 100: a marketing message.

In reality, much of the AI out there consists of standard algorithms such as CNNs, random forests or even logistic regression, which some would claim isn't even AI. These algorithms can be used by most novice developers through freely available frameworks such as TensorFlow or scikit-learn.
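
To show how low the barrier actually is, here is a minimal sketch of a "standard algorithm" in practice: a random forest classifier on one of scikit-learn's bundled toy datasets. The dataset and hyperparameters are purely illustrative, not from any particular product:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled dataset and split it into train and test sets
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# An off-the-shelf "standard algorithm": no genius required
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

A dozen lines, no special algorithm, and for many real-world problems something of exactly this shape is already sufficient.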

My primary reason for writing this post is the same as for a lot of the posts I write: I want to demystify AI, and by killing the narrative of the genius algorithm I hope more people will have the chance to utilize AI.

So when you hear these claims, be critical, and don't let them be the reason not to get started with AI.

The media is at fault

I'm not usually one to cry "fake media", but in this case of AI I feel that the media has not lived up to its responsibility, and has in a naive way followed the hype and corporate press releases without taking the critical look that is, in many ways, what separates news outlets from other information sources.

I often wonder how the Danish news stations (Denmark is where I live) can have an Egypt correspondent but not an AI, or even a deep-tech, correspondent. The events in Egypt are arguably less important to everyday life in my country, and many others, than AI is starting to be.

I really hope the media will improve here and not keep AI in a mystified aura.

The future of AI-algorithms

I'm pretty sure the future of AI algorithms is a given. The big tech companies and labs like OpenAI, DeepMind, Google, Facebook and Apple will be the ones to develop the genius algorithms, and they will very often release them into the wild for everyone to use. It's already happening, and we will only see more of it. So in the future, a claim of having a genius algorithm is just not very likely to be true.

A robot that controls highly flexible tools


RoboCut is also able to carve hearts. 

How do you calculate the coordinated movements of two robot arms so they can accurately guide a highly flexible tool? ETH researchers have integrated all aspects of the optimisation calculations into an algorithm. A hot-wire cutter will be used, among other things, to develop building blocks for a mortar-free structure.

A newborn moves its arms and hands largely in an undirected and random manner. It has to learn how to coordinate them step by step. Years of practice are required to master the finely balanced movements of a violinist or calligrapher. It is therefore no surprise that the advanced calculations for the optimal movement of two robot arms to guide a tool precisely involve extremely challenging optimisation tasks. The complexity also increases greatly when the tool itself is not rigid, but flexible in all directions and bends differently depending on its position and movement.

Simon Dünser from Stelian Coros' research group at the Institute for Intelligent Interactive Systems has worked with other researchers to develop a hot-wire cutter robot with a wire that bends flexibly as it works. This allows it to create much more complex shapes in significantly fewer cuts than previous systems, in which the electrically heatable wire is rigid and is thus only able to cut ruled surfaces (surfaces containing a straight line through every point) from fusible plastics.

Carving rabbits and designing façades

In contrast, the RoboCut from the ETH computer scientists is not limited to planes, cylinders, cones or saddle surfaces, but is also able to create grooves in a plastic block. The biggest advantage, however, is that the targeted bending of the wire means far fewer cuts are necessary than if the target shape had to be approximated using ruled surfaces. As a result, the bendable wire can be used to create the figure of a sitting rabbit from a polystyrene block through just ten cuts with wood carving-like accuracy. The outline of the rabbit becomes clearly recognizable after just two cuts.

In addition to the fundamental improvement on traditional hot-wire methods, the RoboCut project also has other specific application goals in mind. For example, in future the technology could be used in architecture to produce individual polystyrene molds for concrete parts. This would enable a more varied design of façades and the development of new types of modular building systems.

Three linked optimisations simultaneously

For Dünser, the scientific challenges were the focus of the project. "The complex optimisation calculations are what make RoboCut special. These are needed to find the most efficient tool paths possible while melting the desired shape from the polystyrene block as precisely as possible," explains the scientist.



ETH computer scientists have developed a hot-wire cutting robot that guides highly flexible tools so precisely that it is able to carve a rabbit. Credit: ETH Zürich / The Computational Robotics Lab

In order to move the wire in a controlled manner, it was attached to a two-armed YuMi robot from ABB. First, the reaction of the wire to the movements of the robot arms had to be calculated. Positions that would lead to unstable wire placement or where there was a risk of wire breakage were determined by means of simulations and then eliminated.

ETH researchers were then able to develop the actual optimisation on this basis. This had to take into account three linked aspects simultaneously. On the physical level, it was important to predict the controlled bending and movement of the wire in order to carry out the desired cuts. In terms of the shape, a cutting sequence had to be determined that would effect a highly precise approximation of the surface to the target shape in as few steps as possible. Finally, collisions with robot parts or its environment and unintentional cuts had to be ruled out.

Preventing bad minima

Dünser is one of the first scientists to succeed in integrating all the parameters of this complex task into a global optimisation algorithm. To do this, he designed a structured methodology based on the primary objective that the wire should always cut as close as possible to the surface of the target object. All other restrictions were then assigned costs, and the total cost was minimized.

Without further measures, however, such calculations tend to fall into local minima, which lead to a useless end result. To prevent this, Dünser first smoothed out the cost function, so to speak, and began the calculation with a cut that was only roughly adapted to the target shape. The cutting simulation was then gradually brought closer to the target shape until the desired accuracy was achieved.
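
This strategy of first smoothing a cost function and then gradually sharpening it again is often called graduated optimization. The following is a minimal sketch of that general idea using an invented toy cost function, not RoboCut's actual objective: a heavily smoothed version of a rippled cost is minimized first, and each subsequent stage is warm-started from the previous solution as the smoothing is reduced.

import numpy as np
from scipy.optimize import minimize

def rough_cost(x):
    # Toy non-convex cost: a quadratic bowl plus high-frequency ripples
    # that create many local minima.
    return float(np.sum(x**2) + 2.0 * np.sum(np.sin(8.0 * x)))

def smoothed_cost(x, sigma):
    # Gaussian smoothing of the toy cost has a closed form:
    # E[(x+e)^2] = x^2 + sigma^2, E[sin(8(x+e))] = sin(8x) * exp(-32 sigma^2)
    damping = np.exp(-32.0 * sigma**2)
    return float(np.sum(x**2) + x.size * sigma**2
                 + 2.0 * damping * np.sum(np.sin(8.0 * x)))

x = np.array([2.5, -3.0])                 # a deliberately poor initial guess
for sigma in (1.0, 0.5, 0.25, 0.1, 0.0):  # gradually reduce the smoothing
    x = minimize(smoothed_cost, x, args=(sigma,), method="Nelder-Mead").x

print("solution:", x, "cost:", rough_cost(x))

At sigma = 1.0 the ripples are damped almost entirely away, so the first stage lands near the bottom of the bowl; the later stages then refine the solution without falling back into a distant local minimum.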

Method with versatile potential

The method developed by Dünser is not just limited to hot-wire cutting. The design of tool paths for other cutting and milling technologies could also benefit from it in the future. The method opens up a much greater degree of scope, particularly in the generation of complex non-rotationally symmetrical shapes.

Electrical discharge machining with wires could benefit directly from this, as this technology enables high-precision cutting of electrically conductive materials via spark ablation. In the future, this could involve bendable electrode wires. This means that—as with the hot-wire cutting of plastics—more complicated and thus more efficient cuts can be made more easily than with today's rigid wires.

One specific application for RoboCut is being planned jointly with a research group from EPF Lausanne. With the help of a large-scale version of the hot-wire cutting robot, systematic building blocks for structures free of mortar and fastening technologies will be developed. The elements themselves must hold together in a stable manner. In the future, the robot should also be used to cut the polystyrene molds with which the various bricks are cast in concrete. The clever plastic cutter thus also creates the concrete construction technology of tomorrow.

#318: Humanized Intelligence in Academia and Industry, with Ayanna Howard


In this episode, Lauren Klein interviews Ayanna Howard, Professor and Chair of the School of Interactive Computing at Georgia Tech. Professor Howard describes her wide range of work in robotics, from robots that assist children with special needs to trust in autonomous systems. She also discusses her path through the field of robotics in both academia and business, and the importance of conducting in-the-wild robotics research.


Ayanna Howard
Ayanna Howard is a Professor and Chair of the School of Interactive Computing at Georgia Tech. Professor Howard is the director and founder of the Human-Automation Systems (HumAnS) Laboratory. Her research focuses on humanized intelligence, with a wide range of applications from Human-Robot Interaction to science-driven robotics. Prior to Georgia Tech, she led research projects at NASA’s Jet Propulsion Laboratory. Professor Howard is also a founder and the CTO of the educational robotics company Zyrobotics.


First-of-a-kind electronic skin mimics human pain response


Electronic skins that perform the same sensory functions as human skin could mean big things for the fields of robotics and medical devices, and scientists are not focused solely on the pleasant ones. Researchers in Australia have succeeded in developing an artificial skin that responds to painful stimuli in the same way real skin does, which they see as an important step towards intelligent machines and prosthetics.

It mightn’t seem like the most practical of goals, but researchers have been working to develop electronic skins that allow robots and prostheses to feel pain for quite some time. These technologies could enable amputees to know if they are picking up something sharp or dangerous, for example, or could make robots more durable and safer for humans to be around.

The researchers behind the latest breakthrough, from Australia’s Royal Melbourne Institute of Technology, believe they have created a first-of-a-kind device that can replicate the feedback loop of painful stimuli in unprecedented detail. Just as nerve signals travel to the brain at warp speed to inform it that we've encountered something sharp or hot, the new artificial skin does so with great efficiency, and with an ability to distinguish between less and more severe forms of pain.

“We’ve essentially created the first electronic somatosensors – replicating the key features of the body’s complex system of neurons, neural pathways and receptors that drive our perception of sensory stimuli,” says PhD researcher Md Ataur Rahman. “While some existing technologies have used electrical signals to mimic different levels of pain, these new devices can react to real mechanical pressure, temperature and pain, and deliver the right electronic response. It means our artificial skin knows the difference between gently touching a pin with your finger or accidentally stabbing yourself with it – a critical distinction that has never been achieved before electronically.”

The artificial skin actually incorporates three separate sensing technologies the team has been working on. It consists of a stretchable electronic material made of biocompatible silicone that is as thin as a sticker, temperature-reactive coatings that transform in response to heat, and electronic memory cells designed to mimic the way the brain stores information.

“We’re sensing things all the time through the skin but our pain response only kicks in at a certain point, like when we touch something too hot or too sharp,” says lead researcher Professor Madhu Bhaskaran. “No electronic technologies have been able to realistically mimic that very human feeling of pain – until now. Our artificial skin reacts instantly when pressure, heat or cold reach a painful threshold. It’s a critical step forward in the future development of the sophisticated feedback systems that we need to deliver truly smart prosthetics and intelligent robotics.”

With further work, the team imagines the electronic skin could one day also be used as an option for non-invasive skin grafts.

A technique allows robots to determine whether they are able to lift a heavy box


Humanoid robots, those with bodies that resemble humans, could soon help people to complete a wide variety of tasks. Many of the tasks that these robots are designed to complete involve picking up objects of different shapes, weights and sizes.

While many of the humanoid robots developed to date are capable of picking up small and light objects, lifting bulky or heavy objects has often proved more challenging. In fact, if an object is too large or heavy, a robot might end up breaking or dropping it.

With this in mind, researchers at Johns Hopkins University and National University of Singapore (NUS) recently developed a technique that allows robots to determine whether or not they will be able to lift a heavy box with unknown physical properties. This technique, presented in a paper pre-published on arXiv, could enable the development of robots that can lift objects more efficiently, reducing the risk that they will pick up things that they cannot support or carry.

"We were particularly interested in how a humanoid robot can reason about the feasibility of lifting a box with unknown physical parameters," Yuanfeng Han, one of the researchers who carried out the study, told TechXplore."To achieve such a complex , the robot usually needs to first identify the physical parameters of the box, then generate a whole body motion trajectory that is safe and stable to lift up the box."

The process through which a robot generates motion trajectories that allow it to lift objects can be computationally demanding. In fact, humanoid robots typically have a large number of degrees of freedom, and the motion that their body needs to make to lift an object must satisfy several different constraints. This means that if a box is too heavy or its center of mass is too far away from the robot, the robot will most likely be unable to complete the motion.

"Think about us humans, when we try to reason about whether we can lift up a heavy object, such as a dumbbell," Han explained. "We first interact with the dumbbell to get a certain feeling of the object. Then, based on our previous experience, we kind of know if it is too heavy for us to lift or not. Similarly, our method starts by constructing a trajectory table, which saves different valid lifting motions for the robot corresponding to a range of physical parameters of the box using simulations. Then the robot considers this table as the knowledge of its previous experience."

The technique developed by Han, in collaboration with his colleague Ruixin Li and his supervisor Gregory S. Chirikjian (Professor and Head of the Department of Mechanical Engineering at NUS) allows a robot to get a sense of the inertia parameters of a box after briefly interacting with it. Subsequently, the robot looks back at the trajectory table generated by the method and checks whether it includes a lifting motion that would allow it to lift a box with these estimated parameters.

If this motion or trajectory exists, then lifting the box is considered to be feasible and the robot can immediately complete the task. If it does not exist, then the robot considers the task beyond its capacity.

"Essentially, the trajectory table that our method constructs offline saves the valid whole-body lifting motion trajectories according to a box's range of inertia parameters," Han said. "Subsequently, we developed a physical-interaction based algorithm that helps the  interact with the box safely and estimate the inertia parameters of the box."

The new technique allows robots to rapidly determine whether they are able to complete a lifting-related task. It thus saves time and computational power, as it prevents robots from having to generate whole-body motions before every lifting attempt, even unsuccessful ones.

Han and his colleagues evaluated the approach they developed in a series of tests using NAO, a renowned humanoid robot developed by SoftBank Robotics. In these trials, NAO quickly and effectively identified objects that were impossible or very difficult to lift via the new technique. In the future, the same technique could be applied to other humanoid robots to make them more reliable and efficient in completing tasks that involve lifting large or heavy objects.

"Our method can significantly increase the working efficiency for practical pick-and-place tasks, especially for repeatable tasks," Han said. "In our future work, we plan to apply our approach to different objects or lifting tasks."
