Physical AI uses both sight and touch to manipulate objects like a human

In everyday life, grabbing a cup of coffee from the table is effortless. Multiple sensory inputs, such as sight (judging how far away the cup is) and touch (feeling its shape and weight), are combined in real time. Recreating this ability in artificial intelligence (AI), however, is far from easy.