Page 1 of 5

Crossing the Uncanny Valley: Breakthrough in technology for lifelike facial expressions in androids

Even highly realistic androids can cause unease when their facial expressions lack emotional consistency. Traditionally, facial movements have been composed with a 'patchwork method', which comes with practical limitations. A team developed a new technology that uses 'waveform movements' to generate complex expressions in real time, without unnatural transitions. The system reflects the android's internal state, enhancing emotional communication between robots and humans and potentially making androids feel more humanlike.
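The 'waveform movements' idea can be illustrated with a minimal sketch: periodic waveforms for individual facial actions are blended continuously, with gains set by an internal state, so expressions shift without abrupt transitions. The waveform shapes, frequencies, and mood parameters below are illustrative assumptions, not the team's actual controller.

```python
import math

# Hypothetical actuator waveforms: each maps time (seconds) to a displacement
# in [0, 1]. Frequencies and shapes are illustrative, not from the study.
WAVEFORMS = {
    "breathing": lambda t: 0.5 * (1 + math.sin(2 * math.pi * 0.25 * t)),
    "blinking":  lambda t: max(0.0, math.sin(2 * math.pi * 0.1 * t)) ** 8,
    "mouth":     lambda t: 0.5 * (1 + math.sin(2 * math.pi * 0.05 * t + 1.0)),
}

def actuator_commands(t: float, mood: dict) -> dict:
    """Blend the waveforms, scaling each by a mood-dependent gain, so the
    expression tracks the internal state smoothly over time."""
    return {name: mood.get(name, 1.0) * wave(t) for name, wave in WAVEFORMS.items()}

# A 'sleepy' internal state damps mouth movement and deepens breathing.
cmds = actuator_commands(t=2.0, mood={"breathing": 1.2, "mouth": 0.3})
```

Because every command is a continuous function of time, there are no seams between canned expression clips, which is the limitation attributed to the patchwork method.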

Breaking barriers: Study uses AI to interpret American Sign Language in real-time

A first-of-its-kind study uses computer vision to recognize American Sign Language (ASL) alphabet gestures. Researchers developed a custom dataset of 29,820 static images of ASL hand gestures, annotating each image with 21 key landmarks on the hand to provide detailed spatial information about its structure and position. By combining MediaPipe for landmark extraction with a fine-tuned YOLOv8 deep learning model, tuning hyperparameters for the best accuracy, the team took an approach not explored in previous research.
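The 21-landmark annotation described above can be sketched as a simple feature-extraction step: each hand becomes a flat vector of 63 coordinates that a gesture classifier can consume. The 21-point layout follows MediaPipe's hand model, but the class and function names here are hypothetical, not from the study's code.

```python
from dataclasses import dataclass

NUM_LANDMARKS = 21  # MediaPipe hand model: wrist + 4 joints per finger x 5 fingers

@dataclass
class Landmark:
    x: float  # normalized image coordinates in [0, 1]
    y: float
    z: float  # relative depth

def to_feature_vector(landmarks):
    """Flatten 21 (x, y, z) landmarks into a 63-value feature vector --
    the kind of spatial annotation the dataset attaches to each image."""
    if len(landmarks) != NUM_LANDMARKS:
        raise ValueError(f"expected {NUM_LANDMARKS} landmarks, got {len(landmarks)}")
    return [coord for lm in landmarks for coord in (lm.x, lm.y, lm.z)]

# One synthetic hand annotation (real values would come from MediaPipe detection).
hand = [Landmark(x=i / 21, y=i / 42, z=0.0) for i in range(NUM_LANDMARKS)]
features = to_feature_vector(hand)
```

In a full pipeline, vectors like this (or the annotated images directly) would be fed to the fine-tuned YOLOv8 model for gesture classification.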

Empowering older adults with home-care robots

The rapidly aging population will lead to a shortage of care providers in the future. While robotic technologies are a potential alternative, their widespread use is limited by poor acceptance. In a new study, researchers took a user-centric approach to understand the factors influencing willingness to use home-care robots among care providers and recipients in Japan, Ireland, and Finland. These users' perspectives can aid the development of home-care robots with better acceptance.

Scientists create AI that ‘watches’ videos by mimicking the brain

Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists have made this a reality by creating MovieNet: an innovative AI that processes videos much like how our brains interpret real-life scenes as they unfold over time.

Black-box forgetting: A new method for tailoring large AI models

Pretrained large-scale AI models need to 'forget' specific information for privacy and computational efficiency, but no methods existed for doing so in black-box vision-language models, where internal details are inaccessible. Now, researchers have addressed this issue with a strategy based on latent context sharing, successfully getting an image classifier to forget multiple classes it was trained on. Their findings could expand the use cases of large-scale AI models while safeguarding end users' privacy.
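The black-box constraint means no gradients or internal weights are available, so forgetting has to be driven purely by model outputs. Below is a minimal sketch of that setting: a derivative-free search tunes a shared latent context to lower scores on forget classes while preserving retained classes. The toy scoring function and random local search are stand-ins for illustration; the study's actual model and optimizer are not shown here.

```python
import random

random.seed(0)

FORGET = {"cat"}            # classes the model should no longer recognize
RETAIN = {"dog", "car"}     # classes whose performance must be preserved

def black_box_predict(context, label):
    """Stand-in for an inaccessible vision-language model: returns a score
    for `label` that depends only on the shared latent context. Purely toy."""
    bias = {"cat": 0.9, "dog": 0.8, "car": 0.7}[label]
    return bias + 0.1 * sum(c * (i + 1) for i, c in enumerate(context))

def objective(context):
    """Reward high scores on retained classes, low scores on forget classes."""
    forget_term = sum(black_box_predict(context, c) for c in FORGET)
    retain_term = sum(black_box_predict(context, c) for c in RETAIN)
    return retain_term - forget_term

def derivative_free_search(dim=4, steps=200, sigma=0.1):
    """Random local search: the only access to the model is through its
    outputs, mirroring the black-box setting (no gradients, no internals)."""
    best = [0.0] * dim
    best_score = objective(best)
    for _ in range(steps):
        cand = [c + random.gauss(0, sigma) for c in best]
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

context, score = derivative_free_search()
```

The key design point is that `objective` is evaluated only through `black_box_predict`, so the same loop would apply to any model exposed purely as an API.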

Smallest walking robot makes microscale measurements

Researchers have created the smallest walking robot yet. Its mission: to be tiny enough to interact with waves of visible light and still move independently, so that it can maneuver to specific locations -- in a tissue sample, for instance -- to take images and measure forces at the scale of some of the body's smallest structures.

Inside the ‘swat team’ — how insects react to virtual reality gaming

Humans get a real buzz from the virtual worlds of gaming and augmented reality, and now scientists have trialled these new-age technologies on small animals, testing the reactions of tiny hoverflies and even crabs. In a bid to understand the aerodynamic powers of flying insects and other little-understood animal behaviors, the study offers new perspectives on how invertebrates respond to, interact with, and navigate virtual 'worlds' created by advanced entertainment technology.

Researchers highlight Nobel-winning AI breakthroughs and call for interdisciplinary innovation

A new article examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence.

The future of edge AI: Dye-sensitized solar cell-based synaptic device

Physical reservoir computing (PRC) utilizing synaptic devices shows significant promise for edge AI. Researchers from the Tokyo University of Science have introduced a novel self-powered dye-sensitized solar cell-based device that mimics human synaptic behavior for efficient edge AI processing, inspired by the eye's afterimage phenomenon. The device has light intensity-controllable time constants, helping it achieve high performance during time-series data processing and motion recognition tasks. This work is a major step toward multiple time-scale PRC.
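The light-controllable time constant is the key property for time-series processing: the same input leaves a short or a long temporal trace depending on how fast the device's state decays. A minimal sketch of that idea, using a leaky integrator as a stand-in for the synaptic device (the actual device physics and parameter values are not modeled here):

```python
import math

def reservoir_states(signal, tau, dt=1.0):
    """Leaky integrator standing in for the synaptic device: its state decays
    with time constant `tau` (light-tunable in the real device) while
    integrating the input signal."""
    x, states = 0.0, []
    decay = math.exp(-dt / tau)
    for u in signal:
        x = decay * x + (1 - decay) * u
        states.append(x)
    return states

# The same pulse seen through two time constants gives two temporal "memories":
pulse = [1.0] * 3 + [0.0] * 7
fast = reservoir_states(pulse, tau=1.0)   # short memory: trace decays quickly
slow = reservoir_states(pulse, tau=5.0)   # long memory: trace decays slowly
```

In physical reservoir computing, a simple linear readout is then trained on such state traces; combining devices with different time constants is what enables the multiple time-scale processing the work targets.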