Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists have made this a reality by creating MovieNet: an innovative AI that processes videos much as our brains interpret real-life scenes unfolding over time.
Pretrained large-scale AI models need to 'forget' specific information for privacy and computational efficiency, but no methods existed for doing so in black-box vision-language models, whose internal details are inaccessible. Now, researchers have addressed this issue with a strategy based on latent context sharing, successfully getting an image classifier to forget multiple classes it was trained on. Their findings could expand the use cases of large-scale AI models while safeguarding end users' privacy.
Physicists have devised an algorithm that provides a mathematical framework for how learning works in lattices called mechanical neural networks.
Researchers have built a drone that can walk, hop, and jump into flight with the aid of birdlike legs, greatly expanding the range of potential environments accessible to unmanned aerial vehicles.
An innovative algorithm called Spectral Expansion Tree Search helps autonomous robotic systems make optimal choices on the move.
Researchers have created the smallest walking robot yet. Its mission: to be tiny enough to interact with waves of visible light and still move independently, so that it can maneuver to specific locations -- in a tissue sample, for instance -- to take images and measure forces at the scale of some of the body's smallest structures.
Humans get a real buzz from the virtual worlds of gaming and augmented reality, and now scientists have trialled these new-age technologies on small animals, testing the reactions of tiny hoverflies and even crabs. In a bid to understand the aerodynamic powers of flying insects and other little-understood animal behaviors, the study offers new perspectives on how invertebrates respond to, interact with and navigate virtual 'worlds' created by advanced entertainment technology.
A new article examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence.
Physical reservoir computing (PRC) utilizing synaptic devices shows significant promise for edge AI. Researchers from the Tokyo University of Science have introduced a novel self-powered dye-sensitized solar cell-based device that mimics human synaptic behavior for efficient edge AI processing, inspired by the eye's afterimage phenomenon. The device has light-intensity-controllable time constants, helping it achieve high performance in time-series data processing and motion recognition tasks. This work is a major step toward multiple-time-scale PRC.
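The core idea of reservoir computing can be illustrated with a minimal software sketch: a fixed, randomly connected dynamical system (the reservoir) transforms an input series, and only a linear readout is trained. In the sketch below, a leak rate parameter sets the reservoir's time constant, the role played by light intensity in the reported device. All sizes, values, and the toy task here are illustrative assumptions, not details from the study.

```python
import numpy as np

# Minimal echo-state reservoir sketch -- a software analogue of physical
# reservoir computing. The leak rate `alpha` sets the reservoir's effective
# time constant (analogous to the light-intensity control in the device).
rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))        # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # scale spectral radius below 1

def run_reservoir(u, alpha):
    """Drive the fixed reservoir with input series u; alpha = leak rate."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        pre = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        x = (1 - alpha) * x + alpha * pre          # leaky integration
        states.append(x.copy())
    return np.array(states)

# Toy short-term-memory task: reproduce the input delayed by 5 steps.
u = rng.uniform(-1, 1, 500)
target = np.roll(u, 5)

X = run_reservoir(u, alpha=0.3)
# Train only the linear readout (ridge regression); the reservoir stays fixed.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ target[:, None])
pred = (X @ W_out).ravel()
print("readout fit error:", np.mean((pred[10:] - target[10:]) ** 2))
```

Because only the readout is trained, the heavy nonlinear computation can be delegated to any physical substrate with suitable dynamics, which is what makes synaptic devices attractive for low-power edge AI.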
Researchers study the importance of enunciation when using speech-to-text software in medical situations.
Trust between humans and robots improves when their movements are harmonized, researchers have discovered.
To record and assess the behavior of wildlife and environmental conditions in remote locations, the GAIA Initiative developed an artificial intelligence (AI) algorithm that reliably and automatically classifies the behaviors of white-backed vultures from animal tag data. As scavengers, vultures are always on the lookout for the next carcass. With the help of tagged animals and a second AI algorithm, the scientists can now automatically locate carcasses across vast landscapes.
Advances in quantum science that harness AI can measure very small surfaces and distances, opening up a world of medical, manufacturing and other applications.
Researchers have developed a robot that identifies different plant species at various stages of growth by 'touching' their leaves with an electrode. The robot can measure properties such as surface texture and water content that cannot be determined using existing visual approaches. The robot identified ten different plant species with an average accuracy of 97.7% and identified leaves of the flowering bauhinia plant with 100% accuracy at various growth stages.
Linguistics and computer science researchers have identified some of the root causes of AI large language models' poor performance in human-like conversations.