Researchers have developed a self-powered 'bug' that can skim across the water, and they hope it will revolutionize aquatic robotics.
Scientists have identified an automatic behavior in flies that helps them assess wind conditions -- whether a breeze is present and from which direction -- before deploying a strategy to follow a scent to its source. The fact that they can do this is surprising -- can you tell if there's a gentle breeze if you stick your head out of a moving car? Flies aren't just reacting to an odor with a preprogrammed response: they are responding in a context-appropriate manner. This knowledge could potentially be applied to train more sophisticated algorithms for scent-detecting drones to find the source of chemical leaks.
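The behavioral logic lends itself to a simple illustration. Below is a minimal, hypothetical sketch of how a scent-seeking drone might first assess wind and only then pick a context-appropriate search strategy; the function names, strategy labels, and sensor fields are illustrative assumptions, not the study's actual algorithm.

```python
# Illustrative only: a toy controller that first assesses wind, then picks a
# context-appropriate plume-following strategy, loosely inspired by the fly
# behavior described above. All fields and strategy names are hypothetical.
from dataclasses import dataclass

@dataclass
class WindEstimate:
    present: bool          # is there a detectable breeze?
    direction_deg: float   # direction the wind is coming from

def choose_strategy(wind: WindEstimate, odor_detected: bool) -> str:
    """Pick a search behavior based on wind context, not on odor alone."""
    if not odor_detected:
        return "explore"          # no scent yet: keep searching broadly
    if wind.present:
        return "surge_upwind"     # scent plus wind: head into the wind
    return "cast_locally"         # scent but still air: search locally

# Example usage with fabricated sensor readings:
wind = WindEstimate(present=True, direction_deg=270.0)
print(choose_strategy(wind, odor_detected=True))   # -> "surge_upwind"
```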
Artificial intelligence (AI) is hot right now. Also hot: the data centers that power the technology. And keeping those centers cool requires a tremendous amount of energy. The problem is only going to grow as high-powered AI-based computers and devices become commonplace. That's why researchers are devising a new type of cooling system that promises to dramatically reduce energy demands.
Engineers have shown that something as simple as the flow of air through open-cell foam can be used to perform digital computation, analog sensing and combined digital-analog control in soft textile-based wearable systems.
An international collaboration seeks to shape the future of how a robotic version of man's best friend interacts with its owner, using a combination of AI and edge computing called edge intelligence. The overarching project goal is to make the dog come 'alive' by adapting wearable sensing devices that can detect physiological and emotional signals tied to a person's personality and traits, such as introversion, or to transient states, including pain and comfort levels.
Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics -- a groundbreaking science that fuses artificial components with living tissue and cells.
A new essay explores which conditions must be met for consciousness to exist. At least one of them can't be found in a computer.
Researchers have developed a new formal description of internal world models, thereby enabling interdisciplinary research. Internal world models help us make predictions about new situations based on previous experience and find our bearings. The new formalized view makes it possible to compare the world models of humans, animals and AI, and to identify and eliminate their deficits.
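As a loose illustration of what such a formalization might capture (a generic sketch, not the researchers' actual framework), an internal world model can be pinned down as a state estimate plus a transition function that predicts the consequences of actions and is corrected by observation:

```python
# Generic illustration of an internal world model: keep a belief about the
# current state, predict what an action will lead to, and update the belief
# from what is actually observed. A simplifying sketch, not the study's formalism.
from typing import Callable

class WorldModel:
    def __init__(self, initial_state, transition: Callable, correction: Callable):
        self.state = initial_state
        self.transition = transition    # predicts next state from (state, action)
        self.correction = correction    # reconciles prediction with observation

    def predict(self, action):
        """Use past experience (the transition function) to anticipate a new situation."""
        return self.transition(self.state, action)

    def update(self, action, observation):
        """Act, observe, and correct the internal state estimate."""
        predicted = self.predict(action)
        self.state = self.correction(predicted, observation)
        return self.state

# Toy usage: a 1-D position model where actions are displacements.
model = WorldModel(
    initial_state=0.0,
    transition=lambda s, a: s + a,
    correction=lambda pred, obs: 0.5 * (pred + obs),   # simple averaging
)
model.update(action=1.0, observation=1.2)
print(model.state)   # -> 1.1
```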
Have you ever wondered how insects are able to travel so far beyond their home and still find their way back? The answer to this question is relevant not only to biology but also to developing AI for tiny, autonomous robots. Drone researchers drew inspiration from biological findings on how ants visually recognize their environment and combine that recognition with counting their steps in order to get safely back home. They used these insights to create an insect-inspired autonomous navigation strategy for tiny, lightweight robots. It allows such robots to return home after long trajectories while requiring extremely little computation and memory (0.65 kilobytes per 100 m). In the future, tiny autonomous robots could find a wide range of uses, from monitoring stock in warehouses to finding gas leaks on industrial sites.
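As a rough illustration of why such a strategy can run in well under a kilobyte, the sketch below pairs a running step-count (odometry) estimate with a handful of tiny stored visual snapshots used to correct drift on the way home. It is an illustrative reconstruction under simplifying assumptions, not the researchers' code; the snapshot size, matching rule, and blending weights are all invented for the example.

```python
# Illustrative sketch: odometry-based homing with a few compact visual
# snapshots, in the spirit of the insect-inspired strategy described above.
import numpy as np

class TinyNavigator:
    def __init__(self, max_snapshots=10):
        self.position = np.zeros(2)        # running odometry estimate (x, y)
        self.snapshots = []                # (position, 8x8 thumbnail) pairs, ~64 bytes each
        self.max_snapshots = max_snapshots

    def step(self, dx, dy, camera_thumbnail):
        """Update odometry and occasionally store a compact visual snapshot."""
        self.position += np.array([dx, dy])
        if len(self.snapshots) < self.max_snapshots:   # keep memory use bounded
            self.snapshots.append((self.position.copy(), camera_thumbnail.copy()))

    def homing_vector(self, camera_thumbnail):
        """Head back toward the start, correcting drift with the best-matching snapshot."""
        errors = [np.mean(np.abs(camera_thumbnail - snap)) for _, snap in self.snapshots]
        best_pos, _ = self.snapshots[int(np.argmin(errors))]
        corrected = 0.5 * self.position + 0.5 * best_pos   # blend odometry with visual match
        return -corrected                                   # vector pointing toward home

# Toy usage with random stand-in camera thumbnails:
nav = TinyNavigator()
nav.step(1.0, 0.5, np.random.rand(8, 8))
print(nav.homing_vector(np.random.rand(8, 8)))
```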
In an era when the creation of artificial intelligence (AI) images is at the fingertips of the masses, the ability to detect fake pictures -- particularly deepfakes of people -- is becoming increasingly important. So what if you could tell just by looking into someone's eyes? That's the compelling finding of new research which suggests that AI-generated fakes can be spotted by analyzing human eyes in the same way that astronomers study pictures of galaxies.
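To make the astronomy analogy concrete, here is a heavily hedged toy sketch: assuming the check amounts to comparing a light-concentration statistic (such as a Gini coefficient, a measure astronomers apply to galaxy light profiles) between the two eye regions of a face image, a large mismatch would be a warning sign. The statistic choice and threshold are illustrative assumptions, not the study's published method.

```python
# Toy illustration: compare a Gini-style light-concentration statistic between
# the two eye regions of a face image. In a real photograph the two reflections
# should be broadly consistent; a large mismatch may hint at an AI-generated image.
import numpy as np

def gini(pixels: np.ndarray) -> float:
    """Gini coefficient of pixel intensities (0 = uniform, 1 = highly concentrated)."""
    values = np.sort(pixels.flatten().astype(float))
    n = values.size
    cumulative = np.cumsum(values)
    return (n + 1 - 2 * np.sum(cumulative) / cumulative[-1]) / n

def eyes_look_consistent(left_eye: np.ndarray, right_eye: np.ndarray,
                         threshold: float = 0.15) -> bool:
    """Flag an image as suspicious if the two eyes' light statistics disagree."""
    return abs(gini(left_eye) - gini(right_eye)) < threshold

# Example with random stand-in crops of the two eye regions:
left = np.random.rand(32, 32)
right = np.random.rand(32, 32)
print("consistent" if eyes_look_consistent(left, right) else "possible deepfake")
```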
How do people like to interact with robots when navigating a crowded environment? And what algorithms should roboticists use to program robots to interact with humans? These are the questions that a team of mechanical engineers and computer scientists sought to answer in a recent study.
Artificial intelligence (AI) chatbots have frequently shown signs of an 'empathy gap' that puts young users at risk of distress or harm, raising the urgent need for 'child-safe AI', according to a new study. The research urges developers and policy actors to prioritize AI design that takes greater account of children's needs. It provides evidence that children are particularly susceptible to treating chatbots as lifelike, quasi-human confidantes, and that their interactions with the technology can go awry when it fails to respond to their unique needs and vulnerabilities. The study links that gap in understanding to recent reports of cases in which interactions with AI led to potentially dangerous situations for young users.
Engineers have developed a new soft, flexible device that makes robots move by expanding and contracting -- just like a human muscle. To demonstrate their new device, called an actuator, the researchers used it to create a cylindrical, worm-like soft robot and an artificial bicep. In experiments, the cylindrical soft robot navigated the tight, hairpin curves of a narrow pipe-like environment, and the bicep was able to lift a 500-gram weight 5,000 times in a row without failing.
With a new surgical intervention and neuroprosthetic interface, researchers restored a natural walking gait in people with amputations below the knee. Seven patients were able to walk faster, avoid obstacles, and climb stairs more naturally than people with a traditional amputation.
Researchers have developed nanorobots that kill cancer cells in mice. The robot's weapon is hidden in a nanostructure and is exposed only in the tumour microenvironment, sparing healthy cells.