
Aerial microrobot can fly as fast as a bumblebee

In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can't reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.

New control system teaches soft robots the art of staying safe

Imagine a continuum soft robotic arm bending around a bunch of grapes or a head of broccoli, adjusting its grip in real time as it lifts the object. Unlike traditional rigid robots, which generally avoid contact with the environment and stay well away from humans for safety reasons, this arm senses subtle forces, stretching and flexing in ways that mimic the compliance of a human hand. Its every motion is calculated to avoid excessive force while completing the task efficiently.

New robotic eyeball could enhance visual perception of embodied AI

Embodied artificial intelligence (AI) systems are robotic agents that rely on machine learning algorithms to sense their surroundings, plan their actions and execute them. A key aspect of these systems is their visual perception modules, which allow them to analyze and interpret images captured by cameras.

Researchers develop new method for modeling complex sensor systems

A research team at Kumamoto University (Japan) has unveiled a new mathematical framework that makes it possible to accurately model systems using multiple sensors that operate at different sensing rates. This breakthrough could pave the way for safer autonomous vehicles, smarter robots, and more reliable sensor networks.
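The article does not detail the Kumamoto framework itself, but the general problem it addresses can be illustrated with a standard multi-rate Kalman filter: a minimal sketch, assuming a 1-D constant-velocity system observed by a fast, noisy position sensor and a slower, more accurate one. All names and noise values here are illustrative assumptions, not the published method.

```python
import numpy as np

# Illustrative multi-rate fusion: a fast noisy sensor reports every tick,
# a slow accurate sensor reports every 5th tick. (Generic Kalman filter,
# not the Kumamoto framework.)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
Q = np.eye(2) * 1e-4                     # process noise covariance (assumed)
H = np.array([[1.0, 0.0]])               # both sensors observe position only

def kalman_step(x, P, measurements):
    """One predict step, then one update per sensor that reported this tick."""
    x = F @ x
    P = F @ P @ F.T + Q
    for z, R in measurements:            # measurement noise R differs per sensor
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 1.0])                 # true motion: constant 1 m/s
P = np.eye(2)
R_fast, R_slow = np.array([[0.5]]), np.array([[0.05]])
rng = np.random.default_rng(0)
for k in range(1, 101):
    true_pos = k * dt * 1.0
    ms = [(np.array([true_pos + rng.normal(0, 0.5)]), R_fast)]
    if k % 5 == 0:                       # slow, accurate sensor every 5th tick
        ms.append((np.array([true_pos + rng.normal(0, 0.05)]), R_slow))
    x, P = kalman_step(x, P, ms)
print(round(float(x[0]), 2))             # estimated position, near the true 10.0 m
```

Handling each sensor as an extra update inside the same predict-update cycle is what lets the filter absorb measurements arriving at different rates without resampling them to a common clock.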

Optimizing Wheel Drives for AGVs and AMRs: What OEMs Need to Know About Motion Control

The motor and actuator selection behind each wheel can make or break the success of the entire system. In this post, we’ll explore the core challenges in mobile robot drive systems and how customized motion control solutions from DINGS' Motion USA can help you meet them.

AUCTION – FACILITY CLOSURE – MAJOR ROBOTICS AUTOMATION COMPANY

BTM Industrial is a leading asset disposition company assisting manufacturing companies with their surplus asset needs. Founded in 2011, it is a fully licensed-and-regulated, commission-based auction and liquidation company. The company’s full asset disposition programs provide customers with the ability to efficiently manage all aspects of their surplus and achieve higher value.

Artificial tendons give muscle-powered robots a boost

Our muscles are nature's actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate "biohybrid robots" made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers.

Why companies don’t share AV crash data – and how they could

Image: Anton Grabolle / Autonomous Driving / Licensed under CC BY 4.0

By Susan Kelley

Autonomous vehicles (AVs) have been tested as taxis for years in San Francisco, Pittsburgh and around the world, and trucking companies have enormous incentives to adopt them.

But AV companies rarely share the crash- and safety-related data that is crucial to improving the safety of their vehicles – mostly because they have little incentive to do so.

Is AV safety data an auto company’s intellectual asset or a public good? It can be both – with a little tweaking, according to a team of Cornell researchers.

The team has created a roadmap outlining the barriers and opportunities for encouraging AV companies to share data that would make AVs safer, from untangling public versus private knowledge, to regulation, to creating incentive programs.

“The core of AV market competition involves who has that crash data, because once you have that data, it’s much easier for you to train your AI to not make that error. The hope is to first make this data transparent and then use it for public good, and not just profit,” said Hauke Sandhaus, M.S. ’24, a doctoral candidate at Cornell Tech and co-author of “My Precious Crash Data,” published Oct. 16 in Proceedings of the ACM on Human-Computer Interaction and presented at the ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing.

His co-authors are Qian Yang, assistant professor at the Cornell Ann S. Bowers College of Computing and Information Science; Wendy Ju, associate professor of information science and design tech at Cornell Tech, the Cornell Ann S. Bowers College of Computing and Information Science and the Jacobs Technion-Cornell Institute; and Angel Hsing-Chi Hwang, a former postdoctoral associate at Cornell and now assistant professor of communication at the University of Southern California, Annenberg.

The team interviewed 12 AV company employees who work on safety in AV design and deployment, to understand how they currently manage and share safety data, the data sharing challenges and concerns they face, and their ideal data-sharing practices.

The interviews revealed the AV companies have a surprising diversity of approaches, Sandhaus said. “Everyone really has some niche, homegrown data set, and there’s really not a lot of shared knowledge between these companies,” he said. “I expected there would be much more commonality.”

The research team discovered two key barriers to sharing data – both underscoring a lack of incentives. First, crash and safety data includes information about the machine-learning models and infrastructure that the company uses to improve safety. “Data sharing, even within a company, is political and fraught,” the team wrote in the paper. Second, the interviewees believed AV safety knowledge is private and brings their company a competitive edge. “This perspective leads them to view safety knowledge embedded in data as a contested space rather than public knowledge for social good,” the team wrote.

And U.S. and European regulations are not helping. They require only information such as the month when the crash occurred, the manufacturer and whether there were injuries. That doesn’t capture the underlying unexpected factors that often cause accidents, such as a person suddenly running onto the street, drivers violating traffic rules, extreme weather conditions or lost cargo blocking the road.

To encourage more data-sharing, it’s crucial to untangle safety knowledge from proprietary data, the researchers said. For example, AV companies could share information about the accident, but not raw video footage that would reveal the company’s technical infrastructure.

Companies could also come up with “exam questions” that AVs would have to pass in order to take the road. “If you have pedestrians coming from one side and vehicles from the other side, then you can use that as a test case that other AVs also have to pass,” Sandhaus said.
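The "exam question" idea above could be made concrete as a shared scenario schema with an objective pass check. This is a minimal sketch under assumed names and thresholds; no such standard schema exists in the article.

```python
# Hypothetical sketch of a shared AV "exam question": a scenario description
# plus a pass/fail check on a simulated run. All field names and the 1.0 m
# safety-margin threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    pedestrians: list          # e.g. which side pedestrians approach from
    vehicles: list             # e.g. which side other vehicles approach from
    required_outcome: str      # what a passing run must demonstrate

@dataclass
class RunResult:
    scenario_name: str
    collisions: int
    min_gap_m: float           # closest approach to any actor, in meters

def passes(scenario: Scenario, result: RunResult, min_gap_m: float = 1.0) -> bool:
    """A run passes if it matches the scenario, had no collisions,
    and kept at least the required safety margin."""
    return (result.scenario_name == scenario.name
            and result.collisions == 0
            and result.min_gap_m >= min_gap_m)

# Sandhaus's example: pedestrians from one side, vehicles from the other.
crossing = Scenario(
    name="pedestrian-left-vehicle-right",
    pedestrians=["crossing from left"],
    vehicles=["oncoming from right"],
    required_outcome="yield to pedestrian, no collision",
)
print(passes(crossing, RunResult("pedestrian-left-vehicle-right", 0, 1.8)))  # True
```

A schema like this would let companies contribute test cases without exposing raw footage or internal infrastructure, which is exactly the untangling of safety knowledge from proprietary data the researchers propose.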

Academic institutions could act as data intermediaries through which AV companies could form strategic collaborations. Independent research institutions and other civic organizations have set precedents for working with industry partners’ public knowledge. “There are arrangements and collaboration patterns for higher ed to contribute to this without necessarily making the entire data set public,” Yang said.

The team also proposes standardizing AV safety assessment via more effective government regulations. For example, a federal policymaking agency could create a virtual city as a testing ground, with busy traffic intersections and pedestrian-heavy roads that every AV algorithm would have to be able to navigate, she said.

Federal regulators could encourage car companies to contribute scenarios to the testing environment. “The AV companies might say, ‘I want to put my test cases there, because my car probably has passed those tests.’ That can be a mechanism for encouraging safer vehicle development,” Yang said. “Proposing policy changes always feels a little bit distant, but I do think there are near-future policy solutions in this space.”

The research was funded by the National Science Foundation and Schmidt Sciences.

“Cleanest Prose I’ve Ever Seen”

One Writer’s Take on Gemini 3.0

Extensive creative-writing tests by ‘The Nerdy Novelist’ – a channel known for its take-no-prisoners evaluations of AI writing – suggest that Gemini 3.0 is head and shoulders above the competition as a go-to tool for writers.

Essentially, the author behind the channel – Jason Hamilton – found that no other AI even came close to delivering Gemini 3.0’s exquisite prose when he put each through its paces.

For an in-depth look at how Hamilton arrived at his Gemini 3.0 recommendation, check out this 36-minute video.

In other news and analysis on AI writing:

*Killer Image App Nano Banana Gets an Upgrade: Fresh off its take-the-world-by-storm campaign as the globe’s most preferred image editor, ‘Nano Banana’ is out with a new ‘Pro’ version.

Officially known as ‘Gemini 3 Pro Image,’ the tool has grabbed the AI image-making crown with its ability to create extremely detailed images and perform precise edits – and do it all with incredible speed.

Observes writer Abner Li: “The new model is also coming to AI Mode for subscribers in the U.S., while it’s available to paid NotebookLM users globally. Nano Banana Pro will be available in Flow with Google AI Ultra.”

*AI Research Tool Perplexity Adds AI Assistance With Memory: Perplexity is out with a major new feature to its AI research tool, which embeds AI assistants – with memory – into its research mix.

Like many AI tools, Perplexity now remembers key details of your chats on its service in an effort to ensure responses are sharper and more personalized.

The new feature is optional and can be turned off at any time.

*ChatGPT Competitor Releases Major Upgrade: Anthropic is out with a major update of one of its key AI engines: Claude Opus, now in version 4.5.

Framed as an inexpensive alternative offering unlimited chats, the AI engine has also scored high marks for its amped-up reasoning skills.

Anthropic’s AI primarily targets the enterprise market and is known for killer coding capabilities.

*ChatGPT Voice: Now Even Easier to Use: ChatGPT’s maker is out with an upgrade to its voice mode, which enables you to talk with ChatGPT without leaving the ChatGPT interface.

Previously, voice users needed to interact with a separate screen if they wanted to use ChatGPT voice.

Interestingly, voice mode still relies on an older – and, some say, more creative – model to talk: GPT-4o.

*New AI Singer Number One on Christian Music Chart: Add virtual AI singer Solomon Ray to the increasing number of AI artists who are minting number one song hits.

Marketed as a ‘soul singer,’ the AI has a full album, dubbed “A Soulful Christmas,” with tunes like “Soul To the World” and “Jingle Bell Soul.”

Other AI singers have also been crowding out mere fleshbags lately with number one hits on the Country and R&B charts.

*AI Can Already Eliminate 12% of U.S. Workforce: A new study from MIT finds that AI can already eliminate 12% of everyday jobs.

Dubbed the “Iceberg Index,” the study simulated AI’s ability to handle – or partially handle – nearly 1,000 occupations currently held by more than 150 million workers in the U.S.

Observes writer Megan Cerullo: “AI is also already doing some of the entry-level jobs that have historically been reserved for recent college graduates or relatively inexperienced workers.”

*He’s No Tool: Show Your New AI ‘Colleague’ Some Respect: A new study finds that 76% of business leaders now see AI as an office ‘colleague’ – and not a tool.

Specifically, those leaders are referring to agentic AI – an advanced form of the tech that can ideally perform a number of tasks to complete a mission without the need for human supervision.

Even so, real-world tests show agents regularly hallucinate, mis-route data or misinterpret a mission’s goals along the way.

*U.S. Congress Seeks Answers on Alleged Chinese AI Cyberattack: The CEO of Anthropic – maker of a major competitor to ChatGPT – will testify before the U.S. Congress this month about a recent cyberattack that relied on Anthropic’s AI to infiltrate finance and government servers.

The attack – allegedly orchestrated by Chinese state actors – hijacked Anthropic’s agentic AI capabilities to penetrate the servers.

Observes writer Sam Sabin: “As AI rapidly intensifies the cyber threat landscape, lawmakers are just starting to wrap their heads around the problem.”

*AI Big Picture: This Generation’s Manhattan Project: The Genesis Mission: The Trump Administration has embraced AI as a key defense initiative in what it is calling “The Genesis Mission.”

Observes writer Chuck Brooks: “This mission is not merely another government program: it represents a bold strategic move that aligns with my belief that science, data, and computing should be regarded as essential components of our national strength rather than optional extras.

“For too long, we have considered science and technology to be secondary to our national strategy. The Genesis Mission reverses that idea.”

Share a Link:  Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.


The post “Cleanest Prose I’ve Ever Seen” appeared first on Robot Writers AI.

Robots combine AI learning and control theory to perform advanced movements

When it comes to training robots to perform agile, single-task motor skills, such as handstands or backflips, artificial intelligence methods can be very useful. But if you want to train your robot to perform multiple tasks—say, performing a backward flip into a handstand—things get a little more complicated.

Scientists uncover the brain’s hidden learning blocks

Princeton researchers found that the brain excels at learning because it reuses modular “cognitive blocks” across many tasks. Monkeys switching between visual categorization challenges revealed that the prefrontal cortex assembles these blocks like Legos to create new behaviors. This flexibility explains why humans learn quickly while AI models often forget old skills. The insights may help build better AI and new clinical treatments for impaired cognitive adaptability.

Robot Talk Episode 135 – Robot anatomy and design, with Chapa Sirithunge

Claire chatted to Chapa Sirithunge from University of Cambridge about what robots can teach us about human anatomy, and vice versa.

Chapa Sirithunge is a Marie Sklodowska-Curie fellow in robotics at the University of Cambridge. She has an undergraduate degree and PhD in Electrical Engineering from the University of Moratuwa. Before joining the University of Cambridge in 2022, she was a lecturer at Sri Lanka Technological Campus and a visiting lecturer at the University of Moratuwa, Sri Lanka. Her research interests span assistive robotics, soft robots and physical human-robot interaction. In addition to her research, she founded Women in Robotics Cambridge to help young minds navigate their path into robotics.
