Beyond Protection: How Advanced Sealing Solutions Are Enhancing Robotic Performance
My robot therapist: The ethics of AI mental health chatbots for kids
Engineers create world’s smallest wireless flying robot
RobotLAB Unveils Breakthrough in Humanoid Robotics with Launch of BroBot™
AI can be a powerful tool for scientists. But it can also fuel research misconduct
Nadia Piet & Archival Images of AI + AIxDESIGN / Model Collapse / Licensed under CC-BY 4.0
By Jon Whittle, CSIRO and Stefan Harrer, CSIRO
In February this year, Google announced it was launching “a new AI system for scientists”. It said this system was a collaborative tool designed to help scientists “in creating novel hypotheses and research plans”.
It’s too early to tell just how useful this particular tool will be to scientists. But what is clear is that artificial intelligence (AI) more generally is already transforming science.
Last year for example, computer scientists won the Nobel Prize for Chemistry for developing an AI model to predict the shape of every protein known to mankind. Chair of the Nobel Committee, Heiner Linke, described the AI system as the achievement of a “50-year-old dream” that solved a notoriously difficult problem eluding scientists since the 1970s.
But while AI is allowing scientists to make technological breakthroughs that are otherwise decades away or out of reach entirely, there’s also a darker side to the use of AI in science: scientific misconduct is on the rise.
AI makes it easy to fabricate research
Academic papers can be retracted if their data or findings are found to no longer be valid. This can happen because of data fabrication, plagiarism or human error.
Paper retractions are increasing exponentially, passing 10,000 in 2023. These retracted papers were cited over 35,000 times.
One study found 8% of Dutch scientists admitted to serious research fraud, double the rate previously reported. Biomedical paper retractions have quadrupled in the past 20 years, the majority due to misconduct.
AI has the potential to make this problem even worse.
For example, the availability and increasing capability of generative AI programs such as ChatGPT makes it easy to fabricate research.
This was clearly demonstrated by two researchers who used AI to generate 288 complete fake academic finance papers predicting stock returns.
While this was an experiment to show what’s possible, it’s not hard to imagine how the technology could be used to generate fictitious clinical trial data, modify gene editing experimental data to conceal adverse results or for other malicious purposes.
Fake references and fabricated data
There are already many reported cases of AI-generated papers passing peer review and reaching publication – only to be retracted later on the grounds of undisclosed use of AI, some containing serious flaws such as fake references and purposely fabricated data.
Some researchers are also using AI to review their peers’ work. Peer review of scientific papers is one of the fundamentals of scientific integrity. But it’s also incredibly time-consuming, with some scientists devoting hundreds of hours a year of unpaid labour. A Stanford-led study found that up to 17% of peer reviews for top AI conferences were written at least in part by AI.
In the extreme case, AI may end up writing research papers, which are then reviewed by another AI.
This risk is worsening the already problematic trend of an exponential increase in scientific publishing, while the average amount of genuinely new and interesting material in each paper has been declining.
AI can also lead to unintentional fabrication of scientific results.
A well-known problem of generative AI systems is when they make up an answer rather than saying they don’t know. This is known as “hallucination”.
We don’t know the extent to which AI hallucinations end up as errors in scientific papers. But a recent study on computer programming found that 52% of AI-generated answers to coding questions contained errors, and human oversight failed to correct them 39% of the time.
Maximising the benefits, minimising the risks
Despite these worrying developments, we shouldn’t get carried away and discourage or even chastise the use of AI by scientists.
AI offers significant benefits to science. Researchers have used specialised AI models to solve scientific problems for many years. And generative AI models such as ChatGPT offer the promise of general-purpose AI scientific assistants that can carry out a range of tasks, working collaboratively with the scientist.
These AI models can be powerful lab assistants. For example, researchers at CSIRO are already developing AI lab robots that scientists can speak with and instruct like a human assistant to automate repetitive tasks.
A disruptive new technology will always have benefits and drawbacks. The challenge for the science community is to put appropriate policies and guardrails in place to ensure we maximise the benefits and minimise the risks.
AI’s potential to change the world of science and to help science make the world a better place is already proven. We now have a choice.
Do we embrace AI by advocating for and developing an AI code of conduct that enforces ethical and responsible use of AI in science? Or do we take a backseat and let a relatively small number of rogue actors discredit our fields and make us miss the opportunity?
Jon Whittle, Director, Data61, CSIRO and Stefan Harrer, Director, AI for Science, CSIRO
This article is republished from The Conversation under a Creative Commons license. Read the original article.
ChatGPT’s New AI Image-Maker: ‘Astounding’
ChatGPT’s new AI-image generator – perfect for writers looking to add supplemental images to their copy — has become a viral sensation across the Web.
Embraced by millions of users as AI imaging’s ‘Next Big Thing,’ the new tool has also been described as an ‘astounding’ leap forward by Al Samson, a graphic artist with 15+ years of experience.
Essentially, the new tool features stunning imaging, extreme detail and much more control over the final image users are looking to create, according to Samson.
A few of the near-infinite number of use cases available with the AI imager include:
~precise image rendering in a photo-realistic or illustration style
~the ability to tweak an image of yourself to make yourself look ‘more handsome,’ ‘more beautiful’ – or more or less any number of other qualities
~the ability to drop a reliable image of your product into any scene you can imagine
~instant-rendering of any image in your brand colors
~instantly recognizable caricatures of celebrities and the famous
~instant creation of a comic-strip in your desired style
While not perfect, Samson says the new imaging tool – which replaces ChatGPT imaging that used to run on the DALL-E AI imaging engine – has grabbed the throne as “the best image-generation tool on the market.”
(Fans of DALL-E can still find that imaging tool in ChatGPT’s “GPTs” section.)
For an informed and nuanced overview of everything ChatGPT’s new imaging tool has to offer, check out Samson’s in-depth, insightful, 29-minute video on the upgrade.
In other news and analysis on AI writing:
*35% of Office Workers Now Use ChatGPT: Apparently, being first with a magical new tech has its advantages.
A new study from DeskTime finds that 35% of office workers worldwide now use ChatGPT in some capacity.
In contrast, office worker use of ChatGPT competitors – like Google Gemini, Anthropic Claude and xAI Grok – pales in comparison.
*AI Now a Major Force in Press Release Writing: A new study from Stanford University finds that 24% of press releases are now written by AI.
Observes Stanford University researcher Weixin Liang: “Even high-level international organizations like the United Nations showed roughly 14% LLM (AI chatbot) usage in its press releases.”
Adds writer Tor Constantino: “The research is among the largest empirical investigations of AI writing adoption, reviewing more than 300 million online documents and posts between 2022 and 2024.”
*Google Lunges Ahead to Number One Spot with New Chatbot Upgrade: In the never-ending horserace among the top AI chatbots, Google has lunged into first place with its new Gemini 2.5 upgrade.
With the overhaul, the Gemini 2.5 chatbot beats fierce competitors like ChatGPT, Anthropic Claude and xAI Grok by a nose.
Observes writer Amanda Caswell: “Gemini 2.5 is designed to comprehend vast amounts of data and handle complex problems across various information sources — including text, audio, images, video and even code repositories.”
*Another Columnist Discovers ChatGPT Can Do His Job: Add columnist Harley Hays to the growing cadre of writers who are discovering – often uncomfortably – that they have nothing on ChatGPT.
Hays asked ChatGPT to try its hand at writing the kind of columns he writes – and also to write columns using his personal writing style.
Hays’ reaction: Yikes!
*AI Writing Tools, On-the-Cheap: A new aggregation app – dubbed ‘Together Chat’ – is now offering free access to a number of AI writers/chatbots – although they’re not state-of-the-art.
Writers looking to try out a number of AI writers at no cost can sample an early version of the DeepSeek chatbot using the free Together Chat app – as well as versions of Llama, Qwen and Flux Schnell.
Observes Hassan El Mghari: “Whether you’re brainstorming ideas, drafting code, doing research with the Web, generating images, or exploring creative writing, Together Chat puts cutting-edge open-source AI at your fingertips.”
*Google Docs to Get AI Summaries: Google is promising to soup up Google Docs with AI designed to summarize its documents for you.
Observes Jorge A. Aguilar: “Gemini’s summary can be added directly to the document, and users can update it if they change the original content.
“The AI summary tool is basically meant to make long documents easier to read and understand.”
*New Guide Released on ChatGPT for Work: OpenAI has dropped a new AI video primer on how to get the most from ChatGPT at work.
The video explores how ChatGPT has evolved since its introduction – including its enhancements in interactivity and customization.
The video also explores how ChatGPT can be used to work independently on specific tasks for you at work.
*The AI Takeover of Customer Service is Well Underway: A new study from Forrester Consulting finds that 52% of business decision-makers are looking to integrate AI into their customer service.
Observes Pete Lavache, CMO, Avaya – the company that commissioned the Forrester study:
“Companies know exceptional customer experiences drive revenue.
“The major hurdle is being able to actually orchestrate those experiences leveraging any — or every — AI tool they choose.”
*AI BIG PICTURE: Chinese ChatGPT Competitor Ferociously Closing the Gap: China – once perceived as a year or more behind the U.S. in AI development – has quickly closed the gap with its new AI chatbot DeepSeek.
These days, China is probably just three months behind the U.S. in some facets of AI development because of products like DeepSeek, according to AI expert Kai-Fu Lee.
Moreover, Lee added that on a few AI frontiers, China has actually pulled ahead of the U.S. in AI sophistication.

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
The post ChatGPT’s New AI Image-Maker: ‘Astounding’ appeared first on Robot Writers AI.
PAWS: Four-legged robot can reproduce animal movement with fewer actuators
Artificial neurons organize themselves
Robot Talk Episode 115 – Robot dogs working in industry, with Benjamin Mottis

Claire chatted to Benjamin Mottis from ANYbotics about deploying their four-legged ANYmal robot in a variety of industries.
Benjamin Mottis is a Robotics Engineer in charge of ANYmal Research at ANYbotics. After graduating in robotics from EPFL, he joined ANYbotics as a Field Engineer in 2023. He specializes in deploying ANYmal and training customers across all ANYbotics verticals (Oil & Gas, Nuclear, Metals, Chemicals, etc.). Since 2024, as the Global Research Community Manager, he has been working on expanding the ANYmal Research Community and helping world-leading researchers push the boundaries of robotics with ANYmal.