
Matryoshka doll-like robot changes its shape in real time and in situ

Until now, when scientists created magnetic robots, their magnetization profiles were generally fixed, enabling only a specific type of shape programming capability using applied external magnetic fields. Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) have now proposed a new magnetization reprogramming method that can drastically expand the complexity and diversity of the shape-programming capabilities of such robots.

Social robots can help relieve the pressures felt by caregivers

People who care informally for sick or disabled friends and relatives often become invisible in their own lives. Focusing on the needs of those they care for, they rarely get the chance to talk about their own emotions or challenges, and this can lead to them feeling increasingly stressed and isolated.

#ICML2025 outstanding position paper: Interview with Jaeho Kim on addressing the problems with conference reviewing

At this year’s International Conference on Machine Learning (ICML2025), Jaeho Kim, Yunseok Lee and Seulki Lee won an outstanding position paper award for their work Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards. We hear from Jaeho about the problems they were trying to address, and their proposed author feedback mechanism and reviewer reward system.

Could you say something about the problem that you address in your position paper?

Our position paper addresses the problems plaguing current AI conference peer review systems, while also raising questions about the future direction of peer review.

The imminent problem with the current peer review system in AI conferences is the exponential growth in paper submissions driven by increasing interest in AI. To put this in numbers, NeurIPS received over 30,000 submissions this year, while ICLR saw a 59.8% increase in submissions in just one year. This huge increase has created a fundamental mismatch: while paper submissions grow exponentially, the pool of qualified reviewers has not kept pace.

Submissions to some of the major AI conferences over the past few years.

This imbalance has severe consequences. The majority of papers are no longer receiving adequate review quality, undermining peer review’s essential function as a gatekeeper of scientific knowledge. When the review process fails, inappropriate papers and flawed research can slip through, potentially polluting the scientific record.

Considering AI’s profound societal impact, this breakdown in quality control poses risks that extend far beyond academia. Poor research that enters the scientific discourse can mislead future work, influence policy decisions, and ultimately hinder genuine knowledge advancement. Our position paper focuses on this critical question and proposes methods on how we can enhance the quality of review, thus leading to better dissemination of knowledge.

What do you argue for in the position paper?

Our position paper proposes two major changes to tackle the current peer review crisis: an author feedback mechanism and a reviewer reward system.

First, the author feedback system enables authors to formally evaluate the quality of reviews they receive. This system allows authors to assess reviewers’ comprehension of their work, identify potential signs of LLM-generated content, and establish basic safeguards against unfair, biased, or superficial reviews. Importantly, this isn’t about penalizing reviewers, but rather creating minimal accountability to protect authors from the small minority of reviewers who may not meet professional standards.

Second, our reviewer incentive system provides both immediate and long-term professional value for quality reviewing. For short-term motivation, author evaluation scores determine eligibility for digital badges (such as “Top 10% Reviewer” recognition) that can be displayed on academic profiles like OpenReview and Google Scholar. For long-term career impact, we propose novel metrics like a “reviewer impact score” – essentially an h-index calculated from the subsequent citations of papers a reviewer has evaluated. This treats reviewers as contributors to the papers they help improve and validates their role in advancing scientific knowledge.
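The paper's exact formula isn't reproduced here, but a minimal sketch of such a "reviewer impact score" is easy to write down, assuming it is computed like a standard h-index over the citation counts of the papers a reviewer evaluated (function name and details are illustrative, not from the paper):

```python
def reviewer_impact_score(citation_counts):
    """Hypothetical 'reviewer impact score': an h-index over the
    citation counts of the papers a reviewer evaluated. Returns the
    largest h such that at least h of those papers have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A reviewer whose five reviewed papers were later cited
# [10, 8, 5, 4, 3] times would score 4: four of the papers
# have at least four citations each.
```

As with the publication h-index, the score rewards sustained involvement with papers that go on to matter, rather than a one-off lucky assignment.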

Could you tell us more about your proposal for this new two-way peer review method?

Our proposed two-way peer review system makes one key change to the current process: we split review release into two phases.

The authors’ proposed modification to the peer-review system.

Currently, authors submit papers, reviewers write complete reviews, and all reviews are released at once. In our system, authors first receive only the neutral sections – the summary, strengths, and questions about their paper. Authors then provide feedback on whether reviewers properly understood their work. Only after this feedback do we release the second part containing weaknesses and ratings.
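The split described above can be sketched as a small gate on the review object. This is our own illustrative sketch, not code from the paper; all class and method names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    # Phase 1: neutral sections, released to authors right away.
    summary: str
    strengths: str
    questions: str
    # Phase 2: withheld until the author has submitted feedback.
    weaknesses: str
    rating: int
    author_feedback: Optional[str] = None

    def phase_one(self) -> dict:
        """The neutral sections the authors see first."""
        return {"summary": self.summary,
                "strengths": self.strengths,
                "questions": self.questions}

    def submit_feedback(self, feedback: str) -> None:
        """Authors report whether the reviewer understood the work."""
        self.author_feedback = feedback

    def phase_two(self) -> dict:
        """Weaknesses and rating, gated on author feedback."""
        if self.author_feedback is None:
            raise PermissionError("Phase 2 is released only after author feedback.")
        return {"weaknesses": self.weaknesses, "rating": self.rating}
```

The key property is that `phase_two` cannot be read until `submit_feedback` has run, which is exactly the ordering constraint the proposal adds to the current pipeline.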

This approach offers three main benefits. First, it’s practical – we don’t need to change existing timelines or review templates. The second phase can be released immediately after the authors give feedback. Second, it protects authors from irresponsible reviews since reviewers know their work will be evaluated. Third, since reviewers typically review multiple papers, we can track their feedback scores to help area chairs identify (ir)responsible reviewers.

The key insight is that authors know their own work best and can quickly spot when a reviewer hasn’t properly engaged with their paper.

Could you talk about the concrete reward system that you suggest in the paper?

We propose both short-term and long-term rewards to address reviewer motivation, which naturally declines over time despite starting enthusiastically.

Short-term: Digital badges displayed on reviewers’ academic profiles, awarded based on author feedback scores. The goal is making reviewer contributions more visible. While some conferences list top reviewers on their websites, these lists are hard to find. Our badges would be prominently displayed on profiles and could even be printed on conference name tags.
Example of a badge that could appear on profiles.

Long-term: Numerical metrics to quantify reviewer impact at AI conferences. We suggest tracking measures like an h-index for reviewed papers. These metrics could be included in academic portfolios, similar to how we currently track publication impact.

The core idea is creating tangible career benefits for reviewers while establishing peer review as a professional academic service that rewards both authors and reviewers.

What do you think could be some of the pros and cons of implementing this system?

The benefits of our system are threefold. First, it is a very practical solution. Our approach doesn’t change current review schedules or review burdens, making it easy to incorporate into existing systems. Second, it encourages reviewers to act more responsibly, knowing their work will be evaluated. We emphasize that most reviewers already act professionally – however, even a small number of irresponsible reviewers can seriously damage the peer review system. Third, with sufficient scale, author feedback scores will make conferences more sustainable. Area chairs will have better information about reviewer quality, enabling them to make more informed decisions about paper acceptance.

However, there is strong potential for gaming by reviewers. Reviewers might optimize for rewards by giving overly positive reviews. Measures to counteract these problems are definitely needed. We are currently exploring solutions to address this issue.

Are there any concluding thoughts you’d like to add about the potential future of conferences and peer review?

One emerging trend we’ve observed is the increasing discussion of LLMs in peer review. While we believe current LLMs have several weaknesses (e.g., prompt injection, shallow reviews), we also think they will eventually surpass humans. When that happens, we will face a fundamental dilemma: if LLMs provide better reviews, why should humans be reviewing? Just as the rapid rise of LLMs caught us unprepared and created chaos, we cannot afford a repeat. We should start preparing for this question as soon as possible.

About Jaeho

Jaeho Kim is a Postdoctoral Researcher at Korea University with Professor Changhee Lee. He received his Ph.D. from UNIST under the supervision of Professor Seulki Lee. His main research focuses on time series learning, particularly developing foundation models that generate synthetic and human-guided time series data to reduce computational and data costs. He also contributes to improving the peer review process at major AI conferences, with his work recognized by the ICML 2025 Outstanding Position Paper Award.

Read the work in full

Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards, Jaeho Kim, Yunseok Lee, Seulki Lee.

ChatGPT Competitor Amps-Up Performance

Users on higher tier plans can now use the Claude chatbot to do intensive research on the Web, bring back raw data and then transform what it finds into written insights, statistical analysis and charts.

Currently, access to the new feature is available to Claude Max users and Claude Team users – with access for Claude Pro users promised soon, according to writer Emilia David.

Meanwhile, Claude has also been outfitted with a new memory feature for its Team and Enterprise users, which enables the app to remember projects, preferences and priorities.

In other news and analysis on AI writing:

*Major Survey App Gets AI Upgrade: SurveyMonkey – a key leader in automated surveying for years – has added a new suite of AI tools to its mix.

Users engaging in survey research with the tool can now:

–Use AI chat to surface instant insights and sophisticated data segmentation from the tool’s automated surveys

–Sift for themes in data brought back by SurveyMonkey using a new beta tool dubbed ‘Thematic Analysis.’

*AI Talking Heads Get Even More Lifelike: AI-generated, photorealistic talking heads – the kind that keep human news anchors up at night – are getting even more natural looking, according to writer Rhiannon Williams.

Observes Williams, who tried out the latest generation of AI talking heads from Synthesia: “I found the video demonstrating my avatar as unnerving as it is technically impressive.

“It’s slick enough to pass as a high-definition recording of a chirpy corporate speech. And if you didn’t know me, you’d probably think that’s exactly what it was.

“This demonstration shows how much harder it’s becoming to distinguish the artificial from the real.”

*Skepticism Over the ‘Magic’ of AI Agents Persists: Despite blue-sky promises, AI agents – designed in a perfect world to handle tasks autonomously for you on the Web and elsewhere – are still getting a bad rap.

Observes writer Rory Bathgate: “Let’s be very clear here: AI agents are still not very good at their ‘jobs’, or at least pretty terrible at producing returns on investment.”

In fact, tech market research firm Gartner is predicting that 40% of agents currently used by business will be ‘put out to pasture’ by 2027.

*Top 20 Tools in AI Search Optimization (SEO): India-based business pub OfficeChai has come out with its list of the best AI tools right now for SEO.

Here are the top five:

–Surfer SEO
–Jasper
–Semrush
–MarketMuse
–Frase.io

*Embracing AI: A Leadership Guide: ChatGPT-maker OpenAI – which knows a thing or two about the tech – is out with a new guide for business leaders considering bringing in AI.

The easy-to-read 15-page guide offers tips on bringing management and staff onboard, ramping up and making the most of the tech.

The guide also features links to a number of key AI reports and case studies of successful AI implementations.

*OpenAI’s Speech-to-Text AI Gets Some Polish: Whisper – a speech-to-text transcriber from ChatGPT’s maker – just got more accurate.

Thanks to an upgrade from a group of outside researchers, the app is now much better at transcribing speech in real time.

Even better, the tech is now able to deliver those transcriptions when run on everyday office computers.

*Microsoft Adds ChatGPT Competitor’s Tech to Office 365: In an interesting move, Microsoft is adding AI from ChatGPT rival Anthropic to some features of its Office 365.

Specifically, Microsoft will be injecting Anthropic’s AI – which runs the Claude chatbot – into Office 365 apps like Excel, PowerPoint and Word.

Currently, Microsoft uses AI from a number of AI leaders to help run Office 365 and its in-house chatbot, Copilot.

*Oracle’s AI Play Stuns Investors: Half-century-old Oracle – a provider of database and cloud software – has suddenly emerged as a key player in AI.

The company – which helps companies like ChatGPT’s maker run their AI – announced last week that many of those AI contracts should swell its cloud revenue to $114 billion by 2029.

The result: Oracle’s stock, already up 45% for 2025, surged another 40% in just one day last week, according to writer Dan Gallagher.

*AI Big Picture: Arab Nation UAE Joins AI Open Source Movement: United Arab Emirates has released open source AI – or AI available for anyone to use for free – that it says competes with the latest AI from ChatGPT’s maker.

Observes writer Cade Metz: “The Emirates is among several nations pouring billions of dollars into computer data centers and research to compete with leading nations like the United States and China in artificial intelligence.

“Countries such as Saudi Arabia and Singapore are embracing the idea that A.I. is so important that each should have its own version of the technology.”


Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.



When robots are integrated into household spaces and rituals, they acquire emotional value

Social companion robots are no longer just science fiction. In classrooms, libraries and homes, these small machines are designed to read stories, play games or offer comfort to children. They promise to support learning and companionship, yet their role in family life often extends beyond their original purpose.

Humans sense a collaborating robot as part of their ‘extended’ body

Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (U.S.) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap.

Apertus: a fully open, transparent, multilingual language model

By Melissa Anchisi and Florian Meyer

In July, EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS) announced their joint initiative to build a large language model (LLM). Now, this model is available and serves as a building block for developers and organisations for future applications such as chatbots, translation systems, or educational tools.

The model is named Apertus – Latin for “open” – highlighting its distinctive feature: the entire development process, including its architecture, model weights, and training data and recipes, is openly accessible and fully documented.

AI researchers, professionals, and experienced enthusiasts can either access the model through the strategic partner Swisscom or download it from Hugging Face – a platform for AI models and applications – and deploy it for their own projects. Apertus is freely available in two sizes – featuring 8 billion and 70 billion parameters, the smaller model being more appropriate for individual usage. Both models are released under a permissive open-source license, allowing use in education and research as well as broad societal and commercial applications.

A fully open-source LLM

As a fully open language model, Apertus allows researchers, professionals and enthusiasts to build upon the model and adapt it to their specific needs, as well as to inspect any part of the training process. This distinguishes Apertus from models that make only selected components accessible.

“With this release, we aim to provide a blueprint for how a trustworthy, sovereign, and inclusive AI model can be developed,” says Martin Jaggi, Professor of Machine Learning at EPFL and member of the Steering Committee of the Swiss AI Initiative. The model will be regularly updated by the development team which includes specialized engineers and a large number of researchers from CSCS, ETH Zurich and EPFL.

A driver of innovation

With its open approach, EPFL, ETH Zurich and CSCS are venturing into new territory. “Apertus is not a conventional case of technology transfer from research to product. Instead, we see it as a driver of innovation and a means of strengthening AI expertise across research, society and industry,” says Thomas Schulthess, Director of CSCS and Professor at ETH Zurich. In line with their tradition, EPFL, ETH Zurich and CSCS are providing both foundational technology and infrastructure to foster innovation across the economy.

Trained on 15 trillion tokens across more than 1,000 languages – 40% of the data is non-English – Apertus includes many languages that have so far been underrepresented in LLMs, such as Swiss German, Romansh, and many others.

“Apertus is built for the public good. It stands among the few fully open LLMs at this scale and is the first of its kind to embody multilingualism, transparency, and compliance as foundational design principles”, says Imanol Schlag, technical lead of the LLM project and Research Scientist at ETH Zurich.

“Swisscom is proud to be among the first to deploy this pioneering large language model on our sovereign Swiss AI Platform. As a strategic partner of the Swiss AI Initiative, we are supporting the access of Apertus during the Swiss {ai} Weeks. This underscores our commitment to shaping a secure and responsible AI ecosystem that serves the public interest and strengthens Switzerland’s digital sovereignty”, commented Daniel Dobos, Research Director at Swisscom.

Accessibility

While setting up Apertus is straightforward for professionals and proficient users, additional components such as servers, cloud infrastructure or specific user interfaces are required for practical use. The upcoming Swiss {ai} Weeks hackathons will be the first opportunity for developers to experiment hands-on with Apertus, test its capabilities, and provide feedback for improvements to future versions.

Swisscom will provide a dedicated interface to hackathon participants, making it easier to interact with the model. As of today, Swisscom business customers will be able to access the Apertus model via Swisscom’s sovereign Swiss AI platform.

Furthermore, for people outside of Switzerland, the Public AI Inference Utility will make Apertus accessible as part of a global movement for public AI. “Currently, Apertus is the leading public AI model: a model built by public institutions, for the public interest. It is our best proof yet that AI can be a form of public infrastructure like highways, water, or electricity,” says Joshua Tan, Lead Maintainer of the Public AI Inference Utility.

Transparency and compliance

Apertus is designed with transparency at its core, thereby ensuring full reproducibility of the training process. Alongside the models, the research team has published a range of resources: comprehensive documentation and source code of the training process and datasets used, model weights including intermediate checkpoints – all released under the permissive open-source license, which also allows for commercial use. The terms and conditions are available via Hugging Face.

Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data that is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data and other undesired content before training begins.

The beginning of a journey

“Apertus demonstrates that generative AI can be both powerful and open,” says Antoine Bosselut, Professor and Head of the Natural Language Processing Laboratory at EPFL and Co-Lead of the Swiss AI Initiative. “The release of Apertus is not a final step, rather it’s the beginning of a journey, a long-term commitment to open, trustworthy, and sovereign AI foundations, for the public good worldwide. We are excited to see developers engage with the model at the Swiss {ai} Weeks hackathons. Their creativity and feedback will help us to improve future generations of the model.”

Future versions aim to expand the model family, improve efficiency, and explore domain-specific adaptations in fields like law, climate, health and education. They are also expected to integrate additional capabilities, while maintaining strong standards for transparency.

Robots to the rescue: miniature robots offer new hope for search and rescue operations

Small two-wheeled robots, equipped with high-tech sensors, will help to find survivors faster in the aftermath of disasters. © Tohoku University, 2023.

By Michael Allen

In the critical 72 hours after an earthquake or explosion, a race against the clock begins to find survivors. After that window, the chances of survival drop sharply.

When a powerful earthquake hit central Italy on 24 August 2016, killing 299 people, over 5 000 emergency workers were mobilised in search and rescue efforts that saved dozens from the rubble in the immediate aftermath.

The pressure to move fast can create risks for first responders, who often face unstable environments with little information about the dangers ahead. But this type of rescue work could soon become safer and more efficient thanks to a joint effort by EU and Japanese researchers.

Supporting first responders

Rescue organisations, research institutes and companies from both Europe and Japan worked together from 2019 to 2023 to develop a new generation of tools blending robotics, drone technology and chemical sensing to transform how emergency teams operate in disaster zones.


Their work was part of a four-year EU-funded international research initiative called CURSOR, which included partners from six EU countries, Norway and the UK. It also included Tohoku University, whose involvement was funded by the Japan Science and Technology Agency.

The researchers hope that the sophisticated rescue kit they have developed will help rescue workers locate trapped survivors faster, while also improving their own safety.

“In the field of search and rescue, we don’t have many technologies that support first responders, and the technologies that we do have, have a lot of limitations,” said Tiina Ristmäe, a research coordinator at the German Federal Agency for Technical Relief and vice president of the International Forum to Advance First Responder Innovation.

Meet the rescue bots

At the heart of the researchers’ work is a small robot called the Soft Miniaturised Underground Robotic Finder (SMURF). The robot is designed to navigate through collapsed buildings and rubble piles to locate people who may be trapped underneath.

The idea is to allow rescue teams to do more of their work remotely, locating trapped people in the most hazardous areas during the early stages of a rescue operation. The SMURF can be remotely controlled by operators who stay at a safe distance from the rubble.

“It is a prototype technology that did not exist before,” said Ristmäe. “We don’t send people, we send machines – robots – to do the often very dangerous job.”

The SMURF is compact and lightweight, with a two-wheel design that allows it to manoeuvre over debris and climb small obstacles.

“It moves and drops deep into the debris to find victims, with multiple robots covering the whole rubble pile,” said Professor Satoshi Tadokoro, a robotics expert at Tohoku University and one of the project’s lead scientists.

The development team tested many designs before settling on the final SMURF prototype.

“We investigated multiple options – multiple wheels or tracks, flying robots, jumping robots – but we concluded that this two-wheeled design is the most effective,” said Tadokoro.

Sniffing for survivors

The SMURF’s small “head” is packed with technology: video and thermal cameras, microphones and speakers for two-way communication, and a powerful chemical sensor known as the SNIFFER.

This sensor is capable of detecting substances that humans naturally emit, such as CO2 and ammonia, and can even distinguish between living and deceased individuals.

Put to the test in real-world conditions, the SNIFFER has proved able to provide reliable information even when surrounded by competing stimuli, like smoke or rain.

According to the first responders who worked with the researchers, the information provided by the SNIFFER is highly valuable: it helps them to prioritise getting help to those who are still alive, said Ristmäe.
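The article does not publish the SNIFFER's actual decision logic, but the idea of fusing the two gas readings it mentions can be illustrated with a toy rule. Everything here – the function name, the thresholds, the baseline value – is a hypothetical sketch, not the real algorithm:

```python
def likely_survivor(co2_ppm: float, ammonia_ppm: float,
                    ambient_co2_ppm: float = 420.0) -> bool:
    """Toy fusion rule, NOT the actual SNIFFER algorithm: flag a
    possible living person when CO2 is well above the ambient baseline
    (exhaled breath accumulates in void spaces) and ammonia, which is
    emitted by human sweat and breath, is also detected.
    All thresholds are illustrative."""
    co2_elevated = co2_ppm > 1.5 * ambient_co2_ppm
    ammonia_present = ammonia_ppm > 0.1
    return co2_elevated and ammonia_present
```

Requiring both signals, rather than either one alone, is one simple way such a sensor could stay reliable amid competing stimuli like smoke or rain.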

Drone delivery

To further improve the reach of the SMURF, the researchers also integrated drone support into the system. Customised drones are used to deliver the robots directly to the areas where they’re needed most – places that may be hard or dangerous to access on foot.


“You can transport several robots at the same time and drop them in different locations,” said Ristmäe.

Alongside these delivery drones, the CURSOR team developed a fleet of aerial tools designed to survey and assess disaster zones. One of the drones, dubbed the “mothership,” acts as a flying communications hub, linking all the devices on the ground with the rescue team’s command centre.

Other drones carry ground-penetrating radar to detect victims buried beneath debris. Additional drones capture overlapping high-definition footage that can be stitched together into detailed 3D maps of the affected area, helping teams to visualise the layout and plan their operations more strategically.

Along with speeding up search operations, these steps should slash the time emergency workers spend in dangerous locations like collapsed buildings.

Testing in the field

The combined system has already undergone real-world testing, including large-scale field trials in Japan and across Europe.

One of the most comprehensive tests took place in November 2022 in Afidnes, Greece, where the full range of CURSOR technologies was used in a simulated disaster scenario.

Though not yet commercially available, the prototype rescue kit has sparked global interest.

“We’ve received hundreds of requests from people wanting to buy it,” said Ristmäe. “We have to explain it’s not deployable yet, but the demand is there.”

The CURSOR team hopes to secure more funding to further enhance the technology and eventually bring it to market, potentially transforming the future of disaster response.

Research in this article was funded by the EU’s Horizon Programme. The views of the interviewees don’t necessarily reflect those of the European Commission. If you liked this article, please consider sharing it on social media.


This article was originally published in Horizon, the EU Research and Innovation magazine.
