Tackling the 3D Simulation League: an interview with Klaus Dorer and Stefan Glaser
A screenshot from the new simulator that will be trialled for a special challenge at RoboCup 2025.
The annual RoboCup event, where teams gather from across the globe to take part in competitions across a number of leagues, will this year take place in Brazil, from 15-21 July. In advance of kick-off, we spoke to two members of the RoboCup Soccer 3D Simulation League: Executive Committee Member Klaus Dorer, and Stefan Glaser, who is on the Maintenance Committee and has recently been developing a new simulator for the League.
Could you start by just giving us a quick introduction to the Simulation League?
Klaus Dorer: There are two Simulation Leagues in Soccer: the 2D Simulation League and the 3D Simulation League. The 2D Simulation League, as the name suggests, is a flat league where the players and ball are simulated with simplified physics, and the main focus is on team strategy. The 3D Simulation League is much closer to real robots; it simulates 11 versus 11 Nao robots. The level of control is the same as with real robots: you move each motor of the legs, the arms and so on to achieve movement.
I understand that you have been working on a new simulator for the 3D League. What was the idea behind this new simulator?
Klaus: The aim is to bring us closer to the hardware leagues so that the simulator can be more useful. The current simulator that we use in the 3D Simulation League is called SimSpark. It was created in the early 2000s with the aim of making it possible to play 11 vs 11 players. With the hardware constraints of that time, there had to be some compromises on the physics to be able to simulate 22 players at the same time. So the simulation is physically somewhat realistic, but not in the sense that it’s easy to transpose it to a real Nao robot.
Stefan Glaser: The idea for developing a new simulator has been around for a few years. SimSpark is a very powerful simulation framework. The base framework is domain independent (not soccer specific), and specific simulations are realized via plugins. It supports multiple physics engines in the backend and provides a flexible scripting interface for configuring and adapting the simulation. However, all this flexibility comes at the price of complexity. In addition, SimSpark uses custom robot model specifications and communication protocols, limiting the number of available robot models and requiring teams to develop custom communication layers solely for talking to SimSpark. As a result, SimSpark has not been widely adopted in the RoboCup community.
With the new simulator, I would like to address these two major issues: complexity and standardization. In the ML community, the MuJoCo physics engine has become a very popular choice for learning environments after Google DeepMind acquired it and released it open source. Its standards for world and robot model specifications are widely adopted, and there are many ready-to-use robot model specifications for a wide variety of virtual as well as real-world robots. In the middle of last year, the MuJoCo team added a feature which allows you to manipulate the world representation during simulation (adding objects to, and removing them from, the simulation while preserving the simulation state). This is one essential requirement we have in the Simulation League, where we start with an empty field and the agents then connect on demand and form the teams. When this feature was added, I decided to take the plunge and try to implement a new simulator for the 3D Simulation League based on MuJoCo. Initially, I wanted to start development in C/C++ to achieve maximum performance, but then decided to start in Python to reduce complexity and make it more accessible for other developers. I started development on Easter Monday, so it's not even three months old!
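To make that feature concrete, here is a minimal sketch of adding an object to a running simulation while keeping its state, assuming MuJoCo's MjSpec model-editing API from recent releases; the field and ball definitions below are placeholders, not the simulator's actual scene files.

```python
import mujoco

# Start from an (almost) empty world: just the pitch.
spec = mujoco.MjSpec.from_string("""
<mujoco>
  <worldbody>
    <geom name="pitch" type="plane" size="15 10 0.1"/>
  </worldbody>
</mujoco>
""")
model = spec.compile()
data = mujoco.MjData(model)

# Simulate the empty field for a while.
for _ in range(100):
    mujoco.mj_step(model, data)

# An agent connects on demand: add a ball to the live world.
ball = spec.worldbody.add_body(name="ball", pos=[0, 0, 0.5])
ball.add_freejoint()
ball.add_geom(type=mujoco.mjtGeom.mjGEOM_SPHERE, size=[0.11, 0, 0], mass=0.45)

# Recompile, preserving the state of everything already simulated.
model, data = spec.recompile(model, data)
```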
I think it might be useful to explain a little more about the setup of our league and the requirements on the simulator. If we take the FIFA game (on your favorite gaming device) as an example, there is one simulation which simulates all 22 players, and the decision making is part of the simulation, with full access to the state of the world. In the 3D Simulation League we have two teams of 11 robots on the field, but we also have 22 individual agent programs connected to the simulation server, each controlling one single robot. Each connected agent only receives sensor information related to its own robot in the simulation. Agents are also only allowed to communicate via the server – no direct communication between agents is allowed in the Simulation League. So we have a general setup where the simulation server has to be able to accept up to 22 connections and manage them. This functionality has been my major focus for the last couple of months, and this part is already working well. Teams can connect their agents, which receive sensor information and can actuate the joints of their robot in the simulation, and so on. They are also able to select different robot models if they like.
An illustration of the simulator set-up.
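This client/server split is perhaps easiest to see in code. Below is a rough sketch of an agent connecting, assuming the SimSpark-style wire protocol (s-expression messages with a 4-byte big-endian length prefix, agent port 3100) that the first version of the new simulator reuses; the exact message strings are illustrative.

```python
import socket
import struct

def send_message(sock: socket.socket, msg: str) -> None:
    """Frame a message SimSpark-style: 4-byte big-endian length, then payload."""
    payload = msg.encode("ascii")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_message(sock: socket.socket) -> str:
    """Read one length-prefixed message from the server."""
    header = b""
    while len(header) < 4:
        header += sock.recv(4 - len(header))
    (length,) = struct.unpack(">I", header)
    body = b""
    while len(body) < length:
        body += sock.recv(length - len(body))
    return body.decode("ascii")

# One agent process controls exactly one robot; up to 22 of these
# clients connect, and the server manages the shared world state.
sock = socket.create_connection(("localhost", 3100))
send_message(sock, "(scene rsg/agent/nao/nao.rsg)")    # request a robot model
send_message(sock, "(init (unum 7)(teamname magma))")  # join a team as player 7
while True:
    perception = recv_message(sock)  # this robot's sensors only, as s-expressions
    # ... parse perception, decide, then actuate joints, e.g.:
    send_message(sock, "(he1 0.0)")  # drive one joint (effector names per protocol)
```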
Presumably the new simulator has a better representation of the physics of a real robot.
Klaus: Exactly. For example, how the motors are controlled is now a bit different and much closer to real robots. So when I did my first experiments, I saw the robot collapse and I thought it was exactly how a real robot would collapse! In SimSpark we also had falling robots but the motor control in the new simulator is different. Now you can control the motors by speed, by force, by position, which is much more flexible – it’s closer to what we know from real robots.
I think that, at least initially, it will be more difficult for the Simulation League teams to get the robots to do what they want them to do, because it's more realistic. For example, in SimSpark the ground contact was much more forgiving: if you step hard on the ground, a SimSpark robot doesn't fall immediately, but a MuJoCo robot will behave much more realistically. Indeed, in real robots ground contact is somewhat less forgiving.
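As a concrete example of those control modes, the sketch below defines a single hinge joint with the three MuJoCo actuator types Klaus mentions (torque, position, velocity); the joint geometry and gain values are illustrative, not taken from the actual robot models.

```python
import mujoco

# Three actuator types on the same hinge joint: direct torque
# ("motor"), a position servo, and a velocity servo.
XML = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <joint name="knee" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.04" fromto="0 0 0 0 0 -0.3"/>
    </body>
  </worldbody>
  <actuator>
    <motor    joint="knee" name="knee_torque"/>
    <position joint="knee" name="knee_pos" kp="50"/>
    <velocity joint="knee" name="knee_vel" kv="5"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# data.ctrl holds one setpoint per actuator, in declaration order:
# torque [Nm], target angle [rad], target angular velocity [rad/s].
data.ctrl[:] = [0.0, 0.8, 0.0]
for _ in range(500):  # ~1 s at the default 2 ms timestep
    mujoco.mj_step(model, data)
print(f"knee angle after 1 s: {data.qpos[0]:.3f} rad")
```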
I had a question about the vision aspect – how do the individual agents “see” the position of the other agents on the field?
Stefan: We simulate a virtual vision pipeline on the server side. You have a restricted field of view of ±60° horizontally and vertically. Within that field of view you will detect the head, the arms and the feet of other players, the ball, or different features of the field, for example. As in common real-world vision pipelines, each detection consists of a label, a direction vector and distance information. The information carries some noise, just as it does on real robots, but teams don't need to process camera images – they get the detections directly from the simulation server.
We've previously discussed moving towards getting camera images of the simulation to integrate into the vision pipeline on the agent side. This was never really realistic in SimSpark with the implementation we had there, but it should be possible with MuJoCo. For the first version, though, I treated vision the same way the traditional simulator did. This means that teams don't need to train a vision model or handle camera images to get started, which reduces the load significantly and shifts the focus of the problem towards motion and decision making.
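To give a flavor of what agents receive, here is a hypothetical sketch of such a detection record and the server-side field-of-view filtering and noise; the field names and noise magnitudes are invented for illustration and are not the simulator's actual format.

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "ball", "player_head", "goal_post"
    azimuth: float    # horizontal angle to the object, in degrees
    elevation: float  # vertical angle, in degrees
    distance: float   # range to the object, in meters

FOV = 60.0  # field of view: +/- 60 degrees horizontally and vertically

def perceive(objects, angle_std=0.5, range_std=0.02):
    """Return noisy detections for the objects inside the field of view."""
    detections = []
    for label, az, el, dist in objects:
        if abs(az) > FOV or abs(el) > FOV:
            continue  # outside the restricted field of view
        detections.append(Detection(
            label=label,
            azimuth=az + random.gauss(0.0, angle_std),
            elevation=el + random.gauss(0.0, angle_std),
            # range noise grows with distance, as on real robots
            distance=dist * (1.0 + random.gauss(0.0, range_std)),
        ))
    return detections

print(perceive([("ball", 12.0, -5.0, 3.4), ("player_head", 80.0, 0.0, 6.0)]))
```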
Will the simulator be used at RoboCup 2025?
Stefan: We plan to have a challenge with the new simulator, and I will try to provide some demo games. At the moment it's not really in a state where you could play a whole competition.
Klaus: That’s usually how we proceed with new simulators. We would not move from one to the other without any intermediate step. We will have a challenge this year at RoboCup 2025 with the new MuJoCo simulator where each participating team will try to teach the robot to kick as far as possible. So, we will not be playing a whole game, we won’t have multiple robots, just a single robot stepping in front of the ball and kicking the ball. That’s the technical challenge for this year. Teams will get an idea of how the simulator works, and we’ll get an idea of what has to be changed in the simulator to proceed.
This new challenge will be voluntary, so we are not sure how many teams will participate. Our team (magmaOffenburg) will certainly take part. It will be interesting to see how well the teams perform, because no one knows how far a good kick is in this simulator. It's a bit like Formula One when the rules change and no one knows which team will be the leading team.
Do you have an idea of how much adaptation teams will have to make if and when you move to the new simulator for the full matches?
Stefan: As a long-term member of the 3D Simulation League, I know the old simulator SimSpark pretty well, including the protocols involved and how the processes work. So the first version of the new simulator is designed to use the same basic protocol, the same sensor information, and so on. The idea is that teams can use the new simulator with minimal effort in adapting their current agent software, so they should be able to get started pretty fast.
That said, when designing a new platform I would like to take the opportunity to take a step forward in terms of protocols, because I also want to integrate other leagues in the long term. They usually have other control mechanisms, and they don't use the protocol that is prominent in 3D Simulation, so there has to be some flexibility in the future. But for the first version, the idea was to get the Simulation League ready with minimal effort.
Klaus: The big idea is that this is not just used in the 3D Simulation League, but also serves as a useful simulator for the Humanoid League and the Standard Platform League (SPL). If that turns out to be true, then it will be completely successful. For the Kick Challenge this year, for example, we use a T1 robot, which is a Humanoid League robot.
Could you say something about this simulation to real world (Sim2Real) aspect?
Stefan: We’d like it to be possible for the motions and behaviors in the simulator to be ported to real robots. From my point of view, it would be useful the other way round too.
We, as a Simulation League, usually develop for the Simulation League and therefore would like to get our behaviors running on a real robot. But the hardware teams usually have a similar issue when they want to test high-level decision making. They might have two to five robots on the field, and if they want to play a high-level decision-making match and train in that regard, they always have to deploy a lot of robots. If they also want an opponent, they have to double the number of robots in order to play a game and see how the strategy turns out. The Sim2Real aspect is also interesting for these teams, because they should be able to take what they deployed on the real robot and have it work in the simulation too. They can then use the simulation to train high-level skills like team play, player positioning and so on, which is a challenging aspect for the real-robot leagues like the SPL or the Humanoid Leagues.
Klaus: And the reason we know this is because we have a team in the Simulation League and we have a team in the Humanoid League. So that’s another reason why we are keen to bring these things closer together.
How does the refereeing work in the Simulation League?
Klaus: A nice thing about the Simulation Leagues is that there is a program which knows the true state of the world, so we can build the referee into the simulator and it will not fail. For things like offside, or whether the ball crossed the goal line, it's fail-safe. All the referee decisions are taken by the system itself; we have a human referee, but they never need to intervene. However, there are situations where we would like artificial intelligence to play a role. This is not currently the case in SimSpark because the rules are all hard-coded, and we have a lot of fouls that are debatable: many fouls are called that teams agree should not have been fouls, and others that should have been called are not. It would be a nice AI learning task to have some situations judged by human referees and then train an AI model to better determine what is and isn't a foul. But this is currently not the case.
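As a sketch of the learning task Klaus describes (entirely hypothetical: the features, data and model choice are invented for illustration), human-labeled situations could feed a standard classifier:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each contact situation is summarized by a few features, and human
# referees supply the foul / no-foul label. Features and values are
# invented for illustration, not taken from any league specification.
# Columns: closing speed (m/s), contact duration (s), distance of the
# tackling player to the ball (m), tackle from behind (0/1).
X = np.array([
    [0.6, 0.10, 0.1, 0],
    [1.8, 0.30, 0.2, 0],
    [2.9, 0.60, 2.0, 1],
    [3.5, 0.45, 1.5, 1],
])
y = np.array([0, 0, 1, 1])  # labels collected from human referees

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict_proba([[2.5, 0.4, 1.0, 1]]))  # [P(no foul), P(foul)]
```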
Stefan: I am not yet far enough into the development of the new simulator to have implemented the automatic referee. I have a basic set of rules which progress the game as such, but judging fouls and deciding on special situations is not yet implemented.
What are the next steps for developing the simulator?
Stefan: One of the next major steps will be to refine the physics simulation. For instance, even though there is a ball in the simulation, it is not yet really well refined. There are a lot of physics parameters which we have to decide on to reflect the real world as well as possible, and this will likely require a series of experiments to arrive at the correct values for various aspects. Here I'm hoping for some engagement from the community, as it is a great research opportunity and I would personally prefer the community to decide on a commonly accepted parameter set, based on a level of evidence that I can't easily provide all by myself. So if anyone is interested in refining the physics of the simulation so that it best reflects the real world, you are welcome to join!
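For anyone tempted to join that effort, the experiment loop can be as simple as a grid search over candidate parameters against measured trajectories. A hypothetical sketch follows; the friction values and the "measured" rolling distances are placeholders, not real data.

```python
import mujoco
import numpy as np

BALL_XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="20 20 0.1"/>
    <body name="ball" pos="0 0 0.11">
      <freejoint/>
      <geom type="sphere" size="0.11" mass="0.45"
            friction="{slide} 0.005 {roll}"/>
    </body>
  </worldbody>
</mujoco>
"""

# Placeholder "measurement": how far a real ball rolled after a
# 6 m/s kick, sampled at 1 s intervals. Replace with experiment data.
measured = np.array([4.8, 8.1, 10.2, 11.4])

def rollout(slide, roll):
    """Simulate a kicked ball and sample its x position once per second."""
    model = mujoco.MjModel.from_xml_string(BALL_XML.format(slide=slide, roll=roll))
    data = mujoco.MjData(model)
    data.qvel[0] = 6.0  # initial kick along x
    positions = []
    steps_per_second = int(1.0 / model.opt.timestep)
    for step in range(4 * steps_per_second):
        mujoco.mj_step(model, data)
        if (step + 1) % steps_per_second == 0:
            positions.append(data.qpos[0])
    return np.array(positions)

# Pick the parameter pair whose rollout best matches the measurement.
best = min(
    ((slide, roll) for slide in (0.4, 0.6, 0.8) for roll in (0.005, 0.01, 0.02)),
    key=lambda p: np.sum((rollout(*p) - measured) ** 2),
)
print("best-fitting friction parameters:", best)
```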
Another major next step will be the development of the automatic referee for the soccer simulation: deciding on fouls, handling misbehaving agents and so on. In the first version, foul conditions will likely be judged by an expert system specifically designed for this purpose; the Simulation League has developed a set of foul condition specifications which I plan to adapt. In a second step, I would like to integrate and support the development of AI-based foul detection models. But yeah, one step after the other.
What are you particularly looking forward to at RoboCup 2025?
Klaus: Well, with our team we have been vice world champion seven times in a row. This year we are really hoping to make it to world champion. We are very experienced at losing finals, and this year we are looking forward to changing that, from a team perspective.
Stefan: I'm going to Brazil to promote the simulator, not just within the Simulation League, but also across league boundaries to the Humanoid Leagues and the SPL. I think this simulator is a great chance to bring people from all the leagues together. I'm particularly interested in the specific requirements of the teams in the different leagues; this understanding will help me tailor the new simulator towards their needs. That is one of my major highlights for this year, I would say.
You can find out more about the new simulator at the project webpage, and from the documentation.
Klaus Dorer is professor for artificial intelligence, autonomous systems and software engineering at Offenburg University, Germany. He is also a member of the Institute for Machine Learning and Analytics (IMLA). He has led the RoboCup simulation league teams magmaFreiburg (from 1999), living systems and magmaFurtwangen, and has been team leader of magmaOffenburg since 2009. Since 2014, he has also been part of the humanoid adult-size league team Sweaty.
Stefan Glaser is a teaching assistant for artificial intelligence and intelligent autonomous systems at Offenburg University, Germany. He has been part of the RoboCup simulation league team magmaOffenburg since 2009 and the RoboCup humanoid adult-size league team Sweaty since 2014.
Elon Musk’s New AI: Number One
Move over OpenAI, Elon Musk’s new AI — dubbed Grok 4 — is now top dog.
Released last week, Grok 4 has surpassed all competitors on an average of key benchmark tests, as documented by ArtificialAnalysis.ai.
X (formerly Twitter) subscribers can get access to Grok 4 via chatbot at the X Premium level ($8/month) or Premium+ level ($40/month).
There’s also a seriously enhanced version of Grok 4 that goes for a cool $300/month.
In other news and analysis on AI writing:
*Grammarly Beefs Up With AI-Powered Email: AI pioneer Grammarly, which is evolving from an AI writer/proofreader into a full-fledged AI productivity suite, is adding AI-powered email to the mix.
Specifically, the AI goliath has inked a deal to acquire AI email provider Superhuman.
Superhuman “claims its users send and respond to 72% more emails per hour,” according to writer Krystal Hu.
*Research Powerhouse Perplexity Launches ‘Comet’ AI Browser: Attempting to go one better on Google’s new ‘AI Mode,’ Perplexity is out with a new browser that delivers AI summaries in response to queries.
Observes writer Maxwell Zeff: “Users can also access Comet Assistant, a new AI agent from Perplexity that lives in the web browser and aims to automate routine tasks.
“Perplexity says the assistant can summarize emails and calendar events, manage tabs and navigate web pages on behalf of users.”
*Ready or Not, Here Come The AI Browser Wars: Writer Grant Harvey offers an excellent look at the latest wrinkle in AI research: AI-powered browsers.
Besides Perplexity’s Comet AI browser, writers can now also try out the beta version of the DIA AI browser – and should expect an AI browser from OpenAI soon, according to Harvey.
Observes Harvey: “It’s already a three-way cage match.”
*One Researcher’s Take: Dump Perplexity for Consensus AI: Academic researcher Andy Stapleton – who is rabidly fascinated by all things AI research – advises that Perplexity users should instead opt for Consensus AI.
Consensus AI is not only faster, according to this 11-minute video from Stapleton; it has also come up with a way to deliver AI research results completely devoid of AI hallucinations.
*AI Agents: Still Not Ready for Prime Time?: Add Futurism magazine to the growing list of naysayers who believe AI agents are being over-hyped.
Ideally, AI agents are designed to work independently on a number of tasks for you – such as researching, writing and continually updating an article, all on their own.
But writer Joe Wilkins finds that “the failure rate is absolutely painful,” with OpenAI’s AI agent failing 91% of the time, Meta’s AI agent failing 93% of the time and Google’s AI agent failing 70% of the time.
*Google Gemini Now Transforms An Image Into Video: A new feature added to the Gemini AI chatbot now allows you to transform any image – say a headshot of yourself – into a video.
Observes writer Jess Weatherbed: “The new photo-to-video capability is powered by Google’s Veo 3 video model.
“It can transform reference images into eight-second videos complete with AI-generated audio, including background noises, environmental sounds, and speech.”
*American Federation of Teachers: We’re All-In on AI: Looks like the debate over the wisdom of using AI in education – at least at the K-12 level in the U.S. – is over.
The American Federation of Teachers – the U.S.’ second largest teachers union – has been gifted $23 million from some of the biggest players in AI to start a National Academy for AI Instruction, based in New York City.
Observes writer Natasha Singer: “The industry funding is part of a drive by U.S. tech companies to reshape education with generative AI chatbots.”
And that. As they say. Is that.
*Google Releases ‘Gemini for Education’: Google is out with a unique version of its Gemini chatbot – designed especially for students and teachers.
Observes Akshay Kirtikar, a senior product manager at Google: “Gemini for Education provides default access to our premium AI models, soon with significantly higher limits than what consumers get at no cost, plus enterprise-grade data protection and an admin-managed experience as a core Workspace service.”
*AI BIG PICTURE: Ford CEO: 50% of Jobs Will Be Wiped Away by AI: Stick a fork in it: the days of AI as a cheery collaborator are officially but a wistful memory.
Ask Ford CEO Jim Farley — the latest of the industry titans talking tough on AI and jobs.
Farley’s version of the unvarnished truth: as many as half of all jobs will be lost to AI.
Observes writer Craig Hale: “Dario Amodei, CEO of AI giant Anthropic, also predicted that around half of entry-level, white-collar jobs could be at risk — leading to unemployment rates 10-20% higher within five years.”
Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
The post Elon Musk’s New AI: Number One appeared first on Robot Writers AI.
The three-layer AI strategy for supply chains
Everyone’s talking about AI agents and natural language interfaces. The hype is loud, and the pressure to keep up is real.
For supply chain leaders, the promise of AI isn’t just about innovation. It’s about navigating a relentless storm of disruption and avoiding costly missteps.
Volatile demand, unreliable lead times, aging systems — these aren’t abstract challenges. They’re daily operational risks.
When the foundation isn’t ready, chasing the next big thing in AI can do more harm than good. Real transformation in supply chain decision-making starts with something far less flashy: structure.
That’s why a practical, three-layer AI strategy deserves more attention. It’s a smarter path that meets supply chains where they are, not where the hype cycle wants them to be.
1. The data layer: build the foundation
Let’s be honest: if your data is chaotic, incomplete, or scattered across a dozen spreadsheets, no algorithm in the world can fix it.
This first layer is about getting your data house in order. Structured or unstructured, it has to be clean, consistent, and accessible.
That means resolving legacy-system headaches, cleaning up duplicative data, and standardizing formats so downstream AI tools don’t fail due to bad inputs.
It’s the least glamorous step, but it’s the one that determines whether your AI will produce anything useful down the line.
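What does "getting the data house in order" look like in practice? A hypothetical sketch with pandas; the file and column names are invented for illustration:

```python
import pandas as pd

# Consolidate scattered order data into one clean, standardized table.
orders = pd.read_csv("orders_export.csv")  # illustrative source file

# Standardize formats so downstream models see consistent inputs.
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
orders["sku"] = orders["sku"].str.strip().str.upper()
orders["quantity"] = pd.to_numeric(orders["quantity"], errors="coerce")

# Drop duplicates and records too broken to use downstream.
orders = orders.drop_duplicates(subset=["order_id"])
orders = orders.dropna(subset=["order_date", "sku", "quantity"])

# One consistent, accessible store for the layers above.
orders.to_parquet("orders_clean.parquet")
```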
2. The contextual layer: teach your data to think
Once you’ve locked down trustworthy data, it’s time to add context. Think of this layer as applying machine learning and predictive models to uncover patterns, trends, and probabilities.
This is where demand forecasting, lead-time estimation, and predictive maintenance start to flourish.
Instead of raw numbers, you now have data enriched with insights, the kind of context that helps planners, buyers, and analysts make smarter decisions.
It’s the muscle of your stack, turning that data foundation into something more than an archive of what happened yesterday.
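As a deliberately simple illustration of this layer, the sketch below builds a demand forecast from lagged order history; the feature choices and column names are hypothetical, following on from the data-layer sketch above:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Weekly demand per SKU, aggregated from the cleaned data layer.
demand = (
    pd.read_parquet("orders_clean.parquet")
    .groupby(["sku", pd.Grouper(key="order_date", freq="W")])["quantity"]
    .sum()
    .reset_index()
)

# Enrich raw history with context: lagged demand as model features.
for lag in (1, 2, 4):
    demand[f"lag_{lag}"] = demand.groupby("sku")["quantity"].shift(lag)
demand = demand.dropna()

features = ["lag_1", "lag_2", "lag_4"]
model = GradientBoostingRegressor().fit(demand[features], demand["quantity"])

# Planners now get a forward-looking number, not just an archive.
demand["forecast_next"] = model.predict(demand[features])
```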
3. The interactive layer: connect humans with artificial intelligence
Finally, you get to the piece everyone wants to talk about: agents, copilots, and conversational interfaces that feel futuristic.
But these tools can only deliver value if they stand on solid layers one and two.
If you rush to launch a chatbot on top of bad data and missing context, it’ll be like hiring an eager intern with no training. It might sound impressive, but it won’t help your team make better calls.
When you build an interactive layer on a trustworthy, well-contextualized data foundation, you enable planners and operators to work hand in hand with AI.
That’s when the magic happens.
Humans stay in control while offloading the repetitive grunt work to their AI helpers.
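One way to keep such an assistant honest is to let it answer only from the contextual layer's outputs rather than generate free-form answers. A hypothetical sketch (the file, columns and routing rules are invented for illustration):

```python
import pandas as pd

# Output of the contextual layer (hypothetical file and columns).
forecasts = pd.read_parquet("forecasts.parquet")  # sku, on_hand, forecast_next

def answer(question: str) -> str:
    """A deliberately narrow 'copilot': it routes questions to trusted
    layer-two outputs instead of generating free-form answers."""
    q = question.lower()
    if "reorder" in q or "forecast" in q:
        low_stock = forecasts[forecasts["forecast_next"] > forecasts["on_hand"]]
        return (
            f"{len(low_stock)} SKUs are forecast to exceed on-hand stock: "
            + ", ".join(low_stock["sku"].head(5))
        )
    return "I can only answer forecast and reorder questions right now."

print(answer("Which SKUs should we reorder this week?"))
```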
Why a layered approach beats chasing shiny things
It’s tempting to jump straight to agentic AI, especially with the hype swirling around these tools. But if you ignore the layers underneath, you risk rolling out AI that fails spectacularly — or worse, quietly undermines confidence in your systems.
A three-layer approach helps supply chain teams scale responsibly, build trust, and prioritize business impact.
It’s not about slowing down; it’s about setting yourself up to move faster, with fewer costly mistakes.
Curious how this framework looks in action?
Watch our on-demand webinar with Norfolk Iron & Metal for a deeper dive into layered AI strategies for supply chains.
The post The three-layer AI strategy for supply chains appeared first on DataRobot.