News


An interview with Nicolai Ommer: the RoboCupSoccer Small Size League

Kick-off in a Small Size League match. Image credit: Nicolai Ommer.

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event is due to take place from 15 to 21 July in Salvador, Brazil. The Soccer component of RoboCup comprises a number of Leagues, one of which is the Small Size League (SSL). We caught up with Executive Committee member Nicolai Ommer to find out more about the SSL, how the auto referees work, and how teams use AI.

Could you start by giving us a quick introduction to the Small Size League?

In the Small Size League (SSL) we have 11 robots per team – the only physical RoboCup soccer league to field the full number of players. The robots are small, cylindrical robots on omnidirectional wheels, so they can move in any direction. They are self-built by the teams, meaning teams have to do both the hardware and the programming, and a lot of things have to work together for a team to function. The AI is central. The robots are not autonomous agents; instead, each team has a central computer at the field where they do all the computation, and they send commands to the robots at different levels of abstraction. Some teams just send velocity commands, while others send a target.
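
As a rough illustration of those two abstraction levels, a team's control software might define command types along these lines. This is a minimal sketch; the type names and fields are illustrative assumptions, not an official SSL protocol.

```python
from dataclasses import dataclass

# Two levels of command abstraction, as described above. These names
# and fields are invented for illustration, not an SSL standard.

@dataclass
class VelocityCommand:
    """Low-level: the central computer streams velocities directly."""
    robot_id: int
    vx: float      # m/s, forward velocity
    vy: float      # m/s, sideways velocity (the robots are omnidirectional)
    omega: float   # rad/s, rotation rate

@dataclass
class TargetCommand:
    """Higher-level: the robot plans its own trajectory to a target pose."""
    robot_id: int
    x: float       # m, target position on the field
    y: float
    orientation: float  # rad, desired final heading
```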

We have a central vision system – this is maintained by the League, and has been since 2010. There are cameras above the field to track all the robots and the ball, so everyone knows where the robots are.

The robots can move at up to 4 meters per second (m/s); beyond that they become quite unstable. They can change direction very quickly, and the ball can be kicked at up to 6.5 m/s. It’s quite fast, and we’ve already had to limit the kick speed: previously the limit was 8 m/s, and before that 10 m/s. However, no robot can catch a ball at those speeds, so we decided to reduce the limit and put more focus on passing. This gives the keeper and the defenders a chance to actually intercept a kick.

It’s so fast that it’s quite difficult for humans to follow everything that is going on. That’s why, some years ago, we introduced auto refs, which help a lot with tracking – especially things like collisions, where the human referee can’t watch everything at the same time.

How do the auto refs work then, and is there more than one operating at the same time?

When we developed the current system, to keep things fair, we decided to have multiple implementations of an auto ref system. These independent systems implement the same rules and then we do a majority vote on the decisions.

To coordinate this we needed a component in the middle, so some years ago I started a project to build a new game controller. This is the user interface (UI) for the human referee, who sits at a computer. In the UI you see the current game state, you can manipulate the game state, and this component coordinates the auto refs. The auto refs connect to it and report fouls. If only one auto ref detects a foul, it doesn’t count. But if both auto refs report the foul within a time window, then it is counted. Part of the challenge was making all of this visible for the operator to understand. The human referee has the last word and makes the final decision.
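
As a rough illustration of that voting scheme, the coordination logic might look something like the sketch below. The class, the two-ref majority and the window length are assumptions for illustration, not the actual game-controller code.

```python
import time
from collections import defaultdict

# Sketch of the majority vote described above: a foul only counts once
# a majority of connected auto refs report it within a time window.
# MAJORITY and WINDOW_S are assumed values, not official constants.
MAJORITY = 2      # with two connected auto refs, both must agree
WINDOW_S = 2.0    # agreement window in seconds

class FoulAggregator:
    def __init__(self):
        # foul identifier -> list of (auto ref name, report timestamp)
        self._reports = defaultdict(list)

    def report(self, auto_ref: str, foul_id: str, now: float | None = None) -> bool:
        """Record one auto ref's report; return True on a majority."""
        now = time.monotonic() if now is None else now
        reports = self._reports[foul_id]
        # Drop reports that have fallen out of the agreement window.
        reports[:] = [(r, t) for r, t in reports if now - t <= WINDOW_S]
        # Each auto ref is counted at most once per foul.
        if auto_ref not in {r for r, _ in reports}:
            reports.append((auto_ref, now))
        return len(reports) >= MAJORITY
```

A single report on its own does nothing; only when the second auto ref agrees within the window is the foul raised, and the human referee still confirms it.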

We managed to establish two implementations. The aim was to have three, which would make it easier to form a majority, but the system still works with just two, and we’ve run it that way for multiple years now. The two implementations come from two different teams who are both still active.

How do the auto refs deal with collisions?

We can detect collisions from the data. However, even for human referees it’s quite hard to determine who was at fault when two robots collide. So we had to simply define a rule, and all the auto ref implementations implement the same rule. The rulebook specifies very precisely how to calculate whether a collision happened and who was at fault. The first consideration is velocity: below 1.5 m/s it’s not a collision, above 1.5 m/s it is. There is also an angle calculation that we take into account to determine which robot was at fault.
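
The precise formula lives in the rulebook; purely as an illustration of the shape of such a rule, a check might project the velocity difference onto the line between the two robots. In this sketch, the tie-break tolerance is an assumed value, not the official one.

```python
import numpy as np

CRASH_SPEED = 1.5           # m/s, the threshold mentioned above
SPEED_DIFF_TOLERANCE = 0.3  # m/s, assumed tie-break value for illustration

def check_collision(pos_a, vel_a, pos_b, vel_b):
    """Sketch: was this contact a collision, and if so, who was at fault?

    pos_* and vel_* are 2D numpy arrays in metres and metres/second.
    Returns None (no foul), or 'A', 'B' or 'both' for the offender(s).
    """
    # Only the velocity component that drives the robots into each
    # other matters, so project the difference onto the centre line.
    direction = pos_b - pos_a
    direction = direction / np.linalg.norm(direction)
    crash_speed = float(np.dot(vel_a - vel_b, direction))
    if crash_speed <= CRASH_SPEED:
        return None
    speed_a = float(np.linalg.norm(vel_a))
    speed_b = float(np.linalg.norm(vel_b))
    # If both robots were moving at nearly the same speed, blame both;
    # otherwise the faster robot is at fault.
    if abs(speed_a - speed_b) < SPEED_DIFF_TOLERANCE:
        return "both"
    return "A" if speed_a > speed_b else "B"
```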

What else do the auto refs detect?

Other fouls include exceeding the kick-speed limit, and then there are fouls relating to adherence to the normal game procedure. For example, when the other team has a free kick, the opposing robots must keep a certain distance from the ball.

The auto refs also observe non-fouls – in other words, game events. For example, when the ball leaves the field; that’s the most common event. It is actually not so easy to detect, particularly on a chip kick (where the ball leaves the playing surface). Because the cameras see the ball along a line of sight, a ball in flight is projected outward and can appear to be outside the field of play when it isn’t. You need a robust filter to deal with this.
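
The geometry is easy to see in a small sketch. The camera reports where its line of sight through the ball intersects the floor; if a flight-model filter estimates the ball’s height, that raw position can be projected back to the point under the ball. This is a minimal illustration, assuming a known camera position and an estimated ball height.

```python
import numpy as np

def ground_position(observed_xy, ball_z, cam_xy, cam_z):
    """Correct the raw vision position of an airborne ball.

    observed_xy:   where the camera ray through the ball hits the floor
    ball_z:        estimated ball height from a chip-kick flight filter
    cam_xy, cam_z: camera position (metres)
    Returns the floor point directly under the ball.
    """
    observed_xy = np.asarray(observed_xy, dtype=float)
    cam_xy = np.asarray(cam_xy, dtype=float)
    # Similar triangles along the camera ray: the raw floor intersection
    # overshoots the true ground position by a factor cam_z / (cam_z - z).
    return cam_xy + (observed_xy - cam_xy) * (cam_z - ball_z) / cam_z

# A ball 1 m up, seen by a camera 4 m above midfield, appears almost
# 5 m out along the ray even though it is really well inside that.
print(ground_position([4.9, 0.0], 1.0, [0.0, 0.0], 4.0))  # ~ [3.675, 0.0]
```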

Also, when the auto refs detect a goal, we don’t trust them completely. When a goal is detected, we call it a “possible goal”. The match is halted immediately, all the robots stop, and the human referee can check all the available data before awarding the goal.

You’ve been involved in the League for a number of years. How has the League and the performance of the robots evolved over that time?

My first RoboCup was in 2012. The introduction of the auto refs has made play a lot more fluid. Before that, we also introduced the concept of ball placement, where the robots place the ball themselves for a free kick or a kick-off, for example.

From the hardware side, the main improvement in recent years has been dribbling the ball in one-on-one situations. There has also been an improvement in the specialized skills robots perform with the ball. For example, some years ago one team (ZJUNlict) developed robots that could pull the ball backwards with them, move around defenders and then shoot at the goal. This was an unexpected movement, which we hadn’t seen before; previously you had to pass to trick the defenders. Our team, TIGERs Mannheim, has now improved in this area too. But it’s really difficult to do and requires a lot of tuning. It depends heavily on the field carpet, which is not standardized, so there’s a little bit of luck in whether your specifically built hardware actually performs well on the competition carpet.

The Small Size League Grand Final at RoboCup 2024 in Eindhoven, Netherlands. TIGERs Mannheim vs. ZJUNlict. Video credit: TIGERs Mannheim. You can find the TIGERs’ YouTube channel here.

What are some of the challenges in the League?

One big challenge – though maybe it’s also a good thing for the League – is that we have a lot of undergraduate students in the teams. These students tend to leave after their Bachelor’s or Master’s degree, so team members change quite regularly, and that makes it difficult to retain knowledge in the teams. It’s a challenge to keep up a team’s performance; it’s even hard to reproduce what previous members achieved. That’s why we don’t see large steps forward: teams have to repeat the same things whenever new members join. However, it’s good for the students, because they really learn a lot from the experience.

We are continuously working to identify things we can make available to everyone. The vision system, established in 2010, was a huge factor: teams no longer had to do their own computer vision. We are currently looking at establishing standards for wireless communication, which every team currently handles on its own. We want to advance the League, but at the same time we want to preserve its nature: teams should still be able to learn, and to build everything themselves if they want to.

You really need a team of people from different areas – mechanical engineering, electronics, project management. You also have to find sponsors, promote your project, and get interested students to join your team.

Could you talk about some of the AI elements to the League?

Most of our software is script-based, but we apply machine learning for small, subtle problems.

In my team, for example, we do model calibration with quite simple algorithms. We have a specific model for the chip kick, and another for the robot. The wheel friction is quite complicated, so we come up with a model, collect data, and use machine learning to fit the model parameters.
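
To give a flavour of what such a calibration can look like, here is a minimal sketch that fits the parameters of a toy chip-kick model to recorded samples with an off-the-shelf least-squares fit. The model form, parameter names and data are illustrative assumptions, not the team’s actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

def chip_distance(duration_ms, gain, offset):
    """Toy model: kick velocity grows linearly with the solenoid
    activation time, and a 45-degree drag-free chip then flies v^2/g."""
    v = gain * duration_ms + offset
    return v ** 2 / 9.81

# Recorded calibration samples: (commanded duration, measured distance).
durations = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # ms
distances = np.array([0.35, 1.15, 2.60, 4.55, 7.10])  # m

params, _ = curve_fit(chip_distance, durations, distances, p0=[1.0, 0.0])
print("fitted gain=%.2f (m/s per ms), offset=%.2f m/s" % tuple(params))
```

The same pattern applies to the robot and wheel-friction models: pick a parametric form, drive the robots through test motions, and fit the parameters to the recorded data.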

For the actual match strategy, one nice example is from the team CMDragons. One year you could really observe that they had trained their model so that, once they scored a goal, they upweighted the strategy they had applied just before it. You could see that the opponent reacted the same way every time, so they were able to score multiple goals using the same strategy again and again: they had learned that if a strategy worked once, they could use it again.

For our team, the TIGERs, our software is very much based on calculating scores: how good a pass is, how easily it can be intercepted, and how much a particular pass improves the situation. Some of this is hard-coded with geometry-based calculations, but there is also some fine-tuning. If we score a goal, we trace back where the passes came from and give bonuses in some of the score calculations. It’s more complicated than this, of course, but in general that’s what we try to do by learning during the game.
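
In that spirit, a heavily simplified sketch of a score-plus-bonus pass rater could look like the following. The base-score weights, zone keys and bonus size are invented for illustration; the real system is far more elaborate.

```python
from dataclasses import dataclass, field

@dataclass
class PassScorer:
    # In-game bonuses learned from goals, keyed by the pass target zone.
    bonuses: dict = field(default_factory=dict)

    def score(self, zone: str, interception_risk: float, goal_progress: float) -> float:
        """Geometry-based base score plus any bonus learned this game."""
        base = 0.6 * (1.0 - interception_risk) + 0.4 * goal_progress
        return base + self.bonuses.get(zone, 0.0)

    def reward_goal(self, zones_in_buildup: list, bonus: float = 0.05):
        """After a goal, trace back the passes that led to it and give
        their target zones a small bonus for the rest of the match."""
        for zone in zones_in_buildup:
            self.bonuses[zone] = self.bonuses.get(zone, 0.0) + bonus
```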

People often ask why we don’t do more with AI, and I think the main challenge is that, compared to other use cases, we don’t have that much data, and it’s hard to get more. We have real hardware, and we cannot just play matches all day long for days on end: the robots would break, and they need to be supervised. During a competition we only play about five to seven matches in total. In 2016 we started to record all games in a machine-readable format. All the positions are encoded, along with the referee decisions, and everything goes into a log file that we publish centrally. I hope that with this growing amount of data we can actually apply some machine learning algorithms to see what previous matches and strategies did, and maybe gain some insights.

What plans do you have for your team, the TIGERs?

We have actually won the competition for the last four years. We hope that some other teams will emerge who can challenge us. Our defence has not really been challenged, so we have a hard time finding weaknesses; we actually play against ourselves in simulation.

One thing we want to improve is precision, because there is still some manual work involved in getting everything calibrated and working as precisely as we want. If some small detail is not working – the dribbling, for example – it can jeopardize the whole tournament. So we are working on making all these calibration processes easier, and on doing more automated data processing to determine the best parameters. In recent years we’ve worked a lot on dribbling in one-vs-one situations. That has been a really big improvement for us, and we are still working on it.

About Nicolai

Nicolai Ommer is a Software Engineer and Architect at QAware in Munich, specializing in designing and building robust software systems. He holds a B.Sc. in Applied Computer Science and an M.Sc. in Autonomous Systems. Nicolai began his journey in robotics with Team TIGERs Mannheim, participating in his first RoboCup in 2012. His dedication led him to join the RoboCup Small Size League Technical Committee and, in 2023, the Executive Committee. Passionate about innovation and collaboration, Nicolai combines academic insight with practical experience to push the boundaries of intelligent systems and contribute to the global robotics and software engineering communities.

Brain-computer interface robotic hand control reaches new finger-level milestone

Robotic systems have the potential to greatly enhance daily living for the more than one billion people worldwide who experience some form of disability. Brain-computer interfaces (BCIs) present a compelling option by enabling direct communication between the brain and external devices, bypassing traditional muscle-based control.

The physics of popping: Building better jumping robots

Inspired by a simple children’s toy – the jumping popper – researchers have unlocked a key to designing more agile and predictable soft robots. Soft robots, made from flexible materials, hold immense promise for delicate tasks, but their complex movements have been difficult to predict and control, especially dynamic actions like jumping.

Quantum computers just beat classical ones — Exponentially and unconditionally

A research team has achieved the holy grail of quantum computing: an exponential speedup that’s unconditional. By using clever error correction and IBM’s powerful 127-qubit processors, they tackled a variation of Simon’s problem, showing quantum machines are now breaking free from classical limitations, for real.

ChatGPT Slays Microsoft Copilot in the Workplace

Despite the fact that Microsoft has its own AI writer/assistant that competes directly with ChatGPT, many of its customers prefer ChatGPT.

In some cases the preference is so pronounced that many companies are opting for ChatGPT even though they have existing contracts with Microsoft to use its in-house alternative, MS Copilot, according to Bloomberg.

The trend must be an especially tough pill to swallow for Microsoft, given that it essentially helped put ChatGPT’s maker, OpenAI, on the map by investing $13.5 billion in the company.

In other news and analysis on AI writing:

ChatGPT Now Works With Digital Designer Canva Onboard: Canva – a design tool for Web sites, social media and other digital content used by 240 million people – is now fully integrated into ChatGPT.

The integration enables ChatGPT users with Canva accounts to do all their Canva design work from within the ChatGPT interface – essentially combining the power of ChatGPT and Canva as they design.

Observes Anwar Haneef, head of ecosystem at Canva: “We’re embedding Canva directly into the AI tools people use every day so they can brainstorm, create, and publish content faster.

“This is a major step in our vision to make the complex simple and build an all-in-one AI workflow that’s secure and accessible to all.”

*AI Reasoning Engines: Maybe Not as Smart as First Thought: CNBC reports that the latest round of AI engines designed to specialize in high-end reasoning may be less snazzy than imagined.

The problem: Turns out, many of the reasoning models are good at solving somewhat complex problems.

But challenge a reasoning model with a substantial problem, and it often ‘gives up’ after discovering that finding the answer is going to require real work.

*ChatGPT Competitor Morphing Into a No-Code Programming Tool: Claude – an AI assistant that competes directly with ChatGPT – is being extended into a no-code development tool.

Essentially, Claude is being redesigned so that people with absolutely no computer coding experience can design their own apps by simply using everyday language prompts.

Observes writer Michael Nunez: “Early adopters are creating games with non-player characters that remember choices and adapt storylines, smart tutors that adjust explanations based on user understanding, and data analyzers that answer plain-English questions about uploaded spreadsheets.”

*Dream Recorder: For AI Fanatics Who Think They Have It All: Achieving an entirely new level of niche marketing, the creators behind Dream Recorder have put together a plan for an app designed to archive your dream as a video in a matter of minutes.

Users waking up from a dream simply speak into the glow-in-the-dark device, triggering it to auto-produce an AI video version of the dream.

Moreover, the creators of the Dream Recorder assure the curious that making the device is simple.

Advises the DreamRecorder.ai Web site: “Download the open-source code, gather the off-the-shelf hardware components, 3D print the shell, and assemble everything. No soldering required.”

*Mac Users Can Now Transcribe Audio With ChatGPT: A new “ChatGPT Record” feature enables Mac users to record, transcribe and/or summarize audio.

The feature enables users to work with up to 120 minutes of audio and performs best in English.

You do need to be an elite paying subscriber for access though: Users of ChatGPT Pro, Team, Enterprise and Edu all qualify.

*ChatGPT: Your Work Can Stay Private – But it Will be Archived: While ChatGPT users can now set the app to delete all chat inputs and outputs after they exit, ChatGPT’s maker is still being forced to keep an offline archive of that data indefinitely.

The reason: The New York Times and other publishers, which are fighting ChatGPT’s maker – OpenAI — in a copyright lawsuit, say they have the right to use those outputs as evidence of copyright infringement — and a judge’s order has upheld that request.

Observes writer AJ Dellinger: “OpenAI is expected to continue trying to fight the order as the case moves forward.”

*Rupert Murdoch’s News Corp Goes All-In on AI Writing Tools: While most newspaper publishers experimenting with tools like ChatGPT tend to publicly downplay their interest in AI that directly automates writing, News Corp is not among them.

Writer Amanda Mead reports that a writing automation tool – dubbed NewsGPT – has been introduced to editors and writers at The Australian, the Courier Mail and the Daily Telegraph newspapers in Australia that:

–Writes articles from the perspective of various personas

–Writes articles using various writing styles

–Reworks leads and finds fresh angles for a story – as an editor would do

–Includes a “Story Cutter,” which can edit and produce copy, effectively removing or reducing the need for subeditors

Observes Mead: “The Media Entertainment and Arts Alliance said the AI programs were not only a threat to jobs but also threatened to undermine accountable journalism.”

*Snapshot: Top Ten AI Writers for 2025: OfficeChai has just released its picks for the best AI writers for 2025.

The list offers a number of names that have earned similar accolades on many other lists, including Jasper, Copy.ai, Writesonic, Rytr, Google Gemini, Anyword, ClosersCopy and Peppertype.ai.

Two names that may be new to some are Grammarly – which has evolved from a proofreader to an AI assistant – and Simplified AI Writer.

*AI Big Picture: The Tsunami of Mediocre AI Content Has Arrived: HBO comic John Oliver skewers the creators of the tidal wave of AI slop confronting Web and social media users in this spot-on, hilarious, in-depth, 29-minute video.

As feared, given the ever-increasing ease with which AI lets even the most untalented create text, audio, music and video, there appears to be no end in sight to the torrent of low-quality content, deep fakes and misleading material – and worse – currently flooding the digital universe.

The solution? Looks like we’re still looking for one.


Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.



Exploring skill generalization with an extra robotic arm for motor augmentation

According to a recent study published in Advanced Intelligent Systems, the brain can adapt to an artificial third arm and use it for simple tasks. This keeps alive the dream that people such as precision mechanics and surgeons might one day deftly use a third arm.

3D-printed humanoid robot offers affordable, customizable platform for beginners

As an undergraduate student, Yufeng Chi (B.S.'23 EECS) was captivated by humanoid and legged robots. Eager to learn more, he would watch YouTube videos and dive into class projects, but getting hands-on experience and tinkering on his own was not easy.

How a Psychology Background Makes for Better AI Adoption

If your LinkedIn feed is like mine, 80% of the content is gushing about how the latest AI model will revolutionize their business. But for me, this matters almost zero – folks have got it backwards. The thing that will […]

