
#RoboCup2023 in tweets – part 2


As this year’s RoboCup draws to a close, we take a look back at some of the highlights from the second half of the conference. Over the course of the weekend, the event focussed on the latter stages of the competitions, with the winners in all the different leagues decided. If you missed our round-up of the first half of RoboCup, you can catch up here.

#RoboCup2023 in tweets – part 1

AIhub | Tweets round-up

This year’s RoboCup kicked off on 4 July and will run until 10 July. Taking place in Bordeaux, the event will see around 2,500 participants from 45 different countries take part in competitions, training sessions, and a symposium. Find out what attendees have been up to in preparation for, and in the first half of, the event.

ROSE: A revolutionary, nature-inspired soft embracing robotic gripper

Although grasping objects is relatively straightforward for us humans, there is a lot of mechanics involved in this seemingly simple action. Picking up an object requires fine control of the fingers, of their positioning, and of the pressure each finger applies, which in turn necessitates intricate sensing capabilities. It's no wonder that robotic grasping and manipulation is such an active research area within the field of robotics.

New algorithm helps robots avoid collisions

A new approach to autonomous robot navigation, reported in the International Journal of Computational Science and Engineering, could help avoid collisions and accidents in a variety of future applications and environments, such as industrial buildings and warehouses, agricultural fields, urban self-driving, search and rescue sites, healthcare settings, and even the home and garden.

Simbe Raises $28M Series B, Led by Eclipse, to Continue Transforming Retail Operations Through AI and Automation

Simbe, the company behind the world’s first autonomous shelf-scanning retail robot and the industry’s most comprehensive business intelligence solution, will use the funds to accelerate global expansion and product innovation.

Training Diffusion Models with Reinforcement Learning

Diffusion models have recently emerged as the de facto standard for generating complex, high-dimensional outputs. You may know them for their ability to produce stunning AI art and hyper-realistic synthetic images, but they have also found success in other applications such as drug design and continuous control. The key idea behind diffusion models is to iteratively transform random noise into a sample, such as an image or protein structure. This is typically motivated as a maximum likelihood estimation problem, where the model is trained to generate samples that match the training data as closely as possible.
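
To make the "iteratively transform random noise into a sample" idea concrete, here is a minimal, self-contained sketch of a reverse (denoising) loop. It is purely illustrative: `toy_denoiser`, `target_mean`, the step size, and the noise scale are assumptions standing in for a trained noise-prediction network and a proper noise schedule, not any real diffusion model's code.

```python
# Illustrative sketch of the reverse (denoising) process in a diffusion model.
# toy_denoiser is a hand-written stand-in for a learned noise predictor; in a
# real model this would be a neural network conditioned on the timestep t.
import numpy as np

def toy_denoiser(x, t, target_mean):
    """Hypothetical noise estimate: the gap between the sample and a target."""
    return x - target_mean  # a trained model would predict the actual added noise

def sample(num_steps=50, dim=2, target_mean=None, seed=0):
    """Iteratively turn pure Gaussian noise into a sample, one small step at a time."""
    rng = np.random.default_rng(seed)
    target_mean = np.zeros(dim) if target_mean is None else np.asarray(target_mean)
    x = rng.standard_normal(dim)                      # start from random noise
    for t in range(num_steps, 0, -1):
        predicted_noise = toy_denoiser(x, t, target_mean)
        x = x - (1.0 / num_steps) * predicted_noise   # remove a little noise
        if t > 1:
            x = x + 0.05 * rng.standard_normal(dim)   # keep some stochasticity
    return x

print(sample(target_mean=[1.0, -1.0]))  # gradually pulled from noise toward the target
```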

However, most use cases of diffusion models are not directly concerned with matching the training data, but instead with a downstream objective. We don’t just want an image that looks like existing images, but one that has a specific type of appearance; we don’t just want a drug molecule that is physically plausible, but one that is as effective as possible. In this post, we show how diffusion models can be trained on these downstream objectives directly using reinforcement learning (RL). To do this, we finetune Stable Diffusion on a variety of objectives, including image compressibility, human-perceived aesthetic quality, and prompt-image alignment. The last of these objectives uses feedback from a large vision-language model to improve the model’s performance on unusual prompts, demonstrating how powerful AI models can be used to improve each other without any humans in the loop.
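
As a conceptual illustration of finetuning a sampler against a downstream reward, the toy policy-gradient loop below treats each denoising step as a Gaussian action and nudges its parameters toward rollouts that score well under a reward function. Everything here is an assumption for illustration: the per-step policy `theta`, the `reward` function, and the hyperparameters are stand-ins, not the post's actual RL-finetuning code or Stable Diffusion.

```python
# Toy REINFORCE-style loop in the spirit of finetuning a diffusion sampler on a
# downstream reward. The Gaussian "denoising policy" and the reward below are
# illustrative stand-ins, not the actual training code from the post.
import numpy as np

rng = np.random.default_rng(0)
num_steps, dim, batch = 10, 2, 64
sigma, lr = 0.1, 0.01
theta = np.zeros((num_steps, dim))   # learnable mean shift for each denoising step

def reward(x):
    """Stand-in downstream objective (think compressibility or aesthetic score):
    here, simply how close the final sample is to the point (1, 1)."""
    return -np.sum((x - 1.0) ** 2, axis=-1)

for _ in range(2000):
    # Roll out a batch of denoising trajectories; each step is a Gaussian action.
    x = rng.standard_normal((batch, dim))             # start from pure noise
    eps = rng.standard_normal((num_steps, batch, dim))
    for t in range(num_steps):
        x = x + theta[t] + sigma * eps[t]             # stochastic "denoising" step
    adv = reward(x) - reward(x).mean()                # advantage with a batch baseline
    # REINFORCE: raise the log-probability of steps taken in high-reward rollouts.
    grad = (adv[None, :, None] * eps / sigma).mean(axis=1)
    theta += lr * grad

print("mean reward:", reward(x).mean())               # improves as samples shift toward (1, 1)
print("total learned shift per dimension:", theta.sum(axis=0))
```

The real method described in the post applies this policy-gradient idea to the actual denoising steps of Stable Diffusion, with rewards coming from objectives such as image compressibility, a human-perceived aesthetic score, or prompt-image alignment judged by a vision-language model.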
