
My Jewellery embraces the future of warehouse automation with innovative RoboShuttle solution by Geek+

This new system maximises space and reduces the distance staff have to walk around the warehouse, thereby improving the ergonomics of their workspace. It also allows My Jewellery to adapt to both B2B and B2C operations, another key factor in developing this AMR solution.

Interview with Marek Šuppa: insights into RoboCupJunior

A RoboCupJunior soccer match in action.

In July this year, 2,500 participants congregated in Bordeaux for RoboCup 2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage.

Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

What is your role in RoboCupJunior and how long have you been involved with this league?

I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn’t do all that well in that event, but RoboCup made a deep impression on me and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs, who are responsible for the organisation of RoboCupJunior as a whole.

How was the event this year? What were some of the highlights?

I guess this year’s theme or slogan, if we were to give it one, would be “back to normal”, or something like that. Although RoboCup 2022 already took place in person in Thailand last year after two years of pandemic pause, it was in a rather limited capacity, as COVID-19 still affected quite a few regions. It was great to see that the RoboCup community was able to persevere and even thrive throughout the pandemic, and that RoboCup 2023 was once again an event where thousands of robots and roboticists could meet.

It would also be difficult to do this question justice without thanking the local French organisers. They were actually ready to organise the event in 2020 but it got cancelled due to COVID-19. But they did not give up on the idea and managed to put together an awesome event this year, for which we are very thankful.

Examples of the robots used by the RoboCupJunior Soccer teams.

Turning to RoboCupJunior Soccer specifically, could you talk about the mission of the league and how you, as organisers, go about realising that mission?

The mission of RoboCupJunior consists of two competing objectives: on the one hand, it needs to be a challenge that’s approachable, interesting and relevant for (mostly) high school students; on the other, it needs to be closely related to the RoboCup “Major” challenges, which are tackled by university students and their mentors. We are hence continuously trying both to make it more compelling and captivating for the students and to ensure it is technical enough to help them grow towards the RoboCup “Major” challenges.

One of the ways we do that is by introducing what we call “SuperTeam” challenges, in which teams from different countries form a so-called “SuperTeam” and compete against another “SuperTeam”, as if each SuperTeam were a single team. In RoboCupJunior Soccer the “SuperTeams” are composed of four to five teams, and they compete on a field that is six times larger than the “standard” fields used for the individual games. While in the individual matches each team can play with at most two robots (resulting in a 2v2 game), in a SuperTeam match each SuperTeam fields five robots, meaning there are 10 robots on the SuperTeam field during a SuperTeam match. The setup is very similar to Division B of the Small Size League of RoboCup “Major”.

The SuperTeam games have existed in RoboCupJunior Soccer since 2013, so for quite a while, and the feedback we received on them was overwhelmingly positive: they were a lot of fun for both the participants and the spectators. But compared to the Small Size League games there were still two noticeable differences: the robots did not have a way of communicating with one another, and the referees did not have a way of communicating with the robots. The result was not only that there was little coordination among robots of the same SuperTeam, but also that whenever the game needed to be stopped, the referees had to physically run after the robots on the field to catch them, and again to do a kickoff after a goal was scored. Although hilarious, this was far from how we imagined the SuperTeam games should look.

The RoboCupJunior Soccer Standard Communication Modules aim to address both gaps. The module itself is a small device attached to each robot on the SuperTeam field. These devices are all connected via Bluetooth to a single smartphone, through which the referee can send commands to all robots on the field. The devices also support direct message exchange between robots of a single SuperTeam, meaning the teams do not have to invest in figuring out how to communicate with the other robots but can make use of a common platform. The devices, as well as their firmware, are open source, meaning not only that anyone can build their own Standard Communication Module if they’d like, but also that the community can participate in its development, which makes it an interesting addition to RoboCupJunior Soccer.
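
For illustration, here is a minimal sketch of how such a referee-broadcast protocol could look. Everything here (the names, the frame format, the transport) is an assumption made for this example rather than the actual open-source firmware, and a plain loop stands in for the Bluetooth link:

```python
# Hypothetical sketch of a referee-to-robot command broadcast, loosely
# modelled on the Standard Communication Modules described above. RefCommand,
# Module and encode_command are illustrative names, not the real firmware.
from dataclasses import dataclass, field
from enum import IntEnum


class RefCommand(IntEnum):
    STOP = 0     # halt all robots, e.g. on the referee's whistle
    KICKOFF = 1  # restart play after a goal
    RESUME = 2


def encode_command(cmd: RefCommand) -> bytes:
    # One header byte plus the command id; a real firmware would add a checksum.
    return bytes([0xA5, int(cmd)])


@dataclass
class Module:
    """Stands in for one communication module attached to a robot."""
    robot_id: int
    inbox: list = field(default_factory=list)

    def on_receive(self, frame: bytes) -> None:
        if frame and frame[0] == 0xA5:  # referee frame
            self.inbox.append(RefCommand(frame[1]))


# The referee's phone broadcasts one frame to every module; with five robots
# per SuperTeam, ten modules listen during a SuperTeam match.
modules = [Module(robot_id=i) for i in range(10)]
frame = encode_command(RefCommand.STOP)
for m in modules:
    m.on_receive(frame)

assert all(m.inbox[-1] is RefCommand.STOP for m in modules)
```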

two teams setting up their robotsRoboCupJunior Soccer teams getting ready for the competition.

How did this new module work out in the competition? Did you see an improvement in experience for the teams and organisers?

In this first big public test we focused on exploring how (and whether) these modules can improve the gameplay – especially the “chasing robots at kickoff” problem. Although we had done “lab experiments” in the past and had some empirical evidence that it should work rather well, this was the first time we tried it at a real competition.

All in all, I would say that it was a very positive experiment. The modules themselves worked quite well, and for some of us who happened to have experience with the “robot chasing” mentioned above, it was sort of a magical feeling to see the robots stop right on the main referee’s whistle.

We also identified potential areas for improvement in the future. The modules do not have a power source of their own and were powered by the robots themselves. We didn’t think this would be a problem, but in the “real world” test it transpired that the voltage levels the robots are capable of providing fluctuate significantly – for instance when a robot decides to accelerate aggressively – which in turn means some of the modules disconnect when the voltage drops too low. However, it ended up being a nice lesson for everyone involved, one that we can certainly learn from when we design the next iterations.
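
As a toy illustration (an assumption on our part, not the module's actual firmware), link-loss logic that tolerates such dips could require several consecutive low voltage readings before reacting, rather than dropping the Bluetooth connection on a single sample:

```python
# Illustrative sketch only: ride out brief supply dips by requiring the
# voltage to stay below a threshold for several consecutive samples before
# treating the link as lost. Threshold and sample count are assumed values;
# real numbers depend on the hardware.
BROWNOUT_V = 3.0
DIP_SAMPLES = 5


def should_disconnect(voltage_samples, threshold=BROWNOUT_V, needed=DIP_SAMPLES):
    run = 0  # length of the current streak of low readings
    for v in voltage_samples:
        run = run + 1 if v < threshold else 0
        if run >= needed:
            return True
    return False


# A robot accelerating hard causes a short dip that should be ridden out:
print(should_disconnect([3.3, 3.3, 2.8, 2.9, 3.2, 3.3]))  # False
# A sustained sag is a genuine power problem:
print(should_disconnect([3.3, 2.8, 2.7, 2.6, 2.6, 2.5]))  # True
```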


The livestream from Day 4 of RoboCupJunior Soccer 2023. This stream includes the SuperTeam finals and the technical challenges. You can also view the livestream of the semifinals and finals from day three here.

Could you tell us about the emergence of deep-learning models in the RoboCupJunior leagues?

This is something we have started to observe in recent years, and it surprised us organisers to some extent. In our day-to-day jobs (that is, when we are not organising RoboCup), many of us work in areas related to robotics, computer science and engineering in general – with some of us also doing research in artificial intelligence and machine learning. And while we always thought it would be great to see more of the cutting-edge research being applied at RoboCupJunior, we always dismissed it as something too advanced and/or difficult to set up for the high school students who make up the majority of RoboCupJunior participants.

Well, to our great surprise, some of the more advanced teams have started to utilise methods and technologies that are very close to the current state of the art in various areas, particularly computer vision and deep learning. A good example would be object detectors (usually based on the YOLO architecture), which are now used across all three Junior leagues: in OnStage to detect the various props, robots and humans who perform on the stage together, in Rescue to detect the victims the robots are rescuing, and in Soccer to detect the ball, the goals and the opponents. And while the participants generally used off-the-shelf implementations, they still needed to do all the steps necessary for a successful deployment of this technology: gather a dataset, fine-tune the deep-learning model and deploy it on their robots – all of which is far from trivial and very close to how these technologies are used in both research and industry.
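
As a rough sketch of that gather/fine-tune/deploy workflow, here is what it might look like with the open-source ultralytics YOLO implementation; the dataset file (soccer.yaml), the checkpoint choice and the image name are assumptions for illustration, not what any particular team used:

```python
# Minimal sketch of the workflow described above (pip install ultralytics).
from ultralytics import YOLO

# 1. Start from a pretrained checkpoint rather than training from scratch.
model = YOLO("yolov8n.pt")

# 2. Fine-tune on a custom labelled dataset (e.g. ball, goals, opponents)
#    described by a YAML file pointing at training/validation images.
model.train(data="soccer.yaml", epochs=50, imgsz=640)

# 3. Deploy: run detection on a camera frame from the robot.
results = model("frame.jpg")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box
```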

Although we have seen only the more advanced teams use deep-learning models at RoboCupJunior, we expect this to become much more prevalent in the future, especially as the technology and the tooling around it become more mature and robust. It does show, however, that despite their age, the RoboCupJunior students are very close to cutting-edge research and state-of-the-art technologies.

Action from RoboCupJunior Soccer 2023.

How can people get involved in RCJ (either as a participant or an organiser)?

A very good question!

The best place to start would be the RoboCupJunior website, where one can find many interesting details about RoboCupJunior, the respective leagues (such as Soccer, Rescue and OnStage), and the relevant regional representatives who organise regional events. Getting in touch with a regional representative is by far the easiest way of getting started with RoboCupJunior.

Additionally, I can certainly recommend the RoboCupJunior forum, where many RoboCupJunior participants, past and present, as well as the organisers, discuss many related topics in the open. The community is very beginner friendly, so if RoboCupJunior sounds interesting, do not hesitate to stop by and say hi!

About Marek Šuppa

Marek Šuppa

Marek stumbled upon AI as a teenager when building soccer-playing robots and quickly realised he was not smart enough to do all the programming by himself. Since then, he’s been figuring out ways to make machines learn by themselves, particularly from text and images. He currently serves as Principal Data Scientist at Slido (part of Cisco), improving the way meetings are run around the world. Staying true to his roots, he tries to give others the chance to have a similar experience by helping organise the RoboCupJunior competition as part of its Executive Committee.

Rethinking the Role of PPO in RLHF


TL;DR: In RLHF, there’s tension between the reward learning phase, which uses human preference in the form of comparisons, and the RL fine-tuning phase, which optimizes a single, non-comparative reward. What if we performed RL in a comparative way?


Figure 1: This diagram illustrates the difference between reinforcement learning from absolute feedback and from relative feedback. By incorporating a new component, the pairwise policy gradient, we can unify the reward modeling stage and the RL stage, enabling direct updates based on pairwise responses.

Large Language Models (LLMs) have powered increasingly capable virtual assistants, such as GPT-4, Claude-2, Bard and Bing Chat. These systems can respond to complex user queries, write code, and even produce poetry. The technique underlying these amazing virtual assistants is Reinforcement Learning from Human Feedback (RLHF). RLHF aims to align the model with human values and eliminate unintended behaviors, which often arise because the model is exposed to a large quantity of low-quality data during its pretraining phase.

Proximal Policy Optimization (PPO), the dominant RL optimizer in this process, has been reported to exhibit instability and implementation complications. More importantly, there’s a persistent discrepancy in the RLHF process: despite the reward model being trained using comparisons between various responses, the RL fine-tuning stage works on individual responses without making any comparisons. This inconsistency can exacerbate issues, especially in the challenging language generation domain.

Given this backdrop, an intriguing question arises: Is it possible to design an RL algorithm that learns in a comparative manner? To explore this, we introduce Pairwise Proximal Policy Optimization (P3O), a method that harmonizes the training processes in both the reward learning stage and RL fine-tuning stage of RLHF, providing a satisfactory solution to this issue. Read More
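
For intuition, one way such a comparative update can be written (a sketch consistent with the description above, not necessarily the exact P3O objective) is to sample two responses to the same prompt and weight the difference of their score functions by the difference of their rewards:

```latex
% Sketch of a pairwise ("comparative") policy gradient. For a prompt x, draw
% two independent responses y_1, y_2 from the policy \pi_\theta and weight the
% difference of their score functions by the difference of their rewards.
\nabla_\theta J(\theta)
  = \tfrac{1}{2}\,
    \mathbb{E}_{x}\,
    \mathbb{E}_{y_1, y_2 \sim \pi_\theta(\cdot \mid x)}
    \Big[
      \big( r(x, y_1) - r(x, y_2) \big)\,
      \nabla_\theta \big( \log \pi_\theta(y_1 \mid x)
                        - \log \pi_\theta(y_2 \mid x) \big)
    \Big]
```

Because the score function has zero mean, the cross terms cancel in expectation and each response serves as the other's baseline, so the update uses only relative reward information while remaining a valid policy gradient.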

Easing job jitters in the digital revolution

The world’s fourth industrial revolution is ushering in big shifts in the workplace. © demaerre, iStock.com

Professor Steven Dhondt has a reassurance of sorts for people in the EU worried about losing their jobs to automation: relax.

Dhondt, an expert in work and organisational change at the Catholic University Leuven in Belgium, has studied the impact of technology on jobs for the past four decades. Fresh from leading an EU research project on the issue, he stresses opportunities rather than threats.

Right vision

‘We need to develop new business practices and welfare support but, with the right vision, we shouldn’t see technology as a threat,’ Dhondt said. ‘Rather, we should use it to shape the future and create new jobs.’

The rapid and accelerating advance in digital technologies across the board is regarded as the world’s fourth industrial revolution, ushering in fundamental shifts in how people live and work.

If the first industrial revolution was powered by steam, the second by electricity and the third by electronics, the latest will be remembered for automation, robotics and artificial intelligence, or AI. It’s known as “Industry 4.0”.

‘Whether it was the Luddite movement in the 1800s over the introduction of automatic spinning machines in the wool industry or concerns about AI today, questions about technology’s impact on jobs really reflect wider ones about employment practices and the labour market,’ said Dhondt.

He is also a senior scientist at a Netherlands-based independent research organisation called TNO.

The EU project that Dhondt led explored how businesses and welfare systems could better adapt to support workers in the face of technological changes. The initiative, called BEYOND4.0, began in January 2019 and wrapped up in June 2023.

While the emergence of self-driving cars and AI-assisted robots holds big potential for economic growth and social progress, it also sounds alarm bells.

More than 70% of EU citizens fear that new technologies will “steal” people’s jobs, according to a 2019 analysis by the European Centre for the Development of Vocational Training.

Local successes

The BEYOND4.0 researchers studied businesses across Europe that have taken proactive and practical steps to empower employees.

“We shouldn’t see technology as a threat – rather we should use it to shape the future and create new jobs.”

– Professor Steven Dhondt, BEYOND4.0

One example is a family-run Dutch glass company called Metaglas, which decided that staying competitive in the face of technological changes required investing more in its own workforce.

Metaglas offered workers greater openness with management and a louder voice on the company’s direction and product development.

The move, which the company named “MetaWay”, has helped it retain workers while turning a profit that is being reinvested in the workforce, according to Dhondt.

He said the example shows the importance of managers’ approach to the whole issue in the business world.

‘The technology can be an enabler, not a threat, but the decision about that lies with management in organisations,’ Dhondt said. ‘If management uses technology to downgrade the quality of jobs, then jobs are at risk. If management uses technology to enhance jobs, then you can see workers and organisations learn and improve.’

The Metaglas case has fed into a “knowledge bank” meant to inform business practices more broadly.

Dhondt also highlighted the importance of regions in Europe where businesses and job trainers join forces to support people.

BEYOND4.0 studied the case of the Finnish city of Oulu – once a leading outpost of mobile-phone giant Nokia. In the 2010s, the demise of Nokia’s handset business threatened Oulu with a “brain drain” as the company’s engineers were laid off.

But collaboration among Nokia, local universities and policymakers helped grow new businesses including digital spin-offs and kept hundreds of engineers in the central Finnish region, once a trading centre for wood tar, timber and salmon.

Some Nokia engineers went to the local hospital to work on electronic healthcare services – “e-health” – while others moved to papermaker Stora Enso, according to Dhondt.

Nowadays there are more high-tech jobs in Oulu than during Nokia’s heyday. The BEYOND4.0 team held the area up as a successful “entrepreneurial ecosystem” that could help inform policies and practices elsewhere in Europe.

Income support

In cases where people were out of work, the project also looked to new forms of welfare support.

Dhondt’s Finnish colleagues examined the impact of a two-year trial in Finland of a “universal basic income” – or UBI – and used this to assess the feasibility of a different model called “participation income.”

In the UBI experiment, participants each received a monthly €560 sum, which was paid unconditionally. Although UBI is often touted as an answer to automation, BEYOND4.0’s evaluation of the Finnish trial was that it could weaken the principle of solidarity in society.

The project’s participation income approach requires recipients of financial support to undertake an activity deemed useful to society. This might include, for example, care for the elderly or for children.

While detailed aspects are still being worked out, the BEYOND4.0 team discussed participation income with the government of Finland and the Finnish parliament has put the idea on the agenda for debate.

Dhondt hopes the project’s findings, including on welfare support, will help other organisations better navigate the changing tech landscape.

Employment matchmakers

Another researcher keen to help people adapt to technological changes is Dr Aisling Tuite, a labour-market expert at South East Technological University in Ireland.

“We wanted to develop a product that could be as useful for people looking for work as for those supporting them.”

– Dr Aisling Tuite, HECAT

Tuite has looked at how digital technologies can help job seekers find suitable work.

She coordinated an EU-funded project to help out-of-work people find jobs or develop new skills through a more open online system.

Called HECAT, the project ran from February 2020 through July 2023 and brought together researchers from Denmark, France, Ireland, Slovenia, Spain and Switzerland.

In recent years, many countries have brought in active labour-market policies that deploy computer-based systems to profile workers and help career counsellors target people most in need of help.

While this sounds highly targeted, Tuite said that in reality it often pushes people into employment that might be unsuitable for them, creating job-retention problems.

‘Our current employment systems often fail to get people to the right place – they just move people on,’ she said. ‘What people often need is individualised support or new training. We wanted to develop a product that could be as useful for people looking for work as for those supporting them.’

Ready to run

HECAT’s online system combines new vacancies with career counselling and current labour-market data.

The system was tested during the project and a beta version is now available via My Labour Market and can be used in all EU countries where data is available.

It can help people figure out where there are jobs and how to be best positioned to secure them, according to Tuite.

In addition to displaying openings by location and quality, the system offers detailed information about career opportunities and labour-market trends including the kinds of jobs on the rise in particular areas and the average time it takes to find a position in a specific sector.

Tuite said feedback from participants in the test was positive.

She recalled one young female job seeker saying it had made her more confident in exploring new career paths, and another saying that knowing the length of the average “jobs wait” eased the stress of hunting.

Looking ahead, Tuite hopes the HECAT researchers can demonstrate the system in governmental employment-services organisations in numerous EU countries over the coming months. 

‘There is growing interest in this work from across public employment services in the EU and we’re excited,’ she said.


(This article was updated on 21 September 2023 to include a reference to Steven Dhondt’s role at TNO in the Netherlands)

Research in this article was funded by the EU.

This article was originally published in Horizon, the EU Research and Innovation magazine.

ep.366: Deep Learning Meets Trash: Amp Robotics’ Revolution in Materials Recovery, with Joe Castagneri

In this episode, Abate flew to Denver, Colorado, to get a behind-the-scenes look at the future of recycling with Joe Castagneri, the head of AI at Amp Robotics. With Materials Recovery Facilities (MRFs) processing a staggering 25 tons of trash per hour, robotic sorting is the clear long-term solution.

Recycling is a for-profit industry. When the margins don’t make sense, the items will not be recycled. This is why Amp’s mission to use robotics and AI to bring down the cost of recycling and increase the number of items that can be sorted for recycling is so impactful.

Joe Castagneri
Joe Castagneri graduated with a Master of Science in Applied Mathematics and an undergraduate degree in Physics. While still at university, he joined the team at Amp Robotics in 2016, where he worked on machine-learning models to identify recyclables in video streams of trash in Materials Recovery Facilities (MRFs). Today, he is the Head of AI at Amp Robotics, where he is changing the economics of recycling through automation.

Robot Talk Episode 57 – Kate Devlin

Claire chatted to Kate Devlin from King’s College London about the social and ethical implications of robotics and AI.

Kate Devlin is Reader in Artificial Intelligence & Society in the Department of Digital Humanities, King’s College London. She is an interdisciplinary computer scientist investigating how people interact with and react to technologies, both past and future. Kate is the author of Turned On: Science, Sex and Robots, which examines the ethical and social implications of technology and intimacy. She is Creative and Outreach lead for the UKRI Responsible Artificial Intelligence UK programme — an international research and innovation ecosystem for responsible AI.

New algorithms for intelligent and efficient robot navigation among the crowd

Service robots have started to appear in various daily tasks, such as delivering parcels, guiding the visually impaired, serving the public at airports or, as seen in Joensuu, inspecting construction works. Robots are able to move in different ways: on legs, on wheels or by flying. They know the shortest or easiest route to their destination. A robotic guide dog can search for bus schedules or even order a taxi when needed.

How drone submarines are turning the seabed into a future battlefield

A 12-ton fishing boat weighs anchor three kilometers off the port of Adelaide. A small crew huddles over a miniature submarine, activates the controls, primes the explosives, and releases it into the water. The underwater drone uses sensors and sonar to navigate towards its pre-programmed target: the single, narrow port channel responsible for the state's core fuel supply.

#IROS2023 awards finalists and winners + IROS on Demand free for one year

Did you have the chance to attend the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) in Detroit? Here we bring you the papers that received an award this year in case you missed them. And good news: you can read all the papers because IROS on Demand is open to the public and freely available for one year from Oct 9th. Congratulations to all the winners and finalists!

IROS 2023 Best Overall and Best Student Paper

Winner of the IROS 2023 Best Paper

  • Autonomous Power Line Inspection with Drones via Perception-Aware MPC, by Jiaxu Xing, Giovanni Cioffi, Javier Hidalgo Carrio, Davide Scaramuzza.

Winner of the IROS 2023 Best Student Paper

  • Controlling Powered Prosthesis Kinematics over Continuous Transitions Between Walk and Stair Ascent, by Shihao Cheng, Curt A. Laubscher, Robert D. Gregg.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.
  • Towards Legged Locomotion on Steep Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Decentralized Swarm Trajectory Generation for LiDAR-based Aerial Tracking in Cluttered Environments, by Longji Yin, Fangcheng Zhu, Yunfan Ren, Fanze Kong, Fu Zhang.
  • Open-Vocabulary Affordance Detection in 3D Point Clouds, by Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, Anh Nguyen.
  • Discovering Symbolic Adaptation Algorithms from Scratch, by Stephen Kelly, Daniel Park, Xingyou Song, Mitchell McIntire, Pranav Nashikkar, Ritam Guha, Wolfgang Banzhaf, Kalyanmoy Deb, Vishnu Boddeti, Jie Tan, Esteban Real.
  • Parallel cell array patterning and target cell lysis on an optoelectronic micro-well device, by Chunyuan Gan, Hongyi Xiong, Jiawei Zhao, Ao Wang, Chutian Wang, Shuzhang Liang, Jiaying Zhang, Lin Feng.
  • FATROP: A Fast Constrained Optimal Control Problem Solver for Robot Trajectory Optimization and Control, by Lander Vanroye, Ajay Suresha Sathya, Joris De Schutter, Wilm Decré.
  • GelSight Svelte: A Human Finger-Shaped Single-Camera Tactile Robot Finger with Large Sensing Coverage and Proprioceptive Sensing, by Jialiang Zhao, Edward Adelson.
  • Shape Servoing of a Soft Object Using Fourier Series and a Physics-based Model, by Fouad Makiyeh, Francois Chaumette, Maud Marchal, Alexandre Krupa.

IROS Best Paper Award on Agri-Robotics sponsored by YANMAR

Winner

  • Visual, Spatial, Geometric-Preserved Place Recognition for Cross-View and Cross-Modal Collaborative Perception, by Peng Gao, Jing Liang, Yu Shen, Sanghyun Son, Ming C. Lin.

Finalists

  • Online Self-Supervised Thermal Water Segmentation for Aerial Vehicles, by Connor Lee, Jonathan Gustafsson Frennert, Lu Gan, Matthew Anderson, Soon-Jo Chung.
  • Relative Roughness Measurement based Real-time Speed Planning for Autonomous Vehicles on Rugged Road, by Liang Wang, Tianwei Niu, Shuai Wang, Shoukun Wang, Junzheng Wang.

IROS Best Application Paper Award sponsored by ICROS

Winner

  • Autonomous Robotic Drilling System for Mice Cranial Window Creation: An Evaluation with an Egg Model, by Enduo Zhao, Murilo Marques Marinho, Kanako Harada.

Finalists

  • Visuo-Tactile Sensor Enabled Pneumatic Device Towards Compliant Oropharyngeal Swab Sampling, by Shoujie Li, Mingshan He, Wenbo Ding, Linqi Ye, Xueqian Wang, Junbo Tan, Jinqiu Yuan, Xiao-Ping Zhang.
  • Improving Amputee Endurance over Activities of Daily Living with a Robotic Knee-Ankle Prosthesis: A Case Study, by Kevin Best, Curt A. Laubscher, Ross Cortino, Shihao Cheng, Robert D. Gregg.
  • Dynamic hand proprioception via a wearable glove with fabric sensors, by Lily Behnke, Lina Sanchez-Botero, William Johnson, Anjali Agrawala, Rebecca Kramer-Bottiglio.
  • Active Capsule System for Multiple Therapeutic Patch Delivery: Preclinical Evaluation, by Jihun Lee, Manh Cuong Hoang, Jayoung Kim, Eunho Choe, Hyeonwoo Kee, Seungun Yang, Jongoh Park, Sukho Park.

IROS Best Entertainment and Amusement Paper Award sponsored by JTCF

Winner

  • DoubleBee: A Hybrid Aerial-Ground Robot with Two Active Wheels, by Muqing Cao, Xinhang Xu, Shenghai Yuan, Kun Cao, Kangcheng Liu, Lihua Xie.

Finalists

  • Polynomial-based Online Planning for Autonomous Drone Racing in Dynamic Environments, by Qianhao Wang, Dong Wang, Chao Xu, Alan Gao, Fei Gao.
  • Bistable Tensegrity Robot with Jumping Repeatability based on Rigid Plate-shaped Compressors, by Kento Shimura, Noriyasu Iwamoto, Takuya Umedachi.

IROS Best Industrial Robotics Research for Applications sponsored by Mujin Inc.

Winner

  • Toward Closed-loop Additive Manufacturing: Paradigm Shift in Fabrication, Inspection, and Repair, by Manpreet Singh, Fujun Ruan, Albert Xu, Yuchen Wu, Archit Rungta, Luyuan Wang, Kevin Song, Howie Choset, Lu Li.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Bagging by Learning to Singulate Layers Using Interactive Perception, by Lawrence Yunliang Chen, Baiyu Shi, Roy Lin, Daniel Seita, Ayah Ahmad, Richard Cheng, Thomas Kollar, David Held, Ken Goldberg.
  • Exploiting the Kinematic Redundancy of a Backdrivable Parallel Manipulator for Sensing During Physical Human-Robot Interaction, by Arda Yigit, Tan-Sy Nguyen, Clement Gosselin.

IROS Best Paper Award on Cognitive Robotics sponsored by KROS

Winner

  • Extracting Dynamic Navigation Goal from Natural Language Dialogue, by Lanjun Liang, Ganghui Bian, Huailin Zhao, Yanzhi Dong, Huaping Liu.

Finalists

  • EasyGaze3D: Towards Effective and Flexible 3D Gaze Estimation from a Single RGB Camera, by Jinkai Li, Jianxin Yang, Yuxuan Liu, Zhen Li, Guang-Zhong Yang, Yao Guo.
  • Team Coordination on Graphs with State-Dependent Edge Cost, by Sara Oughourli, Manshi Limbu, Zechen Hu, Xuan Wang, Xuesu Xiao, Daigo Shishika.
  • Is Weakly-supervised Action Segmentation Ready For Human-Robot Interaction? No, Let’s Improve It With Action-union Learning, by Fan Yang, Shigeyuki Odashima, Shochi Masui, Shan Jiang.
  • Exploiting Spatio-temporal Human-object Relations using Graph Neural Networks for Human Action Recognition and 3D Motion Forecasting, by Dimitrios Lagamtzis, Fabian Schmidt, Jan Reinke Seyler, Thao Dang, Steffen Schober.

IROS Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp.

Winner

  • A perching and tilting aerial robot for precise and versatile power tool work on vertical walls, by Roman Dautzenberg, Timo Küster, Timon Mathis, Yann Roth, Curdin Steinauer, Gabriel Käppeli, Julian Santen, Alina Arranhado, Friederike Biffar, Till Kötter, Christian Lanegger, Mike Allenspach, Roland Siegwart, Rik Bähnemann.

Finalists

  • Placing by Touching: An empirical study on the importance of tactile sensing for precise object placing, by Luca Lach, Niklas Wilhelm Funk, Robert Haschke, Séverin Lemaignan, Helge Joachim Ritter, Jan Peters, Georgia Chalvatzaki.
  • Efficient Object Manipulation Planning with Monte Carlo Tree Search, by Huaijiang Zhu, Avadesh Meduri, Ludovic Righetti.
  • Sequential Manipulation Planning for Over-actuated UAMs, by Yao Su, Jiarui Li, Ziyuan Jiao, Meng Wang, Chi Chu, Hang Li, Yixin Zhu, Hangxin Liu.
  • On the Design of Region-Avoiding Metrics for Collision-Safe Motion Generation on Riemannian Manifolds, by Holger Klein, Noémie Jaquier, Andre Meixner, Tamim Asfour.

IROS Best RoboCup Paper Award sponsored by RoboCup Federation

Winner

  • Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance, by Hongzhan Yu, Chiaki Hirayama, Chenning Yu, Sylvia Herbert, Sicun Gao.

Finalists

  • Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation, by Fabian Clemens Weigend, Shubham Sonawani, Michael Drolet, Heni Ben Amor.
  • Effectively Rearranging Heterogeneous Objects on Cluttered Tabletops, by Kai Gao, Justin Yu, Tanay Sandeep Punjabi, Jingjin Yu.
  • Prioritized Planning for Target-Oriented Manipulation via Hierarchical Stacking Relationship Prediction, by Zewen Wu, Jian Tang, Xingyu Chen, Chengzhong Ma, Xuguang Lan, Nanning Zheng.

IROS Best Paper Award on Robot Mechanisms and Design sponsored by ROBOTIS

Winner

  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.

Finalists

  • Hybrid Tendon and Ball Chain Continuum Robots for Enhanced Dexterity in Medical Interventions, by Giovanni Pittiglio, Margherita Mencattelli, Abdulhamit Donder, Yash Chitalia, Pierre Dupont.
  • c^2: Co-design of Robots via Concurrent-Network Coupling Online and Offline Reinforcement Learning, by Ci Chen, Pingyu Xiang, Haojian Lu, Yue Wang, Rong Xiong.
  • Collision-Free Reconfiguration Planning for Variable Topology Trusses using a Linking Invariant, by Alexander Spinos, Mark Yim.
  • eViper: A Scalable Platform for Untethered Modular Soft Robots, by Hsin Cheng, Zhiwu Zheng, Prakhar Kumar, Wali Afridi, Ben Kim, Sigurd Wagner, Naveen Verma, James Sturm, Minjie Chen.

IROS Best Paper Award on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi sponsored by IRSI

Winner

  • mCLARI: A Shape-Morphing Insect-Scale Robot Capable of Omnidirectional Terrain-Adaptive Locomotion, by Heiko Dieter Kabutz, Alexander Hedrick, William Parker McDonnell, Kaushik Jayaram.

Finalists

  • Towards Legged Locomotion on Steep Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Global Localization in Unstructured Environments using Semantic Object Maps Built from Various Viewpoints, by Jacqueline Ankenbauer, Parker C. Lusk, Jonathan How.
  • EELS: Towards Autonomous Mobility in Extreme Environments with a Novel Large-Scale Screw Driven Snake Robot, by Rohan Thakker, Michael Paton, Marlin Polo Strub, Michael Swan, Guglielmo Daddi, Rob Royce, Matthew Gildner, Tiago Vaquero, Phillipe Tosi, Marcel Veismann, Peter Gavrilov, Eloise Marteau, Joseph Bowkett, Daniel Loret de Mola Lemus, Yashwanth Kumar Nakka, Benjamin Hockman, Andrew Orekhov, Tristan Hasseler, Carl Leake, Benjamin Nuernberger, Pedro F. Proença, William Reid, William Talbot, Nikola Georgiev, Torkom Pailevanian, Avak Archanian, Eric Ambrose, Jay Jasper, Rachel Etheredge, Christiahn Roman, Daniel S Levine, Kyohei Otsu, Hovhannes Melikyan, Richard Rieber, Kalind Carpenter, Jeremy Nash, Abhinandan Jain, Lori Shiraishi, Ali-akbar Agha-mohammadi, Matthew Travers, Howie Choset, Joel Burdick, Masahiro Ono.
  • Multi-IMU Proprioceptive Odometry for Legged Robots, by Shuo Yang, Zixin Zhang, Benjamin Bokser, Zachary Manchester.

Making rad maps with robot dogs

In 2013, researchers carried a Microsoft Kinect camera through houses in Japan's Fukushima Prefecture. The device's infrared light traced the contours of the buildings, making a rough 3D map. On top of this, the team layered information from an early version of a hand-held gamma-ray imager, displaying the otherwise invisible nuclear radiation from the Fukushima Daiichi Nuclear Power Plant accident.

Could robots control whips? Researchers test the extremes of human motor control to advance robotics

On any given day, Richards Hall on Northeastern University's Boston campus is filled with the sound of students' shuffling feet or energetic class discussions, but this week you might have heard something else: a whip cracking.

Sam Bankman-Fried Criminal Trial: What Really Happened?

Sam Bankman-Fried, the co-founder and former CEO of the cryptocurrency exchange FTX, is currently on trial in the United States on charges of wire fraud, securities fraud, and money laundering. The trial is expected to last several weeks, and Bankman-Fried faces a maximum sentence of 110 years in prison if convicted. The full story behind...

