
Tamim Asfour’s Keynote talk – Learning humanoid manipulation from humans

Through manipulation, a robotic system can physically change the state of the world. This capability is intertwined with intelligence: the ability of a system to detect and adapt to change. In his talk, Tamim Asfour gives an overview of the developments in robot manipulation made in his lab by learning manipulation task models from human observation, and discusses the challenges and open questions associated with this approach.

Bio: Tamim Asfour is a full Professor at the Institute for Anthropomatics and Robotics at the Karlsruhe Institute of Technology (KIT), where he holds the chair of Humanoid Robotics Systems and heads the High Performance Humanoid Technologies Lab (H2T). His current research interest is high-performance 24/7 humanoid robotics. Specifically, his research focuses on engineering humanoid robot systems that integrate the mechano-informatics of such systems with the capabilities of predicting, acting and learning from human demonstration and sensorimotor experience. He is the developer and leader of the development team of the ARMAR humanoid robot family.

Interview with Huy Ha and Shuran Song: CoRL 2021 best system paper award winners

Congratulations to Huy Ha and Shuran Song who have won the CoRL 2021 best system paper award!

Their work, FlingBot: the unreasonable effectiveness of dynamic manipulations for cloth unfolding, was highly praised by the judging committee. “To me, this paper constitutes the most impressive account of both simulated and real-world cloth manipulation to date,” commented one of the reviewers.

Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

In my most recent publication with my advisor, Professor Shuran Song, we studied the task of cloth unfolding. The goal of the task is to manipulate a cloth from a crumpled initial state to an unfolded state, which is equivalent to maximizing the coverage of the cloth on the workspace.
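The coverage objective described above can be made concrete with a small example. This is only an illustrative sketch (the function name, the grid representation, and the normalization are assumptions, not necessarily the paper’s exact implementation): coverage is measured as the cloth’s visible area relative to its area when fully flattened, so 1.0 means fully unfolded.

```python
def coverage(cloth_mask, flattened_area):
    """Unfolding progress: visible cloth area relative to the area of the
    same cloth when fully flattened (1.0 means fully unfolded)."""
    visible_area = sum(sum(row) for row in cloth_mask)  # occupied cells
    return visible_area / flattened_area

# A 4x4 workspace grid where the crumpled cloth currently covers 6 cells,
# but would cover 12 cells if fully flattened.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(coverage(mask, 12))  # 0.5
```

In practice the mask would come from a segmented top-down camera image rather than a hand-written grid, but the objective being maximized is the same ratio.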

Could you tell us about the implications of your research and why it is an interesting area for study?

Historically, most robotic manipulation research topics, such as grasp planning, have been concerned with rigid objects, which have only 6 degrees of freedom since their geometry does not change. This allows one to apply the typical state estimation – task & motion planning pipeline in robotics. In contrast, deformable objects can bend and stretch in arbitrary directions, leading to infinitely many degrees of freedom. It’s unclear what the state of the cloth should even be. In addition, deformable objects such as clothes can experience severe self-occlusion – given a crumpled piece of cloth, it’s difficult to identify whether it’s a shirt, a jacket, or a pair of pants. Therefore, cloth unfolding is the typical first step of cloth manipulation pipelines, since it reveals key features of the cloth for downstream perception and manipulation.

Despite the abundance of sophisticated methods for cloth unfolding proposed over the years, they typically only address the easy case (where the cloth already starts off mostly unfolded) or take upwards of a hundred steps for challenging cases. These prior works all use single-arm quasi-static actions, such as pick and place, which are slow and limited by the physical reach range of the system.

Could you explain your methodology?

In our daily lives, humans typically use both hands to manipulate cloths, and with as little as a single high velocity fling or two, we can unfold an initially crumpled cloth. Based on this observation, our key idea is simple: Use dual arm dynamic actions for cloth unfolding.

FlingBot is a self-supervised framework for cloth unfolding which uses a pick, stretch, and fling primitive for a dual-arm setup from visual observations. There are three key components to our approach. First is the decision to use a high-velocity dynamic action. By relying on the cloth’s mass, combined with a high-velocity throw, to do most of the work, a dynamic flinging policy can unfold cloths much more efficiently than a quasi-static policy. Second is a dual-arm grasp parameterization which makes satisfying collision safety constraints easy. By treating a dual-arm grasp not as two points but as a line with a rotation and length, we can directly constrain the rotation and length of the line to ensure the arms do not cross over each other and do not try to grasp too close to each other. Third is our choice of Spatial Action Maps, which learns translationally, rotationally, and scale-equivariant value maps and allows for sample-efficient learning.
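The grasp-line idea above can be sketched in a few lines of code. Everything here is illustrative (the function name, the numeric limits, and the clamping strategy are assumptions, not values from the paper): the point is that a grasp expressed as a center, rotation, and length can be constrained directly before being converted back into two gripper positions.

```python
import math

# Illustrative limits only, not the paper's actual values.
MIN_WIDTH, MAX_WIDTH = 0.05, 0.60  # metres between the two grippers
MAX_ROTATION = math.pi / 2         # keeps the left arm on the left side

def line_to_grasp_points(cx, cy, theta, width):
    """Convert a (center, rotation, length) grasp line into the two
    gripper positions, clamping rotation and length so the arms never
    cross over each other or grasp too close together."""
    width = min(max(width, MIN_WIDTH), MAX_WIDTH)
    theta = min(max(theta, -MAX_ROTATION), MAX_ROTATION)
    dx = 0.5 * width * math.cos(theta)
    dy = 0.5 * width * math.sin(theta)
    left = (cx - dx, cy - dy)
    right = (cx + dx, cy + dy)
    return left, right

left, right = line_to_grasp_points(0.0, 0.3, 0.0, 0.4)
print(left, right)  # (-0.2, 0.3) (0.2, 0.3)
```

Because the constraints act on two scalars (rotation and length) rather than on a pair of free 2D points, any sampled action is safe by construction, which is what makes the parameterization convenient for learning.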

What were your main findings?

We found that dynamic actions have three desirable properties over quasi-static actions for the task of cloth unfolding. First, they are efficient – FlingBot achieves over 80% coverage within 3 actions on novel cloths. Second, they are generalizable – trained on only square cloths, FlingBot also generalizes to T-shirts. Third, they expand the system’s effective reach range – even when FlingBot can’t fully lift or stretch a cloth larger than the system’s physical reach range, it’s able to use high velocity flings to unfold the cloth.

After training and evaluating our model in simulation, we deployed and fine-tuned it on a real-world dual-arm system, which achieved above 80% coverage for all cloth categories. Meanwhile, the quasi-static pick & place baseline was only able to achieve around 40% coverage.

What further work are you planning in this area?

Although we motivated cloth unfolding as a precursor for downstream modules such as cloth state estimation, unfolding could also benefit from state estimation. For instance, if the system is confident it has identified the shoulders of the shirt in its state estimation, the unfolding policy could directly grasp the shoulders and unfold the shirt in one step. Based on this observation, we are currently working on a cloth unfolding and state estimation approach which can learn in a self-supervised manner in the real world.


About the authors

Huy Ha is a Ph.D. student in Computer Science at Columbia University. He is advised by Professor Shuran Song and is a member of the Columbia Artificial Intelligence and Robotics (CAIR) lab.

Shuran Song is an assistant professor in the computer science department at Columbia University, where she directs the Columbia Artificial Intelligence and Robotics (CAIR) Lab. Her research focuses on computer vision and robotics. She’s interested in developing algorithms that enable intelligent systems to learn from their interactions with the physical world, and autonomously acquire the perception and manipulation skills necessary to execute complex tasks and assist people.


Find out more

  • Read the paper on arXiv.
  • The videos of the real-world experiments and code are available here, as is a video of the authors’ presentation at CoRL.
  • Read more about the winning and shortlisted papers for the CoRL awards here.

Top tweets from the Conference on Robot Learning #CoRL2021

The Conference on Robot Learning (CoRL) is an annual international conference focused on the intersection of robotics and machine learning. The fifth edition took place last week in London and virtually around the globe. Apart from the novelty of being a hybrid conference, this year the focus was on openness. OpenReview was used for the peer-review process, meaning that the reviewers’ comments and the authors’ replies are public for anyone to see. The research community suggests that open review could encourage mutual trust, respect, and openness to criticism; enable constructive and efficient quality assurance; increase transparency and accountability; facilitate wider and more inclusive discussion; and give reviewers recognition and make reviews citable [1]. You can access all CoRL 2021 papers and their corresponding reviews here. In addition, you may want to listen to all the presentations, available on the conference YouTube channel.

In this post we bring you a glimpse of the conference through the most popular tweets written last week: cool robot demos, short and sweet explanations of papers, and award finalists – plus a look forward to next year’s edition in New Zealand. Enjoy!

Robots, robots, robots!

Papers and presentations

Awards

References

We are delighted to announce the launch of Scicomm – a joint science communication project from Robohub and AIhub

Scicomm.io is a science communication project which aims to empower people to share stories about their robotics and AI work. The project is a joint effort from Robohub and AIhub, both of which are educational platforms dedicated to connecting the robotics and AI communities to the rest of the world.

This project focuses on training the next generation of communicators in robotics and AI to build a strong connection with the outside world, by providing effective communication tools.

People working in the field are developing an enormous array of systems and technologies. However, due to a relative lack of high-quality, impartial information in the mainstream media, the general public receives a lot of hyped news, which ends up causing fear and/or unrealistic expectations surrounding these technologies.

Scicomm.io has been created to facilitate the connection between the robotics and AI world and the rest of the world by teaching how to establish truthful, honest and hype-free communication – one that benefits both sides.

Scicomm bytes

With our series of bite-sized videos you can quickly learn about science communication for robotics and AI. Find out why science communication is important, how to talk to the media, and about some of the different ways in which you can communicate your work. We have also produced guides with tips for turning your research into a blog post and for avoiding hype when promoting your research.

Training

Training the next generation of science communicators is an important mission for scicomm.io (and indeed Robohub and AIhub). As part of scicomm.io, we run training courses to empower researchers to communicate about their work. When done well, stories about AI and robotics can help increase the visibility and impact of the work, lead to new connections, and even raise funds. However, most researchers don’t engage in science communication due to a lack of skills, time, and the reach that would make the effort worthwhile.

With our workshops we aim to overcome these barriers and make communicating robotics and AI ‘easy’. This is done through short training sessions with experts, and hands-on practical exercises to help students begin their science communication journey with confidence.

A virtual scicomm workshop in action.

During the workshops, participants will hear why science communication matters, learn the basic techniques of science communication, build a story around their own research, and find out how to connect with journalists and other communicators. We’ll also discuss different science communication media, how to use social media, how to prepare blog posts, videos and press releases, how to avoid hype, and how to communicate work to a general audience.

For more information about our workshops, contact the team by email.

Find out more about the scicomm.io project here.

IEEE 17th International Conference on Automation Science and Engineering paper awards (with videos)

The IEEE International Conference on Automation Science and Engineering (CASE) is the flagship automation conference of the IEEE Robotics and Automation Society and constitutes the primary forum for cross-industry and multidisciplinary research in automation. Its goal is to provide broad coverage and dissemination of foundational research in automation among researchers, academics, and practitioners. Here we bring you the online presentations by the finalists of the four awards given at the conference. Congratulations to all the finalists and winners!

Best student paper award

Winner

  • Designing a User-Centred and Data-Driven Controller for Pushrim-Activated Power-Assisted Wheels: A Case Study
    Mahsa Khalili, H.F. Machiel Van der Loos and Jaimie Borisoff

Finalists

  • Including Sparse Production Knowledge into Variational Autoencoders to Increase Anomaly Detection Reliability
    Tom Hammerbacher, Markus Lange-Hegermann, Gorden Platz

  • Synthesis and Implementation of Distributed Supervisory Controllers with Communication Delays
    Lars Moormann, Reinier Hendrik Jacob Schouten, Joanna Maria Van de Mortel-Fronczak, Wan Fokkink, Jacobus E. Rooda

  • Optimal Planning of Internet Data Centers Decarbonized by Hydrogen-Water-Based Energy Systems
    Jinhui Liu, Zhanbo Xu, Jiang Wu, Kun Liu, Xunhang Sun, Xiaohong Guan

  • Deep Reinforcement Learning for Prefab Assembly Planning in Robot-Based Prefabricated Construction
    Zhu Aiyu, Gangyan Xu, Pieter Pauwels, Bauke de Vries, Meng Fang

  • Singularity-Aware Motion Planning for Multi-Axis Additive Manufacturing
    Charlie C.L. Wang, Tianyu Zhang, Xiangjia Chen, Guoxin Fang, Yingjun Tian

Best conference paper award

Winner

  • Extended Fabrication-Aware Convolution Learning Framework for Predicting 3D Shape Deformation in Additive Manufacturing
    Yuanxiang Wang, Cesar Ruiz, Qiang Huang

Finalists

  • Probabilistic Movement Primitive Control Via Control Barrier Functions
    Mohammadreza Davoodi, Asif Iqbal, Joe Cloud, William Beksi, Nicholas Gans

  • Efficient Optimization-Based Falsification of Cyber-Physical Systems with Multiple Conjunctive Requirements
    Logan Mathesen, Giulia Pedrielli, Georgios Fainekos

Best application paper award

Winner

  • A Seamless Workflow for Design and Fabrication of Multimaterial Pneumatic Soft Actuators
    Lawrence Smith, Travis Hainsworth, Zachary Jordan, Xavier Bell, Robert MacCurdy

Finalists

  • Dynamic Multi-Goal Motion Planning with Range Constraints for Autonomous Underwater Vehicles Following Surface Vehicles
    James McMahon, Erion Plaku

  • OpenUAV Cloud Testbed: a Collaborative Design Studio for Field Robotics
    Harish Anand, Stephen A. Rees, Zhiang Chen, Ashwin Jose Poruthukaran, Sarah Bearman, Lakshmi Gana Prasad Antervedi, Jnaneshwar Das

Best healthcare automation paper award

Winner

  • Hospital Beds Planning and Admission Control Policies for COVID-19 Pandemic: A Hybrid Computer Simulation Approach
    Yiruo Lu, Yongpei Guan, Xiang Zhong, Jennifer Fishe, Thanh Hogan

Finalists

  • Rollout-Based Gantry Call-Back Control for Proton Therapy Systems
    Feifan Wang, Yu-Li Huang, Feng Ju

  • Progress in Development of an Automated Mosquito Salivary Gland Extractor: A Step Forward to Malaria Vaccine Mass Production
    Wanze Li, Zhuoqun Zhang, Zhuohong He, Parth Vora, Alan Lai, Balazs Vagvolgyi, Simon Leonard, Anna Goodridge, Ioan Iulian Iordachita, Stephen L. Hoffman, Sumana Chakravarty, B Kim Lee Sim, Russell H. Taylor

Robotics Today latest talks – Raia Hadsell (DeepMind), Koushil Sreenath (UC Berkeley) and Antonio Bicchi (Istituto Italiano di Tecnologia)

Robotics Today has held three more online talks since we published the one from Amanda Prorok (Learning to Communicate in Multi-Agent Systems). In this post we bring you the latest talks that Robotics Today (currently on hiatus) uploaded to its YouTube channel: Raia Hadsell from DeepMind on ‘Scalable Robot Learning in Rich Environments’, Koushil Sreenath from UC Berkeley on ‘Safety-Critical Control for Dynamic Robots’, and Antonio Bicchi from the Istituto Italiano di Tecnologia on ‘Planning and Learning Interaction with Variable Impedance’.

Raia Hadsell (DeepMind) – Scalable Robot Learning in Rich Environments

Abstract: As modern machine learning methods push towards breakthroughs in controlling physical systems, games and simple physical simulations are often used as the main benchmark domains. As the field matures, it is important to develop more sophisticated learning systems with the aim of solving more complex real-world tasks, but problems like catastrophic forgetting and data efficiency remain critical, particularly for robotic domains. This talk will cover some of the challenges that exist for learning from interactions in more complex, constrained, and real-world settings, and some promising new approaches that have emerged.

Bio: Raia Hadsell is the Director of Robotics at DeepMind. Dr. Hadsell joined DeepMind in 2014 to pursue new solutions for artificial general intelligence. Her research focuses on the challenge of continual learning for AI agents and robots, and she has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting. Dr. Hadsell is on the executive boards of ICLR (International Conference on Learning Representations), WiML (Women in Machine Learning), and CoRL (Conference on Robot Learning). She is a fellow of the European Lab on Learning Systems (ELLIS), a founding organizer of NAISys (Neuroscience for AI Systems), and serves as a CIFAR advisor.

Koushil Sreenath (UC Berkeley) – Safety-Critical Control for Dynamic Robots: A Model-based and Data-driven Approach

Abstract: Model-based controllers can be designed to provide guarantees on stability and safety for dynamical systems. In this talk, I will show how we can address the challenges of stability through control Lyapunov functions (CLFs), input and state constraints through CLF-based quadratic programs, and safety-critical constraints through control barrier functions (CBFs). However, the performance of model-based controllers is dependent on having a precise model of the system. Model uncertainty could lead not only to poor performance but could also destabilize the system as well as violate safety constraints. I will present recent results on using model-based control along with data-driven methods to address stability and safety for systems with uncertain dynamics. In particular, I will show how reinforcement learning as well as Gaussian process regression can be used along with CLF and CBF-based control to address the adverse effects of model uncertainty.
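The CBF-based quadratic program mentioned in the abstract minimally modifies a desired control input so that a safety constraint keeps holding. For a single scalar input the QP has a simple closed form, which the sketch below illustrates; the symbols `a` and `b` stand in for the Lie-derivative terms of a barrier function h (b = Lf h + alpha·h, a = Lg h), and all numeric values are illustrative, not taken from the talk.

```python
def cbf_safety_filter(u_des, a, b):
    """Scalar-input case of the CBF quadratic program:
        minimize (u - u_des)^2   subject to   a*u + b >= 0.
    Returns the desired control unchanged when it is already safe,
    otherwise projects it onto the boundary of the safe set."""
    if a * u_des + b >= 0:  # desired control already satisfies the constraint
        return u_des
    if a == 0:
        raise ValueError("constraint is infeasible: no input can satisfy it")
    return -b / a           # closest safe input: the constraint boundary

print(cbf_safety_filter(2.0, a=1.0, b=1.0))   # 2.0 (already safe, untouched)
print(cbf_safety_filter(-3.0, a=1.0, b=1.0))  # -1.0 (clamped to the boundary)
```

With vector inputs the same idea becomes a small quadratic program solved at every control step, typically stacked with CLF stability constraints as described in the talk.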

Bio: Koushil Sreenath is an Assistant Professor of Mechanical Engineering at UC Berkeley. He received a Ph.D. degree in Electrical Engineering and Computer Science and an M.S. degree in Applied Mathematics from the University of Michigan at Ann Arbor, MI, in 2011. He was a Postdoctoral Scholar at the GRASP Lab at the University of Pennsylvania from 2011 to 2013 and an Assistant Professor at Carnegie Mellon University from 2013 to 2017. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion was featured on The Discovery Channel, CNN, ESPN, FOX, and CBS. His work on dynamic aerial manipulation was featured in IEEE Spectrum, New Scientist, and the Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book. He received the NSF CAREER award, a Hellman Fellowship, a Best Paper Award at Robotics: Science and Systems (RSS), and the Google Faculty Research Award in Robotics.

Antonio Bicchi (Istituto Italiano di Tecnologia) – Planning and Learning Interaction with Variable Impedance

Abstract: In animals and in humans, the mechanical impedance of the limbs changes not only depending on the task, but also during different phases of the execution of a task. Part of this variability is intentionally controlled, by co-activating muscles, by changing the arm posture, or both. In robots, impedance can be varied by varying controller gains, the stiffness of hardware parts, and arm postures. The choice of impedance profiles to be applied can be planned offline, or varied in real time based on feedback from the environmental interaction. Planning and control of variable impedance can use insight from human observations, from mathematical optimization methods, or from learning. In this talk I will review the basics of human and robot variable impedance, and discuss how this impacts applications ranging from industrial and service robotics to prosthetics and rehabilitation.

Bio: Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving a Ph.D. from the University of Bologna, he spent a few years at the MIT AI Lab in Cambridge before becoming Professor in Robotics at the University of Pisa. In 2009 he founded the Soft Robotics Laboratory at the Italian Institute of Technology in Genoa. Since 2013 he has been Adjunct Professor at Arizona State University, Tempe, AZ. He has coordinated many international projects, including four grants from the European Research Council (ERC). He has served the research community in several ways, including by launching the WorldHaptics conference and the IEEE Robotics and Automation Letters. He is currently the President of the Italian Institute of Robotics and Intelligent Machines. He has authored over 500 scientific papers, cited more than 25,000 times. He has supervised over 60 doctoral students and more than 20 postdocs, most of whom are now professors at universities and international research centers, or have launched their own spin-off companies. His students have received prestigious awards, including three first prizes and two nominations for the best theses in Europe on robotics and haptics. He has been a Fellow of the IEEE since 2005. In 2018 he received the prestigious IEEE Saridis Leadership Award.

Robohub gets a fresh look

If you visited Robohub this week, you may have spotted a big change: how this blog looks now! On Tuesday (coinciding, by chance, with Ada Lovelace Day and our ‘50 women in robotics that you need to know about’ list), Robohub’s look got a massive modernisation thanks to our Technical Editor Ioannis K. Erripis and his team.

There are many improvements and new features, but the biggest update, apart from the code, is the design, which is cleaner and simpler looking (especially on the single post view).

Ioannis K. Erripis

This fresher look has recently been tested on our sister project, AIhub.org. As Ioannis says, it offers a cleaner, simpler and more readable way to access the content from the robotics community that we post on this blog. You will also notice that the main page now displays a mosaic of content which is more accessible than before, making it easier to scroll through our site.

Your feedback matters to us: if you find any issue with the new design or have any comment/idea for improvement, please let us know! You can reach us at editors[at]robohub.org.

And there is even more exciting news coming up together with this fresh look! We will soon release a brand new project that we have been developing in the background during this year. Stay tuned!

Online events to look out for on Ada Lovelace Day 2021

On the 12th of October, the world will celebrate Ada Lovelace Day to honor the achievements of women in science, technology, engineering and maths (STEM). After a successful worldwide online celebration of Ada Lovelace Day last year, this year’s celebration returns with a stronger commitment to online inclusion. Finding Ada (the main network supporting Ada Lovelace Day) will host three free webinars that you can enjoy from the comfort of your own home. There will also be loads of events happening around the world, so you have a wide range of content to celebrate Ada Lovelace Day 2021!

Engineering – Solving Problems for Real People

Engineering is the science of problem solving, and we have some pretty big problems in front of us. So how are engineers tackling the COVID-19 pandemic and climate change? And how do they stay focused on the impact of their engineering solutions on people and communities?

In partnership with STEM Wana Trust, we invite you to join Renée Young, associate mechanical engineer at Beca, Victoria Clark, senior environmental engineer at Beca, Natasha Mudaliar, operations manager at Reliance Reinforcing, and Sujata Roy, system planning engineer at Transpower, for a fascinating conversation about the challenges and opportunities of engineering.

13:00 NZST, 12 Oct: Perfect for people in New Zealand, Australia, and the Americas. (Note for American audiences: This panel will be on Monday for you.)

Register here, and find out about the speakers here.

Fusing Tech & Art in Games

The Technical Artist is a new kind of role in the games industry, but the possibilities for those who create and merge art and technology are endless. So what is tech art? And how are tech artists pushing the boundaries and creating new experiences for players?

Ada Lovelace Day and Ukie’s #RaiseTheGame invite you to join tech artist Kristrun Fridriksdottir, Jodie Azhar, technical art director at Silver Rain Games, Emma Roseburgh from Avalanche Studios, and Laurène Hurlin from Pixel Toys for our tech art webinar.

13:00 BST, 12 Oct: Perfect for people in the UK, Europe, Africa, Middle East, India, for early birds in the Americas and night owls in AsiaPacific.

Register here, and find out about the speakers here.

The Science of Hypersleep

Hypersleep is a common theme in science fiction, but what does science have to say about putting humans into suspended animation? What can we learn from hibernating animals? What’s the difference between hibernation and sleep? What health impacts would extended hypersleep have?

Ada Lovelace Day and the Arthur C. Clarke Award invite you to join science fiction author Anne Charnock, Prof Gina Poe, an expert on the relationship between sleep and memory, Dr Anusha Shankar, who studies torpor in hummingbirds, and Prof Kelly Drew, who studies hibernation in squirrels, for a discussion of whether hypersleep in humans is possible.

19:00 BST, 12 Oct: Perfect for people in the UK, Europe, Africa, and the Americas.

Register here, and find out about the speakers here.

Other worldwide events

Apart from the three webinars above, many other organisations will hold their own events to celebrate the day. From a 24-hour global edit-a-thon (The Pankhurst Centre) to a digital theatre play (STEM on Stage) to an online machine learning breakfast (Square Women Engineers + Allies Australia), plus several talks and panel discussions like this one on how you can change the world with the help of physics (Founders4Schools), or this other one on inspiring women and girls in STEAM (Engine Shed), you have plenty of options to choose from.

For a full overview of international events, check out this website.

We also hope that you enjoy reading our annual list of women in robotics that you need to know about, which will be released on the day. Happy Ada Lovelace Day 2021!


Real Roboticist focus series #6: Dennis Hong (Making People Happy)

In this final video of our focus series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist, you’ll meet Dennis Hong speaking about the robots he and his team have created (locomotion and new ways of moving; an autonomous car for the visually impaired; disaster relief robots), Star Wars and cooking. All in all, ingredients from different worlds that Dennis is using to benefit society.

Dennis Hong is a Professor and the Founding Director of RoMeLa (Robotics & Mechanisms Laboratory) of the Mechanical & Aerospace Engineering Department at UCLA. If you’d like to find out more about how Star Wars influenced his professional career in robotics, how his experience taking a cooking assistant robot to MasterChef USA inspired a multi-million research project, and all the robots he is creating, check out his video below!

Synergies between automation and robotics

In this IEEE ICRA 2021 plenary panel aimed at the younger generation of roboticists and automation experts, panelists Seth Hutchinson, Maria Pia Fanti, Peter B. Luh, Pieter Abbeel, Kaneko Harada, Michael Y. Wang, Kevin Lynch, Chinwe Ekenna, Animesh Garg and Frank Park, under the moderation of Ken Goldberg, discussed how to close the gap between the two disciplines, which have many topics in common. The panel was organised by the Ad Hoc Committee to Explore Synergies in Automation and Robotics (CESAR).

As the IEEE Robotics and Automation Society (IEEE RAS) explains, “robotics and automation have always been siblings. They are similar in many ways and have substantial overlap in topics and research communities, but there are also differences: many RAS members view them as disjoint and consider themselves purely in robotics or purely in automation. This committee’s goal is to reconsider these perceptions and think about ways we can bring these communities closer.”

#IROS2020 ‘Black in Robotics’ special series

Apart from the IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist that we have been featuring in recent weeks, another series of three videos was produced together with Black in Robotics, with the support of the Toyota Research Institute. In this series, Black roboticists give personal examples of why diversity matters in robotics, showcase their research and explain what made them build a career in robotics.

Here’s a list of all the speakers and organisations who took part in the videos:

  • Ariel Anders – Roboticist at Robust.AI
  • Allison Okamura – Professor of Mechanical Engineering at Stanford University
  • Alivia Blount – Data Scientist
  • Anthony Jules – Co-founder and COO at Robust.AI
  • Andra Keay – Robotics Industry Futurist, Managing Director of Silicon Valley Robotics and Core Team Member of Robohub
  • Carlotta A. Berry – Professor of Electrical and Computer Engineering at Rose-Hulman Institute of Technology
  • Donna Auguste – Entrepreneur and Data Scientist
  • Clinton Enwerem – Robotics Trainee from the Robotics & Artificial Intelligence Nigeria (RAIN) team
  • Quentin Sanders – Postdoctoral Research Fellow at North Carolina State University
  • George Okoroafor – Robotics Research Engineer from the Robotics & Artificial Intelligence Nigeria (RAIN) team
  • Tatiana Jean-Louis – Amazon & Robotics Geek
  • Patrick Musau – Graduate Research Assistant at Vanderbilt University
  • Melanie Moses – Professor of Computer Science at the University of New Mexico

#IROS2020 Plenary and Keynote talks focus series #6: Jonathan Hurst & Andrea Thomaz

This week you’ll be able to listen to the talks of Jonathan Hurst (Professor of Robotics at Oregon State University, and Chief Technology Officer at Agility Robotics) and Andrea Thomaz (Associate Professor of Robotics at the University of Texas at Austin, and CEO of Diligent Robotics) as part of this series that brings you the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems). Jonathan’s talk is on the topic of humanoids, while Andrea’s is about human-robot interaction.

Prof. Jonathan Hurst – Design Contact Dynamics in Advance

Bio: Jonathan W. Hurst is Chief Technology Officer and co-founder of Agility Robotics, and Professor and co-founder of the Oregon State University Robotics Institute. He holds a B.S. in mechanical engineering and an M.S. and Ph.D. in robotics, all from Carnegie Mellon University. His university research focuses on understanding the fundamental science and engineering best practices for robotic legged locomotion and physical interaction. Agility Robotics is bringing this new robotic mobility to market, solving problems for customers, working towards a day when robots can go where people go, generate greater productivity across the economy, and improve quality of life for all.

Prof. Andrea Thomaz – Human + Robot Teams: From Theory to Practice

Bio: Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics and a renowned social robotics expert. Her accolades include being recognized by the National Academy of Sciences as a Kavli Fellow and by the US President’s Council of Advisors on Science and Technology, being named to MIT Technology Review’s Next Generation of 35 Innovators Under 35 list, Popular Science’s Brilliant 10 list and Texas Monthly’s Most Powerful Texans of 2018 list, and being a featured TEDx keynote speaker on social robotics.

Andrea’s robots have been featured in the New York Times and on the covers of MIT Technology Review and Popular Science. Her passion for social robotics began during her work at the MIT Media Lab, where she focused on using AI to develop machines that address everyday human needs. Andrea co-founded Diligent Robotics to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores so humans can have more time for the work they care most about. She earned her Ph.D. from MIT and B.S. in Electrical and Computer Engineering from UT Austin, and was a Robotics Professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab).

Andrea is published in the areas of Artificial Intelligence, Robotics, and Human-Robot Interaction. Her research aims to computationally model mechanisms of human social learning in order to build social robots and other machines that are intuitive for everyday people to teach.

Andrea received an NSF CAREER award in 2010 and an Office of Naval Research Young Investigator Award in 2008. In addition, Diligent Robotics’ robot Moxi has been featured on NBC Nightly News and, most recently, in National Geographic’s “The robot revolution has arrived”.

#IROS2020 Real Roboticist focus series #5: Michelle Johnson (Robots That Matter)

We’re reaching the end of this focus series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. This week you’ll meet Michelle Johnson, Associate Professor of Physical Medicine and Rehabilitation at the University of Pennsylvania.

Michelle is also the Director of the Rehabilitation Robotics Lab at the University of Pennsylvania, whose aim is to use rehabilitation robotics and neuroscience to investigate brain plasticity and motor function after non-traumatic brain injuries, for example in stroke survivors or persons diagnosed with cerebral palsy. If you’d like to know more about her professional journey, her work with affordable robots for low/middle income countries and her next frontier in robotics, among many more things, check out her video below!
