
International conference on intelligent robots and systems (IROS)

This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available from the platform on demand, with access available from 25 October to 25 November 2020.

IROS conferences have traditionally had a theme, and this year is no different: the emphasis is on “consumer robotics and our future”. You can sign up here.

Plenaries

IROS will feature three plenary talks. The speakers and topics are as follows:

  • Danica Kragic Robotics and artificial intelligence impacts on the fashion industry
  • Cynthia Breazeal Living with social robots: from research to commercialization and back
  • Yukie Nagai Cognitive development in humans and robots: new insights into intelligence

Keynote speakers

There are also nine keynote talks covering a range of topics.

  • Frank Dellaert Air, sea, and space robots
  • Anya Petrovskaya Driverless vehicles and field robots
  • Ashish Deshpande Rehabilitation robotics
  • Jonathan Hurst Humanoids
  • I-Ming Chen Food handling robotics
  • Steve LaValle Perception, action and control
  • Nikolaus Correll Grasping, haptics and end-effectors
  • Andrea Thomaz Human-robot interaction
  • Sarah Bergbreiter Design, micro and bio-inspired robotics

Technical talks

The technical talks have been divided into 12 topic areas.

  • Air, sea, and space robots
  • Driverless vehicles and field robots
  • Medical, cellular, micro and nano robots
  • Humanoids, exoskeletons, and rehab robots
  • Localization, mapping and navigation
  • Dynamics, control and learning
  • Design, mechanisms, actuators, soft and bio-inspired robots
  • Perception, action, and cognition
  • Grasping, haptics and end-effectors
  • Human-robot interaction, teleoperation, and virtual reality
  • Swarms and multi-robots
  • Industry 4.0

Each talk will feature a digest slide, a pre-recorded video presentation, and the paper’s PDF. These will be available from 25 October, so keep an eye on the website.

Workshops

There are a whopping 35 workshops to choose from. These have on-demand content as well as live sessions (dates vary, so visit the webpages below for more information about each specific workshop).

  1. 3rd workshop on proximity perception in robotics: towards multi-modal cognition, Stefan Escaida Navarro*, Stephan Mühlbacher-Karrer, Hubert Zangl, Keisuke Koyama, Björn Hein, Ulrike Thomas, Hosam Alagi, Yitao Ding, Christian Stetco
  2. Bringing geometric methods to robot learning, optimization and control, Noémie Jaquier*, Leonel Rozo, Søren Hauberg, Hans-Peter Schröcker, Suvrit Sra
  3. 12th IROS20 workshop on planning, perception and navigation for intelligent vehicles, Philippe Martinet*, Christian Laugier, Marcelo H Ang Jr, Denis Fernando Wolf
  4. Robot-assisted training for primary care: how can robots help train doctors in medical examinations?, Thrishantha Nanayakkara*, Florence Ching Ying Leong, Thilina Dulantha Lalitharatne, Liang He, Fumiya Iida, Luca Scimeca, Simon Hauser, Josie Hughes, Perla Maiolino
  5. Workshop on animal-robot interaction, Cesare Stefanini and Donato Romano*
  6. Ergonomic human-robot collaboration: opportunities and challenges, Wansoo Kim*, Luka Peternel, Arash Ajoudani, Eiichi Yoshida
  7. New advances in soft robots control, Concepción A. Monje*, Egidio Falotico, Santiago Martínez de la Casa
  8. Autonomous system in medicine: current challenges in design, modeling, perception, control and applications, Hang Su, Yue Chen*, Jing Guo, Angela Faragasso, Haoyong Yu, Elena De Momi
  9. MIT MiniCheetah workshop, Sangbae Kim*, Patrick M. Wensing, Inhyeok Kim
  10. Workshop on humanitarian robotics, Garrett Clayton*, Raj Madhavan, Lino Marques
  11. Robotics-inspired biology, Nick Gravish*, Kaushik Jayaram, Chen Li, Glenna Clifton, Floris van Breugel
  12. Robots building robots. Digital manufacturing and human-centered automation for building consumer robots, Paolo Dario*, George Q. Huang, Peter Luh, MengChu Zhou
  13. Cognitive robotic surgery, Michael C. Yip, Florian Richter*, Danail Stoyanov, Francisco Vasconcelos, Fanny Ficuciello, Emmanuel B Vander Poorten, Peter Kazanzides, Blake Hannaford, Gregory Scott Fischer
  14. Application-driven soft robotic systems: Translational challenges, Sara Adela Abad Guaman, Lukas Lindenroth, Perla Maiolino, Agostino Stilli*, Kaspar Althoefer, Hongbin Liu, Arianna Menciassi, Thrishantha Nanayakkara, Jamie Paik, Helge Arne Wurdemann
  15. Reliable deployment of machine learning for long-term autonomy, Feras Dayoub*, Tomáš Krajník, Niko Sünderhauf, Ayoung Kim
  16. Robotic in-situ servicing, assembly, and manufacturing, Craig Carignan*, Joshua Vander Hook, Chakravarthini Saaj, Renaud Detry, Giacomo Marani
  17. Benchmarking progress in autonomous driving, Liam Paull*, Andrea Censi, Jacopo Tani, Matthew Walter, Felipe Codevilla, Sahika Genc, Sunil Mallya, Bhairav Mehta
  18. ROMADO: RObotic MAnipulation of Deformable Objects, Miguel Aranda*, Juan Antonio Corrales Ramon, Pablo Gil, Gonzalo Lopez-Nicolas, Helder Araujo, Youcef Mezouar
  19. Perception, learning, and control for autonomous agile vehicles, Giuseppe Loianno*, Davide Scaramuzza, Sertac Karaman
  20. Planetary exploration robots: challenges and opportunities, Hendrik Kolvenbach*, William Reid, Kazuya Yoshida, Richard Volpe
  21. Application-oriented modelling and control of soft robots, Thomas George Thuruthel*, Cosimo Della Santina, Seyedmohammadhadi Sadati, Federico Renda, Cecilia Laschi
  22. State of the art in robotic leg prostheses: where we are and where we want to be, Tommaso Lenzi*, Robert D. Gregg, Elliott Rouse, Joost Geeroms
  23. Workshop on perception, planning and mobility in forestry robotics (WPPMFR 2020), João Filipe Ferreira* and David Portugal
  24. Why robots fail to grasp? – failure ca(u)ses in robot grasping and manipulation, Joao Bimbo*, Dimitrios Kanoulas, Giulia Vezzani, Kensuke Harada
  25. Trends and advances in machine learning and automated reasoning for intelligent robots and systems, Abdelghani Chibani, Craig Schlenoff, Yacine Amirat*, Shiqi Zhang, Jong-Hwan Kim, Ferhat Attal
  26. Learning impedance modulation for physical interaction: insights from humans and advances in robotics, Giuseppe Averta*, Franco Angelini, Meghan Huber, Jongwoo Lee, Manolo Garabini
  27. New horizons of robot learning – from industrial challenges to future capabilities, Kim Daniel Listmann* and Elmar Rueckert
  28. Robots for health and elderly care (RoboHEC), Leon Bodenhagen*, Oskar Palinko, Francois Michaud, Adriana Tapus, Julie Robillard
  29. Wearable SuperLimbs: design, communication, and control, Harry Asada*
  30. Human Movement Understanding for Intelligent Robots and Systems, Emel Demircan*, Taizo Yoshikawa, Philippe Fraisse, Tadej Petric
  31. Construction and architecture robotics, Darwin Lau*, Yunhui Liu, Tobias Bruckmann, Thomas Bock, Stéphane Caro
  32. Mechanisms and design: from inception to realization, Hao Su*, Matei Ciocarlie, Kyu-Jin Cho, Darwin Lau, Claudio Semini, Damiano Zanotto
  33. Bringing constraint-based robot programming to real-world applications, Wilm Decré*, Herman Bruyninckx, Gianni Borghesan, Erwin Aertbelien, Lars Tingelstad, Darwin G. Caldwell, Enrico Mingo, Abderrahmane Kheddar, Pierre Gergondet
  34. Managing deformation: a step towards higher robot autonomy, Jihong Zhu*, Andrea Cherubini, Claire Dune, David Navarro-Alarcon
  35. Social AI for human-robot interaction of human-care service robots, Ho Seok Ahn*, Hyungpil Moon, Minsu Jang, Jongsuk Choi

Robot challenges

Another element of the conference that sounds interesting is the robot challenges. There are three of these, and you should be able to watch the competitions in action next week.

  1. Open cloud robot table organization challenge (OCRTOC). This competition focuses on table organization tasks. Participants will need to organize the objects in the scene according to a target configuration. This competition will be broadcast on 25-27 October.
  2. 8th F1Tenth autonomous Grand Prix @ IROS 2020. This competition will take the form of a virtual race with standardised vehicles and hardware. The qualifying phase is a timed trial. The Grand Prix phase pits virtual competitors against each other on the same track. The race will be broadcast on 27 October.
  3. Robotic grasping and manipulation competition. There are two sections to this competition. In the first the robot has to make five cups of iced Matcha green tea. The second involves disassembly and assembly using a NIST Task Board.

Electronic design tool morphs interactive objects

MorphSensor glasses
An MIT team used MorphSensor to design multiple applications, including a pair of glasses that monitor light absorption to protect eye health. Credits: Photo courtesy of the researchers.

By Rachel Gordon

We’ve come a long way since the first 3D-printed item, an eye wash cup, to now being able to rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids.

While many of these objects can be freely designed and quickly made, adding electronics to embed things like sensors, chips, and tags usually requires designing both separately, making it difficult to create items where the added functions are easily integrated with the form.

Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate an object’s shape and electronic function in one cohesive space, to add existing sensors to early-stage prototypes.

The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.

MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item. 

To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.

“MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.” 

MorphSensor in action 

Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware. 

For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, users would first design or import the 3D face mask model and sensor modules from either MorphSensor’s database or online open-sourced files. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.  

Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit. 

Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.  

To test their system, the team iterated on EarPods for sleep tracking, which only took 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice, by integrating a temperature sensor with the ring geometry. In addition, they manipulated an N95 mask to monitor its substrate contamination, enabling it to alert its user when the mask needs to be replaced.

In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further, where future versions could potentially merge electrical logic of multiple sensor modules together to eliminate redundant components and circuits and save space (or preserve the object form). 

Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium. 

This material is based upon work supported by the National Science Foundation.

Women in Robotics panel celebrating Ada Lovelace Day

We’d like to share the video from our 2020 Ada Lovelace Day celebration of Women in Robotics. The speakers were all on this year’s list, last year’s list, or nominated for next year’s list, and they presented a range of cutting-edge robotics research and commercial products. They are also all representatives of the new organization Black in Robotics, which makes this video doubly powerful. Please enjoy the impactful work of:

Dr Ayanna Howard – Chair of Interactive Computing, Georgia Tech

Dr Carlotta Berry – Professor of Electrical and Computer Engineering at Rose-Hulman Institute of Technology

Angelique Taylor – PhD student in Health Robotics at UCSD and Research Intern at Facebook

Dr Ariel Anders – roboticist and first technical hire at Robust.AI

Moderated by Jasmine Lawrence – Product Manager at X the Moonshot Factory

Follow them on Twitter at @robotsmarts @DRCABerry @Lique_Taylor @Ariel_Anders @EDENsJasmine

Some of the takeaways from the talk were collected by Jasmine Lawrence at the end of the discussion, including the encouragement that you’re never too old to start working in robotics. While some of the panelists knew from an early age that robotics was their passion, for others it was a discovery later in life, particularly as robotics has a fairly small academic footprint compared to its impact in the world.

We also learned that Dr Ayanna Howard has a book available, “Sex, Race and Robots: How to Be Human in the Age of AI”.

Another insight from the panel was that as the only woman in the room, and often the only person of color too, the pressure was on to be mindful of the impact on communities of new technologies, and to represent a diversity of viewpoints. This knowledge has contributed to these amazing women focusing on robotics projects with significant social impact.

And finally, contrary to popular opinion, girls and women can be just as competitive as their male counterparts and really enjoy the experience of robotics competitions, as long as they are treated with respect. That means letting them build and program, not just manage social media.

You can sign up for the Women in Robotics online community here, or the newsletter here. And please enjoy the stories of 2020’s “30 women in robotics you need to know about”, as well as the previous years’ lists!

Robots deciding their next move need help prioritizing

As robots replace humans in dangerous situations such as search and rescue missions, they need to be able to quickly assess and make decisions—to react and adapt like a human being would. Researchers at the University of Illinois at Urbana-Champaign used a model based on the game Capture the Flag to develop a new take on deep reinforcement learning that helps robots evaluate their next move.

A gecko-adhesive gripper for the Astrobee free-flying robot

Robots that can fly autonomously in space, also known as free-flying robots, could soon assist humans in a variety of settings. However, most existing free-flying robots are limited in their ability to grasp and manipulate objects in their surroundings, which may prevent them from being applied on a large scale.

‘Digit’ robot for sale and ready to perform manual labor

Robot maker Agility, a spinoff created by researchers from Oregon State University, has announced that parties interested in purchasing one of its Digit robots can now do so. The human-like robot has been engineered to perform manual labor, such as removing boxes from shelves and loading them onto a truck. The robot can be purchased directly from Agility for $250,000.

#321: Empowering Farmers Through RootAI, with Josh Lessing

In this episode, Abate interviews Josh Lessing, co-founder and CEO of RootAI. At RootAI they are developing a system that tracks data on the farm and autonomously harvests crops using soft grippers and computer vision. Lessing talks about the path they took to build a product with good market fit and how they brought a venture capital backed startup to market.

Josh Lessing

Josh is one of the world’s leading minds on developing robotics and AI systems for the food industry, having previously served as the Director of R&D at Soft Robotics Inc. His current venture, Root AI, is integrating advanced robotics, vision systems and machine perception to automate agriculture. Josh was a Postdoctoral Fellow in Materials Science & Robotics at Harvard University, having earned his Ph.D. studying Biophysics & Physical Chemistry at the Massachusetts Institute of Technology and received an Sc.B. in Chemistry from Brown University.

Links
