
Eight lessons for robotics startups from NRI PI workshop

Research is all about being the first, but commercialization is all about repeatability; not just many times, but every single time. This was one of the key takeaways from the Transitioning Research From Academia to Industry panel during the National Robotics Initiative Foundational Research in Robotics PI Meeting on March 10, 2021. I had the pleasure of moderating a discussion between Lael Odhner, Co-Founder of RightHand Robotics, Andrea Thomaz, Co-Founder/CEO of Diligent Robotics and Associate Professor at UT Austin, and Kel Guerin, Co-Founder/CIO of READY Robotics.

RightHand Robotics, Diligent Robotics and READY Robotics are young robotics startups that have all transitioned from the NSF I-Corps program and SBIR grant funding into venture-backed companies. RightHand Robotics, founded in 2014, is a Boston-based company that specializes in robotic manipulation. It spun out of work performed for the DARPA Autonomous Robotic Manipulation program and has since raised more than $34.3 million from investors that include Maniv Mobility, Playground and Menlo Ventures.

Diligent Robotics is based in Austin, where the team designs and builds robots like Moxi that assist clinical staff with routine activities so they can focus on caring for patients. Diligent Robotics is the youngest of the three, founded in 2017, and has raised $15.8 million so far from investors that include True Ventures and Ubiquity Ventures. Andrea Thomaz maintains her position at UT Austin but has taken leave to focus on Diligent Robotics.

READY Robotics creates solutions that remove the barriers small manufacturers face when adopting robotic automation. Founded in 2016 and headquartered in Columbus, Ohio, the company has raised more than $41.8 million from investors that include Drive Capital and Canaan Capital. READY Robotics enables manufacturers to deploy robots to the factory floor more easily through a patented technology platform that combines an easy-to-use programming interface with plug-and-play hardware, letting small and medium-sized manufacturers become more competitive through the use of industrial robots.

Here are eight key takeaways from the conversation for startups:

  1. Research is primarily about developing a prototype (it works once), whereas commercialization requires a product (it works every time). Robustness and reliability are essential features of whatever you build.
  2. The customer development focus of the I-Corps program speeds up the commercialization process by forcing you into the field to talk face to face with potential customers and deeply explore their issues.
  3. Don’t lead with the robot! Get comfortable talking to people and learn to speak the language your customers use. Your job is to solve their problem, not persuade them to use your technology.
  4. The faster you can deeply embed yourself with your first customers, the faster you gain the critical knowledge that lets you separate your product’s essential features, the ones the majority of your customers will need, from the merely ‘nice-to-have’ features or ‘one-off’ ideas that can be misdirection.
  5. Team building is your biggest challenge, as many roles you will need to hire for are outside your own experience. Conduct preparatory interviews with experts in areas you don’t know, so that you learn what real expertise looks like, what questions to ask and what skill sets to look for.
  6. There is a lack of robotics skill sets in the marketplace, so learn to look for transferable skills from other disciplines.
  7. It is actually easy to get to ‘yes’; the real trick is knowing when to say ‘no’. In other words, don’t create or agree to bad contracts or term sheets just for the sake of getting an agreement, writing it off as a ‘loss leader’. Focus on agreements that make repeatable business sense for your company.
  8. Utilize the resources of your university: accelerators, alumni funds, tech transfer departments, laboratories, experts and testing facilities.

And for robotics startups that don’t have immediate access to universities, robotics clusters can provide similar assistance, from large clusters like RoboValley, MassRobotics in Boston and Silicon Valley Robotics, which offer startup programs, space and prototyping equipment, to smaller clusters that can still provide a connection point to other resources.

Robots4Humanity in next Society, Robots and Us

Speakers in tonight’s Society, Robots and Us at 6pm PST, Tuesday Feb 23, include Henry Evans, mute quadriplegic and founder of Robots4Humanity, and Aaron Edsinger, founder of Hello Robot. We’ll also be talking about robots for people with disabilities with disability advocate Adriana Mallozzi, founder of Puffin Innovations, and Daniel Seita, who is a deaf roboticist. The event is free and open to the public.

As a result of a sudden stroke, Henry Evans went from being a Silicon Valley tech builder to searching for technologies and robots that would improve his life, and the lives of his family and caregivers, as the founder of Robots4Humanity. Since then, Henry has shaved himself with the help of the PR2 robot and spoken on the TED stage with Chad Jenkins via a Suitable Technologies Beam. Now he’s working with Aaron Edsinger and the Stretch robot, a very affordable household robot and teleoperation platform.

We’ll also be hearing from Adriana Mallozzi, disability advocate and founder of Puffin Innovations, a woman-owned assistive technology startup with a diverse team focused on developing solutions that help people with disabilities lead more inclusive and independent lives. The team at Puffin Innovations is dedicated to leveling the playing field for people with disabilities using Smart Assistive Technology (SAT). SAT incorporates Internet of Things connectivity, machine learning, and artificial intelligence to provide maximum access with the greatest of ease. By tailoring everything they do, from user interfaces to their portable, durable, and affordable products, Puffin Innovations aims to provide the much-needed solutions the disabled community has been longing for.

This continues our monthly exploration of Inclusive Robotics from the CITRIS People and Robots Lab at the University of California, in partnership with Silicon Valley Robotics. On January 19, we discussed diversity with guest speakers Dr Michelle Johnson from the GRASP Lab at UPenn, Dr Ariel Anders from Women in Robotics and first technical hire at Robust.ai, Alka Roy from The Responsible Innovation Project, and Kenechukwu C. Mbanisi and Kenya Andrews from Black in Robotics, with discussion moderated by Dr Ken Goldberg, artist, roboticist and Director of the CITRIS People and Robots Lab, and Andra Keay from Silicon Valley Robotics.

You can see the full playlist of all the Society, Robots and Us conversations on the Silicon Valley Robotics YouTube channel.

DOE’s E-ROBOT Prize targets robots for construction and the built environment

Silicon Valley Robotics is pleased to announce that we are a Connector organization for the E-ROBOT Prize and other DOE competitions on the American-Made Network. There is $2 million USD available in up to ten prizes for Phase One of the E-ROBOT Prize, and $10 million USD available in Phase Two. Individuals or teams can sign up for the competition; the online platform offers opportunities to connect with potential team members, as do competition events organized by Connector organizations. Please cite Silicon Valley Robotics as your Connector organization when entering the competition.

Silicon Valley Robotics will be hosting E-ROBOT Prize information and connection events as part of our calendar of networking and Construction Robotics Network events. The first event will be on February 3rd at 7pm PST as part of our monthly robot show-and-tell event “Bots&Beer”, and you can register here. We’ll be announcing more Construction Robotics Network events very soon.

E-ROBOT stands for Envelope Retrofit Opportunities for Building Optimization Technologies. Phase One of the E-ROBOT Prize looks for solutions in sensing, inspection, mapping or retrofitting of building envelopes, and the deadline is May 19, 2021. Phase Two will focus on holistic rather than individual solutions, i.e., bringing together the full stack of sensing, inspection, mapping and retrofitting.

The overarching goal of E-ROBOT is to catalyze the development of minimally invasive, low-cost, and holistic building envelope retrofit solutions that make retrofits easier, faster, safer, and more accessible for workers. Successful competitors will deliver solutions that represent significant advancements in robot technologies, advance the energy-efficiency retrofit industry, and develop building envelope retrofit technologies that meet the following criteria:

  • Holistic: The solution must include mapping, retrofit, sensing, and inspection.
  • Low cost: The solution should reduce costs significantly compared to current state-of-the-art solutions. The target is a 50% reduction from the baseline costs of a fully implemented solution (not just hardware, software, or labor; the complete fully implemented solution must be considered). If costs do not reach the 50% level, a significant energy efficiency gain should be achieved instead (see the sketch after this list).
  • Minimally invasive: The solution must not require building occupants to vacate the premises or require envelope teardown or significant envelope damage.
  • Utilizes long-lasting materials: Retrofit is done with safe, nonhazardous, and durable (30+ year lifespan) materials.
  • Completes time-efficient, high-quality installations: The results of the retrofit must meet common industry quality standards and be completed in a reasonable timeframe.
  • Provides opportunities to workers: The solution enables a net positive gain in terms of the workforce by bringing high tech jobs to the industry, improving worker safety, enabling workers to be more efficient with their time, improving envelope accessibility for workers, and/or opening up new business opportunities or markets.
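As a rough illustration of how the low-cost criterion reads, the sketch below encodes it in Python; the function, its inputs, and the treatment of a ‘significant’ energy-efficiency gain as a simple flag are all hypothetical simplifications of the official rules.

```python
# Hypothetical sketch of the "low cost" criterion above: a solution passes
# with a >= 50% reduction versus the baseline cost of the complete, fully
# implemented solution, or, failing that, a significant energy gain.
def meets_low_cost_criterion(baseline_cost: float, solution_cost: float,
                             significant_energy_gain: bool = False) -> bool:
    reduction = (baseline_cost - solution_cost) / baseline_cost
    return reduction >= 0.50 or significant_energy_gain

print(meets_low_cost_criterion(100_000, 45_000))        # True: 55% reduction
print(meets_low_cost_criterion(100_000, 60_000, True))  # True via energy gain
print(meets_low_cost_criterion(100_000, 60_000))        # False
```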

The E-ROBOT Prize provides a total of $5 million in funding, including $4 million in cash prizes for competitors and an additional $1 million in awards and support to network partners.

Through this prize, the U.S. Department of Energy (DOE) will stimulate technological innovation, create new opportunities for the buildings and construction workforce, reduce building retrofit costs, create a safer and faster retrofit process, ensure consistent, high-quality installations, enhance construction retrofit productivity, and improve overall energy savings of the built environment.

The E-ROBOT Prize is made up of two phases that will fast-track efforts to identify, develop, and validate disruptive solutions to meet building industry needs. Each phase will include a contest period when participants will work to rapidly advance their solutions. DOE invites anyone, individually or as a team, to compete to transform a conceptual solution into product reality.

Let’s talk about the future of air cargo

You invest in the future you want to live in. I want to invest my time in the future of rapid logistics.

Three years ago I set out on a journey to build a future where one-day delivery is available anywhere in the world, by commercializing high-precision, low-cost automated airdrops. In the beginning, the vision seemed almost so grand and unachievable as to be silly. A year ago we began assembling a top-notch team of engineers, aviators and business leaders to help solve this problem. After a lot of blood, sweat and tears, we arrive at the present day with the announcement of our $8M seed round, backed by some amazing capital partners and a growing coalition excited and engaged to accelerate DASH to the next chapter. On this occasion, we have been reflecting on the journey and the “why” that inspired this endeavor all those years ago.


Why Does This Problem Exist?

To those of us fortunate enough to live in large, well-infrastructured metropolitan cities, delivery and logistics isn’t an issue we often consider. We expect our Amazon Prime, UPS, and FedEx packages to arrive the next day or within the standard 3-5 business days. Anywhere else, these networks grind to a halt. For all its scale, Amazon Prime offers free two-day shipping to less than 60 percent of ZIP codes in the US. The Rural Access Index shows that over 3 billion people live in rural settings, and over 700 million don’t live within easy access of all-weather roads at all. Ask manufacturers in need of critical spare parts in Montana, earthquake rescue personnel in Nepal, grocery store owners in mountainous Colombia, or anyone on the thousands of inhabited islands of the Philippines if rapid logistics feels solved or affordable. The short answer: it’s not.

Before that package is delivered to your door it requires a middle mile solution to move from region to region. There is only one technology that can cross oceans, mountains, and continents in a single day, and that is air cargo.

Air cargo accounts for less than one percent of all pounds delivered, but over 33 percent of all shipping revenue globally. We collectively believe in air cargo and rely on it for our most critical and immediate deliveries, including a growing share of e-commerce and just-in-time deliveries. If you want something fast, it’s coming by airplane. There is no substitute.

However, the efficiency and applications of air cargo break down when the plane has to land. While a 737 can fly at over 600 mph for thousands of miles, it requires hundreds of millions of dollars in infrastructure, airports, and ground trucking to get cargo from the airport to your local warehouse, making it very costly for commercial deliveries. That ground infrastructure has to exist on every island in the Philippines, every mountain town in Colombia and every town in Nepal; it has to reach both sides of every mountain or island, anywhere you want things fast. Even when you can land at a modern airport, takeoff and fuel burn during climb can account for upwards of 30 percent of an entire flight’s fuel use, and landing and takeoff cycles drive insurance and maintenance costs. This problem is so intrinsic to air cargo and logistics that it almost seems natural: of course flyover states and rural areas don’t get cheap, fast, and convenient deliveries. Are you going to land that 737 at 20 towns on the way from LA to New York City? We fly over potential customers on our way to big urban cities with modern infrastructure, even though only a minority of the world’s population lives there. Something has to change.

Our solution

The solution is simple in concept, yet it has been one of the most complex tasks I’ve had the honor of working on in my engineering career: land the package, not the plane. By commercializing high-precision, low-cost airdrops, you can decouple airplanes from landings, runways and trucks. Suddenly a delivery to rural Kansas is just as fast and cost-effective as one to a major coastal city. Fuel, insurance, utilization rate, service improvements, coverage area and many more metrics improve overnight in significant ways if an existing aircraft can deliver directly to the last-mile sorting facility and bypass much of the complexity, cost and infrastructure needed for traditional hub-and-spoke networks.

DASH Systems performing air drop tests in Southern California (image from DASH Systems)

Perhaps the most common question I received when I started DASH was: why hasn’t [insert your preferred enterprise organization here] done this before? Without taking a detour into why large enterprises historically struggle with innovation, the simple answer is: because now is the time. Advancements in IoT and low size, weight and power flight controllers, coupled with a career implementing automation in safety-critical environments, meant that the necessary ingredients were ready. Tremendous credit is due to some of the most brilliant engineers, scientists and developers I’ve had the pleasure of working with, who took to task carving raw ideas and rough prototypes into aerospace-grade commercial products, all with the bravery to do so while working outside the confines of existing aerospace textbooks.

Beyond the intricacies of technology was a personal impetus. My father’s family has origins in Barbados; during hurricane season we would make the call, once the phone lines were restored, to ask “is everything okay?” It often felt like a roll of the dice whether they would be spared that year, in a sick game of roulette that someone else would lose. On islands, by definition, nearly all help and aid has to come from abroad, but how can supplies be distributed when ports are destroyed, runways damaged and roads washed out? To me, it is a moral imperative to help, but also to build self-sustaining commercial solutions that can scale to help more people in the future.

This thought process was put to the test in 2017, just weeks after I started seriously contemplating and studying the ideas that became DASH. Hurricane Maria hit Puerto Rico. I awoke, as millions of others did, to witness one of the worst hurricanes to make landfall in 100 years. That day we started making calls; 10 days later we were flying inland in a rented Cessna 208, delivering thousands of pounds of humanitarian supplies via airdrop to cut-off communities. The takeaway: if this could be done safely and legally with an idle FedEx feeder aircraft, and if those on the ground were willing and ready to pay the same price for rapid logistics that they would have paid anyway, why did it have to wait for a natural disaster to strike? DASH exists because there is no technology, process, or company that can honestly claim delivery to anywhere, or even most places, in under two days. Those of us in large cities have come to enjoy and expect that level of service, yet in the same breath we cut the conversation short for those geographically situated elsewhere. Our solution exists, and with the hard work of an amazingly talented team and excellent partners it will continue to scale and grow until that claim can one day be made.

Our Future

The story of DASH is far from over; our vision is rapid logistics anywhere, and there is a flight path ahead of us to get there. Today, DASH is advancing the state of the art in precision airdrop technology; tomorrow, we are looking to deliver into your community, wherever it is and whatever the circumstances. The entire globe deserves the same level of service and convenience. The list of people who have helped DASH get to where we are today is too long to thank everyone, and it grows longer every day. Instead I can offer this: look to the skies, and you may see your next delivery safely and precisely coming down to a location near you.

 

Joel Ifill is the founder and CEO of DASH Systems. He can be found at www.dashshipping.com and reached at inquiries@dashshipping.com. DASH is always on the hunt for talented roboticists, engineers and developers who enjoy aviation; inquire at HR@DASHshipping.com.

Robohub wins Champion Award in SVR ‘Good Robot’ Industry Awards

President: Sabine Hauert

Founded: 2012

HQ: Switzerland

Robohub is an online platform and non-profit that brings together leading communicators in robotics research, start-ups, business, and education from around the world, focused on connecting the robotics community to the public. It can be difficult for the public to find free, high-quality information about robotics. At Robohub, we enable roboticists to share their stories in their own words by providing them with a social media platform and editorial guidance. This means that our readers get to learn about the latest research and business news, events and opinions, directly from the experts.

Since 2012, Robohub and its international community of volunteers have published over 300 Robohub Podcast episodes and 7,000 blog posts, videos and more, reaching 1M pageviews every year and more than 30k followers on social media. You can follow Robohub on Twitter at @robohub.

Why we need a robot registry

Robots are rolling out into the real world, and we need to meet the emerging challenges in a responsible fashion, but one that doesn’t block innovation. At the recent ARM Developers Summit 2020, I shared my suggestions for five practical steps that we could undertake at a regional, national or global level, as part of the Five Laws of Robotics presentation (below).

The Five Laws of Robotics are drawn from the EPSRC Principles of Robotics, first developed in 2010 and since workshopped as a living document by experts across many relevant disciplines. These five principles are practical and concise, embracing the majority of principles expressed across a wide range of ethics documents. I will explain each in more detail.

  1. There should be no killer robots.
  2. Robots should (be designed to) obey the law.
  3. Robots should (be designed to) be good products.
  4. Robots should be transparent in operation.
  5. Robots should be identifiable.

EPSRC says that robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. More information is at the Campaign to Stop Killer Robots.

Humans, not robots, are the responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.

Robots are products. They should be designed using processes which assure their safety and security. Quality guidelines, processes and standards already exist.

Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit users; instead, their machine nature should be made transparent.

It should be possible to find out who is responsible for any robot. My suggestion here is that robots in public spaces require a license plate: a clear identification of the robot and the responsible organization.

As well as speaking about Five Laws of Robotics, I introduced five practical proposals to help us respond at a regional, national and global level.

  1. Robot Registry (license plates, access to a database of owners/operators; see the sketch after this list)
  2. Algorithmic Transparency (via Model Cards and Testing Benchmarks)
  3. Independent Ethical Review Boards (as in the biotech industry)
  4. Robot Ombudspeople to liaise between public and policy makers
  5. Rewarding Good Robots (design awards and case studies)
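To make the registry proposal concrete, here is a minimal sketch of what a public ‘license plate’ lookup might involve; every name and field here is a hypothetical illustration, not a proposed standard.

```python
# Hypothetical sketch of a minimal robot registry: a "license plate"
# identifier on the robot resolves to the responsible organization.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegistryEntry:
    plate_id: str      # identifier displayed or broadcast by the robot
    operator: str      # responsible organization
    contact: str       # public point of contact
    robot_model: str

_registry = {}  # plate_id -> RegistryEntry

def register(entry: RegistryEntry) -> None:
    _registry[entry.plate_id] = entry

def lookup(plate_id: str) -> Optional[RegistryEntry]:
    """What a member of the public could query after seeing a robot."""
    return _registry.get(plate_id)

register(RegistryEntry("SVR-0042", "Example Robotics Inc.",
                       "ombuds@example.org", "Sidewalk Delivery Bot"))
print(lookup("SVR-0042"))
```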

Silicon Valley Robotics is about to announce the first winners of our inaugural Robotics Industry Awards. The SVR Industry Awards consider responsible design as well as technological innovation and commercial success. There are also some ethical checkmark or certification initiatives in preparation, but like the development of new standards, these can take a long time to do properly, whereas awards, endorsements and case studies can be available immediately to foster the discussion of what constitutes a good robot and what social challenges robotics needs to solve.

In fact, the robot registry suggestion was picked up recently by Stacey Higginbotham in IEEE Spectrum. Silicon Valley Robotics is putting together these policy suggestions for the new White House administration.

Exploring the DARPA Subterranean Challenge

The DARPA Subterranean (SubT) Challenge aims to develop innovative technologies that would augment operations underground. On July 20, Dr Timothy Chung, the DARPA SubT Challenge Program Manager, joined Silicon Valley Robotics to discuss the upcoming Cave Circuit and Subterranean Challenge Finals, and the opportunities that still exist for individual and team entries in both the Virtual and Systems Challenges, as per the video below.

The SubT Challenge allows teams to demonstrate new approaches for robotic systems to rapidly map, navigate, and search complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.

The SubT Challenge is organized into two Competitions (Systems and Virtual), each with two tracks (DARPA-funded and self-funded).

SYSTEMS COMPETITION RESULTS

Teams in the Systems Competition completed four runs in total: two 60-minute runs on each of two courses, Experimental and Safety Research. The courses varied in difficulty and included 20 artifacts each. Teams earned points by correctly identifying artifacts to within five meters. The final score was the sum of each team’s best score on each course. In the event of a points tie, team rank was determined by (1) the earliest time the last artifact was successfully reported, averaged across the team’s best runs on each course; (2) the earliest time the first artifact was successfully reported, averaged across the team’s best runs on each course; and (3) the lowest average time across all valid artifact reports, averaged across the team’s best runs on each course.
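As a reading aid, here is a minimal Python sketch of that ranking rule; the data layout and the example numbers are hypothetical, and only the ordering logic follows the description above.

```python
# Sketch of the Systems Competition ranking: highest total score wins,
# with ties broken by earlier averaged report times (smaller is better).
from dataclasses import dataclass

@dataclass
class TeamResult:
    name: str
    best_scores: tuple          # best score on each of the two courses
    last_artifact_times: tuple  # from each best run, in minutes
    first_artifact_times: tuple
    mean_report_times: tuple

def _avg(pair):
    return sum(pair) / len(pair)

def rank_key(t: TeamResult):
    # Negate the score so that an ascending sort puts the winner first;
    # the remaining entries apply the three tie-breakers in order.
    return (-sum(t.best_scores),
            _avg(t.last_artifact_times),
            _avg(t.first_artifact_times),
            _avg(t.mean_report_times))

teams = [  # hypothetical example data: both teams total 25 points
    TeamResult("Team A", (13, 12), (52.3, 57.0), (5.0, 4.2), (29.9, 31.7)),
    TeamResult("Team B", (14, 11), (55.0, 58.2), (4.1, 3.8), (30.5, 33.0)),
]
for place, team in enumerate(sorted(teams, key=rank_key), start=1):
    print(place, team.name, sum(team.best_scores))
```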

The Tunnel Circuit final scores were as follows:

25 Explorer, DARPA-funded
11 CoSTAR (Collaborative SubTerranean Autonomous Resilient Robots), DARPA-funded
10 CTU-CRAS, self-funded winner of the $200,000 Tunnel Circuit prize
9 MARBLE (Multi-agent Autonomy with Radar-Based Localization for Exploration), DARPA-funded
7 CSIRO Data61, DARPA-funded
5 CERBERUS (CollaborativE walking & flying RoBots for autonomous ExploRation in Underground Settings), DARPA-funded
2 NCTU (National Chiao Tung University), self-funded
2 Robotika, self-funded
1 CRETISE (Collaborative Robot Exploration and Teaming In Subterranean Environments), DARPA-funded
1 PLUTO (Pennsylvania Laboratory for Underground Tunnel Operations), DARPA-funded
0 Coordinated Robotics, self-funded

The Urban Circuit final scores were as follows:

16 CoSTAR (Collaborative SubTerranean Autonomous Resilient Robots), DARPA-funded
11 Explorer, DARPA-funded
10 CTU-CRAS-NORLAB (Czech Technical University in Prague – Center for Robotics and Autonomous Systems – Northern Robotics Laboratory), self-funded winner of $500,000 first place prize
9 CSIRO Data61, DARPA-funded
7 CERBERUS (CollaborativE walking & flying RoBots for autonomous ExploRation in Underground Settings), DARPA-funded
4 Coordinated Robotics, self-funded winner of the $250,000 second place prize
4 MARBLE (Multi-agent Autonomy with Radar-Based Localization for Exploration), DARPA-funded
2 NCTU (National Chiao Tung University), self-funded
2 Robotika, self-funded
1 NUS SEDS, (National University of Singapore Students for Exploration and Development of Space), self-funded

VIRTUAL COMPETITION RESULTS

The Virtual competitors developed advanced software for their respective teams of virtual aerial and wheeled robots to explore tunnel environments, with the goal of finding various artifacts hidden throughout the virtual environment and reporting their locations and types to within a five-meter radius during each 60-minute simulation run. A correct report is worth one point, and competitors win by accruing the most points across multiple, diverse simulated environments.
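As a rough illustration, here is a minimal Python sketch of that scoring rule; the function and example values are hypothetical.

```python
# Sketch of the Virtual Competition scoring rule described above: a report
# earns one point if the artifact type matches and the reported location
# is within a five-meter radius of the true location.
import math

def report_scores(report_type, report_xyz, artifact_type, artifact_xyz,
                  radius_m=5.0):
    return (report_type == artifact_type
            and math.dist(report_xyz, artifact_xyz) <= radius_m)

# Hypothetical example: a backpack reported about 3.2 m from ground truth.
print(report_scores("backpack", (10.0, 4.0, 1.0),
                    "backpack", (12.0, 6.0, 2.5)))  # True
```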

The Tunnel Circuit final scores were as follows:

50 Coordinated Robotics, self-funded
21 BARCS, DARPA-funded
14 SODIUM-24 Robotics, self-funded
9 Robotika, self-funded
7 COLLEMBOLA, DARPA-funded
1 Flying Fitches, self-funded
0 AAUNO, self-funded
0 CYNET.ai, self-funded

The Urban Circuit final scores were as follows:

150 BARCS (Bayesian Adaptive Robot Control System), DARPA-funded
115 Coordinated Robotics, self-funded winner of the $250,000 first place prize
21 Robotika, self-funded winner of the $150,000 second place prize
17 COLLEMBOLA (Communication Optimized, Low Latency Exploration, Map-Building and Object Localization Autonomy), DARPA-funded
7 Flying Fitches, self-funded winner of the $100,000 third place prize
7 SODIUM-24 Robotics, self-funded
2 CYNET.ai, self-funded
0 AAUNO, self-funded

2020 Cave Circuit and Finals

The Cave Circuit, the final of three Circuit events, is planned for later this year. The Final Event, planned for summer 2021, will put both Systems and Virtual teams to the test with courses that incorporate diverse elements from all three environments. Teams will compete for up to $2 million in the Systems Final Event and up to $1.5 million in the Virtual Final Event, with additional prizes.

Learn more about the opportunities to participate with either a Virtual or Systems team at https://www.subtchallenge.com/

Dr. Timothy Chung joined DARPA’s Tactical Technology Office as a program manager in February 2016. He serves as the Program Manager for the OFFensive Swarm-Enabled Tactics Program and the DARPA Subterranean (SubT) Challenge.

Prior to joining DARPA, Dr. Chung served as an Assistant Professor at the Naval Postgraduate School and Director of the Advanced Robotic Systems Engineering Laboratory (ARSENL). His academic interests included modeling, analysis, and systems engineering of operational settings involving unmanned systems, combining collaborative autonomy development efforts with an extensive live-fly field experimentation program for swarm and counter-swarm unmanned system tactics and associated technologies.

Dr. Chung holds a Bachelor of Science in Mechanical and Aerospace Engineering from Cornell University. He also earned Master of Science and Doctor of Philosophy degrees in Mechanical Engineering from the California Institute of Technology.

Learn more about DARPA here: www.darpa.mil

RSS 2020 – all the papers and videos!

RSS 2020 was held virtually this year, from the RSS Pioneers Workshop on July 11 to the Paper Awards and Farewell on July 16. Many talks are now available online, including those for the 103 accepted papers, each presented as an online Spotlight Talk on the RSS YouTube channel, and of course the plenaries and much of the workshop content as well. We’ve tried to link here to all of the goodness from RSS 2020.

The RSS Keynote on July 15 was delivered by Josh Tenenbaum, Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and CSAIL, and was titled “It’s all in your head: Intuitive physics, planning, and problem-solving in brains, minds and machines”.

Abstract: I will overview what we know about the human mind’s internal models of the physical world, including how these models arise over evolution and developmental learning, how they are implemented in neural circuitry, and how they are used to support planning and rapid trial-and-error problem-solving in tool use and other physical reasoning tasks. I will also discuss prospects for building more human-like physical common sense in robots and other AI systems.

RSS 2020 introduced the new RSS Test of Time Award, given to the highest-impact papers published at RSS (and potentially journal versions thereof) at least ten years ago. Impact may mean that a paper changed how we think about problems or about robotic design, that it brought fully new problems to the attention of the community, or that it pioneered a new approach to robotic design or problem solving. With this award, RSS wants to foster discussion of the long-term development of our field. The award is an opportunity to reflect on and discuss the past, which is essential to making progress in the future. The awardee’s keynote is therefore complemented with a Test of Time Panel session devoted to this important discussion.

This year’s Test of Time Award goes to the pair of Square Root SAM papers, for pioneering an information smoothing approach to the SLAM problem via square root factorization, its interpretation as a graphical model, and the widely used GTSAM free software repository.

Abstract: Many estimation, planning and optimal control problems in robotics have an optimization problem at their core. In most of these optimization problems, the objective function is composed of many different factors or terms that are local in nature, i.e., they only depend on a small subset of the variables. 10 years ago the Square Root SAM papers identified factor graphs as a particularly insightful way of modeling this locality structure. Since then we have realized that factor graphs can represent a wide variety of problems across robotics, expose opportunities to improve computational performance, and are beneficial in designing and thinking about how to model a problem, even aside from performance considerations. Many of these principles have been embodied in our evolving open source package GTSAM, which puts factor graphs front and central, and which has been used with great success in a number of state of the art robotics applications. We will also discuss where factor graphs, in our opinion, can break in
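For readers new to factor graphs, here is a minimal pose-graph sketch using the GTSAM Python bindings mentioned above (a toy example assuming GTSAM 4.x; the poses and noise values are illustrative). Each factor touches only a small subset of the variables, which is exactly the locality structure the abstract describes.

```python
# Minimal 2D pose-graph SLAM sketch using GTSAM's factor-graph API
# (assumes the GTSAM 4.x Python bindings: pip install gtsam).
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), noise))
# Odometry factors are local: each couples only two consecutive poses.
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), noise))
# A loop-closure factor likewise touches only the two poses it relates.
graph.add(gtsam.BetweenFactorPose2(2, 0, gtsam.Pose2(-2.0, 0.0, 0.0), noise))

# Initial guesses may be noisy; the optimizer exploits the graph's sparsity.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(1, gtsam.Pose2(1.1, 0.1, 0.0))
initial.insert(2, gtsam.Pose2(2.1, -0.1, 0.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```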

The RSS 2020 Plenary Sessions highlighted Early Career Awards for researchers Byron Boots, Luca Carlone and Jeannette Bohg. Byron Boots is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University.

Title: Perspectives on Machine Learning for Robotics

Abstract: Recent advances in machine learning are leading to new tools for designing intelligent robots: functions relied on to govern a robot’s behavior can be learned from a robot’s interaction with its environment rather than hand-designed by an engineer. Many machine learning methods assume little prior knowledge and are extremely flexible; they can model almost anything! But this flexibility comes at a cost: the same algorithms are often notoriously data-hungry and computationally expensive, two problems that can be debilitating for robotics. In this talk I’ll discuss how machine learning can be combined with prior knowledge to build effective solutions to robotics problems. I’ll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I’ll focus on the use of simulation and expert advice to augment learning. I’ll discuss how imperfect models can be leveraged to rapidly update simple control policies and how imitation can accelerate reinforcement learning. I will also show how we have applied some of these ideas to an autonomous off-road racing task that requires impressive sensing, speed, and agility to complete.

Title: The Future of Robot Perception: Certifiable Algorithms and Real-time High-level Understanding

Abstract: Robot perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception.

This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems.

The second effort targets high-level understanding. While humans are able to quickly grasp geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM, and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems: systems that are trustworthy, understand and execute high-level human instructions, and operate in large dynamic environments and over an extended period of time.

Title: A Tale of Success and Failure in Robotics Grasping and Manipulation

Abstract: In 2007, I was a naïve grad student and started to work on vision-based robotic grasping. I had no prior background in manipulation, kinematics, dynamics or control. Yet, I dove into the field by re-implementing and improving a learning-based method. While making some contributions, the proposed method also had many limitations partly due to the way the problem was framed. Looking back at the entire journey until today, I find that I have learned the most about robotic grasping and manipulation from observing failures and limitations of existing approaches – including my own. In this talk, I want to highlight how these failures and limitations have shaped my view on what may be some of the underlying principles of autonomous robotic manipulation. I will emphasise three points. First, perception and prediction will always be noisy, partial and sometimes just plain wrong. Therefore, one focus of my research is on methods that support decision-making under uncertainty due to noisy sensing, inaccurate models and hard-to-predict dynamics. To this end, I will present a robotic system that demonstrates the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. I will also talk about work that funnels uncertainty by enabling robots to exploit contact constraints during manipulation.

Second, a robot has many more sensors than just cameras, and they all provide complementary information. Therefore, one focus of my research is on methods that can exploit multimodal information, such as vision and touch, for contact-rich manipulation. It is non-trivial to manually design a manipulation controller that combines modalities with very different characteristics. I will present work that uses self-supervision to learn a compact and multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. Third, choosing the right robot action representation has a large influence on the success of a manipulation policy, controller or planner. Having believed for many years that inferring contact points for robotic grasping was futile, I will present work that convinced me otherwise. Specifically, this work uses contact points as an abstraction that can be re-used by a diverse set of robot hands.

Inclusion@RSS hosted a panel “On the Future of Robotics” to discuss how we can build an inclusive robotics community and its impact on the future of the field. Moderator: Matt Johnson-Roberson (University of Michigan), with panelists Tom Williams (Colorado School of Mines), Eduard Fosch-Villaronga (Leiden University), Lydia Tapia (University of New Mexico), Chris Macnab (University of Calgary), Adam Poulsen (Charles Sturt University), Chad Jenkins (University of Michigan), Kendall Queen (University of Pennsylvania), Naveen Kuppuswamy (Toyota Research Institute).

The RSS community is committed to increasing the participation of groups traditionally underrepresented in robotics (including but not limited to: women, LGBTQ+, underrepresented minorities, and people with disabilities), especially people early in their studies and career. Such efforts are crucial for increasing research capacity, creativity, and broadening the impact of robotics research.

The RSS Pioneers Workshop for senior Ph.D. students and postdocs was modelled on the highly successful HRI Pioneers Workshop and took place on Saturday, July 11. The goal of RSS Pioneers is to bring together a cohort of the world’s top early-career researchers to foster creativity and collaboration surrounding challenges in all areas of robotics, as well as to help young researchers navigate their next career stages. The workshop included a mix of research and career talks from senior scholars in the field, from both academia and industry, research presentations from attendees, and networking activities, with a poster session where Pioneers got a chance to externally showcase their research.

Content from the workshops on July 12 and 13 may be available through the individual workshop websites.

RSS 2020 Accepted Workshops

Sunday, July 12

WS1-2 Reacting to contact: Enabling transparent interactions through intelligent sensing and actuation. Organizers: Ankit Bhatia, Aaron M. Johnson, Matthew T. Mason. [Session]
WS1-3 Certifiable Robot Perception: from Global Optimization to Safer Robots. Organizers: Luca Carlone, Tat-Jun Chin, Anders Eriksson, Heng Yang. [Session]
WS1-4 Advancing the State of Machine Learning for Manufacturing Robotics. Organizers: Elena Messina, Holly Yanco, Megan Zimmerman, Craig Schlenoff, Dragos Margineantu. [Session]
WS1-5 Advances and Challenges in Imitation Learning for Robotics. Organizers: Scott Niekum, Akanksha Saran, Yuchen Cui, Nick Walker, Andreea Bobu, Ajay Mandlekar, Danfei Xu. [Session]
WS1-6 2nd Workshop on Closing the Reality Gap in Sim2Real Transfer for Robotics. Organizers: Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian. [Session]
WS1-7 ROS Carpentry Workshop. Organizers: Katherine Scott, Mabel Zhang, Camilo Buscaron, Steve Macenski. N/A
WS1-8 Perception and Control for Fast and Agile Super-Vehicles II. Organizers: Varun Murali, Phillip Foehn, Davide Scaramuzza, Sertac Karaman. [Session]
WS1-9 Robotics Retrospectives. Organizers: Jeannette Bohg, Franziska Meier, Arunkumar Byravan, Akshara Rai. [Session]
WS1-10 Heterogeneous Multi-Robot Task Allocation and Coordination. Organizers: Harish Ravichandar, Ragesh Ramachandran, Sonia Chernova, Seth Hutchinson, Gaurav Sukhatme, Vijay Kumar. [Session]
WS1-11 Learning (in) Task and Motion Planning. Organizers: Danny Driess, Neil T. Dantam, Lydia E. Kavraki, Marc Toussaint. [Session]
WS1-12 Performing Arts Robots & Technologies, Integrated (PARTI). Organizers: Naomi Fitter, Heather Knight, Amy LaViers. [Session]
WS1-13 Robots in the Wild: Challenges in Deploying Robust Autonomy for Robotic Exploration. Organizers: Hannah Kerner, Amy Tabb, Jnaneshwar Das, Pratap Tokekar, Masahiro Ono. [Session]
WS1-14 Emergent Behaviors in Human-Robot Systems. Organizers: Erdem Bıyık, Minae Kwon, Dylan Losey, Noah Goodman, Stefanos Nikolaidis, Dorsa Sadigh. [Session]

Monday, July 13

WS2-1 Interaction and Decision-Making in Autonomous Driving. Organizers: Rowan McAllister, Liting Sun, Igor Gilitschenski, Daniela Rus. [Session]
WS2-2 2nd RSS Workshop on Robust Autonomy: Tools for Safety in Real-World Uncertain Environments. Organizers: Andrea Bajcsy, Ransalu Senanayake, Somil Bansal, Sylvia Herbert, David Fridovich-Keil, Jaime Fernández Fisac. [Session]
WS2-3 AI & Its Alternatives in Assistive & Collaborative Robotics. Organizers: Deepak Gopinath, Aleksandra Kalinowska, Mahdieh Nejati, Katarina Popovic, Brenna Argall, Todd Murphey. [Session]
WS2-4 Benchmarking Tools for Evaluating Robotic Assembly of Small Parts. Organizers: Adam Norton, Holly Yanco, Joseph Falco, Kenneth Kimble. [Session]
WS2-5 Good Citizens of Robotics Research. Organizers: Mustafa Mukadam, Nima Fazeli, Niko Sünderhauf. [Session]
WS2-6 Structured Approaches to Robot Learning for Improved Generalization. Organizers: Arunkumar Byravan, Markus Wulfmeier, Franziska Meier, Mustafa Mukadam, Nicolas Heess, Angela Schoellig, Dieter Fox. [Session]
WS2-7 Explainable and Trustworthy Robot Decision Making for Scientific Data Collection. Organizers: Nisar Ahmed, P. Michael Furlong, Geoff Hollinger, Seth McCammon. [Session]
WS2-8 Closing the Academia to Real-World Gap in Service Robotics. Organizers: Guilherme Maeda, Nick Walker, Petar Kormushev, Maru Cabrera. [Session]
WS2-9 Visuotactile Sensors for Robust Manipulation: From Perception to Control. Organizers: Alex Alspach, Naveen Kuppuswamy, Avinash Uttamchandani, Filipe Veiga, Wenzhen Yuan. [Session]
WS2-10 Self-Supervised Robot Learning. Organizers: Abhinav Valada, Anelia Angelova, Joschka Boedecker, Oier Mees, Wolfram Burgard. [Session]
WS2-11 Power On and Go Robots: ‘Out-of-the-Box’ Systems for Real-World Applications. Organizers: Jonathan Kelly, Stephan Weiss, Paolo Robuffo Giordano, Valentin Peretroukhin. [Session]
WS2-12 Workshop on Visual Learning and Reasoning for Robotic Manipulation. Organizers: Kuan Fang, David Held, Yuke Zhu, Dinesh Jayaraman, Animesh Garg, Lin Sun, Yu Xiang, Greg Dudek. [Session]
WS2-13 Action Representations for Learning in Continuous Control. Organizers: Tamim Asfour, Miroslav Bogdanovic, Jeannette Bohg, Animesh Garg, Roberto Martín-Martín, Ludovic Righetti. [Session]

RSS 2020 Accepted Papers

Paper ID Title Authors Virtual Session Link
1 Planning and Execution using Inaccurate Models with Provable Guarantees Anirudh Vemula (Carnegie Mellon University)*; Yash Oza (CMU); J. Bagnell (Aurora Innovation); Maxim Likhachev (CMU) Virtual Session #1
2 Swoosh! Rattle! Thump! – Actions that Sound Dhiraj Gandhi (Carnegie Mellon University)*; Abhinav Gupta (Carnegie Mellon University); Lerrel Pinto (NYU/Berkeley) Virtual Session #1
3 Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image Danny Driess (Machine Learning and Robotics Lab, University of Stuttgart)*; Jung-Su Ha; Marc Toussaint Virtual Session #1
4 Elaborating on Learned Demonstrations with Temporal Logic Specifications Craig Innes (University of Edinburgh)*; Subramanian Ramamoorthy (University of Edinburgh) Virtual Session #1
5 Non-revisiting Coverage Task with Minimal Discontinuities for Non-redundant Manipulators Tong Yang (Zhejiang University)*; Jaime Valls Miro (University of Technology Sydney); Yue Wang (Zhejiang University); Rong Xiong (Zhejiang University) Virtual Session #1
6 LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices Radu Alexandru Rosu (University of Bonn)*; Peer Schütt (University of Bonn); Jan Quenzel (University of Bonn); Sven Behnke (University of Bonn) Virtual Session #1
7 A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty Valentin Peretroukhin (University of Toronto)*; Matthew Giamou (University of Toronto); W. Nicholas Greene (MIT); David Rosen (MIT Laboratory for Information and Decision Systems); Jonathan Kelly (University of Toronto); Nicholas Roy (MIT) Virtual Session #1
8 Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication Brian Reily (Colorado School of Mines)*; Christopher Reardon (ARL); Hao Zhang (Colorado School of Mines) Virtual Session #1
9 OverlapNet: Loop Closing for LiDAR-based SLAM Xieyuanli Chen (Photogrammetry & Robotics Lab, University of Bonn)*; Thomas Läbe (Institute for Geodesy and Geoinformation, University of Bonn); Andres Milioto (University of Bonn); Timo Röhling (Fraunhofer FKIE); Olga Vysotska (Autonomous Intelligent Driving GmbH); Alexandre Haag (AID); Jens Behley (University of Bonn); Cyrill Stachniss (University of Bonn) Virtual Session #1
10 The Dark Side of Embodiment – Teaming Up With Robots VS Disembodied Agents Filipa Correia (INESC-ID & University of Lisbon)*; Samuel Gomes (IST/INESC-ID); Samuel Mascarenhas (INESC-ID); Francisco S. Melo (IST/INESC-ID); Ana Paiva (INESC-ID U of Lisbon) Virtual Session #1
11 Shared Autonomy with Learned Latent Actions Hong Jun Jeon (Stanford University)*; Dylan Losey (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #1
12 Regularized Graph Matching for Correspondence Identification under Uncertainty in Collaborative Perception Peng Gao (Colorado school of mines)*; Rui Guo (Toyota Motor North America); Hongsheng Lu (Toyota Motor North America); Hao Zhang (Colorado School of Mines) Virtual Session #1
13 Frequency Modulation of Body Waves to Improve Performance of Limbless Robots Baxi Zhong (Georgia Tech)*; Tianyu Wang (Carnegie Mellon University); Jennifer Rieser (Georgia Institute of Technology); Abdul Kaba (Morehouse College); Howie Choset (Carnegie Mellon University); Daniel Goldman (Georgia Institute of Technology) Virtual Session #1
14 Self-Reconfiguration in Two-Dimensions via Active Subtraction with Modular Robots Matthew Hall (The University of Sheffield)*; Anil Ozdemir (The University of Sheffield); Roderich Gross (The University of Sheffield) Virtual Session #1
15 Singularity Maps of Space Robots and their Application to Gradient-based Trajectory Planning Davide Calzolari (Technical University of Munich (TUM), German Aerospace Center (DLR))*; Roberto Lampariello (German Aerospace Center); Alessandro Massimo Giordano (Deutsches Zentrum für Luft- und Raumfahrt) Virtual Session #1
16 Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications Roma Patel (Brown University)*; Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #1
17 Fast Uniform Dispersion of a Crash-prone Swarm Michael Amir (Technion – Israel Institute of Technology)*; Freddy Bruckstein (Technion) Virtual Session #1
18 Simultaneous Enhancement and Super-Resolution of Underwater Imagery for Improved Visual Perception Md Jahidul Islam (University of Minnesota Twin Cities)*; Peigen Luo (University of Minnesota-Twin Cities); Junaed Sattar (University of Minnesota) Virtual Session #1
19 Collision Probabilities for Continuous-Time Systems Without Sampling Kristoffer Frey (MIT)*; Ted Steiner (Charles Stark Draper Laboratory, Inc.); Jonathan How (MIT) Virtual Session #1
20 Event-Driven Visual-Tactile Sensing and Learning for Robots Tasbolat Taunyazov (National University of Singapore); Weicong Sng (National University of Singapore); Brian Lim (National University of Singapore); Hian Hian See (National University of Singapore); Jethro Kuan (National University of Singapore); Abdul Fatir Ansari (National University of Singapore); Benjamin Tee (National University of Singapore); Harold Soh (National University Singapore)* Virtual Session #1
21 Resilient Distributed Diffusion for Multi-Robot Systems Using Centerpoint JIANI LI (Vanderbilt University)*; Waseem Abbas (Vanderbilt University); Mudassir Shabbir (Information Technology University); Xenofon Koutsoukos (Vanderbilt University) Virtual Session #1
22 Pixel-Wise Motion Deblurring of Thermal Videos Manikandasriram Srinivasan Ramanagopal (University of Michigan)*; Zixu Zhang (University of Michigan); Ram Vasudevan (University of Michigan); Matthew Johnson Roberson (University of Michigan) Virtual Session #1
23 Controlling Contact-Rich Manipulation Under Partial Observability Florian Wirnshofer (Siemens AG)*; Philipp Sebastian Schmitt (Siemens AG); Georg von Wichert (Siemens AG); Wolfram Burgard (University of Freiburg) Virtual Session #1
24 AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos Laura Smith (UC Berkeley)*; Nikita Dhawan (UC Berkeley); Marvin Zhang (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley) Virtual Session #1
25 Provably Constant-time Planning and Re-planning for Real-time Grasping Objects off a Conveyor Belt Fahad Islam (Carnegie Mellon University)*; Oren Salzman (Technion); Aditya Agarwal (CMU); Maxim Likhachev (Carnegie Mellon University) Virtual Session #1
26 Online IMU Intrinsic Calibration: Is It Necessary? Yulin Yang (University of Delaware)*; Patrick Geneva (University of Delaware); Xingxing Zuo (Zhejiang University); Guoquan Huang (University of Delaware) Virtual Session #1
27 A Berry Picking Robot With A Hybrid Soft-Rigid Arm: Design and Task Space Control Naveen Kumar Uppalapati (University of Illinois at Urbana Champaign)*; Benjamin Walt ( University of Illinois at Urbana Champaign); Aaron Havens (University of Illinois Urbana Champaign); Armeen Mahdian (University of Illinois at Urbana Champaign); Girish Chowdhary (University of Illinois at Urbana Champaign); Girish Krishnan (University of Illinois at Urbana Champaign) Virtual Session #1
28 Iterative Repair of Social Robot Programs from Implicit User Feedback via Bayesian Inference Michael Jae-Yoon Chung (University of Washington)*; Maya Cakmak (University of Washington) Virtual Session #1
29 Cable Manipulation with a Tactile-Reactive Gripper Siyuan Dong (MIT); Shaoxiong Wang (MIT); Yu She (MIT)*; Neha Sunil (Massachusetts Institute of Technology); Alberto Rodriguez (MIT); Edward Adelson (MIT, USA) Virtual Session #1
30 Automated Synthesis of Modular Manipulators’ Structure and Control for Continuous Tasks around Obstacles Thais Campos de Almeida (Cornell University)*; Samhita Marri (Cornell University); Hadas Kress-Gazit (Cornell) Virtual Session #1
31 Learning Memory-Based Control for Human-Scale Bipedal Locomotion Jonah Siekmann (Oregon State University)*; Srikar Valluri (Oregon State University); Jeremy Dao (Oregon State University); Francis Bermillo (Oregon State University); Helei Duan (Oregon State University); Alan Fern (Oregon State University); Jonathan Hurst (Oregon State University) Virtual Session #1
32 Multi-Fidelity Black-Box Optimization for Time-Optimal Quadrotor Maneuvers Gilhyun Ryou (Massachusetts Institute of Technology)*; Ezra Tal (Massachusetts Institute of Technology); Sertac Karaman (Massachusetts Institute of Technology) Virtual Session #1
33 Manipulation Trajectory Optimization with Online Grasp Synthesis and Selection Lirui Wang (University of Washington)*; Yu Xiang (NVIDIA); Dieter Fox (NVIDIA Research / University of Washington) Virtual Session #1
34 VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation Ryan Hoque (UC Berkeley)*; Daniel Seita (University of California, Berkeley); Ashwin Balakrishna (UC Berkeley); Aditya Ganapathi (University of California, Berkeley); Ajay Tanwani (UC Berkeley); Nawid Jamali (Honda Research Institute); Katsu Yamane (Honda Research Institute); Soshi Iba (Honda Research Institute); Ken Goldberg (UC Berkeley) Virtual Session #1
35 Spatial Action Maps for Mobile Manipulation Jimmy Wu (Princeton University)*; Xingyuan Sun (Princeton University); Andy Zeng (Google); Shuran Song (Columbia University); Johnny Lee (Google); Szymon Rusinkiewicz (Princeton University); Thomas Funkhouser (Princeton University) Virtual Session #2
36 Generalized Tsallis Entropy Reinforcement Learning and Its Application to Soft Mobile Robots Kyungjae Lee (Seoul National University)*; Sungyub Kim (KAIST); Sungbin Lim (UNIST); Sungjoon Choi (Disney Research); Mineui Hong (Seoul National University); Jaein Kim (Seoul National University); Yong-Lae Park (Seoul National University); Songhwai Oh (Seoul National University) Virtual Session #2
37 Learning Labeled Robot Affordance Models Using Simulations and Crowdsourcing Adam Allevato (UT Austin)*; Elaine Short (Tufts University); Mitch Pryor (UT Austin); Andrea Thomaz (UT Austin) Virtual Session #2
38 Towards Embodied Scene Description Sinan Tan (Tsinghua University); Huaping Liu (Tsinghua University)*; Di Guo (Tsinghua University); Xinyu Zhang (Tsinghua University); Fuchun Sun (Tsinghua University) Virtual Session #2
39 Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving Zhangjie Cao (Stanford University); Erdem Biyik (Stanford University)*; Woodrow Wang (Stanford University); Allan Raventos (Toyota Research Institute); Adrien Gaidon (Toyota Research Institute); Guy Rosman (Toyota Research Institute); Dorsa Sadigh (Stanford) Virtual Session #2
40 Deep Drone Acrobatics Elia Kaufmann (ETH / University of Zurich)*; Antonio Loquercio (ETH / University of Zurich); Rene Ranftl (Intel Labs); Matthias Müller (Intel Labs); Vladlen Koltun (Intel Labs); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #2
41 Active Preference-Based Gaussian Process Regression for Reward Learning Erdem Biyik (Stanford University)*; Nicolas Huynh (École Polytechnique); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #2
42 A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play Shray Bansal (Georgia Institute of Technology)*; Jin Xu (Georgia Institute of Technology); Ayanna Howard (Georgia Institute of Technology); Charles Isbell (Georgia Institute of Technology) Virtual Session #2
43 Data-driven modeling of a flapping bat robot with a single flexible wing surface Jonathan Hoff (University of Illinois at Urbana-Champaign)*; Seth Hutchinson (Georgia Tech) Virtual Session #2
44 Safe Motion Planning for Autonomous Driving using an Adversarial Road Model Alex Liniger (ETH Zurich)*; Luc Van Gool (ETH Zurich) Virtual Session #2
45 A Motion Taxonomy for Manipulation Embedding David Paulius (University of South Florida)*; Nicholas Eales (University of South Florida); Yu Sun (University of South Florida) Virtual Session #2
46 Aerial Manipulation Using Hybrid Force and Position NMPC Applied to Aerial Writing Dimos Tzoumanikas (Imperial College London)*; Felix Graule (ETH Zurich); Qingyue Yan (Imperial College London); Dhruv Shah (Berkeley Artificial Intelligence Research); Marija Popovic (Imperial College London); Stefan Leutenegger (Imperial College London) Virtual Session #2
47 A Global Quasi-Dynamic Model for Contact-Trajectory Optimization in Manipulation Bernardo Aceituno-Cabezas (MIT)*; Alberto Rodriguez (MIT) Virtual Session #2
48 Vision-Based Goal-Conditioned Policies for Underwater Navigation in the Presence of Obstacles Travis Manderson (McGill University)*; Juan Camilo Gamboa Higuera (McGill University); Stefan Wapnick (McGill University); Jean-François Tremblay (McGill University); Florian Shkurti (University of Toronto); David Meger (McGill University); Gregory Dudek (McGill University) Virtual Session #2
49 Spatio-Temporal Stochastic Optimization: Theory and Applications to Optimal Control and Co-Design Ethan Evans (Georgia Institute of Technology)*; Andrew Kendall (Georgia Institute of Technology); Georgios Boutselis (Georgia Institute of Technology ); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #2
50 Kernel Taylor-Based Value Function Approximation for Continuous-State Markov Decision Processes Junhong Xu (INDIANA UNIVERSITY)*; Kai Yin (Vrbo, Expedia Group); Lantao Liu (Indiana University, Intelligent Systems Engineering) Virtual Session #2
51 HMPO: Human Motion Prediction in Occluded Environments for Safe Motion Planning Jaesung Park (University of North Carolina at Chapel Hill)*; Dinesh Manocha (University of Maryland at College Park) Virtual Session #2
52 Motion Planning for Variable Topology Truss Modular Robot Chao Liu (University of Pennsylvania)*; Sencheng Yu (University of Pennsylvania); Mark Yim (University of Pennsylvania) Virtual Session #2
53 Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning Archit Sharma (Google)*; Michael Ahn (Google); Sergey Levine (Google); Vikash Kumar (Google); Karol Hausman (Google Brain); Shixiang Gu (Google Brain) Virtual Session #2
54 Compositional Transfer in Hierarchical Reinforcement Learning Markus Wulfmeier (DeepMind)*; Abbas Abdolmaleki (Google DeepMind); Roland Hafner (Google DeepMind); Jost Tobias Springenberg (DeepMind); Michael Neunert (Google DeepMind); Noah Siegel (DeepMind); Tim Hertweck (DeepMind); Thomas Lampe (DeepMind); Nicolas Heess (DeepMind); Martin Riedmiller (DeepMind) Virtual Session #2
55 Learning from Interventions: Human-robot interaction as both explicit and implicit feedback Jonathan Spencer (Princeton University)*; Sanjiban Choudhury (University of Washington); Matt Barnes (University of Washington); Matthew Schmittle (University of Washington); Mung Chiang (Princeton University); Peter Ramadge (Princeton); Siddhartha Srinivasa (University of Washington) Virtual Session #2
56 Fourier movement primitives: an approach for learning rhythmic robot skills from demonstrations Thibaut Kulak (Idiap Research Institute)*; Joao Silverio (Idiap Research Institute); Sylvain Calinon (Idiap Research Institute) Virtual Session #2
57 Self-Supervised Localisation between Range Sensors and Overhead Imagery Tim Tang (University of Oxford)*; Daniele De Martini (University of Oxford); Shangzhe Wu (University of Oxford); Paul Newman (University of Oxford) Virtual Session #2
58 Probabilistic Swarm Guidance Subject to Graph Temporal Logic Specifications Franck Djeumou (University of Texas at Austin)*; Zhe Xu (University of Texas at Austin); Ufuk Topcu (University of Texas at Austin) Virtual Session #2
59 In-Situ Learning from a Domain Expert for Real World Socially Assistive Robot Deployment Katie Winkle (Bristol Robotics Laboratory)*; Severin Lemaignan (); Praminda Caleb-Solly (); Paul Bremner (); Ailie Turton (University of the West of England); Ute Leonards () Virtual Session #2
60 MRFMap: Online Probabilistic 3D Mapping using Forward Ray Sensor Models Kumar Shaurya Shankar (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #2
61 GTI: Learning to Generalize across Long-Horizon Tasks from Human Demonstrations Ajay Mandlekar (Stanford University); Danfei Xu (Stanford University)*; Roberto Martín-Martín (Stanford University); Silvio Savarese (Stanford University); Li Fei-Fei (Stanford University) Virtual Session #2
62 Agbots 2.0: Weeding Denser Fields with Fewer Robots Wyatt McAllister (University of Illinois)*; Joshua Whitman (University of Illinois); Allan Axelrod (University of Illinois); Joshua Varghese (University of Illinois); Girish Chowdhary (University of Illinois at Urbana Champaign); Adam Davis (University of Illinois) Virtual Session #2
63 Optimally Guarding Perimeters and Regions with Mobile Range Sensors Siwei Feng (Rutgers University)*; Jingjin Yu (Rutgers Univ.) Virtual Session #2
64 Learning Agile Robotic Locomotion Skills by Imitating Animals Xue Bin Peng (UC Berkeley)*; Erwin Coumans (Google); Tingnan Zhang (Google); Tsang-Wei Lee (Google Brain); Jie Tan (Google); Sergey Levine (UC Berkeley) Virtual Session #2
65 Learning to Manipulate Deformable Objects without Demonstrations Yilin Wu (UC Berkeley); Wilson Yan (UC Berkeley)*; Thanard Kurutach (UC Berkeley); Lerrel Pinto (); Pieter Abbeel (UC Berkeley) Virtual Session #2
66 Deep Differentiable Grasp Planner for High-DOF Grippers Min Liu (National University of Defense Technology)*; Zherong Pan (University of North Carolina at Chapel Hill); Kai Xu (National University of Defense Technology); Kanishka Ganguly (University of Maryland at College Park); Dinesh Manocha (University of North Carolina at Chapel Hill) Virtual Session #2
67 Ergodic Specifications for Flexible Swarm Control: From User Commands to Persistent Adaptation Ahalya Prabhakar (Northwestern University)*; Ian Abraham (Northwestern University); Annalisa Taylor (Northwestern University); Millicent Schlafly (Northwestern University); Katarina Popovic (Northwestern University); Giovani Diniz (Raytheon); Brendan Teich (Raytheon); Borislava Simidchieva (Raytheon); Shane Clark (Raytheon); Todd Murphey (Northwestern Univ.) Virtual Session #2
68 Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints Shushman Choudhury (Stanford University)*; Jayesh Gupta (Stanford University); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford); Jeannette Bohg (Stanford) Virtual Session #2
69 Latent Belief Space Motion Planning under Cost, Dynamics, and Intent Uncertainty Dicong Qiu (iSee); Yibiao Zhao (iSee); Chris Baker (iSee)* Virtual Session #2
70 Learning of Sub-optimal Gait Controllers for Magnetic Walking Soft Millirobots Utku Culha (Max-Planck Institute for Intelligent Systems); Sinan Ozgun Demir (Max Planck Institute for Intelligent Systems); Sebastian Trimpe (Max Planck Institute for Intelligent Systems); Metin Sitti (Carnegie Mellon University)* Virtual Session #3
71 Nonparametric Motion Retargeting for Humanoid Robots on Shared Latent Space Sungjoon Choi (Disney Research)*; Matthew Pan (Disney Research); Joohyung Kim (University of Illinois Urbana-Champaign) Virtual Session #3
72 Residual Policy Learning for Shared Autonomy Charles Schaff (Toyota Technological Institute at Chicago)*; Matthew Walter (Toyota Technological Institute at Chicago) Virtual Session #3
73 Efficient Parametric Multi-Fidelity Surface Mapping Aditya Dhawale (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #3
74 Towards neuromorphic control: A spiking neural network based PID controller for UAV Rasmus Stagsted (University of Southern Denmark); Antonio Vitale (ETH Zurich); Jonas Binz (ETH Zurich); Alpha Renner (Institute of Neuroinformatics, University of Zurich and ETH Zurich); Leon Bonde Larsen (University of Southern Denmark); Yulia Sandamirskaya (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland)* Virtual Session #3
75 Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping Cristian Bodnar (University of Cambridge)*; Adrian Li (X); Karol Hausman (Google Brain); Peter Pastor (X); Mrinal Kalakrishnan (X) Virtual Session #3
76 Scaling data-driven robotics with reward sketching and batch reinforcement learning Serkan Cabi (DeepMind)*; Sergio Gómez Colmenarejo (DeepMind); Alexander Novikov (DeepMind); Ksenia Konyushova (DeepMind); Scott Reed (DeepMind); Rae Jeong (DeepMind); Konrad Zolna (DeepMind); Yusuf Aytar (DeepMind); David Budden (DeepMind); Mel Vecerik (Deepmind); Oleg Sushkov (DeepMind); David Barker (DeepMind); Jonathan Scholz (DeepMind); Misha Denil (DeepMind); Nando de Freitas (DeepMind); Ziyu Wang (Google Research, Brain Team) Virtual Session #3
77 MPTC – Modular Passive Tracking Controller for stack of tasks based control frameworks Johannes Englsberger (German Aerospace Center (DLR))*; Alexander Dietrich (DLR); George Mesesan (German Aerospace Center (DLR)); Gianluca Garofalo (German Aerospace Center (DLR)); Christian Ott (DLR); Alin Albu-Schaeffer (Robotics and Mechatronics Center (RMC), German Aerospace Center (DLR)) Virtual Session #3
78 NH-TTC: A gradient-based framework for generalized anticipatory collision avoidance Bobby Davis (University of Minnesota Twin Cities)*; Ioannis Karamouzas (Clemson University); Stephen Guy (University of Minnesota Twin Cities) Virtual Session #3
79 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans Antoni Rosinol (MIT)*; Arjun Gupta (MIT); Marcus Abate (MIT); Jingnan Shi (MIT); Luca Carlone (Massachusetts Institute of Technology) Virtual Session #3
80 Robot Object Retrieval with Contextual Natural Language Queries Thao Nguyen (Brown University)*; Nakul Gopalan (Georgia Tech); Roma Patel (Brown University); Matthew Corsaro (Brown University); Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #3
81 AlphaPilot: Autonomous Drone Racing Philipp Foehn (ETH / University of Zurich)*; Dario Brescianini (University of Zurich); Elia Kaufmann (ETH / University of Zurich); Titus Cieslewski (University of Zurich & ETH Zurich); Mathias Gehrig (University of Zurich); Manasi Muglikar (University of Zurich); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #3
82 Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations Lin Shao (Stanford University)*; Toki Migimatsu (Stanford University); Qiang Zhang (Shanghai Jiao Tong University); Kaiyuan Yang (Stanford University); Jeannette Bohg (Stanford) Virtual Session #3
83 A Variable Rolling SLIP Model for a Conceptual Leg Shape to Increase Robustness of Uncertain Velocity on Unknown Terrain Adar Gaathon (Technion – Israel Institute of Technology)*; Amir Degani (Technion – Israel Institute of Technology) Virtual Session #3
84 Interpreting and Predicting Tactile Signals via a Physics-Based and Data-Driven Framework Yashraj Narang (NVIDIA)*; Karl Van Wyk (NVIDIA); Arsalan Mousavian (NVIDIA); Dieter Fox (NVIDIA) Virtual Session #3
85 Learning Active Task-Oriented Exploration Policies for Bridging the Sim-to-Real Gap Jacky Liang (Carnegie Mellon University)*; Saumya Saxena (Carnegie Mellon University); Oliver Kroemer (Carnegie Mellon University) Virtual Session #3
86 Manipulation with Shared Grasping Yifan Hou (Carnegie Mellon University)*; Zhenzhong Jia (SUSTech); Matthew Mason (Carnegie Mellon University) Virtual Session #3
87 Deep Learning Tubes for Tube MPC David Fan (Georgia Institute of Technology )*; Ali Agha (Jet Propulsion Laboratory); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #3
88 Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions Jason Choi (UC Berkeley); Fernando Castañeda (UC Berkeley); Claire Tomlin (UC Berkeley); Koushil Sreenath (Berkeley)* Virtual Session #3
89 Fast Risk Assessment for Autonomous Vehicles Using Learned Models of Agent Futures Allen Wang (MIT)*; Xin Huang (MIT); Ashkan Jasour (MIT); Brian Williams (Massachusetts Institute of Technology) Virtual Session #3
90 Online Domain Adaptation for Occupancy Mapping Anthony Tompkins (The University of Sydney)*; Ransalu Senanayake (Stanford University); Fabio Ramos (NVIDIA, The University of Sydney) Virtual Session #3
91 ALGAMES: A Fast Solver for Constrained Dynamic Games Simon Le Cleac’h (Stanford University)*; Mac Schwager (Stanford, USA); Zachary Manchester (Stanford) Virtual Session #3
92 Scalable and Probabilistically Complete Planning for Robotic Spatial Extrusion Caelan Garrett (MIT)*; Yijiang Huang (MIT Department of Architecture); Tomas Lozano-Perez (MIT); Caitlin Mueller (MIT Department of Architecture) Virtual Session #3
93 The RUTH Gripper: Systematic Object-Invariant Prehensile In-Hand Manipulation via Reconfigurable Underactuation Qiujie Lu (Imperial College London)*; Nicholas Baron (Imperial College London); Angus Clark (Imperial College London); Nicolas Rojas (Imperial College London) Virtual Session #3
94 Heterogeneous Graph Attention Networks for Scalable Multi-Robot Scheduling with Temporospatial Constraints Zheyuan Wang (Georgia Institute of Technology)*; Matthew Gombolay (Georgia Institute of Technology) Virtual Session #3
95 Robust Multiple-Path Orienteering Problem: Securing Against Adversarial Attacks Guangyao Shi (University of Maryland)*; Pratap Tokekar (University of Maryland); Lifeng Zhou (Virginia Tech) Virtual Session #3
96 Eyes-Closed Safety Kernels: Safety of Autonomous Systems Under Loss of Observability Forrest Laine (UC Berkeley)*; Chih-Yuan Chiu (UC Berkeley); Claire Tomlin (UC Berkeley) Virtual Session #3
97 Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations Glen Chou (University of Michigan)*; Necmiye Ozay (University of Michigan); Dmitry Berenson (U Michigan) Virtual Session #3
98 Nonlinear Model Predictive Control of Robotic Systems with Control Lyapunov Functions Ruben Grandia (ETH Zurich)*; Andrew Taylor (Caltech); Andrew Singletary (Caltech); Marco Hutter (ETHZ); Aaron Ames (Caltech) Virtual Session #3
99 Learning to Slide Unknown Objects with Differentiable Physics Simulations Changkyu Song (Rutgers University); Abdeslam Boularias (Rutgers University)* Virtual Session #3
100 Reachable Sets for Safe, Real-Time Manipulator Trajectory Design Patrick Holmes (University of Michigan); Shreyas Kousik (University of Michigan)*; Bohao Zhang (University of Michigan); Daphna Raz (University of Michigan); Corina Barbalata (Louisiana State University); Matthew Johnson Roberson (University of Michigan); Ram Vasudevan (University of Michigan) Virtual Session #3
101 Learning Task-Driven Control Policies via Information Bottlenecks Vincent Pacelli (Princeton University)*; Anirudha Majumdar (Princeton) Virtual Session #3
102 Simultaneously Learning Transferable Symbols and Language Groundings from Perceptual Data for Instruction Following Nakul Gopalan (Georgia Tech)*; Eric Rosen (Brown University); Stefanie Tellex (Brown University); George Konidaris (Brown) Virtual Session #3
103 A social robot mediator to foster collaboration and inclusion among children Sarah Gillet (Royal Institute of Technology)*; Wouter van den Bos (University of Amsterdam); Iolanda Leite (KTH) Virtual Session #3

The RSS Foundation is the governing body behind the Robotics: Science and Systems (RSS) conference. The foundation was started and is run by volunteers from the robotics community who believe that an open, high-quality, single-track conference is an important component of an active and growing scientific discipline.

Silicon Valley Bank reports on ‘The Future of Robotics’

You know robotics has ‘made it’ when Silicon Valley Bank (SVB) is reporting on it. Just five years ago, SVB barely had a hardware division, let alone a robotics and frontier tech team. The existence of this report is itself a sign of the field’s maturity, and that is also one of its key takeaways: there may be fewer deals in robotics, but the deals are getting bigger as consolidation begins in new robotics markets.

“Robotics is the latest advent in the multi-century trend toward the automation of production. The number of industrial robots, a key component of Industry 4.0, is accelerating. These machines are built by major multinationals and, increasingly, venture-backed startups.

As the segment continues to mature, data are coming in that allow founders, investors and policymakers to establish a framework for thinking about these companies. In this special sector report, we take a data-driven approach to emerging topics in the industry, including business models, performance metrics and capitalization trends.

Finally, we zoom out and consider how automation affects the labor market. In our view, the social implications of this industry will be massive and will require continuous examination by those driving this technology forward.”

Austin Badger, Director of Frontier Tech Practice at Silicon Valley Bank

Beyond the startup funding information, though, the report offers a valuable assessment of the economics of automation: the shift from CapEx to OpEx and ARR, and the progression from automation to productivity to wealth creation. While it’s clear that automation increases wealth and productivity, there are still justifiable fears that it will reduce labor opportunities. That impact is likely to fall primarily on the developing countries that currently serve as cheap labor for the world’s on-the-move manufacturing facilities.
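To make the CapEx-to-OpEx shift concrete, here is a minimal sketch in Python of how the same deployment looks as an upfront purchase versus a Robotics-as-a-Service subscription that books annual recurring revenue (ARR). All figures are invented for illustration and are not taken from the SVB report.

    # Illustrative CapEx vs. OpEx (RaaS) robot economics.
    # All numbers are hypothetical, chosen only to show the
    # structure of the shift -- not data from the SVB report.

    CAPEX_PRICE = 120_000       # one-off purchase price of a robot cell ($)
    RAAS_MONTHLY_FEE = 3_500    # Robotics-as-a-Service subscription ($/month)
    YEARS = 5

    capex_total = CAPEX_PRICE                   # customer pays up front
    raas_total = RAAS_MONTHLY_FEE * 12 * YEARS  # customer pays over the contract
    raas_arr = RAAS_MONTHLY_FEE * 12            # vendor books this as ARR

    print(f"CapEx total over {YEARS} years: ${capex_total:,}")    # $120,000
    print(f"RaaS total over {YEARS} years:  ${raas_total:,}")     # $210,000
    print(f"Vendor ARR per robot:           ${raas_arr:,}/year")  # $42,000/year

The RaaS customer pays more over five years but avoids the upfront outlay, while the vendor trades a one-time sale for predictable recurring revenue, which is exactly why investors watch ARR.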

Silicon Valley Robotics works closely with Silicon Valley Bank to help startups grow. SVB participates in our In-Depth Networks and Forums. You can download the SVB report “The Future of Robotics” here: https://www.svb.com/trends-insights/reports/the-future-of-robotics

ICRA 2020 launches with plenary panel ‘COVID-19: How Roboticists Can Help’

ICRA is the largest robotics meeting in the world and is the flagship conference of the IEEE Robotics & Automation Society. It is thus our honor and pleasure to welcome you to this edition, although the current exceptional circumstances did not allow us to organize it in Paris as planned with the glimpse and splendor that our wonderful robotics community deserves. Now, for sure, Virtual ICRA 2020, the first online ICRA, will be one of the most memorable ICRA editions ever! [Message from the General & Program Chairs]

Live Plenary Panel – COVID-19: How Roboticists Can Help?

Our first Plenary is a hot topic panel on COVID-19 Pandemic & Robotics, moderated by Ken Goldberg and chaired by Wolfram Burgard. Catch it on Big Screen or on IEEE.TV.

Proudly featuring:

Robin Murphy
Brad Nelson
Richard Voyles
Kris Hauser
Antonio Bicchi
Andra Keay
Gangtie Zheng
Ayanna Howard

Kerstin Thurow
Helen Greiner
Howie Choset
Guang-Zhong Yang

Join us for the virtual conference taking place May 31 to August 31 with sessions available both live and on demand. Plenaries and keynotes will be featured every afternoon (Central European Time) from June 1 to June 15, with live interactive Q&A sessions with the speaker. Our goal is to bring cutting-edge ICRA sessions to our community around the globe and to provide opportunities to network with like-minded professionals. We hope that this offering reaches new members of our community and creates engaging discussions within the virtual conference platform.

Schedule

Virtual workshops: 31 May – 30 June
Award ceremony: 5 June
Plenary talks: 1 – 17 June
Paper discussions: 1 June – 31 August
Conference recorded material: 1 June – 31 August
RAS Member Events: 1 June – 31 August
Plenaries

Lydia E. Kavraki: Planning in Robotics and Beyond – Tuesday June 2, 1PM UTC
Yann LeCun: Self-Supervised Learning & World Models – Wednesday June 3, 1PM UTC
Jean-Paul Laumond: Geometry of Robot Motion: from the Rolling Car to the Rolling Man – Thursday June 4, 1PM UTC

Keynotes

Allison Okamura: Haptics for Humans in a Physically Distanced World – Monday June 8, 1PM UTC
Kerstin Dautenhahn: Human-Centred Social Robotics: Autonomy, Trust and Interaction Challenges – Tuesday June 9, 1PM UTC
Pieter Abbeel: Can Deep Reinforcement Learning from pixels be made as efficient as from state? – Wednesday June 10, 1PM UTC
Jaeheung Park: Compliant Whole-body Control for Real-World Interactions – Thursday June 11, 1PM UTC
Cordelia Schmid: Automatic Video Understanding – Friday June 12, 1PM UTC
Cyrill Stachniss: Robots in the Fields: Directions Towards Sustainable Crop Production – Monday June 15, 1PM UTC
Toby Walsh: How long before Killer Robots? – Tuesday June 16, 1PM UTC
Hajime Asama: Robot Technology for Super Resilience – Remote Technology for Response to Disasters, Accidents, and Pandemic – Wednesday June 17, 1PM UTC

Special RAS Events

There are also several virtual gatherings for IEEE Robotics and Automation Society (RAS) members and students. Scroll down for more information.

RAS Meet the Leaders (formerly Lunch with Leaders)

RAS Meet the Leaders is the virtual equivalent of the popular RAS Lunch with Leaders event traditionally held at IEEE RAS’s flagship conferences: ICRA, CASE, and IROS.

Meet the Leaders is planned for multiple dates and time zones to accommodate the international robotics community. Each Leader will begin with an informal 5-minute presentation about their career, followed by a question and answer session.

Participants (students and young professionals) may sign up for ONE session to participate in a relaxed chat with academic and industry leaders from around the world.

The following Leaders are confirmed for the dates and times listed below (check back often for additional sessions):

  • Tuesday, June 2nd @ 12:00 PDT / 19:00 GMT
    Aleksandra Faust, 2020 RAS Early Industry Career Award in Robotics and Automation
  • Wednesday, June 3rd @ 10:00 am AEST / 00:00 GMT
    Peter Corke, 2020 RAS George Saridis Leadership Award, (and colleagues)
  • Thursday, June 4th @ 8:00 pm JST / 11:00 GMT
    Toshio Fukuda, IEEE President
  • Thursday, June 4th @ 13:00 EDT / 17:00 GMT
    Jaydev Desai, RAS AdCom Class of 2022
  • Monday, June 8th @ 12:30 JST / 03:30 GMT
    Zhidong Wang, RAS VP Electronic Products and Services Board, Yasushi Nakauchi RAS VP Financial Activities Board, Yasuhisa Hirata, RAS AdCom Class of 2022
  • Tuesday, June 9th @ 1:00 am KST / 16:00 GMT (Monday, June 8)
    Frank Park, RAS President Elect
  • Tuesday, June 9th @ 12:00 CDT / 17:00 GMT
    Lydia Kavraki, 2020 RAS Pioneer Award Winner
  • Thursday, June 11th @ 10:00 am PDT / 17:00 GMT
    Allison Okamura, Editor-in-Chief of RA-L and Marcia O’Malley, IROS 2020 Program Chair
  • Thursday, June 11th @ 12:00 pm PDT / 19:00 GMT
    Dieter Fox, 2020 RAS Pioneer Award Winner
  • Friday, June 12th @ 3:00 pm CEST / 13:00 GMT
    Torsten Kroeger, RAS Vice President of Conference Activities
  • Registration Form (Required): https://app.smartsheet.com/b/form/3834d7362695475f915f52b1653439c9

RAC ‘Emerging Trends in Retail Robotics’ report released

Robots are increasingly being deployed in retail environments. The reasons include: to relieve staff of repetitive and mundane tasks; to reallocate staff to more value-added, customer-facing activities; to realize operational improvements; and to utilize real-time, in-store generated data. Due to the impact of the 2020 Coronavirus outbreak, we can now add a new reason to use robots in retail: to assist with customer and employee safety.

In this Research Article, the Retail Analytics Council at NWU presents the benefits associated with deploying robots in stores and offers estimates of the size of the global retail robot market. The impact of the Coronavirus outbreak on demand for robots in the grocery industry is discussed as well, followed by a review of U.S. retail robot deployments and a look at some emerging applications.

In summary, we find that the trend toward deploying robots in retail environments is accelerating. The reasons for this include their functional utility, advances in AI, and the ability to address both labor challenges and customer and employee safety concerns. The introduction of new uses of real-time, in-store generated data is another advantage. Further, the movement toward multimodal robots that are efficient at performing various functions adds to the value equation. We also find that changing consumer behavior to increase online purchases, especially in grocery, is a major impetus fueling this movement. Finally, establishing industry standards, which is ongoing, will fuel adoption.

Previous impediments to adoption, not detailed here, are also at play; for the most part these are issues of cost and training. The costs of robots will decrease, and ROI will greatly increase, as complex computing moves off the payload via 5G and sensor costs continue to fall. Increased vendor competition will also be a factor. The cost and complexity of environmental training are likewise being addressed through the introduction of synthetic data.
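As a rough illustration of why falling deployment costs improve ROI, consider a hypothetical payback-period calculation (all figures invented for the sketch, not taken from the report):

    # Hypothetical payback-period arithmetic for a retail robot.
    # Figures are invented for illustration only.

    def payback_months(deployment_cost: float, monthly_saving: float) -> float:
        """Months until cumulative savings cover the deployment cost."""
        return deployment_cost / monthly_saving

    # Expensive on-board compute and sensors today.
    print(payback_months(90_000, 4_000))  # 22.5 months

    # If 5G offloads compute and sensor prices fall:
    print(payback_months(55_000, 4_000))  # 13.75 months

Cutting deployment cost by roughly a third shortens payback from about 22 to about 14 months, which is the ROI improvement described above.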

As the industry is still in its infancy, there are few reliable studies regarding market size. Estimates range from $4.8 billion to $19 billion in the 2015 to 2018 time frame, to as much as $52 billion by 2025. In April 2018, Bekryl Market Analysts published its Global Retail Robots Market Size Analysis, 2018-2028, estimating the global retail robot market at $19 billion in 2018 and projecting growth at a CAGR of 12.7 percent over the next ten years.

Now consider a different perspective. Verified Market Research valued the global retail robotics market at $4.78 billion in 2018 but expects a much more rapid growth rate of 31.89 percent from 2019 to 2026, reaching $41.67 billion by 2026. In 2016, yet another point of view was advanced by consulting firm Roland Berger, which stated “[t]he segment of robots designed for retail stores is emerging in a global robotics market that is already significant ($19 billion in 2015) and growing steadily ($52 billion in 2025).”
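The quoted projections are straightforward compound-growth calculations, which can be checked with a few lines of Python (our own arithmetic, not the analysts’):

    # Compound annual growth rate (CAGR) projection: size * (1 + r) ** years.

    def project(start_billions: float, cagr: float, years: int) -> float:
        """Project a market size forward at a constant compound growth rate."""
        return start_billions * (1 + cagr) ** years

    # Bekryl: $19B in 2018 growing at a 12.7% CAGR over ten years.
    print(f"Bekryl, 2028: ${project(19.0, 0.127, 10):.1f}B")  # ~$62.8B

    # Verified Market Research: $4.78B in 2018 at 31.89% through 2026.
    print(f"VMR, 2026:    ${project(4.78, 0.3189, 8):.1f}B")  # ~$43.8B

The VMR figure lands slightly above the quoted $41.67 billion, which suggests the analysts compound over a slightly shorter window; the order of magnitude, not the decimal, is the point.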

As the current Coronavirus pandemic constrains consumers’ ability to shop in stores, there is ample evidence that a shift to online purchasing is occurring in select categories, particularly grocery. To realize operating efficiencies while meeting this increased demand, grocery retailers, which represent the largest segment currently invested in robotics technology, are expected to accelerate their rate of investment.

The pressing question is whether this current movement to online grocery purchases during the pandemic represents a more permanent shift in consumer behavior. Consumers seem to think so. For example, in an April 2020 survey, 43 percent of adults said they were somewhat or very likely to continue ordering groceries online once the pandemic ends (see Chart 11). McKinsey & Company’s COVID-19 U.S. Digital Sentiment Survey found that fully “75 percent of people using digital channels for the first time indicate that they will continue to use them when things return to normal.”

In conclusion, we see the pace of retail robot adoption accelerating, especially in the grocery segment. Technology advancements surrounding deployments in stores, backrooms/warehouses, and delivery applications will continue to improve. Deployment costs will fall, as will time to deploy, increasing ROI, and multi-functional payloads that perform a variety of tasks will add further value. Emerging innovations will bring interesting new use cases, and growing use of real-time generated data, and its application and integration, will create additional value. Finally, ongoing efforts to establish industry standards will aid industry adoption.

Silicon Valley Robotics is on the Robotics and AI Advisory Board of the Retail Analytics Council at NWU, where you can download the full report “Emerging Trends in Retail Robotics”.

Open Problems for Robots in Surgery and Healthcare

* Please register at:
https://robotsinsurgeryandhealthcare.eventbrite.com

The COVID-19 pandemic is increasing global demand for robots that can assist in surgery and healthcare. This symposium focuses on recent advances and open problems in robot-assisted tele-surgery and tele-medicine and needs for new research and development. The online format will encourage active dialogue among faculty, students, professionals, and entrepreneurs.

Featuring:
Gary Guthart, CEO, Intuitive Surgical
Robin Murphy, Texas A&M
Pablo Garcia Kilroy, VP Research, Verb Surgical
Allison Okamura, Professor, Stanford
David Noonan, Director of Research, Auris Surgical
Jaydev Desai, Director, Georgia Tech Center for Medical Robotics
Nicole Kernbaum, Principal Engineer, Seismic Powered Clothing
Monroe Kennedy III, Professor, Stanford

Presented by the University of California Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute “People and Robots” Initiative, SRI International, and Silicon Valley Robotics.

Schedule:

   *  09:30-10:00: Conversation with Robin Murphy, Texas A&M and Director of Robotics for Infectious Diseases, and Andra Keay, Director of Silicon Valley Robotics
   *  10:00-10:30: Conversation with Gary Guthart, CEO, Intuitive Surgical, and Ken Goldberg, Director of CITRIS People and Robots Initiative
   *  10:30-11:00: Conversation with Pablo Garcia Kilroy, VP Research, Verb Surgical, and Tom Low, Director of Robotics at SRI International
   *  11:00-11:15: Coffee Break
   *  11:15-11:45: Conversation with David Noonan, Director of Research, Auris Surgical, and Nicole Kernbaum
   *  11:45-12:45: Keynote by Jaydev Desai, Director, Georgia Tech Center for Medical Robotics
   *  12:45-13:15: Conversation with Allison Okamura, Stanford, and Monroe Kennedy III, Stanford

From SLAM to Spatial AI


You can watch this seminar here at 1PM EDT (10AM PDT) on May 15th.

Andrew Davison (Imperial College London)

Abstract: To enable the next generation of smart robots and devices which can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic ‘Spatial AI’ perception capability. I will give many examples from our work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to enable the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors which form full systems, and I will cover research on vision algorithms for non-standard visual sensors and graph-based computing architectures.

Biography: Andrew Davison is Professor of Robot Vision and Director of the Dyson Robotics Laboratory at Imperial College London. His long-term research focus is on SLAM (Simultaneous Localisation and Mapping) and its evolution towards general ‘Spatial AI’: computer vision algorithms which enable robots and other artificial devices to map, localise within and ultimately understand and interact with the 3D spaces around them. With his research group and collaborators he has consistently developed and demonstrated breakthrough systems, including MonoSLAM, KinectFusion, SLAM++ and CodeSLAM, and recent prizes include Best Paper at ECCV 2016 and Best Paper Honourable Mention at CVPR 2018. He has also had strong involvement in taking this technology into real applications, in particular through his work with Dyson on the design of the visual mapping system inside the Dyson 360 Eye robot vacuum cleaner and as co-founder of applied SLAM start-up SLAMcore. He was elected Fellow of the Royal Academy of Engineering in 2017.

Robotics Today Seminars

“Robotics Today – A series of technical talks” is a virtual robotics seminar series whose goal is to bring the robotics community together during these challenging times. The seminars are scheduled on Fridays at 1PM EDT (10AM PDT) and are open to the public. The format consists of a technical talk, live captioned and streamed via the web and Twitter (@RoboticsSeminar), followed by an interactive discussion between the speaker and a panel of faculty, postdocs, and students who moderate audience questions.

Stay up to date with upcoming seminars via the Robotics Today Google Calendar (or download the .ics file), view past seminars on the Robotics Today YouTube channel, and follow us on Twitter!

Upcoming Seminars

Seminars will be broadcast at 1PM EDT (10AM PDT) here.

22 May 2020: Leslie Kaelbling (MIT)

29 May 2020: Allison Okamura (Stanford)

12 June 2020: Anca Dragan (UC Berkeley)

Past Seminars

We’ll post links to the recorded seminars soon!
