2019 Robot Launch startup competition is open!
It’s time for the Robot Launch 2019 Global Startup Competition! Applications are open until September 2nd at 6pm PDT. Finalists may receive up to $500k in investment offers, plus space at top accelerators and mentorship at the Silicon Valley Robotics co-working space.
Winners in previous years include high-profile robotics startups and acquisitions:
2018: ANYbotics from ETH Zurich, with Sevensense and HEBI Robotics as runners-up.
2017: Semio from LA, with Apellix, Fotokite, Kinema Systems, BotsAndUs and Mothership Aeronautics as runners-up in the Seed and Series A categories.
Flexibility of Mobile Robots Supports Lean Manufacturing Initiatives and Continuous Optimizations of Internal Logistics at Honeywell
How Drones Are Disrupting The Insurance Industry
#292: Robot Operating System (ROS) & Gazebo, with Brian Gerkey
In this episode, Audrow Nash interviews Brian Gerkey, CEO of Open Robotics, about the Robot Operating System (ROS) and Gazebo. Both ROS and Gazebo are open source and are widely used in the robotics community. ROS is a set of software libraries and tools for building robot applications, and Gazebo is a 3D robotics simulator. Gerkey explains ROS and Gazebo, talks about how they are used in robotics, and discusses some of the design decisions behind the second version of ROS, ROS 2.
Brian Gerkey
Brian Gerkey is the CEO of Open Robotics, which seeks to develop and drive the adoption of open-source software in robotics. Before Open Robotics, Brian was the Director of Open Source Development at Willow Garage, a computer scientist in the SRI Artificial Intelligence Center, and a postdoctoral scholar in Sebastian Thrun’s group in the Stanford Artificial Intelligence Lab. Brian did his PhD with Maja Matarić in the USC Interaction Lab.
Links
Piab’s Kenos KCS Gripper
Industrial Robotics Companies to Watch Out for in 2020
6 Disrupting Trends in Industrial Robotics Poised To Transform Manufacturing
What You Need to Know to Work With a Collaborative Robot System
Evolution of Embedded Vision Technologies for Robotics
Will Creating Robots for Space Travel Become More Necessary in the Near Future?
The Power of Adaptive Robotic EOAT
Acura Provides All-Access Look into the Performance Manufacturing Center (PMC)
Making sense of vision and touch: #ICRA2019 best paper award video and interview
PhD candidate Michelle A. Lee from the Stanford AI Lab won the Best Paper Award at ICRA 2019 for her work “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks”. You can read the paper on arXiv here.
Audrow Nash was there to capture her pitch.
And here’s the official video about the work.
Full reference
Lee, Michelle A., Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, and Jeannette Bohg. “Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks.” arXiv preprint arXiv:1810.10191 (2018).