RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. When it was established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.
To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.
This week, we feature the RoboCupIndustrial league. RoboCupIndustrial is a competition between industrial mobile robots focusing on logistics and warehousing systems. In anticipation of Industry 4.0, participants compete in automation using robots, autonomous systems, and mobile robot technology. You’ll hear about the history and ambitions of RoboCup from the trustees, and from inspiring teams around the world.
Short version:
Long version:
Want to watch the rest? You can view all the videos on the RoboCup playlist below:
For the first time on Broadway, human and drone performances fused to create a new form of artistic expression. The magic happened in Cirque du Soleil’s first musical on Broadway: ‘Paramour’ at the Lyric Theatre. The show is themed on the Golden Age of Hollywood and follows the life of a poet who is forced to choose between love and art. The contributions of the technology firm Verity Studios include the choreography of the drone show segment, the frame and lighting design of the drone costumes, and all underlying drone technologies. The system was operated by the show’s automation team, with Verity Studios providing maintenance services twice per year. The dancing drones completed almost 400 shows, including more than 7,000 autonomous takeoffs, flights, and landings.
Over the past year, 398 audiences of up to 2,000 people witnessed an octet of colorful lampshades perform an airborne choreography during Cirque du Soleil’s Broadway show Paramour, which ran until April 20th. The work behind the design and choreography of the flying lampshades, which turn out to be self-piloted drones, bears the signature of the Swiss high-tech company Verity Studios.
But how novel is it really that robots have appeared in theater? Since Karel Capek’s science fiction play R.U.R. (short for Rossum’s Universal Robots) introduced the word “robot” to the English language and to science fiction almost 100 years ago, the technical challenges of incorporating robots into live performance and theater have been overwhelming. Before these Broadway drones, nearly all theater robots were remote-controlled puppets, relying on humans hidden offstage to steer their movements and provide their intelligence. What holds for Broadway also holds for the bigger picture.
It is intriguing that the aerial revolution gained momentum on the ground first. The complexity of even the most advanced industrial automation systems found in state-of-the-art factories and warehouses hardly compares to that of drone show systems, which allow dozens, hundreds, or thousands of drones to navigate autonomously in a safety-critical environment. One example, however, stands out: the only comparable systems that can serve as a template for operating large numbers of autonomous mobile robots with high demands for reliability and safety are those developed by the technology company Kiva Systems (bought by Amazon for USD 775 million in 2012), where Verity Studios’ founder, Professor Raffaello D’Andrea, and his former colleagues developed a solution for automated storage and retrieval in warehouses. His team unleashed thousands of mobile robots in warehouses, which helped start the robotics gold rush in Silicon Valley. Now, Professor D’Andrea’s team is pursuing an analogous, even more ambitious goal in the air.
Placing intelligent, autonomous flying machines in live theater presents a multi-faceted challenge: creating a compelling performance while guaranteeing safety, reliability, and ease of operation. A compelling performance means shaping a convincing creative concept around the drones’ choreography; ease of operation chiefly means designing computerized systems in lieu of human pilots. Verity Studios’ drones are flying mobile robots that navigate autonomously, piloting themselves under the supervision of a single human operator. To navigate autonomously, these drones require a reliable method for determining their position in space. Since GPS is usually not available indoors, Verity Studios built on more than a decade of research and development at the Flying Machine Arena of Switzerland’s Federal Institute of Technology (ETH Zurich) to provide a novel localization method.
The first groups of autonomous robots are now meeting the live event industries’ high requirements. Verity Studios first set the stage for widely recognized flying machines in entertainment in 2014, by collaborating with Cirque du Soleil on Sparked, a short film in which 10 siblings of the 8 similarly dressed quadcopters from Paramour dance, hover, and flicker around a stunned human actor. Sparked was named a Winner of the 2016 New York City Drone Film Festival. However, it is one thing to feature drones in a 4-minute video, where the makers have the luxury of shooting it 100 times and then choosing the best take. Live performances on Broadway, running 8 times per week in front of up to 2,000 people every time, by contrast, require the drone shows to work every single time. Verity Studios has met these requirements in the best sense: During their one-year run, the drones on Broadway safely completed more than 7,000 autonomous take-offs, flights, and landings. How was that possible?
In contrast to the long list of safety incidents involving remote-controlled drones at events, performance and reliability are key to Paramour’s dancing drones, which are custom-made and hand-crafted in Switzerland. Each of the eight flying machines featured in Paramour uses 80 sensors to fly and performs roughly 1.5 billion calculations per second. Each critical calculation is double-checked every time it is performed, in case one of the processors involved makes a mistake. In other words, accidents are prevented by self-test and monitoring. And even if something does happen, a back-up is in place. For example, during the February 8, 2017 show, the battery of one of the eight vehicles could not provide sufficient power during the takeoff phase. The quadcopter detected the problem and returned to the ground on only two propellers.
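As a rough illustration of that double-check-and-monitor pattern, here is a minimal sketch; Verity Studios’ actual flight code is not public, and the function names below are hypothetical:

```python
# Minimal sketch of self-test by duplicated computation. In a real flight
# controller the second evaluation would run on an independent processor;
# here both run locally, which only catches transient faults.
def checked(compute, *args):
    first = compute(*args)
    second = compute(*args)
    if first != second:
        # Mismatch: do not trust the result; hand over to the fail-safe.
        raise RuntimeError("self-test mismatch - trigger safe landing")
    return first

# Hypothetical usage: wrap each critical control-law evaluation.
# thrust_cmd = checked(control_law, state_estimate, setpoint)
```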
The drones are designed to withstand the failure of any single component; where a single point of failure is unavoidable, a fail-safe or weakest-link design is used. The failure of any single system component cannot be allowed to result in an unmanaged loss of control, but must be handled appropriately. For instance, a drone should be able to land in a controlled fashion despite a failure of any one of its motors, any part of its electronics, any one of its cables or connectors, and any one of its batteries. Similarly, loss of communication between a drone and its ground station must not create a dangerous situation. This can be achieved by designing a fully redundant system, i.e., a system that can continue operation, or trigger a safe emergency behavior, if any one of its components fails. While such designs can be more expensive to create than designs without duplication, they provide much higher safety (and help reduce insurance costs). Their decades-long history in the manned aviation industry has shown this approach’s effectiveness and provides a treasure trove of experience and evidence for the emerging drone industry.
One way to design such a redundant system is to first design a simple flight-capable system, then duplicate all of its components (e.g. sensors, processors, actuators) to achieve the necessary redundancy, and carefully design the switch-over from a failed component to its backup. Importantly, the design of such a system cannot stop with the design of fully redundant drones, but needs to extend to all other critical components of the drone show system, including the positioning system, communications architecture, e-stop systems, and control stations. In the end, this uncompromising approach allows for a degree of safety and robustness that makes these intelligent, autonomous robots suitable for performance next to theater audiences. The result is a one-of-a-kind performance that extends the traditional palette of light, sound, stage effects, and interaction with human performers by translating an intimate character beat into an unexpected visual motif.
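A minimal sketch of the switch-over idea described above, assuming each duplicated unit can report its own health (the class and method names are hypothetical, not Verity Studios’ design):

```python
# Fully redundant component pair: reads come from the primary unit until its
# self-test fails, then switch seamlessly to the backup. Only when both
# units fail does the system fall back to an emergency behavior.
class RedundantPair:
    def __init__(self, primary, backup):
        self.primary = primary
        self.backup = backup

    def read(self):
        if self.primary.healthy():
            return self.primary.read()
        if self.backup.healthy():
            return self.backup.read()  # switch-over to the duplicate
        raise RuntimeError("both units failed - trigger emergency landing")
```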
The enticing reality is that there are many more areas to explore, ranging from live concerts to active scenography, that offer dramatic possibilities for performing robots on a much broader scale. Once again, such stage flyers, together with Verity’s Lucie micro-drones, look set to make history, not only on Broadway or in Amazon’s warehouses, but all across the performing arts, revolutionizing the scene with a new realm of creative expression.
Update: The response to Tertill’s crowdfunding campaign has amazed and delighted us! Pledges totalling over $250,000 have come from 1000+ backers. We’re shipping to all countries, with over a fifth of Tertill’s supporters coming from outside the United States. But the end is near; Tuesday (11 July) is the last full day of the campaign. After that Tertill’s discounted campaign price will no longer be available and delivery in time for next year’s (northern hemisphere) growing season cannot be assured.
Franklin Robotics has launched a Kickstarter campaign for Tertill, their solar-powered, garden-weeding robot.
Tertill lives in your garden, collecting sunlight to power its weed patrol and cutting down short plants with a string trimmer/weed whacker, with almost no intervention required. Available for about US$300, the fully autonomous Tertill is the first weeding robot available to home gardeners.
Tertill is round and short, with four-wheel drive and extreme-camber wheels. It uses proprietary algorithms to ensure it finds as many weeds as it can, using its sensors to distinguish weeds from crops based on height. As Tertill sees it: small plants are weeds, big plants are crops. But what if you have seedlings? Simply place a plant collar around a seedling to tell Tertill that the plant is wanted.
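In code, the height heuristic is as simple as it sounds. This is an illustrative sketch only; Franklin’s actual algorithms are proprietary, and the threshold below is a made-up value:

```python
# Hypothetical threshold: anything shorter than this is treated as a weed.
CUT_HEIGHT_CM = 2.5

def should_cut(plant_height_cm: float, has_collar: bool) -> bool:
    """Short plants are weeds; tall plants and collared seedlings are crops."""
    if has_collar:
        return False  # a plant collar marks a wanted seedling
    return plant_height_cm < CUT_HEIGHT_CM
```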
Tertill doesn’t use a camera for weed discrimination because Franklin wants to put Tertill to work in your garden now. In the not-too-distant future, inexpensive vision systems may reliably distinguish between weeds and crops even when both are seedlings. But, according to Franklin, such systems are not yet ready.
Tertill is designed such that one robot can easily keep up with the weeds in a typical-sized garden (about 100 square feet in the USA). Tertill will usually handle gardens considerably larger than this, but how much larger depends on the weather and local weed types. And Tertills can work in teams, coexisting harmoniously in the same space: multiple Tertill robots will simply avoid each other.
Check out the video below:
But Tertill needs a small barrier to keep it from wandering off beyond the borders of your garden. A garden fence, the wooden border of a raised bed, or any sort of edging that is two or more inches tall will work.
Otherwise, Tertill is designed to live in your garden and require very little attention. Summer showers won’t bother it. And, under normal conditions, you won’t have to worry about charging Tertill—the sun will see to that. But, if Tertill has been in the dark for so long (months) that its battery is completely exhausted, a USB connection can be used to charge the battery.
Franklin believes that small, simple robots can help solve big, complex problems. Franklin’s robots aim to help gardeners by reducing the tedium and physical challenge of weeding, making gardening more fun. The data their robots collect helps gardeners garden better, improving yield and quality. Their robots aim to benefit the environment by eliminating the need for herbicides and returning organic matter to the soil.
Click here to visit Franklin’s website and learn more.
The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.
The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody, and each lesson carries a difficulty rating so you can easily judge its level. Below are several example lessons on image formation and 3D vision.
The geometry of image formation
The real world has three dimensions but an image has only two. We can use linear algebra and homogeneous coordinates to understand what’s going on. This more general approach allows us to model the positions of pixels in the sensor array and to derive relationships between points on the image and points on an arbitrary plane in the scene.
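As a concrete taste of what this lesson covers, here is a minimal worked example of that projection, mapping a 3D point to pixel coordinates with homogeneous coordinates (the intrinsic parameters are illustrative, not taken from any particular lesson):

```python
import numpy as np

# Camera intrinsic matrix: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Projection matrix for a camera at the world origin: P = K [I | 0].
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

X = np.array([0.5, -0.2, 4.0, 1.0])  # homogeneous 3D point, 4 m in front
x = P @ X                            # homogeneous image point
u, v = x[0] / x[2], x[1] / x[2]      # divide out the scale factor
print(u, v)                          # -> 420.0 200.0
```

The final division by the third coordinate is exactly where the depth information is lost, which is why recovering 3D structure from images is hard.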
How is an image formed? The real world has three dimensions but an image has only two: how does this happen and what are the consequences? We can use simple geometry to understand what’s going on.
An image is a two-dimensional projection of a three-dimensional world. The big problem with this projection is that big distant objects appear the same size as small close objects. For people, and robots, it’s important to distinguish these different situations. Let’s look at how humans and robots can determine the scale of objects and estimate the 3D structure of the world based on 2D images.
No matter how great a surgeon is, robotic assistance can bring a higher level of precision to the operating table. The ability to remotely operate a robot that can hold precision instruments greatly increases the accuracy of surgical procedures like thoracoscopic surgery, which is used to treat lung cancer.
Of the two most common types of lung cancer, non-small cell lung cancer (NSCLC) is a good candidate for surgery because the tumors spread slowly and are more localized. Since more than 80 percent of people with lung cancer have NSCLC, surgery is a common treatment.
Lung cancer usually starts when epithelial cells that line the inside of the lungs mutate and reproduce rapidly, creating tumors inside the lungs. These tumors have traditionally been removed directly by the skilled hands of a surgeon. Today, we’re starting to see more tumors being removed by robots that are controlled by skilled surgeons. This process is known as robotic assisted lung surgery.
The key benefits of robotic assisted lung surgery are:
It’s minimally invasive and small incisions are used. Traditional lung surgery procedures utilize a large incision across the chest wall. With robotic assisted surgery, the incision is about half as long; the robotic arms can be maneuvered more intricately, so the incision doesn’t need to be as large. Often a second incision is made for the removal of tissue.
Patients often experience a faster recovery time. When a patient has a robotic lobectomy, they’re often back to their normal routine in a week’s time. Older patients in their 70s and 80s also experience a good recovery with this type of procedure, which is good news considering they’re usually not great candidates for open surgeries. The small incisions are responsible for a faster recovery time compared to the long recovery time after traditional lung surgery.
There are several robotic assisted procedures used to treat lung cancer:
Video-Assisted Thoracoscopic Surgery (VATS)
Fewer than 5 percent of thoracic surgeries in the US are performed robotically. Rush University is one of a few US medical institutions that offer a full range of robotic-assisted lung cancer procedures, including VATS. In fact, Rush is known as a leader in VATS.
VATS is the most common type of robotic assisted lung surgery. With this procedure, one incision is made to insert the surgical instruments for the removal of tissue, while another incision is used to place the camera. The surgeon maneuvers the surgical instruments while an assistant operates the camera so the surgeon can see what’s happening.
The da Vinci Si robotic system
Rush University is now using the da Vinci Si robotic system for lung surgery; the system was once used only for other, more general surgeries.
How the da Vinci Si robotic system works
A surgeon sits at a console to control a four-armed robot that’s been positioned above the patient lying on the operating table. The surgeon observes the scene from a screen that displays images coming from a camera. As the surgeon moves his or her controls, the robot responds accordingly in real-time, and the surgical instruments attached to the robotic arms perform the surgery.
One of Rush University Medical Center’s thoracic surgeons, Gary Chmielewski, says, “Any motion I can do with my hands, the robot can simulate inside the patient with more precision and less tissue trauma. It all works together for a better operation that’s easier on the patient.”
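Two standard ingredients behind that precision are motion scaling and tremor filtering: the surgeon’s hand motion is scaled down and smoothed before it reaches the instruments. The sketch below illustrates the idea only; it is not Intuitive Surgical’s control code, and the scale and filter constants are made up:

```python
import numpy as np

SCALE = 0.2   # 5:1 motion scaling: 1 cm of hand travel -> 2 mm at the tip
ALPHA = 0.1   # exponential low-pass coefficient to damp hand tremor

def instrument_step(hand_delta, filtered_prev):
    """Filter one increment of hand motion and scale it down."""
    filtered = ALPHA * np.asarray(hand_delta) + (1 - ALPHA) * filtered_prev
    return SCALE * filtered, filtered  # (instrument motion, filter state)
```

Each control cycle feeds the previous filter state back in, so high-frequency tremor is averaged away while deliberate motion passes through, shrunk by the scaling factor.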
Nine years of positive results
A study published in 2014 analyzed the results of different types of robotic lobectomies for treating lung cancer over a period of nine years. The study was designed to evaluate the evolution of the technique as well as the robotic technology. The study found a positive trend in patient outcomes when patients opted for the upgraded robotic systems compared to the standard systems.
Twenty years ago, who would have thought robotic assisted lung surgery would become such a popular and effective option?
If robots can be used to help surgeons remove tumors from lung cancer patients, what else is possible with robotics? The possibilities are endless.
A U.S. drone strike reportedly killed eight individuals at an al-Shabab command post in southern Somalia. It is believed to be the first U.S. drone strike in Somalia since President Trump relaxed rules for targeting al-Shabab militants. (New York Times)
Canada announced that it is moving ahead with its acquisition of strike-capable drones. The planned acquisition is part of a $62 billion investment in new military systems and technologies. In a press conference, Canadian Prime Minister Justin Trudeau said that the government will carefully review how to best use the drones prior to deployment. (CBC)
Alphabet will sell Boston Dynamics, which makes quadruped and bipedal robots, as well as robotics firm Schaft, to SoftBank Group, a Japanese telecommunications corporation. Boston Dynamics has become well-known for developing advanced, dexterous robots. Alphabet acquired Boston Dynamics in 2013 and Schaft in 2014. (Recode)
A U.S. fighter jet shot down an Iranian drone near the Al Tanf border crossing between Syria and Iraq. In a statement, the U.S.-led coalition said that the drone had earlier attacked U.S. allies on the ground and that it was approximately the size of a U.S. MQ-1 Predator, indicating that it was likely a Shahed-129 drone. A variety of Iranian drones have supported pro-regime forces in Syria since at least 2012. (The Daily Beast) For more on the drones operating in Syria and Iraq, click here.
Meanwhile, at Over the Horizon, U.S. Air Force Colonel Joseph Campo discusses U.S. drone operations and the psychological toll of those operations on drone pilots.
At Recode, April Glaser writes that the Trump administration’s plan to privatize air traffic control could accelerate the push to develop a national drone tracking system.
In a study published in the online Journal of Unmanned Vehicle Systems, Paul Nesbit examines the growing number of close encounters between drones and manned aircraft in Canada. (Phys.org)
At TechCrunch, Brian Heater looks at how the Robust Adaptive Systems Lab at Carnegie Mellon is developing drones and robots that can work alongside humans.
At CBC, Kyle Bakx visits a village in Canada that is seeking to transform itself into a hub for companies that develop and test drones.
Amazon has been awarded a patent for a safety system for its delivery drones that shuts off the aircraft’s rotors if it detects an imminent collision. (Drone Life)
Researchers at Georgia Tech are developing the Miniature Autonomous Blimp, an indoor unmanned blimp that has a camera that can detect faces and autonomously follow individuals. (IEEE Spectrum)
Navajo County sheriff’s deputies in Arizona used a drone to assist in the search for a man who went missing in a local forest. (White Mountain Independent)
The European Commission officially launched the €5.5 billion European Defence Fund, which will include funding for unmanned systems development and acquisition. (Press Release)
The U.S. Navy awarded Academi Training Center, Insitu, PAE ISR, and AAI contracts for sea and land-based drones for intelligence, surveillance, and reconnaissance. The total value of the contracts is $1.73 billion. (DoD)
The U.S. Special Operations Command awarded AAI Corp. a multiple award with a $475 million maximum order ceiling for Mid-Endurance Unmanned Aircraft Systems. (DoD)
The U.S. Special Operations Command awarded Insitu a multiple award with a $475 million maximum order ceiling for Mid-Endurance Unmanned Aircraft Systems. (DoD)
The U.S. Air Force awarded Radio Hill Technologies a $2.5 million contract for 100 Block 3 Dronebuster counter-UAS detection and jamming systems. (Shephard Media)
The Defense Advanced Research Projects Agency awarded BAE Systems two contracts worth $5.4 million to develop a payload architecture that will enable smaller drones to multitask, a program known as CONCERTO. (IHS Jane’s 360)
The Defense Advanced Research Projects Agency awarded Oregon State University a $6.5 million grant to improve the trustworthiness of artificial intelligence in robots and drones. (Press Release)
Boeing is partnering with Huntington Ingalls, the largest military shipbuilder in the U.S., to help develop Boeing’s Echo Voyager extra-large unmanned undersea vehicle (XLUUV). (Defense One)
Dronefence, a German counter-UAS startup, announced seed funding from Larnabel Ventures, VP Capital, Boundary Holding, and Technology and Business Consulting Group. (Press Release)
Maritime Robotics, a Norwegian company specializing in unmanned maritime vehicles, will provide Seabed Geosolutions with its Mariner unmanned surface vehicle for seismic exploration. (Maritime Journal)
The Duke Energy Foundation awarded Butler Technology and Career Development School in Hamilton, Ohio a $45,000 grant to teach students how to use drones. (WLWT5)
Virginia Governor Terry McAuliffe awarded three $50,000 grants to the Counter-Drone Research Corporation, TruWeather Solutions, and dbS Productions to conduct research into drones and autonomous vehicles. (StateScoop)
Autonomous vehicle and artificial intelligence company Cognata raised $5 million from Emerge, Maniv Mobility, and Airbus Ventures. (TechCrunch)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.
Uber, the global ride-sharing transportation company, has named two replacements to recover from the recent firing of Anthony Levandowski, who headed its Advanced Technologies Group, its Otto trucking unit, and its self-driving team. Levandowski was fired May 30th.
Eric Meyhofer
Meyhofer, who before coming to Uber was the co-founder of Carnegie Robotics and a CMU robotics professor, was also part of the group that came to Uber from CMU (see below). He has now been named to head Uber’s self-driving Advanced Technologies Group (ATG) and will report directly to Uber CEO Travis Kalanick.
The ATG group is charged with developing the self-driving technologies of mapping, perception, safety, data collection and learning, and self-driving for cars and trucks.
Sensors that determine distances are integral to the process. Elon Musk said recently that LiDAR isn’t needed because cameras, sensors, software and high-speed GPUs can do the same tricks at a fraction of the cost. Levandowski favored LiDARs, particularly newly developed solid state LiDAR technologies.
Anthony Levandowski
Levandowski, the previous head of the ATG, joined Google to work with Sebastian Thrun on Google Street View, started a mobile mapping company that experimented with LiDAR technology and another to build a self-driving LiDAR-using car (a Prius). Google acquired both companies including their IP. In 2016 Levandowski left Google to found Otto, a company making self-driving kits to retrofit semi-trailer trucks. Just as the kit was launched, Uber acquired Otto and Levandowski became the head of Uber’s driverless car operation in addition to continuing his work at Otto.
The Levandowski case, which led Uber to fire him, centers on intellectual property, particularly the LiDAR-related technologies around which both Google’s and Uber’s self-driving plans revolve. Getting the cost of perception down to a reasonable level has been part of the bigger challenge of self-driving technology, and LiDAR technology is integral to that plan.
Google’s Waymo self-driving unit alleges in its suit that, in return for bringing Google’s IP to Uber, Uber gave Levandowski $250 million in stock grants. Uber has called Waymo’s claims baseless and an attempt to slow down a competitor.
Waymo also claims that Uber has a history of “stealing” technology, citing the time in 2015 when Uber hired away 40 members of the Carnegie Mellon University robotics team – a move that cost Uber a reported $5 billion and created havoc at CMU and the National Robotics Engineering Center (NREC), which lost a third of its staff to Uber. The move was preceded by a strategic partnership with CMU to work together on self-driving technologies. Four months later, Uber hired the 40.
The fallout, as reported at the time:
Carnegie Mellon was left decimated after Uber poached 40 top-rated robotics researchers to help it develop self-driving cars.
Carnegie Mellon was ‘in crisis’ after the mass defection of scientists to Uber.
Uber hopes its fleet of taxis will not require drivers in the future.
Uber used $5 billion from investors to poach at least 40 people from the National Robotics Engineering Center.
Uber took six principal investigators and 34 engineers.
Brian Zajac
Zajac has been on Uber’s self-driving team since 2015 after stints with Shell and the U.S. Army. Now he becomes the new chief of hardware development and reports to Meyhofer.
As one report put it: “Zajac will now bear a great deal of responsibility for cracking the driverless car problem, which Uber CEO Travis Kalanick has described as ‘existential’ to the company. Uber loses huge amounts of money, and many observers think eliminating the cost of drivers is its only realistic path to profitability.”
Bottom line:
Uber has research teams in Silicon Valley, Toronto and Pittsburgh, all working to perfect Level 5 autonomous driving capabilities before any of their competitors can. Google, Baidu, Yandex, Didi Chuxing, a few of the Tier 1 component makers, and many others, including all the major car companies, are racing forward with the same intentions. Levandowski’s firing left a big gap in Uber’s self-driving project management and stoked fear among its investors. Uber hopes that these two appointments, Meyhofer as overall head and Zajac as hardware chief, will quell fears that Uber is losing its momentum.
It is unclear if Masayoshi Son, Chairman of SoftBank, was one of the 17 million YouTube viewers of Boston Dynamics’ BigDog before acquiring the company for an undisclosed amount this past Thursday. What is clear is that the acquisition of Boston Dynamics by SoftBank is a big deal. SoftBank’s humanoid robot Pepper is trading up her dainty wheels for a pair of sturdy legs.
In expressing his excitement for the acquisition, Masayoshi Son said, “Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”
Marc Raibert, CEO of Boston Dynamics, previously sold his company to Google in 2013. Following the departure of Andy Rubin from Google, the internet company expressed buyer’s remorse. Raibert’s company failed to advance from being a military contractor to a commercial enterprise, and it became very challenging to incorporate Boston Dynamics’ zoo of robots (mechanical dogs, cheetahs, bulls, mules and complex humanoids) into Google’s autonomous strategy. Since Rubin’s exit in 2014, rumored buyers for Boston Dynamics have ranged from Toyota Research to Amazon Robotics. SoftBank represents a new chapter for Raibert, and possibly the entire industry.
Raibert’s statement to the press gave astute readers a peek at what to expect: “We at Boston Dynamics are excited to be part of SoftBank’s bold vision and its position creating the next technology revolution, and we share SoftBank’s belief that advances in technology should be for the benefit of humanity. We look forward to working with SoftBank in our mission to push the boundaries of what advanced robots can do and to create useful applications in a smarter and more connected world.” A quick study of the assets of both companies reveals how Boston Dynamics could help SoftBank in its mission to build robots to benefit humanity.
SoftBank’s premier robot is Pepper, a four-foot-tall social robot that has mostly been deployed in Asia as a customer service agent. Recently, as part of SoftBank’s commitment to the Trump administration to invest $50 billion in the United States, Pepper has been spotted in stores in California. As an example, Pepper proved itself a valuable asset last year to Palo Alto’s premier tech retailer B8ta, accounting for a six-fold increase in sales. To date, there are close to 10,000 Pepper robots deployed worldwide, mostly in Asian retail stores. However, SoftBank also owns Sprint, with 4,500 cell phone stores across the USA, and is a major investor in WeWork, with 140 locations globally servicing 100,000 members – could Pepper be the customer service agent or receptionist of the future?
According to SoftBank’s website, Pepper is designed to be a “day-to-day companion,” with its most compelling feature being the ability to perceive emotions. SoftBank boasts that its humanoid is the first robot ever to recognize the moods of homo sapiens and adapt its behavior accordingly. While this is extremely relevant for selling techniques, SoftBank is most proud of Pepper being the first robot to be adopted into homes in Japan. Pepper is believed to be more than a point-of-purchase display gimmick: it is an example of the next generation of caregivers for the rising elderly populations in Japan and the United States. According to the Daily Good, “Pepper could do wonders for the mental engagement and continual monitoring of those in need.” Its under-$2,000 price point also provides an attractive incentive to introduce the robot into new environments; however, wheel-based systems are limited in homes with cluttered floors, stairs and other unforeseen obstacles.
Boston Dynamics is almost the complete opposite of SoftBank; it is a research group spun out of MIT. Its expertise is not in social robots but in military “proofs of concept” like futuristic combat mules. The company has produced some of the most frightening mechanical beasts ever to walk the planet, from metal Cheetahs that sprint at over 25 miles per hour, to mechanized dogs that scale mountains with ease, to one of the largest humanoids ever built, which bears an uncanny resemblance to Cyberdyne’s T-800. In a step towards commercialization, Boston Dynamics released its newest monster earlier this year: a wheeled biped named Handle that can easily lift over a hundred pounds and jump higher than LeBron James. Many analysts pontificated that this was Boston Dynamics’ attempt to prove its relevance to Google with a possible last-mile delivery bot.
In an IEEE interview when Handle debuted last February, Raibert exclaimed, “Wheels are a great invention. But wheels work best on flat surfaces and legs can go anywhere. By combining wheels and legs, Handle can have the best of both worlds.” IEEE writer Evan Ackerman wondered, after seeing Handle, whether the next generation of Boston Dynamics’ humanoids could feature legs with roller-skate-like shoes. One thing is certain: Boston Dynamics is the undisputed leader in dynamic control and balance systems for complex mechanical designs.
Leading roboticist Dan Kara of ABI Research confirmed that “these [Boston Dynamics] are the world’s greatest experts on legged mobility.”
If walking is the expertise of Raibert’s team and SoftBank is the leader in cognitive robotics with a seemingly endless supply of capital, the combination could produce the first real striding humanoid capable of perceiving human emotions. By 2030 there will be 70 million people over the age of 65 in America, with a considerably smaller number of caregivers. To answer this call, researchers are already converting current versions of Pepper into sophisticated robotic assistants. Last year, Rice University unveiled a “Multi-Purpose Eldercare Robot Assistant (MERA),” which is essentially a customized version of SoftBank’s robot. MERA is specifically designed to be a home companion for seniors that “records and analyzes videos of a person’s face and calculates vital signs such as heart and breathing rates.” Rice University partnered with IBM’s Aging-in-Place Research Lab to create MERA’s speech technology. IBM’s lab founder, Susann Keohane, explained that Pepper “has everything bundled into one adorable self.” Now with Boston Dynamics’ legs, Pepper could be a friend, physical therapist, and life coach walking side by side with its human companion.
Daniel Theobald, founder of Vecna Technologies, a healthcare robotics company, summed it up best last week: “I think SoftBank has made a major commitment to the future of robotics. They understand that the world economy is going to be driven by robotics more and more.”
Next Tuesday we will dive further into the implications of Softbank’s purchase of Boston Dynamics with Dr. Howard Morgan/First Round Capital, Tom Ryden/MassRobotics and Dr. Eric Daimler/Obama White House at RobotLabNYC’s event on 6/13 @ 6pm WeWork Grand Central (RSVP).
3! 2! 1! Go! Suddenly, robots jerk into motion and zoom across the field to score points, crossing over several types of terrain and shooting balls into high and low goals. Another buzzer sounds, drivers pick up their controls and all six robots—three per alliance—are now under human control. As these huge 120-pound robots score points, cheers ring through a packed stadium, fueled by high school students who worked hard to build their robot in just six weeks. As the match ends, nervous and excited students wait to see who is the winner of the 2016 world championship.
This was my last match as a member of the Girls of Steel FIRST Robotics Competition Team #3504. FIRST (For Inspiration and Recognition of Science and Technology) is a robotics program for students from K-12, and I was in the last division, FRC. The program is about more than introducing students to STEM and giving them hands-on experience; it’s about helping students to grow and have positive impacts by recognizing community service efforts, celebrating good values, developing soft skills, and guiding students to pursue higher education.
The next fall, I was off to college at the Illinois Institute of Technology in Chicago, studying to become a mechanical engineer. For the first time in my life, I was on my own. My time was so swept away by schoolwork, clubs, exploring the city, and making new friends that FIRST became a distant memory. Now, I fear that if I hadn’t bumped back into it, I would have lost touch with the program that played such a critical role in my life. While at a robotics club meeting, sign ups for the FRC Midwest Regional Planning Committee were passed around. Wanting to somehow get involved, I signed up.
I had no idea what to expect as I walked into a big conference room with a friend, and fellow FRC alum, for our first committee meeting. As the meeting progressed, densely packed with information and detailed plans for upcoming seasons and younger leagues, I sat there stunned. Alumni or not, so many people dedicated their time and effort to make this program work for students. It was a wake-up call. During my four years as a member, I took so much for granted and didn’t realize the magnitude of the hard work that volunteers and mentors put in for the students. They were the ones who supported and helped make me the person that I am today. So, to all of the volunteers and mentors who may be reading this: thank you for everything, I couldn’t have done it without you.
Just like that, I was hooked on FIRST once again. I kept going to meetings, trying to help as much as I could while making connections with other volunteers. Right before the season started, a couple of teams had a shortage of mentors, which is how I found Hawks on the Horizon FRC team #5125. The first meeting I attended with the students opened my eyes; I was so used to being a member that I had no idea how to mentor FRC. Eventually, I learned how to be a different figure in a familiar situation and how to adjust to the differences between a large all-girls team with many resources and a small, family-like team. Yet, without a doubt, I knew this was my new team. From the very first day, I was welcomed with open arms by the mentors and students, who made sure I came back.
“Who is your role model?” was a common question for me when I was very young, and I’d respond with a superhero. Now it’s never asked, but I have a better answer since I’ve gotten the chance to meet some of them. One of my role models is a student mentor from my old team. Although I haven’t seen her in a long time, I found myself remembering conversations we’d had years ago about some of the challenges of being a student mentor. Knowing I wasn’t alone, and how she’d dealt with some sticky situations, helped me guide students to find their own answers and become comfortable being hands-off myself. I’ve learned so much from past mentors like her, fellow mentors, and the students themselves. I’m happy to have found my FIRST family again.
The Midwest Regional was a whole new experience for me. During high school, my competition days were hectic, filled with fixing the robot, talking to other teams, and cheering for our alliance. At this past regional, it was an entirely different world. I was the student volunteer coordinator, helping run the student ambassador program and talking to people about how to get involved. I only visited my team when I got the chance. I was less aware of specific robots and paid more attention to what happens behind the scenes to ensure the regional runs like a well-oiled machine. I want to be more involved in the process, since these competitions ignited a passion for engineering within me, and I cannot express with words how grateful I am.
Regardless of the changes within the past year, by the end of the competition, I still rocked the uniform. I’m incredibly proud of Hawks on the Horizon, thankful for Girls of Steel, and amazed by everyone in the Midwest Planning Committee. This transition from member to mentor has been an amazing journey, and it doesn’t stop here. I’ll be working to give back for the rest of my life by helping make programs like FIRST happen for future students. Thank you to everyone who is a part of this organization. To find a team or event, go to firstinspires.org.
In this episode, Audrow Nash and Christina Brester conduct interviews at the 2016 International Association of Science Parks and Areas of Innovation conference in Moscow, Russia. They speak with Vadim Kotenev of Rehabot and Motorica about prosthetic hands and rehabilitative devices; and with Vagan Martirosyan, CEO of TryFit, a company that uses robotic sensors to help people find shoes that fit them well.
An image of one of the rehabilitative devices from Rehabot.
The robotic platform that scans your feet from TryFit.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features Co4Robots: Achieving Complex Collaborative Missions via Decentralized Control and Coordination of Interacting Robots.
Objectives
Recent applications necessitate coordination of different robots. Current practice is mainly based on offline, centralized planning and tasks are fulfilled in a predefined manner. Co4Robots’ goal is to build a systematic methodology to:
accomplish complex task specifications given to a team of potentially heterogeneous robots;
develop control schemes appropriate for mobility and manipulation capabilities of the robots;
achieve perceptual capabilities that enable robots to localize themselves and estimate the dynamic environment state;
integrate all in a decentralized framework.
Expected impact
The envisioned scenarios involve multi-robot services in, e.g., office environments. Although public facilities are to some degree pre-structured, the need for the Co4Robots framework is evident since:
it will lead to an improved use of resources and a faster accomplishment of tasks inside workspaces with high social activity;
it will contribute towards the vision of more flexible multi-robot applications in both professional and domestic environments, also in view of the “Industry 4.0” vision and the general need to deploy such systems in everyday life scenarios.
In the event of a natural disaster that disrupts phone and Internet systems over a wide area, autonomous aircraft could potentially hover over affected regions, carrying communications payloads that provide temporary telecommunications coverage to those in need.
However, such unpiloted aerial vehicles, or UAVs, are often expensive to operate, and can only remain in the air for a day or two, as is the case with most autonomous surveillance aircraft operated by the U.S. Air Force. Providing adequate and persistent coverage would require a relay of multiple aircraft, landing and refueling around the clock, with operational costs of thousands of dollars per hour, per vehicle.
Now a team of MIT engineers has come up with a much less expensive UAV design that can hover for longer durations to provide wide-ranging communications support. The researchers designed, built, and tested a UAV resembling a thin glider with a 24-foot wingspan. The vehicle can carry 10 to 20 pounds of communications equipment while flying at an altitude of 15,000 feet. Weighing in at just under 150 pounds, the vehicle is powered by a 5-horsepower gasoline engine and can keep itself aloft for more than five days — longer than any gasoline-powered autonomous aircraft has remained in flight, the researchers say.
The team is presenting its results this week at the American Institute of Aeronautics and Astronautics Conference in Denver, Colorado. The team was led by R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics; and Warren Hoburg, the Boeing Assistant Professor of Aeronautics and Astronautics. Hansman and Hoburg are co-instructors for MIT’s Beaver Works project, a student research collaboration between MIT and the MIT Lincoln Laboratory.
A solar no-go
Hansman and Hoburg worked with MIT students to design a long-duration UAV as part of a Beaver Works capstone project — typically a two- or three-semester course that allows MIT students to design a vehicle that meets certain mission specifications, and to build and test their design.
In the spring of 2016, the U.S. Air Force approached the Beaver Works collaboration with an idea for designing a long-duration UAV powered by solar energy. The thought at the time was that an aircraft, fueled by the sun, could potentially remain in flight indefinitely. Others, including Google, have experimented with this concept, designing solar-powered, high-altitude aircraft to deliver continuous internet access to rural and remote parts of Africa.
But when the team looked into the idea and analyzed the problem from multiple engineering angles, they found that solar power — at least for long-duration emergency response — was not the way to go.
“[A solar vehicle] would work fine in the summer season, but in winter, particularly if you’re far from the equator, nights are longer, and there’s not as much sunlight during the day. So you have to carry more batteries, which adds weight and makes the plane bigger,” Hansman says. “For the mission of disaster relief, this could only respond to disasters that occur in summer, at low latitude. That just doesn’t work.”
The researchers came to their conclusions after modeling the problem using GPkit, a software tool developed by Hoburg that allows engineers to determine the optimal design decisions or dimensions for a vehicle, given certain constraints or mission requirements.
The concept is not unique among initial aircraft design tools, but where existing tools typically take into account only a handful of main constraints, Hoburg’s method allowed the team to consider around 200 constraints and physical models simultaneously, and to fit them all together to create an optimal aircraft design.
“This gives you all the information you need to draw up the airplane,” Hansman says. “It also tells you, for every one of these hundreds of parameters, how much changing one of them would influence the plane’s performance. If you change the engine a bit, it will make a big difference. And if you change the wingspan, will it show an effect?”
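GPkit is an open-source Python package for geometric programming. To give a flavor of how such a model looks, here is a minimal sketch with a toy drag-minimization problem; the variables and constants are simplified stand-ins of my choosing, not the team’s roughly 200-constraint UAV model:

```python
from gpkit import Variable, Model

# Free design variables
S   = Variable("S", "m^2", "wing area")
V   = Variable("V", "m/s", "cruise speed")
C_L = Variable("C_L", "-", "lift coefficient")
D   = Variable("D", "N", "drag")

# Fixed parameters (illustrative values)
rho  = Variable("rho", 1.23, "kg/m^3", "air density")
W    = Variable("W", 650, "N", "aircraft weight")
CDA0 = Variable("CDA0", 0.03, "m^2", "parasite drag area")
k    = Variable("k", 0.05, "-", "induced-drag factor")

constraints = [
    0.5 * rho * V**2 * S * C_L >= W,                  # lift supports weight
    D >= 0.5 * rho * V**2 * (CDA0 + k * C_L**2 * S),  # parasite + induced drag
    C_L <= 1.2,                                       # stall limit
]

m = Model(D, constraints)   # minimize drag subject to the constraints
sol = m.solve(verbosity=0)
print(sol.table())
```

The solver returns optimal values for every variable at once, along with the sensitivities Hansman describes: how much the objective would change if any parameter were nudged.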
Framing for takeoff
After determining, through their software estimations, that a solar-powered UAV would not be feasible, at least for long-duration use in any part of the world, the team performed the same modeling for a gasoline-powered aircraft. They came up with a design that was predicted to stay in flight for more than five days, at altitudes of 15,000 feet, in up to 94th-percentile winds, at any latitude.
In the fall of 2016, the team built a prototype UAV, following the dimensions determined by students using Hoburg’s software tool. To keep the vehicle lightweight, they used materials such as carbon fiber for its wings and fuselage, and Kevlar for the tail and nosecone, which houses the payload. The researchers designed the UAV to be easily taken apart and stored in a FedEx box, to be shipped to any disaster region and quickly reassembled.
This spring, the students refined the prototype and developed a launch system, fashioning a simple metal frame to fit on a typical car roof rack. The UAV sits atop the frame as a driver accelerates the launch vehicle (a car or truck) up to rotation speed — the UAV’s optimal takeoff speed. At that point, the remote pilot would angle the UAV toward the sky, automatically releasing a fastener and allowing the UAV to lift off.
In early May, the team put the UAV to the test, conducting flight tests at Plum Island Airport in Newburyport, Massachusetts. For initial flight testing, the students modified the vehicle to comply with FAA regulations for small unpiloted aircraft, which require flight at low altitude and a weight of less than 55 pounds. To reduce the UAV’s weight from 150 to under 55 pounds, the researchers simply loaded it with a smaller ballast payload and less gasoline.
In their initial tests, the UAV successfully took off, flew around, and landed safely. Hoburg says there are special considerations that have to be made to test the vehicle over multiple days, such as having enough people to monitor the aircraft over a long period of time.
“There are a few aspects to flying for five straight days,” Hoburg says. “But we’re pretty confident that we have the right fuel burn rate and right engine that we could fly it for five days.”
“These vehicles could be used not only for disaster relief but also other missions, such as environmental monitoring. You might want to keep watch on wildfires or the outflow of a river,” Hansman adds. “I think it’s pretty clear that someone within a few years will manufacture a vehicle that will be a knockoff of this.”
This research was supported, in part, by MIT Lincoln Laboratory.
In a long-awaited transaction, The New York Times DealBook reported that SoftBank is buying Boston Dynamics from Alphabet (Google). Also included in the deal is the Japanese startup Schaft. Acquisition details were not disclosed.
Both Boston Dynamics and Schaft were acquired by Google when Andy Rubin was building Google’s robot group through a series of acquisitions. Both companies have continued to develop innovative mobile robots. And both have been on Google’s for-sale list.
Boston Dynamics, a DARPA- and DoD-funded 25-year-old company, designs two- and four-legged robots for the military. Videos of BD’s robots WildCat, BigDog, Cheetah, SpotMini (shown above getting into an elevator) and Handle have been YouTube hits for years. Handle, BD’s most recent, is a two-wheeled, two-legged hybrid robot that can stand, walk, hop, run and roll at up to 9 MPH.
Schaft, a Japanese startup and participant in the DARPA Robotics Challenge, recently unveiled a two-legged robot that climbs stairs, carries up to 125 pounds of payload, moves in tight spaces and keeps its balance throughout.
SoftBank, through another acquisition (of the French firm Aldebaran, maker of the Nao and Romeo robots), and in a joint venture with Foxconn and Alibaba, has developed and marketed thousands of Pepper robots. Pepper is a cute, humanoid, mobile robot being marketed and used as a guide and sales assistant. The addition of Boston Dynamics and Schaft, particularly the Tokyo-based Schaft, adds talent and technology to SoftBank’s growing robotics efforts.
“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the information revolution,” said Masayoshi Son, chairman and chief executive of SoftBank.
Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS to 26 students at the same time, at a very fast pace. His students, all enrolled in the Master of Computer Science at the University of Weingarten, use only a web browser. They connect to a web page containing the lessons, a ROS development environment and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach ROS programming quickly and to many students. This is what Robot Ignite Academy is made for.
“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere,” says Pfiffner.
Robot Ignite Academy provides a web service which contains the teaching material in text and video format, the simulations of several ROS based robots that the students must learn to program, and the development environment required to build ROS programs and test them on the simulated robot.
Student’s point of view
Students bring their own laptops to class and connect to the online platform. From that moment, each laptop becomes a ROS development machine, ready to run programs for simulations of many real robots.
The Academy provides the text, the videos and the examples that the student has to follow. The student then creates her own ROS program and makes the robot perform a specific action, developing ROS programs as if she were on a typical ROS development computer.
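A typical first exercise looks something like the following minimal ROS node, which drives a simulated robot forward by publishing velocity commands. This is a generic sketch: /cmd_vel is a common ROS convention, but the actual topics and exercises vary by lesson and robot.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_forward")                        # register the node
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # velocity topic
rate = rospy.Rate(10)                                   # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2  # drive forward at 0.2 m/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```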
The main advantage is that students can use a Windows, Linux or Mac machine to learn ROS. They don’t even have to install ROS on their computers; the laptop’s only prerequisite is a browser. So students do not have to wrestle with all the installation problems that frustrate them (and the teachers!), especially when they are starting out.
After class, students can continue learning at home, in the library, or even at the beach, if wifi is available! All their code, learning material and simulations are stored online, so they can access them from anywhere, anytime, using any computer.
Teacher’s point of view
The platform benefits not only the students but also the teachers. Teachers do not have to create and maintain the material. They do not have to prepare the simulations or work on multiple different computers. They don’t even have to prepare the exams (which are already provided by the platform)!
So what are the teachers for?
By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to the level of each student, and adapting the pace to the different types of students.
This new method of teaching ROS is spreading quickly among universities and high schools that want to offer their students the latest and most practical teaching. The method, developed by Robot Ignite Academy, combines a practice-based way of teaching with an online learning platform. Together, these two elements make teaching ROS a smooth experience and can see students’ knowledge skyrocket.
As user Walace Rosa indicates in his video comment about Robot Ignite Academy:
It is a game changer [in] teaching ROS!
The method is becoming very popular in robotics circles too, and many teachers are using it for younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.
Additionally, the academy provides a free online certification exam with different levels of knowledge certification. Many Universities are using this exam to certify that their students did learn the material since the exam is quite demanding.
Some examples of past events
A one-week ROS course in Barcelona for SMART-E project team members: a private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who needed to get up to speed with ROS fast. From 8 to 12 May 2017.
The fields of modular and origami robotics have become increasingly popular in recent years, with both approaches presenting particular benefits, as well as limitations, to the end user. Christoph Belke and Jamie Paik from RRL, EPFL and NCCR Robotics have recently proposed an elegant new solution that integrates both types of robotics in order to overcome their individual limitations: Mori, a modular origami robot.
Mori is the first robot to combine the concepts behind both origami robots and reconfigurable, modular robots. Origami robotics uses the folding of thin structures to produce single robots that can change their shape, while modular robotics uses large numbers of individual entities to reconfigure the overall shape and address diverse tasks. Origami robots are compact and lightweight but have functional restrictions related to the size and shape of the sheet and how many folds can be created. By contrast, modular robots are more flexible when it comes to shape and configuration, but they are generally bulky and complex.
Mori, an origami robot that is modular, merges the benefits of these two approaches and eliminates some of their drawbacks. The presented prototype has the quasi-2D profile of an origami robot (meaning that it is very thin) and the flexibility of a modular robot. By developing a small and symmetrical coupling mechanism with a rotating pivot that provides actuation, each module can be attached to another in any formation. Once connected, the modules can fold up into any desirable shape.
The individual modules have a triangular structure, each just 6 mm thick and 70 mm wide, and weighing 26 g. Contained within this slender structure are actuators, sensors and an on-board controller, so the only external input required for full functionality is a power source. The researchers at EPFL have thereby managed to create a robot that has the thin structure of an origami robot as well as the functional flexibility of a modular system.
The prototype is a highly adaptive modular robot and has been tested in three scenarios that demonstrate the system’s flexibility. Firstly, the robots are assembled into a reconfigurable surface, which changes its shape according to the user’s input. Secondly, a single module is manoeuvred through a small gap, using rubber rings embedded in the rotating pivot as wheels, and assembled on the other side into a container. Thirdly, the robot is coupled with feedback from an external camera, allowing the system to manipulate objects with closed-loop control.
With Mori, the researchers have created the first robotic system that can form reconfigurable three-dimensional surfaces of any size using quasi-2D modules. The system’s design is adaptable to whatever task is required, be that changing its shape to repair damage to a structure in space, moulding to a limb weakened by injury to provide selective support, or reconfiguring user interfaces, such as changing a table’s surface to represent geographical data. The opportunities are truly endless.
Reference
Christoph H. Belke and Jamie Paik, “Mori: A Modular Origami Robot”, IEEE/ASME Transactions on Mechatronics, doi:10.1109/TMECH.2017.2697310