
Soft-bodied robots: Actuators inspired by muscle

In this image, VAMPs are shown actuated and cut open in cross section. The cross section shows the inner chambers that collapse when vacuum is applied. Credit: Wyss Institute at Harvard University.

To make robots more cooperative and have them perform tasks in close proximity to humans, they must be softer and safer. A new actuator developed by a team led by George Whitesides, Ph.D., a Core Faculty member at Harvard’s Wyss Institute for Biologically Inspired Engineering and the Woodford L. and Ann A. Flowers University Professor of Chemistry and Chemical Biology in Harvard University’s Faculty of Arts and Sciences (FAS), generates movements similar to those of skeletal muscles, using vacuum power to actuate soft rubber beams.

Like real muscles, the actuators are soft and shock absorbing, and they pose no danger to their environment, to humans working collaboratively alongside them, or to the potential future robots equipped with them. The work was reported June 1 in the journal Advanced Materials Technologies.

“Functionally, our actuator models the human bicep muscle,” said Whitesides, who is also a Director of the Kavli Institute for Bionano Science and Technology at Harvard University. “There are other soft actuators that have been developed, but this one is most similar to muscle in terms of response time and efficiency.”

Whitesides’ team took an unconventional approach to its design, relying on vacuum to decrease the actuator’s volume and cause it to buckle. While conventional engineering would consider buckling to be a mechanical instability and a point of failure, in this case the team leveraged this instability to develop VAMPs (vacuum-actuated muscle-inspired pneumatic structures). Whereas previous soft actuators rely on pressurized systems that expand in volume, VAMPs mimic true muscle because they contract, which makes them an attractive candidate for use in confined spaces and for a variety of purposes.

The actuator, comprising soft rubber or ‘elastomeric’ beams, is filled with small, hollow chambers of air like a honeycomb. When vacuum is applied, the chambers collapse and the entire actuator contracts, generating movement. The internal honeycomb structure can be custom-tailored to enable linear, twisting, bending, or combinatorial motions.

VAMPs are functionally modeled after the human bicep, similar to the biological muscle in terms of response time and efficiency. Credit: Wyss Institute at Harvard University

“Having VAMPs built of soft elastomers would make it much easier to automate a robot that could be used to help humans in the service industry,” said the study’s first author Dian Yang, who was a graduate researcher pursuing his Ph.D. in Engineering Sciences at Harvard at the time of the work, and is now a postdoctoral researcher.

The team envisions that robots built with VAMPs could be used to assist the disabled or elderly, serve food, deliver goods, and perform other service-industry tasks. What’s more, soft robots could make industrial production lines safer and faster, and quality control easier to manage, by enabling human operators to work in the same space.

Although a complex control system has not yet been developed for VAMPs, this type of actuation is easy to control due to its simplicity: when vacuum is applied, VAMPs contract. They could be used as part of a tethered or untethered system depending on environmental or performance needs. Additionally, VAMPs are designed to resist failure: even when damaged with a 2 mm hole, the team showed that VAMPs still function. In the event that major damage is caused to the system, it fails safely.
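Because the behavior is binary, a control loop reduces to toggling a vacuum valve. Here is a minimal sketch of that idea; the `VacuumPump` and `VAMP` classes are hypothetical stand-ins for illustration, not interfaces from the paper:

```python
# Bang-bang control sketch for a contract-on-vacuum actuator.
# All names here are illustrative, not from the published work.

class VacuumPump:
    """Stand-in for a vacuum source with a single on/off valve."""
    def __init__(self):
        self.on = False

    def engage(self):
        self.on = True

    def release(self):
        self.on = False


class VAMP:
    """The actuator state simply follows the pump: vacuum on -> contracted."""
    def __init__(self, pump):
        self.pump = pump

    @property
    def contracted(self):
        return self.pump.on


pump = VacuumPump()
actuator = VAMP(pump)
pump.engage()                # apply vacuum -> actuator contracts
print(actuator.contracted)   # True
pump.release()               # vent -> actuator relaxes
print(actuator.contracted)   # False
```

The point of the sketch is that there is no servo loop or force model to tune: the only control input is the valve state.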

“It can’t explode, so it’s intrinsically safe,” said Whitesides.

Here, a VAMP lifts a 500 gram weight with ease. Credit: Wyss Institute at Harvard University

Whereas other actuators powered by electricity or combustion could cause damage to humans or their surroundings, loss of vacuum pressure in VAMPs would simply render the actuator motionless.

“These self-healing, bioinspired actuators bring us another step closer to being able to build entirely soft-bodied robots, which may help to bridge the gap between humans and robots and open entirely new application areas in medicine and beyond,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Boston Children’s Hospital Vascular Biology Program, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

In addition to Whitesides and Yang, other authors on the study included: Mohit S. Verma, Ph.D. (FAS); Ju-Hee So, Ph.D. (FAS); Bobak Mosadegh, Ph.D. (Wyss, FAS); Christoph Keplinger, Ph.D. (FAS); Benjamin Lee (FAS); Fatemeh Khashai (FAS); Elton Lossner (FAS); and Zhigang Suo, Ph.D. (SEAS, Kavli Institute).

May fundings, acquisitions and IPOs


UPDATE: June 1, 2016: Forbes wrote today that Toyota is in discussions with Google not only for Boston Dynamics but also for Schaft, the Japanese startup that won the DARPA Robotics Challenge — a two-company sale.

May was another big month for robotics: 13 companies were funded to the tune of $111 million, and four companies were acquired, with two of the four reporting selling prices totaling $422 million. And that’s without the $5.2 billion bid for Kuka by China’s Midea, or the pending sale of Google’s Boston Dynamics.

The financial pages are lighting up over recent stories about these big-money sales. First there was the $5.2 billion offer by Midea Group, a Chinese appliance manufacturer, for Kuka AG, the Augsburg, Germany-based manufacturer of robots and automated systems. Kuka is one of the Big Four of robot manufacturers. On the day of the bid, Kuka’s stock rose from $84/share to $110, where it has stayed since.

Then came the announcement by Tech Insider that the Toyota Research Institute is in the final phase of negotiations to acquire Google’s robotics company Boston Dynamics, of BigDog fame. Boston Dynamics spun out of the MIT Leg Lab in 1992 and worked on various military and DARPA-funded research projects until Google’s Andy Rubin acquired the company along with eight other robotics companies. Boston Dynamics never quite adapted to Google and Google’s push to build a consumer robot, hence its being put on the block in March 2016.

From Forbes, news of a new fund focusing on robotics: Chrysalix VC, a Vancouver, BC venture capital group focused on alternative energy, has partnered with Dutch robotics commercialization center RoboValley to create a new VC fund focused on robotics. The vehicle is targeting €100 million.

Below are the fundings, acquisitions, IPOs and failures that actually happened in May:

Fundings
  1. Locus Robotics raised $8 million in a Series A funding from existing seed investors. The funds will be used to expand product development and general marketing of Locus’ novel material handling robots. The Massachusetts-based company was founded specifically in answer to Kiva Systems’ robots being taken in-house by Amazon and no longer being available to non-Amazon clients. Locus’ founder, Bruce Welty, is a Kiva-using distribution center owner who, as a consequence of Amazon’s actions, had no recourse other than to build a company of his own. Locus provides fleets of robots that integrate into current warehouse management systems and carry picked items to a conveyor or packing station, reducing human walking distances and improving overall picking efficiency.
  2. Gamaya, an aerial analytics spin-off from the Swiss EPFL, raised $3.2 million in a Series A funding. Funds will be used to develop its new 40-band hyperspectral imaging sensor and analytics software platform (traditional multi-spectral sensors have four bands).
  3. Hortau, a California soil-moisture monitoring company, raised $10 million to grow and broaden its new system of networked field sensors, weather stations and control units, which allows growers to remotely open and close valves and fire up irrigation engines from cloud-based management software.
  4. nuTonomy, a Cambridge-based start-up, raised $16 million in a Series A round of funding from a group of Singapore and US VCs. This is in addition to the $3.6 million raised in January, which included funds from Ford Chairman Bill Ford. nuTonomy is planning to launch a fleet of autonomous taxis in Singapore by 2019 and begin testing later this year, using retrofitted Mitsubishi electric cars, with Renault EVs to be added.
  5. Mazor Surgical Technologies, an Israeli company, has sold $11.9 million of its stock, 4% of its shares, to Medtronic, a global medical technology, services, and solutions provider, with a performance agreement to sell another 6% of Mazor shares for up to $20 million. An additional clause kicks in if performance milestones are met, whereby Mazor can issue an additional 5% of new shares for another $20 million from Medtronic.
  6. Dedrone GmbH, the German startup behind the DroneTracker drone-detection platform, raised $10 million in a Series A funding from a group of EU and Silicon Valley VCs. In just 15 months, Dedrone has grown to more than 40 employees and 100 distributors in over 50 countries.
  7. Astrobotic Technology, the CMU spin-off company working on delivering payloads to the moon, raised $2.5 million from Space Angels Network. Astrobotic has 10 projects with governments, companies, universities, non-profits, NASA, and individuals for their first moon mission.
  8. MegaBots, an Oakland, CA entertainment startup, has raised $2.4 million in seed funding to bring robot-fighting to a venue near you. MegaBots plans to use the seed funding to build their robot for the fight against the Japanese team they’ve challenged; and to secure sponsorships, perhaps even a TV contract for a program that tracks the team from building the robots to competing.
  9. Zipline International, a San Francisco startup, raised $800k from UPS and $18 million from Yahoo founder Jerry Yang, Microsoft co-founder Paul Allen and others to develop their small robot airplane designed to carry vaccines, medicine and blood to remote areas where health workers place text orders for what they need.
  10. Cyberhawk Innovations raised $2.9 million in financing to enable UK-based Cyberhawk to expand its commercial development of the drone-captured data inspection market for the oil & gas industry and infrastructure markets.
  11. Eonite Perception, a Silicon Valley vision systems startup, raised $5.25 million in a seed round from multiple Silicon Valley VCs. Eonite is building a 3D mapping and tracking system for the virtual reality marketplace using low latency dense depth sensors.
  12. eyeSight Technologies, an Israeli vision systems startup, received $20 million from a Chinese VC group, for its vision system of sensing, gesture recognition and user awareness to be embedded into consumer products.
  13. AIO Robotics is a Los Angeles startup developing an all-in-one 3D printer scanner with an onboard CAD and modeling system. AIO received an undisclosed amount of seed funding.

Acquisitions
  1. 5D Robotics, a San Diego area integrator of unmanned and mobile robotics using ultra-wide band (5D) communications, acquired Aerial MOB, a drone aerial cinematography startup, for an undisclosed sum. The acquisition has led to the formation of the 5D Aerial division which will provide 3D mapping, photogrammetry, thermal and multi-spectral imagery data to vertical markets including oil and gas, utilities and construction.
  2. Dematic, a global supplier of AGVs and materials handling technology, acquired (in March) NDC Automation, an AGV manufacturer in Australia and New Zealand, for an undisclosed amount.
  3. Voith GmbH, a family-owned German group of industrial and engineering companies, has sold 80% of its industrial services unit to buyout group Triton Partners for $342 million to free up capital for planned investments. Voith has a 25.1% share of Kuka’s stock which, if the $5.2 bn Midea offer passes, will be worth close to 40% more than the share value the day before the offer. According to Forbes, Voith ranks 200th in global family-owned businesses with revenue of $7.5 bn and 43,000 employees.
  4. ChemChina and a group of other investors including Chinese state funds, acquired Germany’s KraussMaffei Automation, an industrial robot integrator and plastics, carbon fiber, and rubber processor, for $1 billion – in January.

IPOs
  • None. Private placements and increased investment from hedge funds, mutual funds and corporate acquisitions appear to have dried up the robotics IPO pipeline.
  • But Moley Robotics, a UK startup developing a cooking robot, is using the new equity crowdfunding rules passed by the SEC last year to offer 2% of its shares via the Seedrs crowdfunding site. Details will be released soon to subscribers to the Moley and Seedrs websites.

Failures
  • RoboDynamics, a SoCal startup with a stylish mobile telepresence robot named Luna, has gone out of business.

Winning team story: The ups and downs of building a drilling robot for the Airbus Shopfloor Challenge

Felix von Drigalski, leader for Team NAIST, tells us about their robot design, unforeseen challenges, and working together to find a solution to prepare for the Airbus Shopfloor Challenge at ICRA in Stockholm, Sweden.

How did you come up with the initial robot concept?

When we started talking about the contest back in January, I had the basic idea in my head fairly quickly: something that would make contact with the plate to stabilize the drill. The main inspiration was taken from the intuitive experience that it is hard to hold a position with an outstretched arm. Humans solve this problem by resting their hand when performing precision tasks such as writing, so we carried this idea over to the robot.

In mechanical terms, resting your hand reduces the amount of load-bearing structure between your tool and the workpiece. The kinematic chain is shorter and the flow of forces more direct. As such, there are fewer things that can vibrate and the stiffness is higher. And we knew that high stiffness would be key to drilling quality holes.
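The stiffness argument can be made concrete with a simple springs-in-series model (my simplification, not something stated in the interview): every link and joint between the tool and the workpiece contributes compliance, and compliances in series add:

1/k_total = 1/k_1 + 1/k_2 + … + 1/k_n

Shortening the kinematic chain removes terms from the sum, so the overall stiffness k_total at the drill tip goes up.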

Further, to keep with the analogy, resting your hand while writing almost removes your arm from the process. You write with your hand, and not your arm. Similarly, in our solution the function “Drilling” would be fulfilled by the end effector, and the function “Positioning” only by the arm. If there is one thing that remains firmly hammered into my head after my engineering classes, it’s separation of function.

With our design goal formulated, we soon converged on the three-pronged frame with a separate pneumatic actuator. Instead of having all the motors in the robot arm actively compensate for the forces from the drilling, the end effector would transmit them right back to the workpiece and the arm would do nothing but move the end effector and hold it in place.


In our final design, the robot applies force during the drilling process to keep the end effector stationary. Ideally though, the robot would only be responsible for positioning. We were planning to take the next step to transfer the load off the robot and onto the end effector and the workpiece, but sadly, could not get the necessary parts to Japan in time, and simplified the idea in late April.

Everything else was mostly decided in order of simplicity. We already had the robot arm and small, powerful cameras in the lab, so it was clear we would use those. A lot of the code originally comes from our research projects, which we tried to stay close to (for example, the vision system is repurposed eye-tracking code).

As an anecdote: another of the original ideas was to fine-tune the position of the drill by using the end effector. We were basically thinking of using a whole robot hand to adjust the position of the drill. However, unless the robot cannot ensure the positioning on its own, there is no advantage to adding that functionality to the end effector, so we dropped it in favor of the fixed rods with springs (rubber) on the feet. Just another lesson to be learned from Separation Of Function, the benevolent goddess of good design!

What sort of prep work went into the project prior to the competition?

After fleshing out the main idea between January and March, we decided to go for it and write a solid application. We spent a long weekend 3D printing a proof-of-concept, cutting together this flashy video and writing our application in research paper format — since that’s what grad students do. When we got the OK from Airbus to compete on March 27, I doubt we realized how much time would be spent on the project until we started our weekly meetings with our Airbus contact.

It turned out that our biggest problem was logistics. I would estimate a good 40% of our time was spent calling the airport, customs, shipping companies, university offices, and insurance companies for quotes and details about how to get our robot from Japan to Sweden. Shipping would take a whole week, a visiting PhD student needed the robot for his experiments, and with us having to work with the robot before the contest, we had to think creatively and find a workable solution fast.

The biggest issue was that our robot weighed 31 kg on its own, and no single piece of luggage can be over 32 kg in European airports. This left us just 1 kg to package the robot safely. In the end, we made a custom box by cutting a robot-shaped hole into boards of styrofoam with a soldering iron and gluing them together. We even sewed a bag for it.

Teammates Gustavo and Lotfi with the robot and resourceful packaging. Photo: Felix von Drigalski

This is a good moment to remember that this robot arm is easily worth $80,000!

With all the logistics issues, long delivery times, and Golden Week (a string of closely-packed public holidays in Japan) delaying them even more, the time we had to prepare the solution got shorter and shorter. We had parts arriving until the last week before we left, so we had to put everything together in a hurry. We ran the first full trial before the competition on Saturday morning, the day of our flight! It was ridiculously last minute.

Thinking of prep work: we needed our robot at different heights when the plate was inclined, but I designed our stand for the wrong height! I spent almost two months under the completely unfounded misconception that the plate would be inclined by 15° instead of 30°. This is why our stand looks completely different each round and why we had a new mechanical design problem to solve each time. As if that weren’t enough, we misread the distance between the rails and were off by 50 mm. This is why the stand always looks like a mess, on top of everything else!

What was it like competing with your robot? What have you learned in the manic process? Anything you’d do differently?

When we arrived with our limited preparation on Monday to set up the robot and saw the other teams, we didn’t have high hopes. Everyone looked so well-prepared and primed to win. At that point, we just wanted to deliver something solid and keep our heads held high. We freaked out at the thought of not making it through the demo round, so we dragged the robot back to the hotel and stayed up all night to make sure we would get into the contest.


After passing the demo round (with a bug that inverted our hole pattern; look closely at the interview video), we were so happy that we were actually drilling real holes in a real plate that we just kept drilling all through the first round, almost blind. We didn’t have a good way to recalibrate on the fly that day, so we had to accept that our positioning would be off and hope for the best. In fact, our holes were so far off the mark that two thirds of them gave us negative points and we got a terrible score. However, one of the judges said that the calibration was basically the only problem we had — the holes themselves were “absolutely perfect”. That made us perk up.

We had a look at the other plates. Everyone was having more problems with the drilling than we thought. Most were applying force on the drill by moving their robot arm, and the quality of their holes was suffering because of it. Our drilling process was by far the fastest and cleanest. My design worked. It really was only a problem of calibration and our code. That was the moment we gained hope and decided to take the robot back to the hotel for another grueling night of work.

The next day, we paid for our sleep deprivation: we spent the first half hour of round 2 with the robot not moving at all, until we figured out that we never put all the things we fixed during the night into our live code. After we finally did and finished the round with barely half the plate’s holes drilled, we thought we may well have missed our shot at entering the final. However, we absolutely wanted to prove to ourselves that we could at least properly drill a whole set and take a picture of it home, so we were hoping for a chance to maybe perform on a plate separately from the contest, if it came to that. When all the team leaders were informed that the last round would be between 4 teams and we were in it, I was the first to rush out of the room and prepare (and accidentally spill the news to the neighboring and positively amazing team from India, who were the only ones to drop out that round).

In the end, we had more arcane bugs to sort out during the final and we didn’t get to drill through the entire hole pattern like we hoped. The first 10 minutes we weren’t drilling anything at all, and it was only during the last 20 minutes that we really were on point and drilling at (almost) full speed. It was also the first time I had the time to stand back and talk to spectators about what we were doing. Which is when our drill bit got sucked into the clamp with 3 minutes on the clock. Ah well, things never go as planned. We still managed to drill three more holes, although sadly not the damaged one the bit got stuck on. All of that stress fell off our shoulders when the round was called, and we could turn our attention to celebrate everything and everyone who powered through along with us.


All in all, the competition was a blast! Constant adrenaline in an incredible atmosphere. Probably the best thing was that there was not a shred of negativity anywhere, all of the teams were supportive of everyone’s efforts. Team Sirado even helped us troubleshoot KUKA software we had been struggling to install on our virtual machine the day before the competition! Everyone was simply great! Airbus made an excellent choice putting the celebration right after the final round instead of the awards. That allowed us to celebrate everyone’s efforts rather than our scores.

It was also an incomparable team-building experience. Although we had some stressful moments when our robot did not run in the second round, no one cracked or became emotional or accusatory — everyone stayed laser sharp and focused on solving the problem. We could not have done it without a heavy dose of team spirit.

In all that chaos, we learned our lessons about project planning and software development. Probably the worst mistake we made was working on the same code on multiple machines, with multiple sleep-deprived people, during the night, and then trying to merge all those changes. To illustrate: there is a comment in our code history from 6 AM on Tuesday morning (the first day of the contest!) that just reads “only a miracle can save us”, and our live branch is called “last-minute-firebrigade”. You can imagine the state we were in!
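A branch-per-fix workflow would have contained most of that merge pain. Here is a minimal sketch of the discipline (the repository, file name, and commit messages are invented for illustration, not the team’s actual code):

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q drillbot && cd drillbot
git config user.email "team@example.com"
git config user.name "Team NAIST"

# Baseline live code, committed once before the all-nighter.
echo "drill(offset=0)" > control.py
git add control.py && git commit -qm "baseline before the contest"
live=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

# Each overnight fix lives on its own branch, even at 6 AM,
# so edits made on different laptops stay isolated from the live code.
git checkout -qb fix-calibration
echo "drill(offset=calibrated)" > control.py
git commit -qam "recalibrate hole positions"

# Merge one reviewed change at a time into the live branch,
# instead of hand-editing the live code on every machine.
git checkout -q "$live"
git merge -q --no-edit fix-calibration
cat control.py   # drill(offset=calibrated)
```

The design point is simply that merges happen deliberately, in one place, rather than as a side effect of several exhausted people editing the same branch at once.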

Photo courtesy: Felix von Drigalski

What advice would you give to any future robotic challengers out there?

I relay these from the team:

  • Do not underestimate logistics!
  • Don’t rely exclusively on software, robust hardware design is key!
  • Just try and have fun, don’t think it’s impossible without even trying!

And I would stress how much trying is worth it, no matter what. Winning is great and all, but it was just after the final, after close to 60 hours with almost no sleep and all of our holes drilled, that we felt the most relieved and accomplished. We didn’t spend a second thinking about scores or placements — we were just proud that we made something that worked, and we didn’t give up. Thinking up and developing your own solution, working together and seeing it come to life is one of the most satisfying and fulfilling experiences. Making your performance and efforts your motivation, rather than winning, will keep you going when things look dire.

Any closing words?

After the event, we kept getting amused comments about not having Japanese members in our Japanese team. It’s true we come from Germany, Mexico, Belgium and Ecuador, but studying and living in Japan has changed all of us in ways that we sometimes don’t even realize, as anyone who has left home to go abroad understands. As such, we have always said that our team consists of 4 people with 5 nationalities — just no one with a Japanese passport.

So, for the big closing words: If you’re an aspiring youngster reading this and you want to see the world, or if you’re in your undergrad and thinking of going on an exchange, or if you dream of living somewhere else — go for it! Every journey starts with the first step, and it is worth it.

From left to right: Felix von Drigalski, Lotfi El Hafi, Pedro Uriguen, and Gustavo Garcia.


Robocar news around the globe: Tesla crash, Declaration of Amsterdam, and automaker services

WEpods: The first autonomous vehicle on Dutch public roads. Source: True Form/YouTube

We have the first report of a real Tesla Autopilot crash. To be fair to Tesla, its owner warnings specify very clearly that the Autopilot could crash in just this situation. In the video, there is a stalled car partly in the lane, and the car in front swerves around it, leaving little time for the driver, or the Autopilot, to react.

The deeper issue is the way that the improving quality of the Tesla Autopilot and systems like it is lulling drivers into a false sense of security. I have heard reports of people who now trust the Tesla system enough to work while being driven, and indeed, most people will get away with this. And as people get away with it more and more, we will see people driving like this driver, not really prepared to react. This is one of the reasons Google decided not to make a system that ever requires driver takeover. As the system gets better, does it get more dangerous?

Declaration of Amsterdam

Last month, various EU officials gathered in Amsterdam and signed the Declaration of Amsterdam that outlines a plan for normalizing EU laws around self-driving cars. The meeting included a truck automation demo in the Netherlands and a self-drive transit shuttle demonstration. It’s a fairly bland document, more an expression of the times, and it sadly spends a lot of time on the red herring of “connected” vehicles and V2V/V2I, which governments seem to love, and self-driving car developers care very little about.

Let’s hope the regulatory touch is light. The reality is that even the people building these vehicles can’t make firm pronouncements on their final form or development needs, so governments certainly can’t do that; we must be careful that attempts to “help” do not hinder. We already have a number of examples of that happening in draft and real regulations, and we’ve barely gotten started. For now, government statements should be limited to, “let’s get out of the way until people start figuring out how this will actually work, unless we see somebody doing something demonstrably dangerous that can’t be stopped except through regulations.” Sadly, too many regulators and commentators imagine it should be, “let’s use our limited current knowledge to imagine what might go wrong and write rules to ban it before it happens.”

Speech from the throne

It was a sign of the times when Her Majesty the Queen, giving the speech from the throne in the UK Parliament, laid out elements of self-driving car plans. The Queen drove jeeps during her military days and so routinely drives herself at her country estates; otherwise she would be among the set of people most used to never driving.

The UK has four pilot projects in planning. Milton Keynes is underway, and later this year a variation of the Ultra PRT pods in use at Heathrow airport’s Terminal 5 (they run on private tracks to the car park) will go out on the open road in Greenwich. They are already signing up people for rides.

Car companies thinking differently

In deciding which car companies are going to survive the transition to robocars, one thing I look for is willingness to stop thinking like a traditional car company which makes cars and sells them to customers. Most car company CEOs have said they don’t plan to keep thinking that way, but what they do is more important than what they say.

In the past we’ve seen Daimler say it will use its Car2Go service (with the name Car2Come likely to cause giggles in the USA) as a way to sell rides rather than cars. BMW has said the same about DriveNow, and now GM has said this about its partnership with Lyft. Daimler is also promoting its moovel app, which tries to combine different forms of mobility, and BMW is re-launching DriveNow in Seattle as ReachNow, which adds peer-to-peer carsharing and other modes of transportation to the mix.

Of course, these are tiny efforts for these big companies, but it scores big over companies still thinking only in the old ways. I’m looking at you, most car companies.

BMW also announced the iNext electric flagship sedan will offer self-drive in 2021.

Florida tells cities — think about self-driving

In 2010, I put out a call for urban planners to start thinking about robocars, and mostly it fell on deaf ears. Times are changing, and this month Florida told its cities they should include consideration of this and other future transit in their updated long-term plans.

It is a tough call. Nobody’s predictions about the future here are good enough to make a firm plan and commit billions of dollars. At the same time, we are starting to learn that certain plans — especially status quo plans — are almost certainly seriously wrong. We may not have a perfect idea on what to spend city money on, but we can start to learn what not to do.

Fortunately, transportation is becoming a digital technology, which means you can change your plans much faster than physical infrastructure plans. Robocars like bare pavement that is as stupid as can be: the smarts go in the cars, not in the roads or the cities. So if you change your mind, you just have to reprogram your cars, not rebuild your city. Which is good, because you can’t rebuild your city.

China gets in the game

China is the world’s number one car-manufacturing country. A lot of people don’t know that. Last year I visited the Shanghai auto show, and it was a strange trip to walk through giant hall after giant hall of automakers you have never heard of before.

Self-driving car action in China has been slow. Right now everybody has a focus on just making regular cars for the rising middle class that is buying them as fast as they can. Wealthier Chinese usually buy foreign brands, although those cars are often made in China even though they have a VW or Buick nameplate.

Baidu has been working on cars for a couple of years and promises a pilot project in 2018. Recently, Chinese automaker Changan did an autopilot demo, driving 1,000 miles. China has had its own annual academic version of the DARPA Grand Challenge for several years as well.

There is a strong chance that companies like Uber, Apple or Google, when they want to get their cars made, could go to Chinese manufacturers, which are now getting practice at making this sort of technology themselves. China is also a major source of electric vehicles, and future robotaxis are likely to be small and electric. The manufacturers and suppliers with the most experience at making such vehicles are likely to be the winners.

Apple buys into Didi

Speaking of which, Apple just put a billion dollar investment into Didi. You may not know Didi, but it is the dominant phone-hail service in China with a much larger market share than Uber. It’s one of the few places Uber has lost in the market, but of course it’s China. As the auto industry moves to being about selling rides rather than cars, it’s an interesting move by Apple, which rarely does outside investments like this.

Book Review: ‘Peer Reviews in Software, A Practical Guide,’ by Karl Wiegers

Code review of a C++ program with an error found.

I have been part of many software teams that wanted to do code reviews. In most of those cases the code reviews did not take place, or were pointless and a waste of time. So the question is: how do you conduct peer reviews effectively in order to improve the quality of your systems?

This book, Peer Reviews in Software: A Practical Guide by Karl E. Wiegers, was recommended to me, and having “practical guide” in the title caught my attention; I have reviewed other books that claimed to be practical but were not. Hopefully this book will provide me (and you) with tools for conducting valuable code reviews.

Peer Reviews in Software: A Practical Guide. Photo: Amazon

As a human, I will make mistakes when programming, and finding my own mistakes is often difficult since I am so close to my work. How many times have you spent hours trying to find a bug in your code only to realize you had a bad semicolon or parenthesis? Another person who has not worked on the code for hours might have spotted the problem right away. When I first started programming, it could be embarrassing to have somebody review my code and point out the problems. Now that I am more senior, however, I view it not as an embarrassment but as a learning opportunity, since everybody has a different set of experiences that influences their code. I would encourage other developers to view it as a learning experience and not be bashful about reviews. Remember, the person is critiquing the work, not you; this is how you become a better developer.
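
To make the “bad semicolon or parenthesis” anecdote concrete, here is a hypothetical one-character bug (my own illustration, not an example from the book) of the kind a fresh pair of eyes catches in seconds:

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = sum(values)
    # An earlier draft read `total / len(values) - 1`: one stray `- 1`,
    # outside the parentheses, that silently shifted every result.
    # The author stared past it for an hour; a reviewer spotted it at once.
    return total / len(values)

print(average([2, 4, 6]))  # 4.0
```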

According to Wiegers, there are many types of peer reviews, including: inspections, team reviews, walkthroughs, pair programming, peer deskcheck, passaround, and finally ad hoc review.

This book is divided into three sections:

  1. Cultural & social aspects
  2. Different types of reviews (with a strong focus on inspection)
  3. How to implement review processes within your projects

Cultural & Social Aspects

In the first section of the book, the author argues that quality work is not free and that “paying” the extra cost of peer reviews is a good investment in the future. Peer reviews catch failures, and the rework they require, before the product is released into the world. Shifting defect detection to the early stages of a product has a huge potential payoff, because defects found late in the release cycle, or after release, are expensive to fix. The space shuttle program found the relative cost of fixing a defect to be $1 if found during initial inspection, $13 if found during a system test, and $92 if fixed after delivery! The author documents various companies that saved substantial amounts of time and money by having code inspection programs.
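
Those cost ratios make the payoff easy to estimate. A minimal sketch, where only the $1/$13/$92 figures come from the book and the stage names and defect counts are illustrative assumptions of mine:

```python
# Relative cost of fixing one defect, by the stage at which it is found
# (ratios quoted from the space shuttle program, as cited by Wiegers).
COST_BY_STAGE = {"inspection": 1, "system_test": 13, "post_delivery": 92}

def rework_cost(defects_found):
    """Total relative rework cost for a dict of {stage: defect_count}."""
    return sum(COST_BY_STAGE[stage] * n for stage, n in defects_found.items())

# Hypothetical project with 30 defects. With inspections, 20 are caught
# early; without them, those 20 escape to the field.
with_reviews    = rework_cost({"inspection": 20, "system_test": 8, "post_delivery": 2})
without_reviews = rework_cost({"inspection": 0,  "system_test": 8, "post_delivery": 22})
print(with_reviews, without_reviews)  # 308 2128
```

Under these made-up counts, the same 30 defects cost roughly seven times as much to fix without early inspection.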

One thing I like is the reference to IEEE 1999, which covers other items that are good to review. People don’t often think about it, but things such as marketing brochures, requirement specifications, user guides, and test plans are also good candidates for peer review.

I have seen many project teams try to do error tracking and/or code reviews but fail due to team culture. I saw one case where peer review actually worked: a dedicated person’s only job was to manage reliability in the project, and he was great at hounding people to track bugs and review code. This book discusses how team culture must be developed to value “quality”. If you are the type of person who does not want to waste time reviewing another’s code, remember that you will want the other person to “waste” time looking at your code. In this manner, we must all learn to scratch each other’s backs. There are also two traps to watch out for:

  1. Becoming lazy and submitting bad code for review since somebody else will find/fix it, or
  2. Trying to perfect your code before sharing it, in order to protect your ego from getting bruised, and to only show your best work.

We also cannot forget managers. Managers need to value quality and provide time and resources for employees to develop good practices. They need to understand that the point of these exercises is to find flaws, and that people should not be punished for the flaws found. I have often seen managers fail to put time in the schedule for good code reviews.

Types of reviews

Before discussing the types of reviews there is a good discussion on the guiding principles for reviews. Some of the principles are:

  • Check your egos at the door
  • Keep the review team small
  • Find problems during review, but don’t try to fix them at the review. Give up to 1 minute for discussion of fixes.
  • Limit review meeting to 2 hours max
  • Require advanced preparation

There are several types of peer reviews discussed in this book. The list below starts with the least formal approach and progresses to the most formal (the book uses the opposite order, which I found unintuitive).

  1. Ad Hoc – These are the spur of the moment meetings where you call a coworker to your desk to help with a small problem. Usually, this just solves an immediate problem. (This is super useful when trying to work out various coordinate transforms)
  2. Peer passaround/deskcheck – In this approach, a copy of the work is sent to multiple reviewers, and you then collate all of the reviews. This lets multiple people look at the code/item and ensures you get some feedback even if one person does not respond. In the peer deskcheck version, only one person looks at the work instead of it being passed around for multiple reviews.
  3. Pair Programming – This is the idea that two people program together. While there is no official review, two sets of eyes see each line of code being typed. This has the added bonus that two people will now understand the code. The downside is that one of the coders can often “doze off” and not be effective at watching for flaws. Also, many coders might not like this arrangement.
  4. Walkthrough – This is where the author of the code walks a group of reviewers through the code. It is often unstructured and heavily dependent on how well the author prepared. In my experience this is good for helping people understand the code and for finding large logic flaws, but not so much for finding small flaws/bugs.
  5. Team Review – This is similar to the walkthrough however reviewers are provided with documentation/code in advance to review and their results are collated.
  6. Inspection – Finally, we have the most formal approach which the author appears to favor. In this approach the author of the code does not lead the review, rather, a moderator, often with the help of checklists, will lead the meeting and read out the various sections. After the moderator reads a section, the reviewers discuss it. The author of the code can answer questions and learn how to improve various sections. Often the author might identify other instances of the same problem that the reviewers did not point out. An issue log should be maintained as a formal way of providing feedback and a list to verify fixes against.
Suggested review methods from "Peer Reviews in Software"

The book then spends the next few chapters detailing the inspection method of peer review. Here are just a few notes. As always, read the book to find out more.

  • In most situations, 3–7 people is a good group size for an inspection. The number can be based on the item being reviewed.
  • The review needs to be planned in advance, with time to prepare and distribute content to reviewers.
  • After the meeting the author should address each item in the issue log that was created and submit it to the moderator (or other such person) to verify that the solutions are good.
  • Perform an inspection when that module is ready to pass to the next development stage. Waiting too long can leave a person with a lot of bad code that is now too hard to fix.
  • You can (sort of) measure the ROI by looking at the bugs found and how long they took to find. There are many other metrics detailed in the book.
  • Keep spelling and grammar mistakes on a separate paper and not on the main issue list.
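
On the ROI point above, one minimal way to sketch such a metric (the function and the numbers plugged in are illustrative assumptions of mine, not formulas from the book):

```python
def inspection_roi(person_hours_spent, bugs_found, avg_hours_saved_per_bug):
    """Rough ROI of an inspection: downstream hours saved per hour invested.

    avg_hours_saved_per_bug estimates what each bug would have cost to
    find and fix later (debugging, re-testing, possibly a re-release).
    """
    hours_saved = bugs_found * avg_hours_saved_per_bug
    return hours_saved / person_hours_spent

# A 2-hour, 4-person inspection (8 person-hours) that finds 6 bugs,
# each assumed to cost ~4 hours to chase down later:
roi = inspection_roi(person_hours_spent=8, bugs_found=6, avg_hours_saved_per_bug=4)
print(roi)  # 3.0 -> every hour of inspection saved three downstream
```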

How to implement review processes within your projects

Getting a software team and management to change can be difficult. The last part of this book is dedicated to how you can get reviews started, and how to let them naturally grow within the company. One significant thing identified is to have a key senior person act as a coordinator for building a culture of peer review and to provide training to developers. There is a nice long table in the book of the various pitfalls that an organization may encounter and how to deal with them.

This book also discusses special challenges and how they can affect your review process. Some of the situations addressed are:

  • Large work products
  • Geographic or time separation
  • Distributed reviewers
  • Asynchronous review
  • Generated and non-procedural code
  • Too many participants
  • No qualified reviewers available

At the end of this book, there is a link to supplemental material online. I was excited to see this; however, when I went to the site, I saw it was all for sale and not free (most things were around $5). That kind of burst my bubble of excitement for the supplemental material. A second website referenced in the book no longer seems to be valid.

A recurring idea throughout the book is getting proper training for people on how to conduct inspection reviews. Towards the end of the book, hiring the book’s author as a trainer to help with this training is suggested.

Overall, I think this is a good book. It introduces readers to how to do software reviews, and its graphics and tables are pretty good. It is practical and easy to read. I also like how this book addresses management and makes the business case for peer reviews. I give this book 4.5 out of 5 stars. The missing 0.5 stars is due to the supplemental material not being free and the forms not being provided with the book.

Disclaimer: I do not know the book author. I purchased this book myself from Amazon.

The Drone Center’s Weekly Roundup: 5/30/16

An MQ-9 Reaper flies at an air show demonstration at Cannon Air Base, NM. Cannon is home to the Air Force Special Operations Command’s drone operations. Credit: Tech. Sgt. Manuel J. Martinez / US Air Force

At the Center for the Study of the Drone

As more commercial drone users take to the sky, insurers are struggling to develop policies to cover the eventualities of flying. Meanwhile, insurance companies also want to fly drones themselves for appraisals and damage assessments. We spoke with Tom Karol, general counsel-federal for the National Association of Mutual Insurance Companies, to learn about the uncertain landscape that is the drone insurance industry.


Pakistan criticized the U.S. government for a drone strike that killed Mullah Akhtar Mansour, the leader of the Afghan Taliban. In a statement, Sartaj Aziz, foreign affairs special adviser to Pakistani Prime Minister Nawaz Sharif, said that the strike undermined attempts to negotiate a peace deal with the Taliban. “Pakistan believes that politically negotiated settlement remains the most viable option for bringing lasting peace to Afghanistan,” Aziz said. (Wall Street Journal)

Commentary, Analysis and Art

The editorial board at the New York Times argues that the political and strategic consequences of a drone strike are not always immediately apparent.

Also at the New York Times, Vanda Felbab-Brown contends that the drone strike that killed Mullah Mansour “may create more difficulties than it solves.”

At Lawfare, Robert Chesney considers what it would mean if the strike against Mullah Mansour had not been conducted under the Authorization for Use of Military Force.

At the National Interest, Elsa Kania and Kenneth W. Allen provide a detailed summary of China’s push to develop military drones.

At Slate, Stephen E. Henderson writes that law enforcement officers have as much right under the law to fly a drone as a private citizen.

Also at Slate, Faine Greenwood offers an etiquette guide to flying a drone.

Jarrod Hodgson, an ecologist at the University of Adelaide in Australia, is calling for scientists and hobbyists to follow a code of conduct when using drones for wildlife research. (ABC News)

In testimony before the House Committee on Homeland Security, Subcommittee on Border and Maritime Security, Rebecca Gambler, the director of homeland security and justice at the Government Accountability Office, reviewed the Customs and Border Protection’s drone program. (GAO)

At NBC News, Richard Engel reports from inside Creech Air Force Base, the Nevada home of U.S. drone operations.

Malek Murison takes a look at a technological solution aimed at boosting the popularity of drone racing.

A report by the NPD Group found that drone sales increased by 224 percent between April 2015 and April 2016. (MarketWatch)

At Flightglobal, Beth Stevenson examines the German military’s efforts to acquire advanced unmanned aircraft.

At Aviation Week, Tony Osborne considers the challenges that beset the Anglo-French project to develop the Taranis, an advanced fighter drone.

The Economist surveys the different drone countermeasures currently in development.

In Cities From the Sky, German photographer Stephan Zirwes captures aerial views of pools, beaches and golf courses. (Curbed)

Meanwhile, photographer Gabriel Scanu uses a drone to capture the scale of Australia’s landscapes. (Wired)

Know Your Drone

Chinese smartphone maker Xiaomi unveiled two consumer multirotor drones. (Wired)

DRS Technologies partnered with Roboteam to develop an anti-IED unmanned ground vehicle for the U.S. Army. (Press Release)

Belgian startup EagleEye Systems has developed software that allows commercial drones to operate with a high degree of autonomy. (ZDNet)

Estonian defense firm Milrem announced that its THeMIS unmanned ground vehicle has passed a round of testing by the Estonian military. (Digital Journal)

Defense contractor Raytheon is working to offer its Phalanx autonomous ship defense system as a counter-drone weapon. (Flightglobal)

Meanwhile, Raytheon and Israeli firm UVision are modifying the Hero-30, a canister-launched loitering munition drone, for the U.S. Army. (UPI)

3D printing services company Shapeways announced the winners of a competition to design 3D-printed accessories for DJI consumer drones.

Cambridge Pixel released a radar display that can control multiple unmanned maritime vehicles. (C4ISR & Networks)

The Office of Naval Research released footage of LOCUST, a drone swarming system, in action. (Popular Science)

Drones at Work

Tom Davis, an Ohio-based engineer, offers the elderly the opportunity to fly drones. (Ozy)

Egyptian authorities used an unmanned undersea vehicle to search for debris from the downed EgyptAir flight in the Mediterranean. (Reuters)

Commercial spaceflight company SpaceX completed another successful landing of its Falcon 9 reusable rocket on an unmanned barge. (The Verge)

A South Korean activist group uses unmanned aircraft to drop flash-cards into North Korean territory. (CNN)

Hobbyists used a series of drones to make an impressive Star Wars fan film. (CNET)

The city of Denver partnered with Autodesk, 3D Robotics, and Kimley-Horn to make a drone-generated 3D map of the city’s famous Red Rocks site. (TechRepublic)

The Town of Hempstead in Long Island, New York, is considering a ban on the use of drones over beaches, pools, golf courses, and parks. (CBS New York)

A man in Rutherford County, Tennessee told WKRN that his drone was shot at as he was flying near his home.

HoneyComb, a drone services startup, offers farmers the chance to view every inch of their farms from the air. (New York Times)

A drone resembling the Iranian Shahed-129 was spotted flying over Aleppo, Syria. (YouTube)

Images obtained by Fox News appear to show a Chinese Harbin BZK-005 drone on Woody Island, one of the Paracel Islands in the South China Sea.

Insurance giant Munich Re partnered with PrecisionHawk to use drones for assessing insurance claims. (Press Release)

The Australian Navy completed flight trials of the Boeing Insitu ScanEagle drone. (UPI)

Meanwhile, Australian energy company Queensland Gas, a subsidiary of Shell, will use a Boeing Insitu ScanEagle to conduct pipeline inspections. (Aviation Business)

The FAA granted the Menlo Park Fire Protection District permission to use drones during wildfires and other emergencies. (Palo Alto Online)

Industry Intel

Defense firm Thales sold its Gecko system, which uses radars and thermal cameras to detect drones, to an undisclosed country in Southeast Asia. (Press Release)

General Atomics Aeronautical Systems, Inc. announced collaborations with the University of North Dakota and CAE, Inc. to provide equipment for the new RPA Training Academy in Grand Forks, North Dakota. (Press Release)

Ultra Electronics secured an $18.4 million contract to provide engineering support to a NATO country for a surveillance drone. (IHS Jane’s 360)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.



Advanced Robotics Manufacturing Institute in planning for USA

Robot weaving carbon fiber into rocket parts. Source: NASA/YouTube

The US just moved a step closer to building an advanced robotics institute modeled on the hugely successful Fraunhofer Institutes. The proposed ARM or Advanced Robotics Manufacturing Institute is one of seven candidates moving forward in an open bid for $70 million funding from NIST for an innovation institute to join the National Network for Manufacturing Innovation. Previously funded institutes are for advanced composites, flexible electronics, digital and additive manufacturing, semiconductor technology, textiles and photonics.

The ARM Institute bid is being led by Georgia Tech and CMU, alongside several other universities, including MIT, RPI, USC, UPENN, TAMU and UC Berkeley. At a recent ‘industry day’ at UC Berkeley on May 25, an invitation was extended to “all companies, regional economic development organizations, colleges and universities, government representatives and non-­profit groups with interests in advanced and collaborative robotics industry to participate in this initiative to ensure the competitiveness of US robotics and thereby enhance the quality of life of our citizens.”

Industry participation is critical to the success of the ARM Institute. Industry is expected to provide matching funding to the NIST grant, and also, strategically, to provide guidance in setting the priorities of the initiative. These are currently defined as:

  • Collaborative Robotics
  • Rapid Deployment of Flexible Robotic Manufacturing
  • Low-cost Mass Production in Quantities of One

The vision is to create a national resource of manufacturing research and solutions, linking regional hubs and manufacturing centers. An important component of this vision is providing a pathway for SMEs and startups to connect with established industry and research partners.

Source: NASA/YouTube

To participate in the ARM Institute bid, industry leaders, SMEs and startups are expected to provide a non-binding letter of support before June 15. For more information see the ARM Institute. If the bid is successful, the ARM Institute may start as early as first quarter 2017.

#ICRA2016 photo essay


A small photo essay from this year’s ICRA in Stockholm featuring photos from the exhibition hall. At the end, you can watch a video with all the companies, their robots and systems. (All photos by Ioannis Erripis @ Robohub.)

Consequential Robotics


MIRO is a very advanced biomimetic companion robot that originated at Sheffield University.


Furhat Robotics


Furhat is a very original take on human-robot interaction. They bypass the stereotypical uncanny valley problem with a mix of realism and abstraction that focuses on the crucial parts of what someone may value in a conversation. Furhat may act as an interface between a human and a service like Siri. You can see in this series of photos its simple but clever construction, with vertical video projection onto the face, which acts as a screen, via a tilted mirror.



Cogniteam, from Israel, presented the Hamster, a small but capable rover similar in size to an R/C car.


HEBI Robotics


HEBI Robotics had on display an arrangement comprising several X-Series modular actuators.




Husqvarna initiated a project where they provide a version of their established autonomous lawnmowers to researchers. These are modified versions that have an open architecture so one can add various modules and have direct access to the functions of the platform. The robots are not for sale, but if you are interested, and your research can make use of this platform, you can contact Husqvarna and they may collaborate with your lab or institute.



Milvus Robotics


Milvus Robotics, from Turkey, had on display their smart kiosk robot and their research platform.




Moog produces a variety of products for industrial and aerospace use. The photos show examples of metal 3D printing, an area where the company is heavily focused. 3D printing can produce shapes that would otherwise be impossible to manufacture. Currently, apart from fatigue properties, their products are mechanically on par with die-cast items. Moog is working on improving the final item properties to create products comparable to extruded metal items, or better.


Sake Robotics


Sake Robotics gave a live demo of their gripper, which is robust, strong, and relatively cheap. It has a clever system of pulleys that allows every component to be loose while on standby, but at the same time it can tighten up and hold heavy objects when under load.


ROBOTIS – Seed Robotics – Righthand Robotics


Robotis, apart from showing their big variety of products, hosted two other companies: Seed Robotics and Righthand Robotics. Each company had its own specialized grippers and robotic hands.


Phoenix Technologies


Phoenix Technologies demonstrated their fast and modular 3D tracker.


PAL Robotics


The REEM-C humanoid attracted most of the interest at the PAL Robotics stand. It not only stood but also walked throughout the exhibition hall (loosely attached to a frame for safety reasons).


Here you can watch a video with various clips from the exhibition area:

How is Pepper, SoftBank’s emotional robot, doing?

Source: Getty Images

Pepper is a child-height human-shaped robot described as having been designed to be a genuine companion that perceives and acts upon a range of human emotions.

SoftBank, the Japanese telecom giant, acquired Aldebaran Robotics and commissioned the development of Pepper. Subsequently, SoftBank formed a joint venture with Alibaba and Foxconn to develop, produce, and market the robots. There has been much fanfare about Pepper, particularly about its ability to use its body movement and tone of voice to communicate in a way designed to feel natural and intuitive.

The number of Peppers sold to date is newsworthy. As of today, there are likely close to 10,000 Peppers out in the world. Online sales have been 1,000 units each month for the last seven months with additional sales to businesses such as Nestle for their coffee shops and SoftBank for their telecom stores.

Source: Nestle Japan/YouTube

At around $1,600 per robot, 10,000 robots equates to $16 million in sales, but Peppers are sold on a subscription contract that includes a network data plan and equipment insurance. This costs $360 per month and, over 36 months, brings the total cost of ownership to over $14,000. Consequently, many are asking what Peppers are being used for, how they are being perceived, and whether they are useful. Essentially, how is Pepper doing? Does it offer value for the money spent?
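
The arithmetic behind those figures is easy to check (the revenue split at the end is my own back-of-envelope extrapolation from the prices and unit counts quoted above):

```python
UPFRONT_USD = 1_600   # purchase price per robot
MONTHLY_USD = 360     # data plan + insurance subscription
TERM_MONTHS = 36      # length of the subscription contract

# Total cost of ownership for one Pepper over the contract term
total_cost = UPFRONT_USD + MONTHLY_USD * TERM_MONTHS
print(total_cost)  # 14560 -> "over $14,000", as stated above

# Back-of-envelope revenue for ~10,000 units sold
units_sold = 10_000
upfront_revenue = units_sold * UPFRONT_USD                 # $16,000,000
contract_revenue = units_sold * MONTHLY_USD * TERM_MONTHS  # $129,600,000
```

Notably, the subscription side dwarfs the hardware side, which is why the data-service contracts matter so much to SoftBank.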

Two recent videos provide a window into Pepper’s state of development.


In a promotional effort, Pepper and a SoftBank publicity team came to the London offices of the Financial Times for an introduction and visit. This video shows one reporter’s attempt to understand Pepper’s capabilities and interactive abilities.

People in the FT offices were definitely attracted, amused and happy with the initial experience of being introduced to Pepper. They laughed at Pepper’s failures and patted its head to make it feel better. But Pepper failed in every way to (1) be a companion, (2) recognize emotional cues, (3) converse reliably and intelligently, and (4) provide any level of service other than first-time entertainment.


MasterCard unveiled the first application of their MasterPass digital payment service by a robot. It will be rolled out in Pizza Hut restaurants in Asia on Pepper robot order-takers beginning in Q4 2016. To accentuate the hook-up, MasterCard created this video showing what they hope will be a typical interaction involved in Pepper taking a customer’s order.

Tobias Puehse, vice president, innovation management, Digital Payments & Labs at MasterCard, said of the venture with SoftBank and Pepper bots:

“The app’s goal is to provide consumers with more memorable and personalized shopping experience beyond today’s self-serve machines and kiosks, by combining Pepper’s intelligence with a secure digital payment experience via MasterPass.”

One might ask what happens in a noisy, imperfect, acoustic environment? What does conversing with Pepper really add to a conveniently placed kiosk or tablet? How are Pepper’s emotional capabilities being used in this simple order-taking interaction? What happens if a customer strays from the dialogue the robot expects?


There’s no doubt that Pepper is an impressive engineering feat and an advertising draw. However, the emotion recognition aspects of Pepper didn’t appear to be important in either video, even though that is supposed to be Pepper’s strength. The entertainment value seemed to be what attracted the crowds, and this temporary phenomenon isn’t likely to persist over time. In fact, this was shown to be true in China, where restaurants began using rudimentary robots as mobile servers and busbots. In the last few months, however, there have been reports of those robots being retired because their entertainment value wore off and their inflexibility as real servers became evident.

The marketing around Pepper may have created expectations that can’t be met with this iteration of the robot. A comparison can be made here to Jibo and the problems it is having meeting deadlines and expectations. Jibo has extended the delivery date – once again – to October 2016 for crowdfunded orders, and early next year for the others.

The connection of Pepper to a telecom provider, and the sales it brings in the form of 2- and 3-year data service contracts, can be big business for that provider: SoftBank is the exclusive provider of those data services in Japan. An example of the value of that business can be seen in the surge in share price of Taiwanese telecom company Asia Pacific Telecom on news that the company will begin selling Pepper robots in Taiwan.

Brain-Computer Interface (BCI) livestream today


For the very first time, the 6th International Brain-Computer Interface (BCI) Meeting Series will offer free remote attendance, via live-stream.

The livestream will be available today, Monday 30 May, during the evening opening session at 19:30 PST, and tomorrow, Tuesday 31 May, during the morning session at 9:30 PST and the afternoon sessions at 13:30 and 14:30 PST.

The BCI Meeting will open with the Once and Future BCI Session, featuring speakers: Eberhard Fetz, Emanuel Donchin, and Jonathan Wolpaw.

Tuesday morning will feature the State of BCI Symposium, with speakers Nick Ramsey, Lee Miller, Donatella Mattia, Aaron Batista, and José del R. Millán. Tuesday afternoon will be the Virtual Forum of BCI Users and Selected Oral Presentations. The remainder of the BCI Meeting consists of poster sessions and workshops, which cannot be experienced remotely.

You can read all papers submitted to the BCI meeting here.

Please pass the livestreaming link to anyone else who may be interested in remote participation at the 2016 BCI Meeting.

Livestreaming link

Robots Podcast #209: INNOROBO 2015 Showcase, with RB 3D, BALYO, Kawada Robotics, Partnering Robotics, and IRT Jules Verne


In this episode, Audrow Nash interviews several companies at last year’s INNOROBO, a conference that showcases innovation in robotics.

Interviews include the following:

Oliver Baudet, business manager at RB 3D, speaks about the exoskeletons displayed at the showcase.

Baptiste Mauget, responsible for marketing and communication at BALYO, speaks about BALYO’s robots for warehouse automation.

Atsushi Hayashi, an Engineer at Kawada Robotics, demonstrates a humanoid used in factories in Japan.

Abdelfettah Ighouess, Sales Director at Partnering Robotics, describes their robot for indoor air quality control.

Etienne Picard, a Research and Development Engineer at IRT Jules Verne, speaks about a large cable driven robot for manufacturing.




‘Robot kindergarten’ trains droids of the future

Photo source: ubcpublicaffairs/YouTube

Less than 100 years from now, robots will be friendly, useful participants in our homes and workplaces, predicts UBC mechanical engineering professor and robotics expert Elizabeth Croft. We will be living in a world of Wall-Es and Rosies, walking-and-talking avatars, smart driverless cars and automated medical assistants.

But much work remains before robots will truly be integrated into our daily lives. In this short Q&A, Croft lays out the rules for engagement between humans and robots and explains why it’s crucial to get this aspect right.

What role will robots play in our lives in the future?

Elizabeth Croft, UBC mechanical engineering professor and robotics expert. Photo: UBC

They will be everywhere, helping us at home and at work. They could make you breakfast in the morning and check on your kids. They could be your frontline staff, giving visitors directions and answering questions. They could be your physician’s assistant. Or you could have a robotic avatar that will attend a meeting for you while you’re traveling on the other side of the world.

Future robots may be self-replicating, self-growing, and self-organizing. The natural evolution of robotics is toward incorporating biology. We can now grow cells around bio-compatible structures; this opens the door to combining biological systems with embedded artificial intelligence, and eventually to the cyber-physical workforce of the future.

What technologies are driving the growth of robotics?

Computing power continues to grow exponentially, and ubiquitous communication is being made possible by wireless technology. Dense energy storage and new energy harvesting and conversion technologies allow machines to operate in the world, unplugged. And finally, machine learning: networked computer systems have global access to huge amounts of data that, combined with robotic embodiment, allow robots to learn about the world in ways that mimic and move beyond how people learn about their environment.

Your work at UBC focuses on human-robot interaction. Why is this important? 

As robots become more and more a part of our lives, the question becomes: what are the rules of the game? What is OK for robots to do, and what is not? Robots will have abilities that we don’t have, and we need to define what they are allowed to do with that capacity.

There are some big ethical questions to consider: how does society deal with drones that can kill, for example? But there are important day-to-day questions too. If a human and a robot are accessing the same resource (the same roadway, the same tools, the same power source), who yields? Does the person always get their way? What if the robot is doing something for the greater good, for example, a robotic ambulance?

Researcher with robot. Photo source: ubcpublicaffairs/YouTube

In a way, I like to think of our lab as robot kindergarten. We are teaching robots basic, building-block behaviours and ground rules for how they interact with people: how to hand over a bottle of water, how to look for things, how to take turns. Having these basic behaviours in place allows us to create human-robot interactions that are natural and fluid.

To achieve our goals, our lab welcomes researchers from different disciplines—ethics, law, machine learning, experts in human computer interaction—as well as different international cultures. Different cultures have different ideas of robots. We learn a lot from these many perspectives.

Elizabeth Croft will speak about the future of robotics at a UBC Centennial public talk on May 28. Click here for more information.

Robots can help reduce 35% of work days lost to injury

Robot in assembly at Hall 52, June 26, 2013. Source: bmwgroup

What’s the biggest benefit of using collaborative robots? It’s not better efficiency. It’s not the extra hours a robot can work in a shift. It’s not even improved consistency across your products. Whilst these are all great bonuses, the biggest benefit of robots is their impact on reducing workplace injury.

Workplace injury is an issue that affects millions of workers worldwide each year and costs businesses billions in revenue. Although it’s not possible to avoid every injury, many workplace injuries are preventable. Musculoskeletal disorders in particular are often preventable, as they are usually caused by poor workplace ergonomics. Collaborative robots are a great way to solve this problem.

In a previous article, we saw how collaborative robots are designed to be ergonomic products in themselves. In this article, we’ll look at how collaborative robots can be used to reduce ergonomic problems across your workplace. We’ll also show how to pick the best tasks for your collaborative robot by putting on your “ergonomics glasses.”

Musculoskeletal Disorders: Too important to ignore

Musculoskeletal disorders (or MSDs) are injuries and disorders that affect the human body’s movement or musculoskeletal system: the muscles, joints, tendons, nerves and ligaments. According to FitForWork Europe, MSDs are a huge problem in the modern world. 21.3% of disabilities worldwide are due to MSDs, and they are estimated to cost the European Union around 240 billion euros each year in lost productivity and sickness absence. MSDs accounted for 35% of all work days lost in Austria in 2007.

Because MSDs are caused by repetitive physical stresses on the human body, some industries are more affected than others. Manufacturing and food processing are classed as high-risk, as outlined in this report on the impact of MSDs in the USA. Some jobs are prone to specific injuries due to the type of work involved; industrial inspection and packaging jobs, for example, are prone to upper-extremity MSDs. In 2012, the manufacturing industry had the fourth-highest number of MSDs, with 37.4 incidents per 10,000 workers.

All this injury costs your business money; a lot of money. Ergo-Plus estimates that you would have to generate $8 million in extra sales to cover the costs associated with the most common MSDs. This is remarkable, because the injuries are preventable by simply applying basic ergonomic principles.

How robots can reduce workplace injury

In 2013, we reported on a case study at Volkswagen, which had deployed the UR5 collaborative robot in one of its facilities. In it, Jürgen Häfner explained the company’s reasons for introducing the robot:

“We would like to prevent long-term burdens on our employees in all areas of our company with an ergonomic workplace layout. By using robots without guards, they can work hand-in-hand together with the robots. In this way, the robot becomes a production assistant in manufacture and as such can release staff from ergonomically unfavorable work.”

Source: Robotiq

You can improve a task’s ergonomics in two ways:

  1. Use ergonomic principles to redesign the task, to reduce the physical stress on the worker.
  2. Find a way to complete the stressful part of the task differently, without using a human worker at all. This can hugely reduce the chance of MSDs.

The use of collaborative robots falls squarely into the second category. However, you still need knowledge of ergonomic principles to judge whether a task can cause MSDs.

How to tell if a task can cause MSDs

Ergonomics professionals sometimes talk about having “ergonomics glasses,” meaning the ability to view the workplace from an ergonomics perspective. Before you learn about ergonomics, you can easily fail to notice that a task could injure a worker. Once you have learned about it, ergonomics issues will jump out at you as you walk through your workplace. There are two different approaches to ergonomics:

  1. Proactive Ergonomics – This involves solving ergonomics issues before they arise, either by walking around your workplace whilst “wearing your ergonomics glasses” or by incorporating ergonomic principles into the initial design of processes.
  2. Reactive Ergonomics – This is what usually happens. It’s solving a problem when it is already a problem. A worker suffers an injury as a result of the task and so we retrospectively try to improve the ergonomics of the task.

At the very least, we should adopt a more proactive approach to ergonomics. In an ideal world, all ergonomics issues would be solved proactively, before they arise. However, being realistic, we’re likely to end up with a combination of the two approaches.

Three steps to improve ergonomics with collaborative robots

Applying ergonomics principles is really quite simple. It just involves a slight change of mindset and three easy steps.

Step 1: Learn what to look out for

The first step to proactive ergonomics is to learn how to spot bad ergonomics. There are some great resources online, particularly the free resources and blogs from Ergo-Plus, the International Labor Organization, Dan MacLeod and FitForWork Europe.

There are a few fundamental principles of ergonomics, which can vary slightly in wording depending on which resource you consult. Tasks which violate one or more of these will need to be redesigned or passed off to a robot, to avoid inflicting injury on the worker over time:

  1. Workers must maintain a neutral posture without putting their body in awkward positions.
  2. Reduce excessive forces and vibrations on the human body.
  3. Keep everything in easy reach and at the proper height, to allow the worker to operate in the natural power/comfort zones of the human body.
  4. Reduce excessive motions in the task. This is especially important in repeated motions.
  5. Minimize fatigue, static load and pressure points on the human body.
  6. Provide adequate clearance and lighting in the workplace.
  7. Allow the worker to move, exercise and stretch. After all, the human body is “designed to move,” not to stay in the same position for long periods of time.

Neutral and awkward back postures (source)

Become familiar with these principles by reading over the linked resources and looking at example images of good and bad ergonomic practices.

Step 2: Stand up and walk!

The second step is to stand up off your chair and walk around your workplace, noticing everything you can about the ergonomics of the tasks. Ask yourself (and then ask your colleagues) the following two questions:

  1. How could this task be improved to make it more comfortable (less physically stressful) to the worker?
  2. Which parts of the task could we give to a collaborative robot to solve the ergonomic issues?

We recommend that you take photos and videos of the tasks to document the ergonomics improvement process. This will be useful in two ways: the record tracks your improvements, and you can then use the same photos to help you design your collaborative robot process.

Step 3: Apply collaborative robots to improve ergonomics

Once you have identified problem areas in the workplace, you are likely to have a list of tasks which need improvement. Some of the tasks will be possible to carry out with a collaborative robot. Others will not, so you will need to improve their ergonomics in other ways.

Best practices for looking at tasks ergonomically

With so much information available about ergonomics, you might be thinking that you’d have to invest heavily in training to become a proactive ergonomics business. However, this is not necessarily the case: you can start small and still see big benefits. The International Labor Organization gives three simple things to remember when applying ergonomics in your workplace:

  1. It is most effective to examine tasks on a case-by-case basis. Ergonomics issues can be very specific to a task, so don’t assume that applying exactly the same fix everywhere in the workplace will be effective. Start small, looking at just one or two tasks, and build up gradually.
  2. Even minor changes to ergonomics can drastically reduce injury. It might seem trivial to have a robot pick up a part and move it 50cm closer to a human operator, but over an 8-hour shift this simple movement can add up to a huge physical strain on the worker. Simply knowing that the optimal working radius of a table is 25cm allows you to understand this and put it into practice.
  3. Staff should be involved in making any ergonomics changes to the workplace. People themselves are the best source of insight into how to improve their work tasks. Get your workers involved right from the start to identify problem areas and apply collaborative robots.
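To see why even a 50cm reach matters, it helps to add up the motion over a whole shift. The sketch below is a back-of-the-envelope estimate, not a formal ergonomics assessment; the 30-second cycle time is an assumption chosen purely for illustration (the 50cm reach and 8-hour shift come from the point above).

```python
def cumulative_distance_m(reach_m: float, shift_hours: float, cycle_s: float) -> float:
    """Total distance (in metres) a part is moved by hand over one shift."""
    cycles = shift_hours * 3600 / cycle_s  # number of repetitions in the shift
    return cycles * reach_m

if __name__ == "__main__":
    # 0.5 m reach, 8-hour shift, one repetition every 30 s (assumed)
    total = cumulative_distance_m(reach_m=0.5, shift_hours=8, cycle_s=30)
    print(f"{total:.0f} m moved per shift")
```

With these assumed numbers, that "trivial" 50cm movement is repeated 960 times, adding up to roughly half a kilometre of loaded motion per worker per shift.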

Why businesses must get ready for the era of robotic things


We’ve entered a period of epic technological transformation that is impacting society in ways that leave even veteran tech observers speechless. In some ways, it might seem like 1998 all over again. The internet was then in its infancy, and cyberspace was uncharted territory for much of the population. The dot-com boom and eventual bust were inevitable, reflecting markets that expanded and contracted with the newfound surge of interest in, and obviously overinflated speculation about, the potential of the Internet to transform society. Following a tremendous growth spurt in its first 5-6 years, the World Wide Web ended its first decade by learning to become sociable.

Indeed, the rise of digital social networks through channels like Facebook, Twitter, Instagram, and Snapchat has transformed how businesses and individuals work, think, and communicate. And now that the Internet has crossed the threshold into young adulthood, it’s continuing to grow and drive massive transformations throughout all parts of business and society. Soon-to-be-released autonomous cars, wearable tech, drones, 3-D printing, smart machines, home automation, virtual assistants like Siri, you name it . . . the pace of change is staggering.

Graphic by Predikto on company blog.

Many of the breakthroughs and innovations we’re seeing right now are the result of three major confluences over the past 8 years: mobile, cloud, and Big Data. Together these technologies have enabled quicker and more efficient collaboration, development, and production, which in turn have allowed businesses to achieve unprecedented levels of growth and expansion. New ways of working, often remotely, mean that more people can do their jobs outside traditional corporate structures. We’ve entered not only the freelancer economy but the startup one as well: everyone is an entrepreneur, and innovation and new business growth are through the roof. What’s more, technology processes and production cycles have become commoditized, meaning that anyone with the right idea, system, skills, and network in place today can effectively build a billion-dollar business with very low overhead costs.

This crazy rate of change is certainly great from the consumer standpoint, but also a bit unnerving for businesses simply trying to keep their heads above water. How are companies today to keep up with the market, their competitors, and consumer expectations? What does all this change mean for startups struggling to gain traction, for more traditional brick-and-mortar businesses, and even for established enterprises that might be too big to pivot quickly?

These are bigger questions that we’ll try to answer in another blog post. For now, here is what we do know about the massive impacts coming over the next few years. First, the Internet of Things is taking the world by storm, with projections that 21 billion objects will be connected by the year 2020. That’s nearly 3 for every man, woman, and child on the planet! A few years ago, Cisco estimated that the IoT market would create $19 trillion of economic value over the next decade.

What’s more, the global robotics industry is also undergoing a major transformation. Market intelligence firm Tractica released a report in November 2015 forecasting that the global robotics market will grow from $28.3 billion worldwide in 2015 to $151.7 billion by 2020. What’s especially significant is that most of this market will be non-industrial robots, including segments such as consumer, enterprise, medical, military, UAVs, and autonomous vehicles. Tractica anticipates an increase in annual robot shipments from 8.8 million in 2015 to 61.4 million by 2020; in fact, by 2020 over half of this volume will come from consumer robots.
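A quick sanity check puts Tractica’s revenue forecast in perspective: growing from $28.3 billion to $151.7 billion over the 5 years from 2015 to 2020 implies a compound annual growth rate of roughly 40% per year. The snippet below just restates that arithmetic; the figures are Tractica’s, not mine.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction (0.40 means 40%/year)."""
    return (end / start) ** (1 / years) - 1

if __name__ == "__main__":
    rate = cagr(start=28.3, end=151.7, years=5)  # Tractica's 2015-2020 forecast
    print(f"Implied CAGR: {rate:.1%}")
```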


Putting these two major industry trends together, it doesn’t take a rocket scientist to figure out that the two industries, the Internet of Things and robotics, will together lead to a “perfect storm” of global market disruption, opportunity, and growth over the next 4 years and beyond. This confluence is part of a larger epic transformation, which has aptly been called the Second Machine Age. Here is how this FastCompany article sums it up:

The fact is we’re now on the cusp of a “Second Machine Age,” one powered not by clanging factory equipment but by automation, artificial intelligence, and robotics. Self-driving cars are expected to be widespread in the coming decade. Already, automated checkout technology has replaced cashiers, and computerized check-in is the norm at airports. Just like the Industrial Revolution more than 200 years ago, the AI and robotics revolution is poised to touch virtually every aspect of our lives—from health and personal relations to government and, of course, the workplace.

This is a mouthful, but in case it’s not clear, let me spell it out: there’s never been a better time than now to get on board with robotics and the Internet of Things!

If you’re a startup or small business owner, and especially if you’re feeling behind the technology curve, you’re certainly not alone. But instead of commiserating about all the changes, start asking yourself today what it will take to get your organization to the next level of innovation. Set yourself up with 6-month, 12-month, 18-month and 2-year innovation plans that map to a broader 2020 strategy. Time is of the essence, but it’s not too late to pivot and get on board with the robotics and IoT revolution. As the famous saying goes, “The journey of a thousand miles begins with a single step.”

Multi-robotic fabrication method has potential to build complex, stable, three-dimensional constructions

Figure 1: Multi-robotic assembly of spatial discrete elements structures. Source: NCCR Digital Fabrication.

Multi-robotic fabrication methods can strongly increase the potential of robotic fabrication for architectural applications through the definition of cooperative assembly tasks. As such, the objective of this research is to investigate and develop methods and techniques for the multi-robotic assembly of discrete elements into geometrically complex space frame structures.

This endeavour implies the definition of an integrative digital design method that leads to fabrication- and structure-informed assemblies that can be automatically built up into custom configurations. The research is being conducted at Gramazio Kohler Research as part of the interdisciplinary research program of the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication. It was started in September 2014 by Stefana Parascho and currently includes collaborations with Augusto Gandía and Thomas Kohlhammer.

Spatial Structures

Space frame structures were developed during the industrial revolution as efficient systems for large-span construction, but their variability was quickly limited by the need for standardisation and by complex connection detailing. Through the development of a multi-robotic assembly method and an integrated joining system, irregular space frame geometries become buildable, enhancing existing typologies through their potential for variability and efficient material use. The use of robotic fabrication techniques and the avoidance of pre-fabricated, rigid connections lead to a system that relies not only on digital planning and manufacturing but adds digital assembly to the digital chain. A process for the robotic build-up of triangulated structures was developed, based on the alternating placement of rods: one robot always serves as a support for the already-built structure while the other assembles a new element (Figure 2). As a result, the built structures do not require any additional support structures and are constantly stabilised by the robots.
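The alternating placement scheme described above can be sketched as a simple turn-taking sequence. This is an illustrative sketch only, not the NCCR planning code: the robot names and the flat list of rods are invented for the example, and real planning would also handle grasping, collision avoidance and path generation.

```python
def alternating_assembly(rods):
    """Yield (placing_robot, supporting_robot, rod) for each assembly step.

    The two robots swap roles every step: while one places the new
    element, the other braces the partially built structure, so no
    external scaffolding is needed.
    """
    robots = ["robot_A", "robot_B"]
    for i, rod in enumerate(rods):
        placer = robots[i % 2]          # places the new element this step
        supporter = robots[(i + 1) % 2]  # holds the structure steady
        yield placer, supporter, rod

if __name__ == "__main__":
    for placer, supporter, rod in alternating_assembly(["rod1", "rod2", "rod3"]):
        print(f"{placer} places {rod} while {supporter} supports")
```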

Figure 2: Conceptual Diagram of multi-robotic assembly strategy, exemplified through the sequential build-up of a spatial triangulated structure. Two robots are alternating in order to position the elements and at the same time serve as support structure. Source: NCCR Digital Fabrication.

Integrative Design Methods

Traditional architectural design methods commonly follow a top-down strategy in which both construction and fabrication are subordinated to a previously predefined geometry. In an integrative design approach, the fabrication, structural performance and given boundary constraints can simultaneously function as design drivers, allowing for a much higher flexibility and performance of the system. As such, the presented research focuses on the development of a design strategy in which various factors, such as constraints and characteristics of the multi-robotic fabrication process, are included in the geometric definition process of the structures.

Multi-robotic fabrication

The use of multiple robots for the assembly of discrete element structures opens up potential for the build-up of complex, stable, three-dimensional constructions. At the same time, the process introduces challenges such as the need for collision-avoidance strategies between multiple robots and for corresponding robotic path planning. In order to generate buildable structures, the design process needs to integrate the robots’ constraints, such as reach and kinematic behaviour, and at the same time process data from robotic simulation in order to foresee the robots’ precise movements. Through a collaboration with Augusto Gandía from Gramazio Kohler Research, a strategy was developed for integrating robotic simulation into a CAD environment and for generating collision-free trajectories for multi-robotic applications.

Figure 3: Mid-air build-up of tetrahedral structure without the use of any additional support structure. Source: NCCR Digital Fabrication.

