Archive 09.02.2018


Learning robot objectives from physical human interaction


Humans physically interact with each other every day – from grabbing someone’s hand when they are about to spill their drink, to giving your friend a nudge to steer them in the right direction, physical interaction is an intuitive way to convey information about personal preferences and how to perform a task correctly.

So why aren’t we physically interacting with current robots the way we do with each other? Seamless physical interaction between a human and a robot requires a lot: lightweight robot designs, reliable torque or force sensors, safe and reactive control schemes, the ability to predict the intentions of human collaborators, and more! Luckily, robotics has made many advances in the design of personal robots specifically developed with humans in mind.

However, consider the example from the beginning where you grab your friend’s hand as they are about to spill their drink. Instead of your friend who is spilling, imagine it was a robot. Because state-of-the-art robot planning and control algorithms typically assume human physical interventions are disturbances, once you let go of the robot, it will resume its erroneous trajectory and continue spilling the drink. The key to this gap comes from how robots reason about physical interaction: instead of thinking about why the human physically intervened and replanning in accordance with what the human wants, most robots simply resume their original behavior after the interaction ends.

We argue that robots should treat physical human interaction as useful information about how they should be doing the task. We formalize reacting to physical interaction as an objective (or reward) learning problem and propose a solution that enables robots to change their behaviors while they are performing a task according to the information gained during these interactions.

Reasoning About Physical Interaction: Unknown Disturbance versus Intentional Information

The field of physical human-robot interaction (pHRI) studies the design, control, and planning problems that arise from close physical interaction between a human and a robot in a shared workspace. Prior research in pHRI has developed safe and responsive control methods to react to a physical interaction that happens while the robot is performing a task. Proposed by Hogan, impedance control is one of the most commonly used methods to move a robot along a desired trajectory when there are people in the workspace. With this control method, the robot acts like a spring: it allows the person to push it, but moves back to its original desired position after the human stops applying forces. While this strategy is very fast and enables the robot to safely adapt to the human’s forces, the robot does not leverage these interventions to update its understanding of the task. Left alone, the robot would continue to perform the task in the same way as it had planned before any human interactions.
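As a concrete illustration, impedance control amounts to a virtual spring-damper that pulls the robot back toward its planned motion while giving way under external forces. Below is a minimal joint-space sketch; the gains and variable names are illustrative, not the specific controller used in this work.

```python
import numpy as np

def impedance_torque(q, dq, q_des, dq_des, K_p, K_d):
    """Virtual spring-damper: command torques that pull the joints back toward
    the desired trajectory, while naturally yielding to external pushes."""
    return K_p @ (q_des - q) + K_d @ (dq_des - dq)

# Example: a 7-DOF arm that has been pushed away from its desired configuration.
K_p, K_d = 50.0 * np.eye(7), 5.0 * np.eye(7)   # stiffness and damping gains
q, dq = np.zeros(7), np.zeros(7)               # current joint positions and velocities
q_des, dq_des = 0.1 * np.ones(7), np.zeros(7)  # desired joint positions and velocities
tau = impedance_torque(q, dq, q_des, dq_des, K_p, K_d)
```

Once the human lets go, these restoring torques drive the arm back to its planned trajectory, which is exactly the spring-like behavior described above.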

Why is this the case? It boils down to what assumptions the robot makes about its knowledge of the task and the meaning of the forces it senses. Typically, a robot is given a notion of its task in the form of an objective function. This objective function encodes rewards for different aspects of the task like “reach a goal at location X” or “move close to the table while staying far away from people”. The robot uses its objective function to produce a motion that best satisfies all the aspects of the task: for example, the robot would move toward goal X while choosing a path that is far from a human and close to the table. If the robot’s original objective function was correct, then any physical interaction is simply a disturbance from its correct path. Thus, the robot should allow the physical interaction to perturb it for safety purposes, but it will return to the original path it planned since it stubbornly believes it is correct.

In contrast, we argue that human interventions are often intentional and occur because the robot is doing something wrong. While the robot’s original behavior may have been optimal with respect to its pre-defined objective function, the fact that a human intervention was necessary implies that the original objective function was not quite right. Thus, physical human interactions are no longer disturbances but rather informative observations about what the robot’s true objective should be. With this in mind, we take inspiration from inverse reinforcement learning (IRL), where the robot observes some behavior (e.g., being pushed away from the table) and tries to infer an unknown objective function (e.g., “stay farther away from the table”). Note that while many IRL methods focus on the robot doing better the next time it performs the task, we focus on the robot completing its current task correctly.

Formalizing Reacting to pHRI

With our insight on physical human-robot interactions, we can formalize pHRI as a dynamical system, where the robot is unsure about the correct objective function and the human’s interactions provide it with information. This formalism defines a broad class of pHRI algorithms, which includes existing methods such as impedance control, and enables us to derive a novel online learning method.

We will focus on two parts of the formalism: (1) the structure of the objective function and (2) the observation model that lets the robot reason about the objective given a human physical interaction. Let $x$ be the robot’s state (e.g., position and velocity) and $u_R$ be the robot’s action (e.g., the torque it applies to its joints). The human can physically interact with the robot by applying an external torque, called $u_H$, and the robot moves to the next state via its dynamics, $\dot{x} = f(x, u_R + u_H)$.

The Robot Objective: Doing the Task Right with Minimal Human Interaction

In pHRI, we want the robot to learn from the human, but at the same time we do not want to overburden the human with constant physical intervention. Hence, we can write down an objective for the robot that optimizes both completing the task and minimizing the amount of interaction required, ultimately trading off between the two.
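A sketch of such an objective, written as a reward with task features $\phi$, feature weights $\theta$, and a penalty on the human’s interaction effort (the quadratic penalty term is our reading of the accompanying CoRL paper, not a quotation):

$$ r(x, u_R, u_H; \theta) = \theta^\top \phi(x, u_R, u_H) - \lambda \|u_H\|^2 $$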

Here, $\phi$ encodes the task-related features (e.g., “distance to table”, “distance to human”, “distance to goal”) and $\theta$ determines the relative weight of each of these features. In this function, $\theta$ encapsulates the true objective – if the robot knew exactly how to weight all the aspects of its task, then it could compute how to perform the task optimally. However, this parameter is not known by the robot! Robots will not always know the right way to perform a task, and certainly not the human-preferred way.

The Observation Model: Inferring the Right Objective from Human Interaction

As we have argued, the robot should observe the human’s actions to infer the unknown task objective. To link the human forces that the robot directly measures with the objective function, the robot uses an observation model. Building on prior work in maximum entropy IRL, as well as the Boltzmann distributions used in cognitive science models of human behavior, we model the human’s interventions as corrections which approximately maximize the robot’s expected reward at state $x$ while taking action $u_R + u_H$. This expected reward encompasses the immediate and future rewards and is captured by the $Q$-value.
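A plausible form of this Boltzmann-style observation model, with the normalization left implicit (our reconstruction from the surrounding description):

$$ P(u_H \mid x, u_R; \theta) \propto \exp\big( Q(x, u_R + u_H; \theta) \big) $$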

Intuitively, this model says that a human is more likely to choose a physical correction that, when combined with the robot’s action, leads to a desirable (i.e., high-reward) behavior.

Learning from Physical Human-Robot Interactions in Real-Time

Much like teaching another human, we expect that the robot will continuously learn while we interact with it. However, the learning framework that we have introduced requires that the robot solve a Partially Observable Markov Decision Process (POMDP); unfortunately, it is well known that solving POMDPs exactly is at best computationally expensive, and at worst intractable. Nonetheless, we can derive approximations from this formalism that can enable the robot to learn and act while humans are interacting.

To achieve such in-task learning, we make three approximations summarized below:

1) Separate estimating the true objective from solving for the optimal control policy. This means that at every timestep, the robot updates its belief over possible $\theta$ values, and then re-plans an optimal control policy with the new distribution.

2) Separate planning from control. Computing an optimal control policy means computing the optimal action to take at every state in a continuous state, action, and belief space. Although re-computing a full optimal policy after every interaction is not tractable in real-time, we can re-compute an optimal trajectory from the current state in real-time. This means that the robot first plans a trajectory that best satisfies the current estimate of the objective, and then uses an impedance controller to track this trajectory. The use of impedance control here gives us the nice properties described earlier, where people can physically modify the robot’s state while still being safe during interaction.

Looking back at our estimation step, we will make a similar shift to trajectory space and modify our observation model to reflect this.
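Writing $\Phi(\xi)$ for the cumulative features along a trajectory $\xi$, a sketch of the trajectory-space observation model (again our notation, not a quotation) is:

$$ P(\xi_H \mid \xi_R; \theta) \propto \exp\big( R(\xi_H; \theta) \big) = \exp\big( \theta^\top \Phi(\xi_H) \big) $$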

Now, our observation model depends only on the cumulative reward along a trajectory, which is easily computed by summing up the reward at each timestep. With this approximation, when reasoning about the true objective, the robot only has to consider the likelihood of a human’s preferred trajectory, $\xi_H$, given the current trajectory it is executing, $\xi_R$.

But what is the human’s preferred trajectory, $\xi_H$? The robot only gets to directly measure the human’s force $u_H$. One way to infer the human’s preferred trajectory is by propagating the human’s force along the robot’s current trajectory, $\xi_R$. Figure 1 builds up the trajectory deformation based on prior work from Losey and O’Malley, starting from the robot’s original trajectory, then the force application, and then the deformation that produces $\xi_H$.


Fig 1. To infer the human’s preferred trajectory given the current planned trajectory, the robot first measures the human’s interaction force, $u_H$, and then smoothly deforms the waypoints near the interaction point to get the human’s preferred trajectory, $\xi_H$.
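To give a sense of how such a deformation can be computed, here is a minimal sketch in the spirit of the Losey and O’Malley approach the figure builds on; the particular smoothing matrix (a finite-difference acceleration norm) and the scale mu are assumptions made for illustration.

```python
import numpy as np

def deform_trajectory(xi_R, u_H, t_idx, mu=0.1):
    """Propagate a sensed human force into nearby waypoints of the current plan.

    xi_R:  (N, d) waypoints of the robot's current trajectory
    u_H:   (d,) human force/torque sensed at the interaction point
    t_idx: index of the waypoint where the interaction occurred
    """
    n = len(xi_R)
    # Second-order finite-difference operator; A = D^T D defines a smoothness norm,
    # so A^{-1} spreads the point force smoothly over neighboring waypoints.
    D = np.diff(np.eye(n), n=2, axis=0)
    A = D.T @ D + 1e-6 * np.eye(n)
    U = np.zeros_like(xi_R)
    U[t_idx] = u_H
    return xi_R + mu * np.linalg.solve(A, U)

# Example: a straight-line plan in 2D, nudged sideways at its middle waypoint.
xi_R = np.linspace([0.0, 0.0], [1.0, 0.0], 20)
xi_H = deform_trajectory(xi_R, np.array([0.0, 0.5]), t_idx=10)
```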

3) Plan with the maximum a posteriori (MAP) estimate of $\theta$. Finally, because $\theta$ is a continuous variable and potentially high-dimensional, and since our observation model is not Gaussian, rather than planning with the full belief over $\theta$, we will plan only with the MAP estimate. We find that the MAP estimate under a second-order Taylor series expansion about the robot’s current trajectory, with a Gaussian prior, is equivalent to running online gradient descent.
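In our notation, with learning rate $\alpha$ and cumulative features $\Phi$, the resulting update is (the sign convention assumes the update reinforces the features of the human’s preferred trajectory):

$$ \hat{\theta}^{\,t+1} = \hat{\theta}^{\,t} + \alpha \big( \Phi(\xi_H) - \Phi(\xi_R) \big) $$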

At every timestep, the robot updates its estimate of $\theta$ in the direction of the cumulative feature difference, $\Phi(\xi_H) - \Phi(\xi_R)$, between the human’s preferred trajectory and its current optimal trajectory. In the Learning from Demonstration literature, this update rule is analogous to online Max Margin Planning; it is also analogous to coactive learning, where the user modifies waypoints for the current task to teach a reward function for future tasks.

Ultimately, putting these three steps together leads us to an elegant approximate solution to the original POMDP. At every timestep, the robot plans a trajectory and begins to move. The human can physically interact, enabling the robot to sense their force $u_H$. The robot uses the human’s force to deform its original trajectory and produce the human’s preferred trajectory, $\xi_H$. Then the robot reasons about which aspects of the task differ between its original trajectory and the human’s preferred trajectory, and updates $\theta$ in the direction of that difference. Using the new feature weights, the robot replans a trajectory that better aligns with the human’s preferences.
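Putting this loop into schematic code, one timestep might look like the sketch below; `planner`, `deform_trajectory`, and `features` are assumed helper functions (a trajectory optimizer, the deformation sketched earlier, and the cumulative feature map), not the authors’ implementation.

```python
import numpy as np

def phri_step(theta, xi_R, x, u_H, planner, deform_trajectory, features, alpha=0.1):
    """One timestep of the approximate learning-and-control loop described above."""
    if np.linalg.norm(u_H) > 1e-6:                  # the human physically intervened
        t_idx = nearest_waypoint(xi_R, x)           # where along the plan we currently are
        xi_H = deform_trajectory(xi_R, u_H, t_idx)  # infer the human's preferred trajectory
        theta = theta + alpha * (features(xi_H) - features(xi_R))  # update feature weights
        xi_R = planner(x, theta)                    # replan with the updated objective estimate
    return theta, xi_R                              # xi_R is then tracked with impedance control

def nearest_waypoint(xi_R, x):
    """Index of the waypoint closest to the current state (positions only)."""
    return int(np.argmin(np.linalg.norm(xi_R - x, axis=1)))
```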


For a more thorough description of our formalism and approximations, please see our recent paper from the 2017 Conference on Robot Learning.

Learning from Humans in the Real World

To evaluate the benefits of in-task learning on a real personal robot, we recruited 10 participants for a user study. Each participant interacted with the robot running our proposed online learning method as well as a baseline where the robot did not learn from physical interaction and simply ran impedance control.

Fig. 2 shows the three experimental household manipulation tasks, in each of which the robot started with an initially incorrect objective that participants had to correct. For example, the robot would move a cup from the shelf to the table, but without worrying about tilting the cup (perhaps not noticing that there is liquid inside).






Fig 2. Trajectory generated with initial objective marked in black, and the desired trajectory from true objective in blue. Participants need to correct the robot to teach it to hold the cup upright (left), move closer to the table (center), and avoid going over the laptop (right).

We measured the robot’s performance with respect to the true objective, the total effort the participant exerted, the total amount of interaction time, and the responses to a 7-point Likert scale survey.


In Task 1, participants have to physically intervene when they see the robot tilting the cup and teach the robot to keep the cup upright.


Task 2 had participants teaching the robot to move closer to the table.


For Task 3, the robot’s original trajectory goes over a laptop. Participants have to physically teach the robot to move around the laptop instead of over it.

The results of our user studies suggest that learning from physical interaction leads to better robot task performance with less human effort. Participants were able to get the robot to execute the correct behavior faster with less effort and interaction time when the robot was actively learning from their interactions during the task. Additionally, participants believed the robot understood their preferences more, took less effort to interact with, and was a more collaborative partner.






Fig 3. Learning from interaction significantly outperformed not learning for each of our objective measures, including task cost, human effort, and interaction time.

Ultimately, we propose that robots should not treat human interactions as disturbances, but rather as informative actions. We showed that robots imbued with this sort of reasoning are capable of updating their understanding of the task they are performing and completing it correctly, rather than relying on people to guide them until the task is done.

This work is merely a step in exploring learning robot objectives from pHRI. Many open questions remain, including developing solutions that can handle dynamical aspects (like preferences about the timing of the motion) and deciding how and when to generalize learned objectives to new tasks. Additionally, robot reward functions will often have many task-related features, and human interactions may only give information about a certain subset of the relevant weights. Our recent work in HRI 2018 studied how a robot can disambiguate what the person is trying to correct by learning about only a single feature weight at a time. Overall, not only do we need algorithms that can learn from physical interaction with humans, but these methods must also reason about the inherent difficulties humans experience when trying to kinesthetically teach a complex – and possibly unfamiliar – robotic system.


Thank you to Dylan Losey and Anca Dragan for their helpful feedback in writing this blog post. This article was initially published on the BAIR blog, and appears here with the authors’ permission.


This post is based on the following papers:

  • A. Bajcsy*, D.P. Losey*, M.K. O’Malley, and A.D. Dragan. Learning Robot Objectives from Physical Human Robot Interaction. Conference on Robot Learning (CoRL), 2017.

  • A. Bajcsy, D.P. Losey, M.K. O’Malley, and A.D. Dragan. Learning from Physical Human Corrections, One Feature at a Time. International Conference on Human-Robot Interaction (HRI), 2018.

Why ethical robots might not be such a good idea after all

Last week my colleague Dieter Vanderelst presented our paper: The Dark Side of Ethical Robots at AIES 2018 in New Orleans.

I blogged about Dieter’s very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot’s logic it is transformed into a distinctly unethical robot – behaving either competitively or aggressively toward the proxy human.

Here are our paper’s key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
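To make concrete how small that difference is, the schematic action selector below flips the sign of a single human-outcome term; this is purely illustrative code, not the implementation used on the NAO robots.

```python
def choose_action(actions, predict_outcome, robot_benefit, human_benefit, ethical=True):
    """Pick the action with the best evaluated outcome. Flipping one sign turns a
    robot that seeks the most desirable outcome for the human ('ethical') into one
    that seeks the least desirable outcome ('aggressive')."""
    sign = 1.0 if ethical else -1.0
    def score(action):
        outcome = predict_outcome(action)
        return robot_benefit(outcome) + sign * human_benefit(outcome)
    return max(actions, key=score)
```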

On the face of it, given that we can (at least in principle) build explicitly ethical machines then it would seem that we have a moral imperative to do so; it would appear to be unethical not to build ethical machines when we have that option. But the findings of our paper call this assumption into serious doubt. Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.

  1. First there is the risk that an unscrupulous manufacturer might insert some unethical behaviours into their robots in order to exploit naive or vulnerable users for financial gain, or perhaps to gain some market advantage (here the VW diesel emissions scandal of 2015 comes to mind). There are no technical steps that would mitigate this risk, but the reputational damage from being found out is undoubtedly a significant disincentive. Compliance with ethical standards such as BS 8611 guide to the ethical design and application of robots and robotic systems, or emerging new IEEE P700X ‘human’ standards, would also support manufacturers in the ethical application of ethical robots.
  2. Perhaps more serious is the risk arising from robots that have user-adjustable ethics settings. Here the danger arises from the possibility that either the user or a technical support engineer mistakenly, or deliberately, chooses settings that move the robot’s behaviours outside an ‘ethical envelope’. Much depends of course on how the robot’s ethics are coded, but one can imagine the robot’s ethical rules expressed in a user-accessible format, for example, an XML-like script. No doubt the best way to guard against this risk is for robots to have no user-adjustable ethics settings, so that the robot’s ethics are hard-coded and not accessible to either users or support engineers.
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking. Given that cases of white-hat hacking of cars have already been reported, it’s not difficult to envisage a nightmare scenario in which the ethics settings for an entire fleet of driverless cars are hacked, transforming those vehicles into lethal weapons. Of course, driverless cars (or robots in general) without explicit ethics are also vulnerable to hacking, but weaponising such robots is far more challenging for the attacker. Explicitly ethical robots focus the robot’s behaviours through a small number of rules, which makes them, we think, uniquely vulnerable to cyber-attack.

Ok, taking the most serious of these risks: hacking, we can envisage several technical approaches to mitigating the risk of malicious hacking of a robot’s ethical rules. One would be to place those ethical rules behind strong encryption. Another would require a robot to authenticate its ethical rules by first connecting to a secure server. An authentication failure would disable those ethics, so that the robot defaults to operating without explicit ethical behaviours. Although feasible, these approaches would be unlikely to deter the most determined hackers, especially those who are prepared to resort to stealing encryption or authentication keys.

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts. Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviours.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I’m not so sure. I now believe that – even with strong ethical governance – the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

Here is the full reference to our paper:

Vanderelst D and Winfield AFT (2018), The Dark Side of Ethical Robots, AAAI/ACM Conf. on AI Ethics and Society (AIES 2018), New Orleans.

The robots will see you now

For more than a decade, biomimetic robots have been deployed alongside live animals to better understand the drivers of animal behavior, including social cues, fear, leadership, and even courtship. The encounters have always been unidirectional; the animals observe and respond to the robots. But in the lab of Maurizio Porfiri, a professor of mechanical and aerospace engineering at the NYU Tandon School of Engineering, the robots can now watch back.

#253: eSIM in Wearable Technology, with Karl Weaver

Image from the South China Morning Post

In this episode, Audrow Nash speaks with Karl Weaver (魏卡爾), formerly the Original Equipment Manufacturer Business Development Director for Oasis Smart SIM. Weaver discusses how wearable technology is growing as a form of payment system in China. He speaks about wireless technology, including Near-Field Communications (NFC) and Embedded SIM cards (eSIM), in wearable technology and in other applications, such as bike rental.

Karl Weaver (魏卡爾)

Since the recording of this interview, Karl has begun working for ARM promoting eSIM and iUICC to Greater China and Asia.

Karl Weaver (魏卡爾) previously worked as the Original Equipment Manufacturer Business Development Director (North America and Northeast Asia) for Oasis Smart SIM. Prior to Oasis, Karl worked for Rivetz Corp, a Massachusetts-based start-up promoting developer tools for the design-in of TEE-enabled applications on smartphones for payment and security. Karl also spent 5 years working in China for Gemalto (and Trustonic) as liaison and evangelist of embedded Mobile Near-Field Communications Payments & TEE security technologies to the OEM Smartphone/Tablet PC ecosystem. Weaver has a B.S. degree in Business Management from Salve Regina University and a Certification in Mandarin Chinese Language, Customs, and Culture from National Taiwan Normal University.

 


The millimeter-scale robot opens new avenues for microsurgery, microassembly and micromanipulation

Completely unfolded, the milliDelta measures roughly 15 mm by 15 mm by 20 mm, about the size of a cent piece, and uses piezoelectric actuators and flexural joints in its three arms to control high-frequency movements of a stage on top. Credit: Wyss Institute at Harvard University

By Benjamin Boettner

Because of their high precision and speed, Delta robots are deployed in many industrial processes, including pick-and-place assemblies, machining, welding and food packaging. Starting with the first version developed by Reymond Clavel for a chocolate factory to quickly place chocolate pralines in their packages, Delta robots use three individually controlled and lightweight arms that guide a platform to move fast and accurately in three directions. The platform is either used as a stage, similar to the ones being used in flight simulators, or coupled to a manipulating device that can, for example, grasp, move, and release objects in prescribed patterns. Over time, roboticists have designed smaller and smaller Delta robots for tasks in limited workspaces, yet shrinking them further to the millimeter scale with conventional manufacturing techniques and components has proven fruitless.

Reported in Science Robotics, a new design, the milliDelta robot, developed by Robert Wood’s team at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS) overcomes this miniaturization challenge. By integrating their microfabrication technique with high-performance composite materials that can incorporate flexural joints and bending actuators, the milliDelta can operate with high speed, force, and micrometer precision, which make it compatible with a range of micromanipulation tasks in manufacturing and medicine.

In 2011, inspired by pop-up books and origami, Wood’s team developed a micro-fabrication approach that enables the assembly of robots from flat sheets of composite materials. Pop-up MEMS (short for “microelectromechanical systems”) manufacturing has since been used for the construction of dynamic centimeter-scale machines that can simply walk away, or, as in the case of the RoboBee, can fly. In their new study, the researchers applied their approach to develop a Delta robot measuring a mere 15 mm-by-15 mm-by-20 mm.

“The physics of scaling told us that bringing down the size of Delta robots would increase their speed and acceleration, and pop-up MEMS manufacturing with its ability to use any material or combination of materials seemed an ideal way to attack this problem,” said Wood, who is a Core Faculty member at the Wyss Institute and co-leader of its Bioinspired Robotics platform. Wood is also the Charles River Professor of Engineering and Applied Sciences at SEAS. “This approach also allowed us to rapidly go through a number of iterations that led us to the final milliDelta.”

The milliDelta design incorporates a composite laminate structure with embedded flexural joints that approximate the more complicated joints found in large scale Delta robots. “With the help of an assembly jig, this laminate can be precisely folded into a millimeter-scale Delta robot. The milliDelta also utilizes piezoelectric actuators, which allow it to perform movements at frequencies 15 to 20 times higher than those of other currently available Delta robots,” said first-author Hayley McClintock, a Wyss Institute Staff Researcher on Wood’s team.

In addition, the team demonstrated that the milliDelta can operate in a workspace of about seven cubic millimeters and that it can apply forces and exhibit trajectories that, together with its high frequencies, could make it ideal for micromanipulations in industrial pick-and-place processes and microscopic surgeries such as retinal microsurgeries performed on the human eye.


Putting the milliDelta’s potential for microsurgeries and other micromanipulations to a first test, the researchers explored their robot as a hand tremor-cancelling device. “We first mapped the paths that the tip of a toothpick circumscribed when held by an individual, computed those, and fed them into the milliDelta robot, which was able to match and cancel them out,” said co-first author Fatma Zeynep Temel, Ph.D., a SEAS Postdoctoral Fellow in Wood’s team. The researchers think that specialized milliDelta robots could either be added on to existing robotic devices, or be developed as standalone devices like, for example, platforms for the manipulation of cells in research and clinical laboratories.

“The work by Wood’s team demonstrating the enhanced speed and control of their milliDelta robot at the millimeter scale opens entirely new avenues of development for industrial and medical robots, which are currently beyond the reach of existing technologies. It’s yet another example of how our Bioinspired Robotics platform is leading the way into the future,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.

Other authors on the study are Neel Doshi, a SEAS Graduate Student, and Je-sung Koh, Ph.D., a former Postdoctoral Fellow on Wood’s Wyss Institute/SEAS team and now Assistant Professor at Ajou University in Korea. The work was funded by Harvard’s Wyss Institute for Biologically Inspired Engineering and a National Defense Science and Engineering Graduate Fellowship.

Robotic interiors

MIT Media Lab spinout Ori is developing smart robotic furniture that transforms into a bedroom, working or storage area, or large closet — or slides back against the wall — to optimize space in small apartments.
Courtesy of Ori

By Rob Matheson

Imagine living in a cramped studio apartment in a large city — but being able to summon your bed or closet through a mobile app, call forth your desk using voice command, or have everything retract at the push of a button.

MIT Media Lab spinout Ori aims to make that type of robotic living a reality. The Boston-based startup is selling smart robotic furniture that transforms into a bedroom, working or storage area, or large closet — or slides back against the wall — to optimize space in small apartments.

Based on years of Media Lab work, Ori’s system is an L-shaped unit installed on a track along a wall, so it can slide back and forth. One side features a closet, a small fold-out desk, and several drawers and large cubbies. At the bottom is a pull-out bed. The other side of the unit includes a horizontal surface that can open out to form a table. The vertical surface above that features a large nook where a television can be placed, and additional drawers and cubbies. The third side, opposite the wall, contains still more shelving, and pegs to hang coats and other items.

Users control the unit through a control hub plugged into a wall, or through Ori’s mobile app or a smart home system, such as Amazon’s Echo.

Essentially, a small studio can at any time become a bedroom, lounge, walk-in closet, or living and working area, says Ori founder and CEO Hasier Larrea SM ’15. “We use robotics to … make small spaces act like they were two or three times bigger,” he says. “Around 200 square feet seems too small [total area] to live in, but a 200-square-foot bedroom or living room doesn’t seem so small.” Larrea was named to Forbes’ 2017 30 Under 30 list for his work with Ori.

The first commercial line of the systems, which goes for about $10,000, is now being sold to real estate developers in Boston and other major cities across the U.S. and Canada, for newly built or available apartments. In Boston, partners include Skanska, which has apartments in the Seaport; Samuels and Associates, with buildings around Harvard Square; and Hines for its Marina Bay units. Someday, Larrea says, the system could be bought directly by consumers.

Once the system catches on and the technology evolves, Larrea imagines future apartments could be furnished entirely with robotic furniture from Ori and other companies.

“These technologies can evolve for kitchens, bathrooms, and general partition walls. At some point, a two-bedroom apartment could turn into a large studio, transform into three rooms for your startup, or go into ‘party mode,’ where it all opens up again,” Larrea says. “Spaces will adapt to us, instead of us adapting to spaces, which is what we’ve been doing for so many years.”

Architectural robotics

In 2011, Larrea joined the Media Lab’s City Science research group, directed by Principal Research Scientist Kent Larson, which included his three co-founders: Chad Bean ’14, Carlos Rubio ’14, and Ivan Fernandez de Casadevante, who was a visiting researcher.

The group’s primary focus was tackling challenges of mass urbanization, as cities are becoming increasingly popular living destinations. “Data tells us that, in places like China and India, 600 million people will move from towns to cities in the next 15 years,” Larrea says. “Not only is the way we move through cities and feed people going to need to evolve, but so will the way people live and work in spaces.”

A second emerging phenomenon was the Internet of Things, which saw an influx of smart gadgets, including household items and furniture, designed to connect to the Internet. “Those two megatrends were bound to converge,” Larrea says.

The group started a project called CityHome, creating what it called “architectural robotics,” which integrated robotics, architecture, computer science, and engineering to design smart, modular furniture. The group prototyped a moveable wall that could be controlled via gesture control — which looked similar to today’s Ori system — and constructed a mock 200-square-foot studio apartment on the fifth floor of the Media Lab to test it out. Within the group, the unit was called “furniture with superpowers,” Larrea says, as it made small spaces seem bigger.

After they had constructed their working prototype, in early 2015 the researchers wanted to scale up. Inspiration came from the Media Lab-LEGO MindStorms collaboration from the late 1990s, where researchers created kits that incorporated sensors and motors inside traditional LEGO bricks so kids could build robots and researchers could prototype.

Drawing from that concept, the group built standardized components that could be assembled into a larger piece of modular furniture — what Ori now calls the robotic “muscle,” “skeleton,” “brains,” and the furniture “skins.” Specifically, the muscle consists of the track, motors, and electronics that actuate the system. The skeleton is the frame and the wheels that give the unit structure and movement. The brain is the microcomputer that controls all the safety features and connects the device to the Internet. And the skin is the various pieces of furniture that can be integrated, using the same robotic architecture.

Today, units fit full- or queen-size mattresses and come in different colors. In the future, however, any type of furniture could be integrated, creating units of various shapes, sizes, uses, and price. “The robotics will keep evolving but stay standardized … so, by adding different skins, you can really create anything you can imagine,” Larrea says.

Kickstarting Ori

Going through the Martin Trust Center for MIT Entrepreneurship’s summer accelerator delta V (then called the Global Founders Skills Accelerator) in 2015 “kickstarted” the startup, Larrea says. One lesson that particularly stood out: the importance of conducting market research. “At MIT, sometimes we assume, because we have such a cool technology, marketing it will be easy. … But we forget to talk to people,” he says.

In the early days, the co-founders put tech development aside to speak with owners of studios, offices, and hotels, as well as tenants. In doing so, they learned studio renters in particular had three major complaints: Couples wanted separate living areas, and everyone wanted walk-in closets and space to host parties. The startup then focused on developing a furniture unit that addressed those issues.

After earning one of its first investors in the Media Lab’s E14 Fund in fall 2015, the startup installed an early version of its system in several Boston apartments for renters to test and provide feedback. Soon after, the system hit apartments in 10 major cities across the U.S. and Canada, including San Francisco, Vancouver, Chicago, Miami, and New York. Over the past two years, the startup has used feedback from those pilots to refine the system into today’s commercial model.

Ori will ship an initial production run of 500 units for apartments over the next few months. Soon, Larrea says, the startup also aims to penetrate adjacent markets, such as hotels, dormitories, and offices. “The idea is to prove this isn’t a one-trick pony,” Larrea says. “It’s part of a more comprehensive strategy to unlock the potential of space.”

Robotics industry fundings, acquisitions & IPOs: January 2018


Twenty-five different startups were funded in January cumulatively raising $784 million; a great start for the new year. Four acquisitions were reported during the month while the IPO front lay waiting for something to happen.

According to Silicon Valley Bank’s annual survey of 1,000+ executives, startup founders are confident 2018 will be a good year for funding and for business conditions – except hiring and retaining foreign talent.

More than half of startups surveyed (51%) reported that at least one of their founders is an immigrant. One-third of startups said laws and regulations prompted them to locate facilities (or move non-sales operations) offshore. U.S. immigration policy was cited as being the biggest driver, followed by tax policies and the regulatory environment.

An interesting aside in the bank’s Chinese version of the survey is that about 2/3 of Chinese startups have at least one woman in a senior role, a considerably higher percentage than in the US where 71% of startups have no women on their board of directors and only 43% have women in executive positions.

Fundings

1. Carbon 3D, a 2013 Silicon Valley 3D manufacturing startup using elastomers and carbon steel, raised $200 million for a Series D funding led by Johnson & Johnson Innovation. Other funders included Baillie Gifford, Fidelity Management & Research Company, ARCHINA Capital, Hydra Ventures (the corporate venturing arm of Adidas), GE Ventures, JSR Corporation, and Emerson Elemental. The funding is also supported by current investors including BMW Ventures, Sequoia Capital, Silver Lake Kraftwerk, Reinet Investments, and others.

2. Pony.ai, a Fremont, Calif. and China-based self-driving startup, raised $112 million in Series A funding. Morningside Venture Capital and Legend Capital led the round, and were joined by investors including Sequoia China, IDG Capital, Hongtai Capital, Legend Star, Puhua Capital, Polaris Capital, DCM Ventures, Comcast Ventures and Silicon Valley Future Capital.

3. Rokid, a Chinese and Silicon Valley developer of award-winning consumer products rivaling Amazon’s Echo (in China), raised $100 million in a Series C round from Temasek Holdings with participation from Credit Suisse, IDG Capital Partners and CDIB Capital.

4. Nuro, a Silicon Valley startup developing a self-driving delivery vehicle, raised $92 million in a Series A funding led by Banyan Capital and Greylock Partners plus individual investor Simon Rothman.

5. Precision Hawk, a Raleigh, NC provider of drone technology and analytics, raised $75 million in funding. Third Point Ventures led the round, and was joined by investors including Comcast Ventures, Senator Investor Group, Constellation Technology Ventures and Syngenta Ventures. Existing investors Intel Capital, Millennium Technology Value Partners, DuPont, Verizon Ventures, and Indiana University’s Innovate Indiana Fund participated.

6. AImotive, a Hungarian startup offering full-stack autonomous vehicle technology, raised $38 million in Series C funding. B Capital Group and Prime Ventures led the round, and were joined by investors including Cisco Investments, Samsung Catalyst Fund, and Series A and B investors Robert Bosch Venture Capital, Inventure, Draper Associates and Day One Capital.

7. Knightscope, a Mountain View, Calif.-based maker of security robots, raised more than $25 million in funding. Investors include Konica Minolta and Bright Success Capital.

8. EasyMile, a French self-driving systems startup with tests running in Finland, Greece, France and Switzerland, raised around $17.4 million in a Series A round funded by Alstom.

9. Occipital, a Boulder, Colo.-based developer of mobile computer vision applications, raised $15 million in Series C funding, according to TechCrunch. Foundry Group led the round. With this round, Occipital is looking to expand its tracking platform into what it calls its “Perception Engine,” which will require it to make some deeper moves into machine learning, pushing into technologies that reside outside of simply defining the geometry of a space. The startup wants its tracking tech to recognize people and identify objects.

10. Voyage, a Silicon Valley autonomous taxi service, raised $15 million in a round led by InMotion Ventures with additional funds from SV Angel, Khosla Ventures, Initialized Capital, CRV and Amino Capital. Voyage recently announced plans to deploy a fleet of AV taxis in a 125,000-resident retirement community in Florida.

11. Huazhi IMT, a Chinese industrial robot integrator, received $15 million in early stage funding from Ying Capital and Banyan Capital.

Canvas Technology is building mobile robots for warehouses. (Credit: Canvas Technology)

12. Canvas Technology, a Boulder, CO startup building mobile robots for warehouses, raised $15 million in a Series A round led by Playground Global, with previous investors Xplorer Capital, AME Cloud Ventures and Morado Ventures.

13. Mall Parking, a Shenzhen mobile parking system developer, raised $12.5 million in Series B funding from the Holdfound Group.

14. Robotis Dynamixel, a Korean maker of actuators, parts and educational robot kits, raised $8.5 million from LG Electronics for a 10.12% share of the company.

15. Beijing Aqiu Technology (Aqrose), a Chinese start-up that applies machine vision and machine learning technology into industrial automation, raised $8 million in a series A round led by Baidu Ventures and DCM China with participation by Long Capital, Powercloud Venture Capital and Innoangel Fund.

16. Iris Automation, a San Francisco-based provider of collision avoidance for commercial drones, raised $8 million in Series A funding led by Bessemer Venture Partners and including Bee Partners.

17. SkySpecs, an Ann Arbor, Michigan-based automated infrastructure inspection company, raised $8 million in Series B funding. Investors include Statkraft Ventures, UL Ventures, Capital Midwest, Venture Investors, and Huron River Ventures.

18. CalmCar Vehicle Vision, a Chinese driving vision system provider, raised $4.6 million from Shenzhen Guozhong Venture Capital Management.

19. Doxel, a Silicon Valley construction productivity startup using AI and vision systems, raised $4.5 million in a round led by Andreessen Horowitz with participation from Alchemist Accelerator, Pear Ventures, SV Angel and Steelhead Ventures. Doxel uses autonomous devices with LIDAR and HD cameras to monitor a site every day. Its AI then processes this visual data, inspects installation quality, quantifies how much material has been installed correctly, and provides real-time feedback on productivity.

20. Altitude Angel, a UK startup providing a drone traffic management platform and airspace integration system, raised $4.5 million in Series A funding led by Seraphim Space Fund with participation from ADV and Frequentis AG.

21. Iron Ox, a San Francisco, Calif.-based developer of robots for crop harvesting, raised $3 million in seed funding. Eniac Ventures led the round, and was joined by investors including Amplify Partners.

22. Understand.ai, a machine-learning startup for training and validation data in autonomous vehicles, raised $2.8 million in seed funding. LEA Partners led the round, and was joined by investors including Frontline Ventures, Synapse Partners and Agile Partners.

23. Converge Industries, a San Francisco-based startup providing AI powered software for commercial drones for insurance adjusters, raised $750k in a seed round led by Samsung’s NEXT Ventures, Techstars and Kima Ventures with participation from Right Side Capital Management.

24. Maidbot, an Austin, TX industrial cleaning robot startup, received an undisclosed amount in an A round led by Bissell with participation from existing and new investors including 1517 Fund, Comet Labs, and Rough Draft Ventures, along with strategic hospitality and industrial cleaning organizations.

25. Connexient, a New York City-based provider of indoor mapping, navigation and location-based services for companies with large, complex buildings and campuses raised an undisclosed amount of funding from Riverside Acceleration Capital.

Acquisitions

1. Uber sold a 15% stake to SoftBank for $7.75 billion. $6.5 billion of the SoftBank investment was at a 30% discount against Uber’s private valuation of $69 billion. The additional $1.25 billion was at Uber’s current/higher valuation. That’s a major deal at a significant discount for SoftBank.

2. Key Technology, a Walla Walla, WA-based industrial robots integrator, was acquired by Duravant, a food processing equipment solutions provider, for $175 million. Duravant issued a tender offer to acquire all of the outstanding shares of Key common stock at $26.75 per share ($175 million).

3. PaR Systems, a Minnesota-based integrator of industrial robotics and also engineering services for mission-critical special situations, was acquired by the Pohlad family (which also owns the Minnesota Twins baseball team). PaR has 380 employees. Financial details were not disclosed. Pohlad bought PaR from MML Capital Partners, an equity fund.

4. Visual Components, a Finnish software and AI company providing 3D simulation software to manufacturers including Kuka, Foxconn and Samsung, was acquired by Kuka.

IPOs

Nada. Zip. Nothing. In the SV Bank survey referenced above, executives were asked what their realistic long-term goal and/or exit plan was and 57% responded they thought that acquisition was the most common path for them. Only 18% said they would go public.
