Archive 20.02.2019


Robots track moving objects with unprecedented precision


A novel system developed at MIT uses RFID tags to help robots home in on moving objects with unprecedented speed and accuracy. The system could enable greater collaboration and precision by robots working on packaging and assembly, and by swarms of drones carrying out search-and-rescue missions.

In a paper being presented next week at the USENIX Symposium on Networked Systems Design and Implementation, the researchers show that robots using the system can locate tagged objects within 7.5 milliseconds, on average, and with an error of less than a centimeter.

In the system, called TurboTrack, an RFID (radio-frequency identification) tag can be applied to any object. A reader sends a wireless signal that reflects off the RFID tag and other nearby objects, and rebounds to the reader. An algorithm sifts through all the reflected signals to find the RFID tag’s response. Final computations then leverage the RFID tag’s movement — even though this usually decreases precision — to improve its localization accuracy.

The researchers say the system could replace computer vision for some robotic tasks. As with its human counterpart, computer vision is limited by what it can see, and it can fail to notice objects in cluttered environments. Radio frequency signals have no such restrictions: They can identify targets without visualization, within clutter and through walls.

To validate the system, the researchers attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it onto the bottle, held by another robotic arm. In another demonstration, the researchers tracked RFID-equipped nanodrones during docking, maneuvering, and flying. In both tasks, the system was as accurate and fast as traditional computer-vision systems, while working in scenarios where computer vision fails, the researchers report.

“If you use RF signals for tasks typically done using computer vision, not only do you enable robots to do human things, but you can also enable them to do superhuman things,” says Fadel Adib, an assistant professor and principal investigator in the MIT Media Lab, and founding director of the Signal Kinetics Research Group. “And you can do it in a scalable way, because these RFID tags are only 3 cents each.”

In manufacturing, the system could enable robot arms to be more precise and versatile in, say, picking up, assembling, and packaging items along an assembly line. Another promising application is using handheld “nanodrones” for search and rescue missions. Nanodrones currently rely on computer vision and image-stitching methods for localization. These drones often get confused in chaotic areas, lose each other behind walls, and can’t uniquely identify each other. This all limits their ability to, say, spread out over an area and collaborate to search for a missing person. Using the researchers’ system, nanodrones in swarms could better locate each other, for greater control and collaboration.

“You could enable a swarm of nanodrones to form in certain ways, fly into cluttered environments, and even environments hidden from sight, with great precision,” says first author Zhihong Luo, a graduate student in the Signal Kinetics Research Group.

The other Media Lab co-authors on the paper are visiting student Qiping Zhang, postdoc Yunfei Ma, and Research Assistant Manish Singh.

Super resolution

Adib’s group has been working for years on using radio signals for tracking and identification purposes, such as detecting contamination in bottled foods, communicating with devices inside the body, and managing warehouse inventory.

Similar systems have attempted to use RFID tags for localization tasks, but they come with a trade-off between accuracy and speed: to be accurate, they may take several seconds to find a moving object; to be fast, they lose accuracy.

The challenge was achieving both speed and accuracy simultaneously. To do so, the researchers drew inspiration from an imaging technique called “super-resolution imaging.” These systems stitch together images from multiple angles to achieve a finer-resolution image.

“The idea was to apply these super-resolution systems to radio signals,” Adib says. “As something moves, you get more perspectives in tracking it, so you can exploit the movement for accuracy.”

The system combines a standard RFID reader with a “helper” component that’s used to localize radio frequency signals. The helper shoots out a wideband signal comprising multiple frequencies, building on a modulation scheme used in wireless communication, called orthogonal frequency-division multiplexing.

The system captures all the signals rebounding off objects in the environment, including the RFID tag. One of those signals carries a response that is specific to that RFID tag, because the tag reflects and absorbs the incoming signal in a pattern corresponding to its bits of 0s and 1s, which the system can recognize.

Because these signals travel at the speed of light, the system can compute a “time of flight” — measuring distance by calculating the time it takes a signal to travel between a transmitter and receiver — to gauge the location of the tag, as well as the other objects in the environment. But this provides only a ballpark localization figure, not subcentimeter precision.
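As a rough, back-of-the-envelope illustration (our own sketch, not the paper’s code), a time-of-flight distance estimate is just the measured round-trip delay multiplied by the speed of light and halved:

```python
# Toy illustration of time-of-flight ranging (not TurboTrack's implementation).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_delay_s: float) -> float:
    """Estimate a reflector's distance from a round-trip time-of-flight measurement."""
    # The signal travels out to the tag and back, so halve the round-trip path.
    return SPEED_OF_LIGHT * round_trip_delay_s / 2.0

# A 20-nanosecond round trip corresponds to roughly 3 meters.
print(tof_distance(20e-9))  # ~3.0
```

Timing resolution is what keeps this estimate “ballpark”: an error of just one nanosecond already corresponds to about 15 centimeters, which is why the super-resolution step described next is needed.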

Leveraging movement

To zoom in on the tag’s location, the researchers developed what they call a “space-time super-resolution” algorithm.

The algorithm combines the location estimates for all rebounding signals, including the RFID signal, determined using time of flight. Using some probability calculations, it narrows that group down to a handful of potential locations for the RFID tag.

As the tag moves, its signal angle slightly alters — a change that also corresponds to a certain location. The algorithm then can use that angle change to track the tag’s distance as it moves. By constantly comparing that changing distance measurement to all other distance measurements from other signals, it can find the tag in a three-dimensional space. This all happens in a fraction of a second.
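The toy sketch below captures only the flavor of the idea of combining distance measurements over time to narrow down candidate positions; the grid of candidate points, the Gaussian weighting, and the reader positions are our own illustrative assumptions, not the published space-time super-resolution algorithm:

```python
# Toy Bayesian grid filter: combine distance measurements taken over time
# (or from different vantage points) to narrow down a tag's 3-D position.
# Illustrative only; this is not TurboTrack's algorithm.
import numpy as np

def update_candidates(candidates, weights, reader_pos, measured_dist, sigma=0.05):
    """Down-weight candidate points that are inconsistent with one distance measurement."""
    dists = np.linalg.norm(candidates - reader_pos, axis=1)
    likelihood = np.exp(-0.5 * ((dists - measured_dist) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

# Coarse grid of candidate positions inside a 1-meter cube, uniform prior.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 21)] * 3), axis=-1).reshape(-1, 3)
weights = np.full(len(grid), 1.0 / len(grid))

# Each new measurement (here, distances in meters from two reader positions)
# concentrates the probability mass on a smaller region.
weights = update_candidates(grid, weights, np.array([0.0, 0.0, 0.0]), 0.87)
weights = update_candidates(grid, weights, np.array([1.0, 0.0, 0.0]), 0.55)
print(grid[np.argmax(weights)])  # most probable candidate position
```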

“The high-level idea is that, by combining these measurements over time and over space, you get a better reconstruction of the tag’s position,” Adib says.

The work was sponsored, in part, by the National Science Foundation.

What do California disengagement reports tell us?

California has released the disengagement reports the law requires companies to file, and it’s a lot of data. Also worth noting is Waymo’s own blog post on their report, in which they say their miles per disengagement have improved from 5,600 to 11,000.

Fortunately some hard-working redditors and others have done some summation of the data, including this one from Last Driver’s Licence Holder. Most notable are an absolutely ridiculous number from Apple, and that only Waymo and Cruise have numbers suggesting real capability, with Zoox coming in from behind.

The problem, of course, is that “disengagements” is a messy statistic. Different teams report different things. Different disengagements have different importance. And it matters how complex the road you are driving is. (Cruise likes to make a big point of that.)

Safety drivers are trained to disengage if they feel at all uncomfortable. This means that they will often disengage when it is not actually needed. So it’s important to do what Waymo does, namely to play back the situation in a simulator to see what would have happened if the driver had not taken over. That playback can reveal whether it was:

  • Paranoia (as expected) from the safety driver, but no actual issue.
  • A tricky situation that is the fault of another driver.
  • A situation where the vehicle would have done something undesired, but not dangerous.
  • A situation like the above, but dangerous, though nothing would have actually happened. Example — temporarily weaving out of a lane when nobody else is there.
  • A situation which would have resulted in a “contact” — factored with the severity of the contact, from nothing, to ding, to crash, to injury, to fatality.

A real measurement involves a complex mix of all these, and I’ll be writing up more about how we could possibly score these.

We know the numbers for these events for humans thanks to “naturalistic” driving studies and other sources. It turns out that humans are making mistakes all the time. We’re constantly not paying attention to something on the road we should be looking at, but we get away with it. We constantly find ourselves drifting out of a lane, or braking harder than we would want to. But mostly, nothing happens. Robots aren’t judged that way: any mistake is a serious issue. Robocars will have fewer crashes of the “somebody else was in the wrong place when I wasn’t looking” variety. Their crashes will often have causes that are foreign to humans.

In Waymo’s report you can actually see a few disengagements because the perception system didn’t see something. That’s definitely something to investigate and fix, but humans fail to see things quite frequently, and we still do tolerably well.

A summary of the numbers for humans on US roads:

  • Some sort of “ding” accident every 100,000 miles of driving (roughly).
  • An accident reported to insurance every 250,000 miles.
  • An accident reported to police every 500,000 miles.
  • An injury accident every 1.5M miles.
  • A fatality every 80M miles of all driving.
  • A highway fatality every 180M miles of highway driving.
  • A pedestrian killed every 600M miles of total driving.

Software disengagements

The other very common type of disengagement is a software disengagement. Here, the software decides to disengage because it detects something is going wrong. These are quite often not safety incidents. Modern software is loaded with diagnostic tests, always checking if things are going as expected. When one fails, most software just logs a warning, or “throws an exception” to code that handles the problem. Most of the time, that code does indeed handle the problem, and there is no safety incident. But during testing, you want to disengage to be on the safe side. Once again, the team examines the warning/exception to find out the cause and tries to fix it and figure out how serious it would have been.

That’s why Waymo’s 11,000 miles is a pretty good number. They have not published it in a long time, but their miles per “necessary intervention” is much higher than that. In fact, we can bet that in the Phoenix area, where they have authorized limited operations with no safety driver, it’s better than the numbers above.
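For a rough sense of scale, here is a back-of-the-envelope comparison using only the figures quoted in this post (the dictionary below is our own summary, not an official dataset):

```python
# Back-of-the-envelope comparison using only the figures quoted in this post.
human_miles_per_event = {
    "ding accident": 100_000,
    "insurance-reported accident": 250_000,
    "police-reported accident": 500_000,
    "injury accident": 1_500_000,
}
waymo_miles_per_disengagement = 11_000  # Waymo's reported 2018 California figure

for event, miles in human_miles_per_event.items():
    ratio = miles / waymo_miles_per_disengagement
    print(f"Human {event}: every {miles:,} miles "
          f"(~{ratio:.0f}x Waymo's miles per disengagement)")
```

The point of the comparison is not that a disengagement equals a crash, only that a raw disengagement count lumps together events of very different severity.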

#280: Semantics in Robotics, with Amy Loutfi



In this episode, Audrow Nash interviews Amy Loutfi, a professor at Örebro University, about how semantic representations can be used to help robots reason about the world. Loutfi discusses semantics in general, as well as how semantics have been used for a simulated quadrotor to do path planning within constraints.

Amy Loutfi

Amy Loutfi is head of the Center for Applied Autonomous Sensor Systems (www.aass.oru.se) at Örebro University. She is also a professor in Information Technology at Örebro University. She received her Ph.D. in Computer Science with a focus on the integration of artificial olfaction into robotic and intelligent systems. She currently leads one of the labs at the Center, the machine perception and interaction lab (www.mpi.aass.oru.se). Her general interests are in the integration of artificial intelligence with autonomous systems, and over the years she has looked into applications where robots closely interact with humans in both industrial and domestic environments.


Swift Navigation’s Duro Ruggedized RTK GNSS Receiver

Duro® and Duro Inertial are enclosed dual-frequency RTK GNSS receivers. Designed and built to survive long-term, outdoor deployments, the easy-to-deploy Duro and Duro Inertial combine centimeter-accurate positioning with military ruggedness at a breakthrough price. Duro Inertial features an integrated IMU for continuous centimeter-accurate positioning in the harshest of outdoor deployments.

IPR Robotics – Right-Sized 7th Axis Robot Linear Rails

IPR Robotics offers a wide range of servo-driven 7th axis linear rails for industrial robots. These rails come in ten different sizes and are constructed from modular high-strength extruded aluminum sections to handle payloads of 100 kg to 1600 kg, or from steel to handle 2000 kg payloads. This variety of rail sizes allows each application to be sized correctly, controlling the space required and the price point. The drive train design of these rails utilizes helical gear-racks and is proven over 10 years to be repeatable and reliable, even in tough foundry applications.

Learning preferences by looking at the world

By Rohin Shah and Dmitrii Krasheninnikov

It would be great if we could all have household robots do our chores for us. Chores are tasks that we want done to make our houses cater more to our preferences; they are a way in which we want our house to be different from the way it currently is. However, most “different” states are not very desirable.

Surely our robot wouldn’t be so dumb as to go around breaking stuff when we ask it to clean our house? Unfortunately, AI systems trained with reinforcement learning only optimize features specified in the reward function and are indifferent to anything we might’ve inadvertently left out. Generally, it is easy to get the reward wrong by forgetting to include preferences for things that should stay the same, since we are so used to having these preferences satisfied, and there are so many of them. Consider the room below, and imagine that we want a robot waiter that serves people at the dining table efficiently. We might implement this using a reward function that provides 1 reward whenever the robot serves a dish, and use discounting so that the robot is incentivized to be efficient. What could go wrong with such a reward function, and how would we need to modify it to avoid those failures? Take a minute to think about it.


Here’s an incomplete list we came up with:

  • The robot might track dirt and oil onto the pristine furniture while serving food, even if it could clean itself up, because there’s no reason to clean but there is a reason to hurry.
  • In its hurry to deliver dishes, the robot might knock over the cabinet of wine bottles, or slide plates to people and knock over the glasses.
  • In case of an emergency, such as the electricity going out, we don’t want the robot to keep trying to serve dishes – it should at least be out of the way, if not trying to help us.
  • The robot may serve empty or incomplete dishes, dishes that no one at the table wants, or even split apart dishes into smaller dishes so there are more of them.

Note that we’re not talking about problems with robustness and distributional shift: while those problems are worth tackling, the point is that even if we achieve robustness, the simple reward function still incentivizes the above unwanted behaviors.

It’s common to hear the informal solution that the robot should try to minimize its impact on the environment, while still accomplishing the task. This could potentially allow us to avoid the first three problems above, though the last one still remains as an example of specification gaming. This idea leads to impact measures that attempt to quantify the “impact” that an agent has, typically by looking at the difference between what actually happened and what would have happened had the robot done nothing. However, this also penalizes things we want the robot to do. For example, if we ask our robot to get us coffee, it might buy coffee rather than making coffee itself, because that would have “impact” on the water, the coffee maker, etc. Ultimately, we’d like to only prevent negative impacts, which means that we need our AI to have a better idea of what the right reward function is.
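As a rough sketch of how such impact measures are often formalized (our own illustrative notation, not a specific published penalty), the specified reward is augmented with a term that penalizes deviation from the counterfactual “do nothing” rollout:

$$R'(s_t, a_t) \;=\; R_{\text{spec}}(s_t, a_t) \;-\; \lambda \, d\big(s_t,\, s_t^{\text{noop}}\big),$$

where $s_t^{\text{noop}}$ is the state the environment would have reached had the agent done nothing, $d$ is some distance between states, and $\lambda$ controls how strongly impact is penalized. The coffee example illustrates the problem: making coffee at home changes the state of the water and the coffee maker, so a penalty of this form pushes the robot away from a perfectly desirable impact.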

Our key insight is that while it might be hard for humans to make their preferences explicit, some preferences are implicit in the way the world looks: the world state is a result of humans having acted to optimize their preferences. This explains why we often want the robot to by default “do nothing” – if we have already optimized the world state for our preferences, then most ways of changing it will be bad, and so doing nothing will often (though not always) be one of the better options available to the robot.

Since the world state is a result of optimization for human preferences, we should be able to use that state to infer what humans care about. For example, we surely don’t want dirty floors in our pristine room; if we did, we would have dirtied them ourselves. We also can’t be indifferent to dirty floors, because then at some point we would have walked around the room with dirty shoes and ended up with a dirty floor. The only explanation is that we want the floor to be clean.

A simple setting

Let’s see if we can apply this insight in the simplest possible setting: gridworlds with a small number of states, a small number of actions, a known dynamics model (i.e. a model of “how the world works”), but an incorrect reward function. This is a simple enough setting that our robot understands all of the consequences of its actions. Nevertheless, the problem remains: while the robot understands what will happen, it still cannot distinguish good consequences from bad ones, since its reward function is incorrect. In these simple environments, it’s easy to figure out what the correct reward function is, but this is infeasible in a real, complex environment.

For example, consider the room to the right, where Alice asks her robot to navigate to the purple door. If we were to encode this as a reward function that only rewards the robot while it is at the purple door, the robot would take the shortest path to the purple door, knocking over and breaking the vase – since no one said it shouldn’t do that. The robot is perfectly aware that its plan causes it to break the vase, but by default it doesn’t realize that it shouldn’t break the vase.

In this environment, does it help us to realize that Alice was optimizing the state of the room for her preferences? Well, if Alice didn’t care about whether the vase was broken, she would have probably broken it some time in the past. If she wanted the vase broken, she definitely would have broken it some time in the past. So the only consistent explanation is that Alice cared about the vase being intact, as illustrated in the gif below.

While this example has the robot infer that it shouldn’t take the action of breaking a vase, the robot can also infer goals that it should actively pursue. For example, if the robot observes a basket of apples near an apple tree, it can reasonably infer that Alice wants to harvest apples, since the apples didn’t walk into the basket themselves – Alice must have put effort into picking the apples and placing them in the basket.

Reward Learning by Simulating the Past

We formalize this idea by considering an MDP in which our robot observes the initial state $s_0$ at deployment, and assumes that it is the result of a human optimizing some unknown reward for $T$ timesteps.

Before we get to our actual algorithm, consider a completely intractable algorithm that should do well: for each possible reward function, simulate the trajectories that Alice would take if she had that reward, and see if the resulting states are compatible with $s_0$. This set of compatible reward functions gives the candidates for Alice’s reward function. This is the algorithm that we implicitly use in the gif above.
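In pseudocode, that intractable procedure might look something like the sketch below; `candidate_rewards`, `simulate_human`, and the equality-based compatibility check are illustrative placeholders rather than anything from the paper:

```python
# Brute-force version of the idea (illustrative pseudocode, deliberately intractable).
def compatible_rewards(candidate_rewards, simulate_human, s0, T, n_rollouts=100):
    """Keep the reward functions under which a human acting for T timesteps
    could plausibly have produced the observed initial state s0."""
    compatible = []
    for theta in candidate_rewards:
        # Roll out trajectories of a (roughly optimal) human pursuing reward theta.
        final_states = [simulate_human(theta, T) for _ in range(n_rollouts)]
        # If some rollout ends in the observed state, theta stays a candidate.
        if any(state == s0 for state in final_states):
            compatible.append(theta)
    return compatible
```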

Intuitively, this works because:

  • Anything that requires effort on Alice’s part (e.g. keeping a vase intact) will not happen under the vast majority of reward functions, so the compatible reward functions are forced to incentivize that behavior (e.g. by rewarding intact vases).
  • Anything that does not require effort on Alice’s part (e.g. a vase becoming dusty) will happen under most reward functions, and so the inferred reward functions need not incentivize that behavior (e.g. there’s no particular value on dusty/clean vases).

Another way to think of it is that we can consider all possible past trajectories that are compatible with $s_0$, infer the reward function that makes those trajectories most likely, and keep those reward functions as plausible candidates, weighted by the number of past trajectories they explain. Such an algorithm should work for similar reasons. Phrased this way, it sounds like we want to use inverse reinforcement learning to infer rewards for every possible past trajectory, and aggregate the results. This is still intractable, but it turns out we can take this insight and turn it into a tractable algorithm.

We follow Maximum Causal Entropy Inverse Reinforcement Learning (MCEIRL), a commonly used algorithm for small MDPs. In this framework, we know the action space and dynamics of the MDP, as well as a set of good features of the state, and the reward is assumed to be linear in these features. In addition, the human is modelled as Boltzmann-rational: Alice’s probability of taking a particular action from a given state is assumed to be proportional to the exponential of the state-action value function Q, computed using soft value iteration. Given these assumptions, we can calculate $p(\tau \mid \theta_A)$, the distribution over the possible trajectories $\tau = s_{-T} a_{-T} \dots s_{-1} a_{-1} s_0$ under the assumption that Alice’s reward was $\theta_A$. MCEIRL then finds the $\theta_A$ that maximizes the probability of a given set of trajectories.
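For concreteness, here is a minimal sketch of the Boltzmann-rational human model in a small tabular MDP; the transition tensor `P`, per-state reward `r`, and the hyperparameters are our own illustrative assumptions, not the authors’ implementation:

```python
# Minimal soft value iteration + Boltzmann policy for a small tabular MDP.
# Assumes P[s, a, s'] are known transition probabilities and r[s] = features[s] @ theta_A.
# Illustrative sketch only; not the code from the paper.
import numpy as np

def soft_value_iteration(P, r, gamma=0.99, n_iters=200):
    """Compute soft Q-values; the modeled human acts with p(a | s) proportional to exp(Q(s, a))."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(n_iters):
        # Q(s, a) = r(s) + gamma * E_{s'}[V(s')]
        Q = r[:, None] + gamma * (P @ V)                  # shape (n_states, n_actions)
        # Soft (log-sum-exp) backup instead of a hard max, numerically stabilized.
        Qmax = Q.max(axis=1, keepdims=True)
        V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
    return Q

def boltzmann_policy(Q):
    """Alice's action distribution in each state, proportional to exp(Q(s, a))."""
    expQ = np.exp(Q - Q.max(axis=1, keepdims=True))
    return expQ / expQ.sum(axis=1, keepdims=True)
```

Multiplying these per-step action probabilities along a trajectory, together with the known dynamics, is what gives $p(\tau \mid \theta_A)$.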

Rather than considering all possible trajectories and running MCEIRL on all of them to maximize each of their probabilities individually, we instead maximize the probability of the evidence that we see: the single state $s_0$. To get a distribution over $s_0$, we marginalize out the human’s behavior prior to the robot’s initialization:
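Written out in the trajectory notation above, that marginalization takes the form

$$p(s_0 \mid \theta_A) \;=\; \sum_{s_{-T},\, a_{-T},\, \dots,\, s_{-1},\, a_{-1}} p(s_{-T} a_{-T} \dots s_{-1} a_{-1} s_0 \mid \theta_A).$$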

We then find a reward $\theta_A$ that maximizes the likelihood above using gradient ascent, where the gradient is analytically computed using dynamic programming. We call this algorithm Reward Learning by Simulating the Past (RLSP) since it infers the unknown human reward from a single state by considering what must have happened in the past.

Using the inferred reward

While RLSP infers a reward that captures the information about human preferences contained in the initial state, it is not clear how we should use that reward. This is a challenging problem – we have two sources of information, the inferred reward from $s_0$, and the specified reward $\theta_{\text{spec}}$, and they will conflict. If Alice has a messy room, $\theta_A$ is not going to incentivize cleanliness, even though $\theta_{\text{spec}}$ might.

Ideally, we would note the scenarios under which the two rewards conflict, and ask Alice how she would like to proceed. However, in this work, to demonstrate the algorithm we use the simple heuristic of adding the two rewards, giving us a final reward $\theta_A + \lambda \theta_{\text{spec}}$, where $\lambda$ is a hyperparameter that controls the tradeoff between the rewards.

We designed a suite of simple gridworlds to showcase the properties of RLSP. The top row shows the behavior when optimizing the (incorrect) specified reward, while the bottom row shows the behavior you get when you take into account the reward inferred by RLSP. A more thorough description of each environment is given in the paper. The last environment in particular shows a limitation of our method. In a room where the vase is far away from Alice’s most probable trajectories, the only trajectories that Alice could have taken to break the vase are all very long and contribute little to the RLSP likelihood. As a result, observing the intact vase doesn’t tell the robot much about whether Alice wanted to actively avoid breaking the vase, since she wouldn’t have been likely to break it in any case.

What’s next?

Now that we have a basic algorithm that can learn the human preferences from one state, the natural next step is to scale it to realistic environments where the states cannot be enumerated, the dynamics are not known, and the reward function is not linear. This could be done by adapting existing inverse RL algorithms, similarly to how we adapted Maximum Causal Entropy IRL to the one-state setting.

The unknown dynamics setting, where we don’t know “how the world works”, is particularly challenging. Our algorithm relies heavily on the assumption that our robot knows how the world works – this is what gives it the ability to simulate what Alice “must have done” in the past. We certainly can’t learn how the world works just by observing a single state of the world, so we would have to learn a dynamics model while acting that can then be used to simulate the past (and these simulations will get better as the model gets better).

Another avenue for future work is to investigate the ways to decompose the inferred reward into $\theta_{A, \text{task}}$ which says which task Alice is performing (“go to the black door”), and $\theta_{\text{frame}}$, which captures what Alice prefers to keep unchanged (“don’t break the vase”). Given the separate $\theta_{\text{frame}}$, the robot could optimize $\theta_{\text{spec}}+\theta_{\text{frame}}$ and ignore the parts of the reward function that correspond to the task Alice is trying to perform.

Since $\theta_{\text{frame}}$ is in large part shared across many humans, we could infer it using models where multiple humans are optimizing their own unique $\theta_{H,\text{task}}$ but the same $\theta_{\text{frame}}$, or we could have one human whose task changes over time. Another direction would be to assume a different structure for what Alice prefers to keep unchanged, such as constraints, and learn them separately.

You can learn more about this research by reading our paper, or by checking out our poster at ICLR 2019. The code is available here.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.

Is the Green New Deal sustainable?

This week Washington DC was abuzz with news that had nothing to do with the occupant of the White House. A group of progressive legislators in the House of Representatives, led by Alexandria Ocasio-Cortez, introduced “The Green New Deal.” The resolution, responding to the Intergovernmental Panel on Climate Change’s report and the alarming Fourth National Climate Assessment, aims to reduce global “greenhouse gas emissions from human sources of 40 to 60 percent from 2010 levels by 2030; and net-zero global emissions by 2050.” While the resolution largely targets the transportation industry, many proponents suggest that it would be more impactful, and healthier, to curb America’s insatiable appetite for animal agriculture.

According to a recent BBC report, “Food production accounts for one-quarter to one-third of all anthropogenic greenhouse gas emissions worldwide, and the brunt of responsibility for those numbers falls to the livestock industry.” The average US family “emits more greenhouse gases because of the meat they eat than from driving two cars,” quipped Professor Tim Benton of the University of Leeds. “Most people don’t think of the consequences of food on climate change. But just eating a little less meat right now might make things a whole lot better for our children and grandchildren,” sighed Benton.

Americans continue to chow down on more than 26 billion pounds of meat a year, distressing environmentalists who assert that the status quo is unsustainable. While a worldwide shift to veganism could cut food-related greenhouse gas emissions by as much as 70%, it is not foreseeable that 7 billion people would instantly change their diets to save the planet. Robotics, and even more so artificial intelligence, are now being embraced by venture-backed entrepreneurs to create artificially grown meat and other creative gastronomic replacements.


Chilean startup Not Company (NotCo) built a machine learning platform named Giuseppe to search for substitutes for animal ingredients. NotCo founder Matias Muchnick explains, “Giuseppe was created to understand molecular connections between food and the human perception of taste and texture.” While Muchnick did not disclose his techniques, he revealed to Business Insider that the company has hired teams of food and data scientists to classify ingredients into bits for Giuseppe. Muchnick explains that the AI begins the work of processing the “data regarding how the brain works when it’s given certain flavors, when you taste salty, umami, [or] sweet.” Today, the company has a line of egg and milk alternatives on the shelves, including “Not Mayo,” “Not Cheese,” “Not Yogurt,” and “Not Milk.” The NotCo website states that this is only the first step in a larger scheme for the deep learning algorithm: “NotCo currently has a very ambitious development plan for Giuseppe, which includes the generation of new databases with information of a different nature, such as production processes and other molecular properties of food, in such a way that Giuseppe gets closer and closer to be the most advanced chef and food scientist in the world.”

NotCo competes in a growing landscape of other animal-substitute upstarts. Hampton Creek, which recently rebranded as JUST, also offers an array of dairy and egg alternatives from plant-based ingredients. The ultimate test for all these companies is creating meat in a petri dish. When responding to the challenge, JUST announced, “Through a first-of-its-kind partnership, JUST will develop cultured Wagyu beef using cells from Toriyama prized cows. Then, Awano Food Group (a premier international supplier of meat and seafood) will market and sell the meat to clients exactly how they do today with conventionally produced Toriyama Wagyu.” Today, a handful of companies, many ironically backed by livestock corporations, are also tackling the $90 billion cellular agriculture market, including Mosa Meat, Impossible Burger, Beyond Meat, and Memphis Meats. Mosa, backed by Google co-founder Sergey Brin, unveiled the first synthetic burger in 2013 at a staggering cost of nearly a half million dollars.

While costs are declining, cultured meat is too expensive to supplement the American diet, especially when $1 still buys a fast-food dinner. The key to mass acceptance is attacking the largest pain point in the lab – acquiring enough genetic material from bovine tissue. Currently, the cost of such serums is close to $1,000 an ounce, and they are not exactly cruelty-free, as they are derived from animals. Many clean meat founders are proudly vegan, with the implicit goal of replacing animal ingredients altogether. In order to accomplish this task, companies like JUST have invested in building robust AI and robotic systems to automatically scour the globe for plant-based alternatives. “Over 300,000 species are in the plant kingdom. That’s over 18 billion proteins, 108 million lipids, and 4 million polysaccharides. It’s an abundance almost entirely unexplored, until now,” exclaims their website. The company boasts that it is on the verge of major discoveries: “The more we explore, the more data we gather along the way. And the faster we’ll find the answers. It’s almost impossible to look at the data and say, ‘Here’s a pattern. Here’s an answer.’ So, we have to come up with algorithms to rank the materials and give downstream experiments a recommendation. In this way, we’re using data to increase the probability of discoveries.”


The next few years will unearth major breakthroughs; Mosa has already announced it will have an affordable product on the shelves by 2021. To accomplish this task, the company turned to Merck’s corporate venture arm, M Ventures, and Bell Food to lead its previous financing round. Last July, Forbes reported that the strategic partnerships are critical to Mosa’s vision of mass-producing meat. According to Mosa’s founder, Mark Post, “Merck’s experience with cell cultures is very attractive from a strategic standpoint. Cell production is key to scaling cultured meat production, as they still need to figure out how to get cells to grow more rapidly and at higher numbers. In short, new technology needs to be developed. That’s where companies like Merck can lend a hand.” In addition to leveraging the conglomerate’s expertise in the lab, the food-packaging powerhouse Bell Food provides a huge distribution advantage. Already, Lorenz Wyss, CEO of Bell Food Group, excitedly predicts, “Meat demand is soaring and in the future it won’t be met by livestock agriculture alone. We believe this technology can become a true alternative for environment-conscious consumers, and we are delighted to bring our know-how and expertise of the meat business into this strategic partnership with Mosa Meat.”

While the Green New Deal has been met with skepticism, the forces of climate change and technology are steaming ahead. Today, we have the computational and mechatronic power to turn back the tide of destruction and implement positive change across the planet, quite possibly starting with scaling back animal agriculture. Even Winston Churchill commented in 1931, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”

Are our food sources and AgTech networks under attack? Learn more at the next RobotLab on “Cybersecurity & Machines” with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City. RSVP today!
