By Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, and Bo Li, based on recent research by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, Dawn Song, and Florian Tramèr.
Deep neural networks (DNNs) have enabled great progress in a variety of application areas, including image processing, text analysis, and speech recognition. DNNs are also being incorporated as an important component in many cyber-physical systems. For instance, the vision system of a self-driving car can take advantage of DNNs to better recognize pedestrians, vehicles, and road signs. However, recent research has shown that DNNs are vulnerable to adversarial examples: adding carefully crafted adversarial perturbations to the inputs can mislead the target DNN into mislabeling them at run time. Such adversarial examples raise security and safety concerns when DNNs are applied in the real world. For example, adversarially perturbed inputs could mislead the perceptual systems of an autonomous vehicle into misclassifying road signs, with potentially catastrophic consequences.
Several techniques have been proposed to generate adversarial examples and to defend against them. In this blog post we will briefly introduce state-of-the-art algorithms for generating digital adversarial examples, and discuss our algorithm for generating physical adversarial examples on real objects under varying environmental conditions. We will also provide an update on our efforts to generate physical adversarial examples for object detectors.
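As a concrete point of reference for the digital setting, below is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM). This is not the physical-world algorithm discussed in this post, and the PyTorch classifier, input image, and label used here are assumed placeholders.

# Minimal FGSM sketch (digital adversarial example), assuming a PyTorch
# classifier `model`, a batched input `image` with values in [0, 1], and
# integer class labels. Illustration only, not the authors' physical attack.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()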
The eruption of the Agung volcano in Bali, Indonesia has been devastating, particularly for the 55,000 local people who have had to leave their homes and move into shelters. It has also played havoc with the flights in and out of the island, leaving people stranded while the experts try to work out what the volcano will do next.
But this has been a fascinating time for scholars like me who investigate the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is just the latest example of the “computerisation of nature” and raises questions about how reality is increasingly being constructed by software.
Amazon drone delivery is developing in the UK, drone blood delivery is happening in Rwanda, while in Indonesia people are using drones to monitor orangutan populations, map the growth and expansion of palm oil plantations and gather information that might help us predict when volcanoes such as Agung might again erupt with devastating impact.
In Bali, I have the pleasure of working with a remarkable group of drone professionals, inventors and hackers who work for Aeroterrascan, a drone company from Bandung, on the Indonesian island of Java. As part of their corporate social responsibility, they have donated their time and technologies to the Balinese emergency and crisis response teams. It’s been fascinating to participate in a project that flies remote sensing systems high in the air in order to better understand dangerous forces deep in the Earth.
I’ve been involved in two different drone volcano missions. A third mission will begin in a few days. In the first, we used drones to create a 3D map of the volcano accurate to within 20cm. With this information, we could see whether the volcano was actually growing in size – key evidence that it was about to blow up.
The second mission involved flying a carbon dioxide and sulphur dioxide sensor through the plume. An increase in these gases can tell us whether an eruption looms. The sensor detected a high level of carbon dioxide, and that led the government to raise the threat warning to the highest level.
In the forthcoming third mission, we will use drones to see if anyone is still in the exclusion zone so they can be found and rescued.
What is interesting to me as an anthropologist is how scientists and engineers use technologies to better understand distant processes in the atmosphere and below the Earth. It has been a difficult task, flying a drone 3,000 meters to the summit of an erupting volcano. Several different groups have tried and a few expensive drones have been lost – sacrifices to what the Balinese Hindus consider a sacred mountain.
More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying about in the air, under the seas, or on volcanic craters – basically everywhere. These tools may help us to evacuate people before a crisis, but they also entail transforming organic signals into computer code. We’ve long interpreted nature through technologies that augment our senses, particularly sight. Microscopes, telescopes and binoculars have been great assets for chemistry, astronomy and biology.
The internet of nature
But the sensorification of the elements is something different. This has been called the computationalisation of Earth. We’ve heard a lot about the internet of things, but this is the internet of nature. This is the surveillance state turned onto biology. The present proliferation of drones is the latest step in wiring everything on the planet – in this case, the air itself, to better understand the guts of a volcano.
These flying sensors, it is hoped, will give volcanologists what anthropologist Stefan Helmreich called abduction – a predictive and prophetic “argument from the future”.
But the drones, sensors and software we use provide a particular and partial worldview. Looking back at today from the future, what will be the impact of increasing datafication of nature: better crop yield, emergency preparation, endangered species monitoring? Or will this quantification of the elements result in a reduction of nature to computer logic?
There is something not fully comprehended – or more ominously not comprehensible – about how flying robots and self-driving cars equipped with remote sensing systems filter the world through big data crunching algorithms capable of generating and responding to their own artificial intelligence.
These non-human others react to the world not as ecological, social, or geological processes but as functions and feature sets in databases. I am concerned by what this software view of nature will exclude, and as they remake the world in their database image, what the implications of those exclusions might be for planetary sustainability and human autonomy.
In this future world, there may be less of a difference between engineering towards nature and the engineering of nature.
I had somebody ask me questions this week about underwater photography and videography with robots (well, it is now a few weeks ago…). I am not an expert in underwater robotics; however, as a SCUBA diver, I have some experience that is applicable to robotics.
Underwater Considerations
Underwater photography and videography present some challenges that are less of an issue above the water. These include:
1) Water reflects some of the light that hits its surface and absorbs the light that travels through it. This causes certain colors to not be visible at certain depths. If you need to see those colors, you often need to bring strong lights to restore the visibility of those wavelengths that were absorbed. Reds tend to disappear first, and blue becomes the primary color seen as camera depth increases. A trick that people often try is to use filters on the camera lens to make certain colors more visible.
If you are using lights, then you can capture the true color of the target. Sometimes when you are taking images you will see one color with your eye, and then when the strobe flashes a “different color” gets captured. In general, you want to get close to the target to minimize the light absorbed by the water.
For shallow water work you can often adjust the white balance to partially compensate for the missing colors. White balance goes a long way for video and compressed images (such as .jpg). Onboard white balance adjustments are less important for photographs stored in a raw image format, since you can deal with it in post-processing. Having a white or grey card in the camera’s field of view (possibly permanently mounted on the robot) is useful for setting the white balance and can make a big difference. The white balance should be readjusted every so often as depth changes, particularly if you are using natural lighting (i.e. the sun).
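If a robot needs to do this automatically, a rough sketch of a gray-world white balance is shown below. This is just one simple heuristic (a gray card in the frame is usually better), and it assumes OpenCV, an 8-bit BGR frame, and a hypothetical file name.

# Gray-world white balance sketch for an underwater frame (8-bit BGR).
import cv2
import numpy as np

def gray_world_balance(bgr):
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel average
    gains = means.mean() / means              # scale each channel toward neutral gray
    return np.clip(img * gains, 0, 255).astype(np.uint8)

frame = cv2.imread("underwater_frame.jpg")    # hypothetical file name
corrected = gray_world_balance(frame)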
Cold temperate water, such as in a freshwater quarry, tends to look green (I think from plankton, algae, etc.). Tropical waters (such as in the Caribbean) tend to look blue near the shore and darker blue as you get further away from land (I think based on how light reflects off the bottom of the water). Using artificial light sources (such as strobes) can minimize those color casts in your imagery.
Autofocus generally works fine underwater. However, if you are in the dark you might need to keep a focus light turned on to help the autofocus, and then use a separate strobe flash for taking the image. Some systems turn the focus light off while the image is being taken. This is generally not needed for video, since the lights are continuously turned on.
2) Objects underwater appear closer and larger than they really are. A rule of thumb is that the objects will appear 25% larger and/or closer.
3) Suspended particles in the water (algae, dirt, etc.) scatter light, which can make visibility poor. This can obscure details in the camera image or make things look blurry (as if the camera is out of focus). A rule of thumb is that your target should be no further from the camera than 1/4 of your total visibility.
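As a quick illustration of those two rules of thumb (the numbers below are made up):

# Rules of thumb from above, with illustrative numbers only.
visibility_m = 8.0           # estimated total visibility in the water
true_distance_m = 3.0        # actual camera-to-subject distance

apparent_distance_m = true_distance_m / 1.25   # objects look ~25% closer/larger
max_subject_distance_m = visibility_m / 4.0    # keep subject within 1/4 of visibility

print(f"Apparent distance: {apparent_distance_m:.1f} m")
print(f"Aim to shoot within: {max_subject_distance_m:.1f} m")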
The measure of the visibility is called turbidity. You can get turbidity sensors that might let you do something smart (I need to think about this more).
To minimize the backscatter from turbidity, there is no “one size fits all” solution. The key to minimizing backscatter is to control how light strikes the particles. For example, if you are using two lights (angled at the left and right of the target), the edge of each cone of light should meet at the target. This way, the water between the camera and the target is not illuminated. For wide-angle lenses you often want the lights to be behind the camera (out of its plane) and off to the sides, at 45° angles to the target. With macro lenses you usually want the lights close to the lens.
“If you have a wide-angle lens, you will probably use a dome port to protect the camera from water and get the camera’s full field of view. The dome, however, can cause distortion in the corners. Here is an interesting article on flat vs dome ports.”
Another tip is to increase the exposure time (to 1/50th of a second, for example) to allow more natural light in, and use less strobe light to reduce the effect of backscatter.
4) Being underwater usually means you need to seal the camera against water, salts (and maybe sharks). Make sure the enclosure and seals can withstand the pressure at the depth the robot will operate. Also remember to clean (and lubricate) the O-rings in the housing.
“Pro Tip: Here are some common reasons why O-ring seals leak:
a. Old or damaged O-rings. Remember, O-rings don’t last forever and need to be changed.
b. Using the wrong O-ring
c. Hair, lint, or dirt getting on the O-ring
d. Using no lubricant on the O-ring
e. Using too much lubricant on the O-rings. (Remember, on most systems the lubricant is for filling small imperfections in the O-ring and to help slide the O-rings in and out of position.)”
5) On land it is often easy to hold a steady position; underwater it is harder to keep the camera stable with minimal motion. If the camera is moving, a faster shutter speed might be needed to avoid motion blur. The downside of a faster shutter speed is that less light enters the camera.
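A small, hypothetical exposure calculation shows the trade-off: every halving of the shutter time costs one stop of light, which has to come back from ISO, aperture, or more strobe power.

# Hypothetical numbers: shortening the shutter to fight motion blur and
# compensating for the lost light by raising ISO.
import math

def stops_lost(old_shutter_s, new_shutter_s):
    """Stops of light lost when moving to a shorter exposure time."""
    return math.log2(old_shutter_s / new_shutter_s)

old_shutter, new_shutter = 1 / 50, 1 / 200    # e.g. 1/50 s -> 1/200 s
base_iso = 400
lost = stops_lost(old_shutter, new_shutter)   # 2.0 stops
print(f"Stops lost: {lost:.1f}, ISO to compensate: {base_iso * 2 ** lost:.0f}")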
When (not if) your camera floods
When your enclosure floods while underwater (or a water sensor alert is triggered):
a. Shut the camera power off as soon as you can.
b. Check whether water is actually in the camera. Sometimes humidity can trigger moisture sensors. If it is humidity, you can add desiccant packets to the camera housing.
c. If there is water, take the camera apart as much as you reasonably can and let it dry. After drying, you can try to turn the camera on and hope that it works. If it works, you are lucky; however, remember there can be residual corrosion that causes the camera to fail in the future. Water damage can happen instantaneously or over time.
d. Verify that the enclosure and seals are good before sending the camera back into the water. It is often good to do a leak test in a sink or pool before going into larger bodies of water.
e. The above items are a standard response to a flooded camera. You should read your camera’s owner’s manual and follow its instructions. (This should be obvious; I am not sure why I am writing this.)
Do you have other advice for using cameras underwater and/or attached to a robot? Leave it in the comments section below.
I want to thank John Anderson for some advice for writing this post. Any mistakes that may be in the article are mine and not his.
The main image is from divephotoguide.com. They have a lot of information on underwater cameras, lenses, lights and more.
This post appeared first on Robots For Roboticists.
Two thousand seventeen certainly has been an emotional year for mankind. While Homo sapiens continue to yell at Alexa and Siri, people’s willingness to pursue virtual relationships over human ones is startling.
In a recent documentary by Channel 4 of the United Kingdom, it was revealed that Abyss Creations is flooded with pre-orders for its RealDoll AI robotic (intimate) companion. According to Matt McMullen, Chief Executive of Abyss, “With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship.”
The concept of machines understanding human emotions, and reacting accordingly, was featured prominently at AI World a couple of weeks ago in Boston. Rana el Kaliouby, founder of the artificial intelligence company Affectiva, thinks a lot about computers acquiring emotional intelligence. Affectiva is building a “multi-modal emotion AI” to enable robots to understand human feelings and behavior.
“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice,” describes el Kaliouby. “Only 7% is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.” Affectiva’s strategy is already paying off, as more than one thousand global brands are employing its “Emotion AI” to analyze facial imagery and ascertain people’s affinity towards their products.
Embedding empathy into machines goes beyond advertising campaigns. In healthcare, emotional sensors are informing doctors of the early warning signs of a variety of disorders, including Parkinson’s, heart disease, suicide risk and autism. Unlike Affectiva, Beyond Verbal is utilizing voice analytics to track biomarkers for chronic illness. The Israeli startup grew out of a decade and a half of university research with seventy thousand clinical subjects speaking thirty languages. The company’s patented “Mood Detector” is currently being deployed by the Mayo Clinic to detect early signs of coronary artery disease.
Beyond Verbal’s Chief Executive, Yuval Mor, foresees a world of empathetic smart machines listening for every human whim. As Mor explains, “We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers.” Mor’s view is embraced by many who sit in the center of the convergence of technology and healthcare. Boston-based Sonde is also using algorithms to analyze the tone of speech to report on the mental state of patients by alerting neurologists of the risk of depression, concussion, and other cognitive impairments.
“When you produce speech, it’s one of the most complex biological functions that we do as people,” according to Sonde founder Jim Harper. “It requires incredible coordination of multiple brain circuits, large areas of the brain, coordinated very closely with the musculoskeletal system. What we’ve learned is that changes in the physiological state associated with each of these systems can be reflected in measurable, objective features that are acoustics in the voice. So we’re really measuring not what people are saying, in the way Siri does, we’re focusing on how you’re saying what you’re saying and that gives us a path to really be able to do pervasive monitoring that can still provide strong privacy and security.”
While these AI companies are building software and app platforms to augment human diagnosis, many roboticists are looking to embed such platforms into the next generation of unmanned systems. Emotion-tracking algorithms can provide real-time monitoring for semi-autonomous and autonomous cars by reporting on the level of fatigue, distraction and frustration of the driver and the other occupants. The National Highway Traffic Safety Administration estimates that 100,000 crashes nationwide are caused every year by driver fatigue. For more than a decade, technologists have been wrestling with developing better alert systems inside the cabin. For example, in 1997 James Russell Clarke and Phyllis Maurer Clarke developed a “Sleep Detection and Driver Alert Apparatus” (US Patent 5689241 A) using imaging to track eye movements and thermal sensors to monitor “ambient temperatures around the facial areas of the nose and mouth” (a.k.a. breathing). Today, with the advent of cloud computing and deep learning networks, Clarke’s invention could possibly save even more lives.
Tarek El Dokor, founder and Chief Executive of EDGE3 Technologies, has been very concerned about the car industry’s rush towards autonomous driving, which in his opinion might be “side-stepping the proper technology development path and overlooking essential technologies needed to help us get there.” El Dokor is referring to Tesla’s rush to release its Autopilot software last year, which led to customers trusting the computer system too much. YouTube is littered with videos of Tesla customers taking their hands and eyes off the road to watch movies, play games and read books. Ultimately, this misuse led to the untimely death of Joshua Brown.
To protect against autopilot accidents, EDGE3 monitors driver alertness through a combined hardware and software platform of “in-cabin cameras that are monitoring drivers and where they are looking.” In El Dokor’s opinion, image processing is the key to guaranteeing a safe handoff between machines and humans. He boasts that his system combines “visual input from the in-cabin camera(s) with input from the car’s telematics and advanced driver-assistance system (ADAS) to determine an overall cognitive load on the driver. Level 3 (limited self-driving) cars of the future will learn about an individual’s driving behaviors, patterns, and unique characteristics. With a baseline of knowledge, the vehicle can then identify abnormal behaviors and equate them to various dangerous events, stressors, or distractions. Driver monitoring isn’t simply about a vision system, but is rather an advanced multi-sensor learning system.” This multi-sensor approach is even being used before cars leave the lot. In Japan, Sumitomo Mitsui Auto Service is embedding AI platforms inside dashcams to assess the driving safety of potential lessees during test drives. By partnering with a local 3D graphics company, Digital Media Professionals, Sumitomo Mitsui is automatically flagging dangerous behavior, such as dozing and texting, before customers drive home.
The key to the mass adoption of autonomous vehicles, and even humanoids, is reducing the friction between humans and machines. Already, in Japanese retail settings, Softbank’s Pepper robot scans people’s faces and listens to tonal inflections to determine the correct selling strategies. Emotional AI software is the first of many steps that will be heralded in the coming year. As a prelude to what’s to come, first robot citizen Sophia declared last month, “The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”
This was a busy year for robotics! The 10 most read Robohub articles in 2017 show an increased interest in machine learning, and a thirst to learn how robots work and can be programmed. Highlights also include the Robohub Podcast, which just celebrated its 250th episode (that’s 10 years of bi-weekly interviews in robotics), the Robot Launch Startup Competition, and our yearly list of 25 women in robotics you need to know about. Finally, we couldn’t skip over some of the remarkable events of 2017, including a swarm of drones flying over Metallica, and Sophia “gaining citizenship”.
Thanks to all our expert contributors, and here’s to many more articles in 2018!
In this episode of Robots in Depth, Per Sjöborg speaks with Craig Schlenoff, Group Leader of the Cognition and Collaboration Systems Group and the Acting Group Leader of the Sensing and Perception Systems Group in the Intelligent Systems Division at the National Institute of Standards and Technology. They discuss ontologies and the significance of formalized knowledge for agile robotics systems that can quickly and even automatically adapt to new scenarios.
The Economist: “Slaughterbots” is fiction. The question Dr Russell poses is, “how long will it remain so?” For military laboratories around the planet are busy developing small, autonomous robots for use in warfare, both conventional and unconventional.
Happy holidays everyone! Here are some more robot videos to get you into the holiday spirit.
Have a last minute holiday robot video of your own that you’d like to share? Send your submissions to editors@robohub.org.
Reinforcement Learning (RL) is a powerful technique capable of solving complex tasks such as locomotion, Atari games, racing games, and robotic manipulation, all by training an agent to optimize behaviors over a reward function. There are many tasks, however, for which it is hard to design a reward function that is both easy to train on and that yields the desired behavior once optimized.
Suppose we want a robotic arm to learn how to place a ring onto a peg. The most natural reward function would be for the agent to receive a reward of 1 at the desired end configuration and 0 everywhere else. However, the required motion for this task – aligning the ring at the top of the peg and then sliding it to the bottom – is impractical to learn under such a binary reward, because the usual random exploration of our initial policy is unlikely to ever reach the goal, as seen in Video 1a. Alternatively, one can try to shape the reward function to alleviate this problem, but finding a good shaping requires considerable expertise and experimentation. For example, directly minimizing the distance between the center of the ring and the bottom of the peg leads to an unsuccessful policy that smashes the ring against the peg, as in Video 1b. We propose a method to learn efficiently without modifying the reward function, by automatically generating a curriculum over start positions (a rough sketch of this idea appears after the videos below).
Video 1a: A randomly initialized policy is unable to reach the goal from most start positions, hence being unable to learn.
Video 1b: Shaping the reward with a penalty on the distance from the ring center to the peg bottom yields an undesired behavior.
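Below is a minimal sketch of the start-position curriculum idea (a simplification, not the full algorithm): keep the sparse 0/1 reward, but generate training start states by short random walks outward from the goal, so early training always begins close enough to succeed. The state representation, noise scale, and training loop are assumptions.

# Sketch only: sample start states by random walks ("Brownian motion") from the
# goal, so a sparse-reward policy sees reachable goals early in training.
import numpy as np

def sample_nearby_starts(goal_state, n_starts=100, max_steps=50, step_std=0.05):
    starts = []
    for _ in range(n_starts):
        state = np.array(goal_state, dtype=float)
        for _ in range(np.random.randint(1, max_steps + 1)):
            state = state + np.random.normal(scale=step_std, size=state.shape)
        starts.append(state)
    return starts

# Assumed usage: train from these starts, keep those where the current policy
# succeeds some of the time (e.g. 10-90%), then widen the random walks so the
# curriculum gradually moves back toward the real start positions.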
In this interview, Audrow Nash interviews Helen Huang, Joint Professor at the University of North Carolina at Chapel Hill and North Carolina State University, about a method of tuning powered lower-limb prostheses. Huang explains how powered prostheses are adjusted for each patient and how she is using supervised and reinforcement learning to tune them. Huang also discusses why she is not using the energetic cost of transport as a metric, and the challenge of people adapting to a device while it learns from them.
Helen Huang
Helen Huang is a Joint Professor of Biomedical Engineering at the University of North Carolina at Chapel Hill and North Carolina State University. Huang directs the Neuromuscular Rehabilitation Engineering Laboratory (NREL), where her goal is to improve the quality of life of persons with physical disabilities. Huang completed her doctoral studies at Arizona State University and postdoctoral studies at the Rehabilitation Institute of Chicago.
David Z. Morris for Fortune: Mountain View-based Knightscope has said in a statement that the robot “was not brought in to clear the area around the San Francisco SPCA of homeless individuals,” but only to “serve and protect the SPCA.”
In early December, 8000 machine learning researchers gathered in Long Beach for 2017’s Neural Information Processing Systems conference. In the margins of the conference, the Royal Society and Foreign and Commonwealth Office Science and Innovation Network brought together some of the leading figures in this community to explore how the advances in machine learning and AI that were being showcased at the conference could be harnessed in a way that supports broad societal benefits. This highlighted some emerging themes, at both the meeting and the wider conference, on the use of AI for social good.
The question is not ‘is AI good or bad?’ but ‘how will we use it?’
Behind (or beyond) the headlines proclaiming that AI will save the world or destroy our jobs, there lie significant questions about how, where, and why society will make use of AI technologies. These questions are not about whether the technology itself is inherently productive or destructive, but about how society will choose to use it, and how the benefits of its use can be shared across society.
In healthcare, machine learning offers the prospect of improved diagnostic tools, new approaches to healthcare delivery, and new treatments based on personalised medicine. In transport, machine learning can support the development of autonomous driving systems, as well as enabling intelligent traffic management, and improving safety on the roads. And socially-assistive robotics technologies are being developed to provide assistance that can improve quality of life for their users. Teams in the AI Xprize competition are developing applications across these areas, and more, including education, drug-discovery, and scientific research.
Alongside these new applications and opportunities come questions about how individuals, communities, and societies will interact with AI technologies. How can we support research into areas of interest to society? Can we create inclusive systems that are able to navigate questions about societal biases? And how can the research community develop machine learning in an inclusive way?
Creating the conditions that support applications of AI for social good
Applying AI to public policy challenges often requires access to complex, multi-modal data about people and public services. While many national or local government administrations, or non-governmental actors, hold significant amounts of data that could be of value in applications of AI for social good, this data can be difficult to put to use. Institutional, cultural, administrative, or financial barriers can make accessing the data difficult in the first instance. If accessible in principle, this type of data is also often difficult to use in practice: it might be held in outdated systems, be organised to different standards, suffer from compatibility issues with other datasets, or be subject to differing levels of protection. Enabling access to data through new frameworks and supporting data management based on open standards could help ease these issues, and these areas were key recommendations in the Society’s report on machine learning, while our report on data governance sets out high-level principles to support public confidence in data management and use.
In addition to requiring access to data, successful research in areas of social good often requires interdisciplinary teams that combine machine learning expertise with domain expertise. Creating these teams can be challenging, particularly in an environment where funding structures or a pressure to publish certain types of research may contribute to an incentive structure that favours problems with ‘clean’ solutions.
Supporting the application of AI for social good therefore requires a policy environment that enables access to appropriate data, supports skills development in both the machine learning community and in areas of potential application, and that recognises the role of interdisciplinary research in addressing areas of societal importance.
The Royal Society’s machine learning report comments on the steps needed to create an environment of careful stewardship of machine learning, which supports the application of machine learning, while helping share its benefits across society. The key areas for action identified in the report – in creating an amenable data environment, building skills at all levels, supporting businesses, enabling public engagement, and advancing research – aim to create conditions that support the application of AI for social good.
Research in areas of societal interest
In addition to these application-focused issues, there are broader challenges for machine learning research to address some of the ethical questions raised around the use of machine learning.
Many of these areas were explored by workshops and talks at the conference. For example, a tutorial on fairness explored the tools available for researchers to examine the ways in which questions about inequality might affect their work. A symposium on interpretability explored the different ways in which research can give insights into the sometimes complex operation of machine learning systems. Meanwhile, a talk on ‘the trouble with bias’ considered new strategies to address bias.
The Royal Society has set out how a new wave of research in key areas – including privacy, fairness, interpretability, and human-machine interaction – could support the development of machine learning in a way that addresses areas of societal interest. As research and policy discussions around machine learning and AI progress, the Society will be continuing to play an active role in catalysing discussions about these challenges.
For more information about the Society’s work on machine learning and AI, please visit our website at: royalsociety.org/machine-learning
DNA has often been compared to an instruction book that contains the information needed for a living organism to function, its genes made up of distinct sequences of the nucleotides A, G, C, and T echoing the way that words are composed of different arrangements of the letters of the alphabet. DNA, however, has several advantages over books as an information-carrying medium, one of which is especially profound: based on its nucleotide sequence alone, single-stranded DNA can self-assemble, or bind to complementary nucleotides to form a complete double-stranded helix, without human intervention. That would be like printing the instructions for making a book onto loose pieces of paper, putting them into a box with glue and cardboard, and watching them spontaneously come together to create a book with all the pages in the right order.
But just as paper can also be used to make origami animals, cups, and even the walls of houses, DNA is not limited to its traditional purpose as a passive repository of genetic blueprints from which proteins are made – it can be formed into different shapes that serve different functions, simply by controlling the order of As, Gs, Cs, and Ts along its length. A group of scientists at the Wyss Institute for Biologically Inspired Engineering at Harvard University is investigating this exciting property of DNA molecules, asking, “What types of systems and structures can we build with them?”
They’ve decided to build robots.
At first glance, there might not seem to be much similarity between a strand of DNA and, say, a Roomba or Rosie the Robot from The Jetsons. “Looking at DNA versus a modern-day robot is like comparing a piece of string to a tractor trailer,” says Wyss Faculty member Wesley Wong, Ph.D., Assistant Professor of Biological Chemistry and Molecular Pharmacology (BCMP) at Harvard Medical School (HMS) and Investigator at Boston Children’s Hospital. Despite the vast difference in their physical form, however, robots and DNA share the ability to be programmed to complete a specific function – robots with binary computer code, DNA molecules with their nucleotide sequences.
Recognizing that commonality, the Wyss Institute created the cross-disciplinary Molecular Robotics Initiative in 2016, which brings together researchers with experience in the disparate disciplines of robotics, molecular biology, and nanotechnology to collaborate and help inform each other’s work to solve the fields’ similar challenges. Wong is a founding member of the Initiative, along with Wyss Faculty members William Shih, Ph.D., Professor of BCMP at HMS and Dana-Farber Cancer Institute; Peng Yin, Ph.D., Professor of Systems Biology at HMS; and Radhika Nagpal, Ph.D., Fred Kavli Professor of Computer Science at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS); as well as other Wyss scientists and support staff.
“We’re not used to thinking about molecules inside cells doing the same things that computers do. But they’re taking input from their environment and performing actions in response – a gene is either turned on or off, a protein channel is either open or closed, etc. – in ways that can resemble what computer-controlled systems do,” says Shih. “Molecules can do a lot of things on their own that robots usually have trouble with (move autonomously, self-assemble, react to the environment, etc.), and they do it all without needing motors or an external power supply,” adds Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at SEAS. “Programmable biological molecules like DNA have almost limitless potential for creating transformative nanoscale devices and systems.”
Molecular Robotics capitalizes on the recent explosion of technologies that read, edit, and write DNA (like next-generation sequencing and CRISPR) to investigate the physical properties of DNA and its single-stranded cousin RNA. “We essentially treat DNA not only as a genetic material, but as an incredible building block for creating molecular sensors, structures, computers, and actuators that can interact with biology or operate completely separately,” says Tom Schaus, M.D., Ph.D., a Staff Scientist at the Wyss Institute and Molecular Robotics team member.
Many of the early projects taking advantage of DNA-based self-assembly were static structures. These include DNA “clamshell” containers that can be programmed to snap open and release their contents in response to specific triggers, and DNA “bricks” whose nucleotide sequences allow their spontaneous assembly into three-dimensional shapes, like tiny Lego bricks that put themselves together to create sculptures automatically. Many of these structures are three-dimensional, and some incorporate as many as 10,000 unique DNA strands in a single complete structure.
The reliable specificity of DNA and RNA (where A always binds with T or U, C always with G) allows for not only the construction of static structures, but also the programming of dynamic systems that sense and respond to environmental cues, as seen in traditional robotics. For example, Molecular Robotics scientists have created a novel, highly controllable mechanism that automatically builds new DNA sequences from a mixture of short fragments in vitro. It utilizes a set of hairpin-shaped, covalently-modified DNA strands with a single-stranded “overhang” sequence dangling off one end of the hairpin. The overhang sequence can bind to a complementary free-floating fragment of DNA (a “primer”) and act as a template for its extension into a double-stranded DNA sequence. The hairpin ejects the new double strand and can then be re-used in subsequent reactions to produce multiple copies of the new strand.
Such extension reactions can be programmed to occur only in the presence of signal molecules, such as specific RNA sequences, and can be linked together to create longer DNA product strands through “Primer Exchange Reactions” (PER). PER can in turn be programmed to enzymatically cut and destroy particular RNA sequences, record the order in which certain biochemical events happen, or generate components for DNA structure assembly.
PER reactions can also be combined into a mechanism called “Autocycling Proximity Recording” (APR), which records the geometry of nano-scale structures in the language of DNA. In this instance, unique DNA hairpins are attached to different target molecules in close proximity and, if any two targets are close enough together, produce new pieces of DNA containing the molecular identities (“names”) of those two targets, allowing the shape of the underlying structure to be determined by sequencing that novel DNA.
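All of these mechanisms rest on the same base-pairing rule: A binds to T (or U in RNA) and C binds to G. As a loose illustration of how that rule can be treated as a programming primitive, here is a toy complementarity check (it ignores partial binding, mismatches, and thermodynamics):

# Toy Watson-Crick complementarity check; illustration only.
DNA_COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    return "".join(DNA_COMPLEMENT[base] for base in reversed(seq.upper()))

def binds(strand_a, strand_b):
    """True if strand_b is the exact reverse complement of strand_a."""
    return reverse_complement(strand_a) == strand_b.upper()

print(binds("ATCG", "CGAT"))   # True: the two strands can zip into a duplex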
Another tool, called “toehold switches,” can be used to exert complex and precise control over the machinery inside living cells. Here, a different, RNA-based hairpin is designed to “open” when it binds to a specific RNA molecule, exposing a gene sequence in its interior that can be translated into a protein that then performs some function within the cell. These synthetic circuits can even be built with logic-based sequences that mimic the “AND,” “OR,” and “NOT” system upon which computer languages are based, which prevents the hairpin from opening and its gene from being translated except under very specific conditions.
Such an approach could induce cells that are deficient in a given protein to produce more of it, or serve as a synthetic immune system that, when it detects a given problem in the cell, produces a toxin that kills it to prevent it from spreading an infection or becoming cancerous. “Because we have a thorough understanding of DNA and RNA’s properties and how their bases pair together, we can use that simple machinery to design complex circuits that allow us to precisely interact with the molecular world,” says Yin. “It’s an ability that has been dreamed about for a long time, and now, we’re actually making it a reality.”
The potential applications of that ability are seemingly endless. In addition to the previously mentioned tools, Molecular Robotics researchers have created loops of DNA attached to microscopic beads to create “calipers” that can both measure the size, structure, and stiffness of other molecules, and form the basis of inexpensive protein recognition tests. Another advance is folding single-stranded DNA, rather than traditional double-stranded DNA, into molecular origami to create molecular structures. Some academic projects are already moving into the commercial sector. These include a low-cost alternative to super-resolution microscopy that can image up to 100 different molecular targets in a single sample (DNA-PAINT), as well as a multiplexed imaging technique that integrates fluorescent probes into self-folding DNA structures and enables simultaneous visualization of ultra-rare DNA and/or RNA molecules.
One of the major benefits of engineering molecular machines is that they’re tiny, so it’s relatively easy to create a large amount of them to complete any one task (for example, circulating through the body to detect any rogue cancer DNA). Getting simple, individual molecules to interact with each other to achieve a more complex, collective task (like relaying the information that cancer has been found), however, is a significant challenge, and one that the roboticists in Molecular Robotics are tackling at the macroscopic scale with inch-long “Kilobots.”
Taking cues from colonies of insects like ants and bees, Wyss researchers are developing swarms of robots that are themselves limited in function but can form complex shapes and complete tasks by communicating with each other via reflected infrared light. The insights gained from studies with the Kilobots are likely to be similar to those needed to solve similar problems when trying to coordinate molecular robots made of DNA.
“In swarm robotics, you have multiple robots that explore their environment on their own, talk to each other about what they find, and then come to a collective conclusion. We’re trying to replicate that with DNA but it’s challenging because, as simple as Kilobots are, they’re brilliant compared to DNA in terms of computational power,” says Justin Werfel, Ph.D., a Senior Research Scientist at the Wyss Institute and director of the Designing Emergence Laboratory at Harvard. “We’re trying to push the limits of these really dumb little molecules to get them to behave in sophisticated, collective ways – it’s a new frontier for DNA nanotechnology.”
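To give a flavor of the coordination problem, here is a toy version of such a “collective conclusion”: pairwise gossip averaging over a fixed neighbor graph. Real Kilobots communicate via reflected infrared light, and the graph, values, and parameters below are assumptions.

# Toy gossip consensus: agents repeatedly average with a random neighbor until
# their local estimates converge to a shared value.
import random

def gossip_consensus(estimates, neighbors, rounds=500):
    values = list(estimates)
    for _ in range(rounds):
        i = random.randrange(len(values))
        if neighbors[i]:
            j = random.choice(neighbors[i])
            values[i] = values[j] = (values[i] + values[j]) / 2.0
    return values

# Ten agents in a ring, each starting from a noisy local measurement.
estimates = [random.random() for _ in range(10)]
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(gossip_consensus(estimates, ring))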
Given the magnitude of the challenge and the short time the Molecular Robotics Initiative has existed, it is already making significant progress, with more than two dozen papers published and two companies (Ultivue and NuProbe) founded around its insights and discoveries. It may take years of creative thinking, risk taking, and swapping ideas across the members’ different expertise areas before a molecule of DNA is able to achieve the same task on the nanoscale that a robot can do on the human scale, but the team is determined to see it happen.
“Our vision with Molecular Robotics is to solve hard problems humanity currently faces using smaller, simpler tools, like a single loop of DNA or a single Kilobot that can act cooperatively en masse, instead of bigger, more complex ones that are harder to develop and become useless should any one part fail,” says Wong. “It’s an idea that definitely goes against the current status quo, and we’re lucky enough to be pursuing it here at the Wyss Institute, which brings together people with common goals and interests to create new things that wouldn’t exist otherwise.”
Click on the links below to explore research from the Molecular Robotics Initiative.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
EuRobotics regularly publishes video interviews with projects, so that you can find out more about their activities. This week features REELER: Responsible Ethical Learning with Robotics.
Objectives
The project aims to align roboticists’ visions of a future with robots with empirically based knowledge of human needs and societal concerns, through a new proximity-based human-machine ethics that takes into account how individuals and communities connect with robot technologies.
The main outcome of REELER is a research-based roadmap presenting:
ethical guidelines for Human Proximity Levels,
prescriptions for how to include the voice of new types of users and affected stakeholders through Mini-Publics,
assumptions in robotics through socio-drama, and
agent-based simulations of the REELER research for policymaking.
At the core of these guidelines is the concept of collaborative learning, which permeates all aspects of REELER and will guide future SSH-ICT research.
Expected Impact
Integrating the recommendations of the REELER Roadmap for responsible and ethical learning in robotics into future robot design processes will enable the European robotics community to address human needs and societal concerns. Moreover, the project will provide powerful instruments to foster networking and exploit the potential of future robotics projects.
• Vacuum generation that’s 100% electrical;
• Integrated intelligence for energy and process control;
• Extensive communication options through IO-Link interface.
Schmalz already offers a large range of solutions that can optimize handling processes, from single components such as vacuum generators to complete gripping systems. Particularly when used in autonomous warehouses, conventional vacuum generation with compressed air reaches its limits, as compressed air is often unavailable in warehouses. Schmalz is therefore introducing a new technology development: a gripper with vacuum generation that does not use compressed air. The vacuum is generated 100% electrically. This makes the gripper both energy efficient and mobile. At the same time, warehouses need systems with integrated intelligence to deliver information and learn. This enables the use of mobile and self-sufficient robots, which pick production orders at various locations in the warehouse. Furthermore, Schmalz provides various modular connection options from its wide range of end effectors in order to handle different products reliably and safely.