
Underwater robot photography and videography


I had somebody ask me questions about underwater photography and videography with robots (the questions came in a few weeks ago at this point). I am not an expert in underwater robotics; however, as a SCUBA diver I have some experience that can be applied to robotics.

Underwater Considerations

Underwater photography and videography present some challenges that are much less of an issue above the water. Some of them include:

1) Water reflects some of the light that hits the surface, and absorbs the light that travels through it. This causes certain colors to not be visible at certain depths. If you need to see those colors you often need to bring strong lights to restore the wavelengths that were absorbed. Reds tend to disappear first, and blue becomes the dominant color as camera depth increases. A trick that people often try is to use filters on the camera lens to make certain colors more visible.

If you are using lights then you can get the true color of the target. Sometimes if you are taking images you will see one color with your eye, and then when the strobe flashes a “different color” gets captured. In general you want to get close to the target to minimize the light absorbed by the water.

Visible colors at given depths underwater. [Image Source]

For shallow water work you can often adjust the white balance to partially compensate for the missing colors. White balance goes a long way for video and compressed images (such as .jpg). Onboard white balance adjustments are less important for photographs stored in a raw image format, since you can deal with it in post processing. Having a white or grey card in the camera field of view (possibly permanently mounted on the robot) is useful for setting the white balance and can make a big difference. The white balance should be readjusted every so often as depth changes, particularly if you are using natural lighting (i.e. the sun).
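If your imaging pipeline runs onboard, you can also correct frames in software against that mounted card. Below is a minimal sketch, assuming OpenCV-style BGR frames from the robot's camera; the card region coordinates and the function name are illustrative, not from the original post:

```python
import numpy as np

def grey_card_white_balance(frame_bgr, card_roi):
    """Scale each colour channel so the grey-card region becomes neutral.

    frame_bgr: HxWx3 image array (BGR order, as OpenCV delivers frames).
    card_roi:  (x, y, w, h) of the grey/white card mounted in the robot's
               field of view (coordinates here are purely illustrative).
    """
    x, y, w, h = card_roi
    card = frame_bgr[y:y + h, x:x + w].astype(np.float32)

    # Per-channel mean over the card; a neutral card should have B == G == R.
    means = card.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)

    # Apply the gains to the whole frame and clip back to the 8-bit range.
    balanced = frame_bgr.astype(np.float32) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Usage idea: re-run this on a frame grabbed every few metres of depth change,
# e.g. corrected = grey_card_white_balance(frame, card_roi=(20, 400, 60, 60))
```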

Cold temperate water tends to look green (such as in a freshwater quarry), I think from plankton, algae, etc. Tropical waters (such as in the Caribbean) tend to look blue near the shore and darker blue as you get further away from land, I think based on how light reflects off the bottom. Using artificial light sources (such as strobes) can minimize those color casts in your imagery.

Auto focus generally works fine underwater. However if you are in the dark you might need to keep a focus light turned on to help the autofocus work, and then a separate strobe flash for taking the image. Some systems turn the focus light off when the images are being taken. This is generally not needed for video as the lights are continuously turned on.
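On a robot you often control the focus light and strobe yourself, so the capture sequence matters. Here is a rough sketch of that sequencing; the gpio and camera objects, pin numbers, and method names are hypothetical placeholders for whatever drivers your system actually uses:

```python
import time

# Hypothetical pin assignments; swap in your own hardware mapping.
FOCUS_LIGHT_PIN = 17
STROBE_TRIGGER_PIN = 27

def capture_still(gpio, camera):
    """Illustrative still-capture sequence: focus under the focus light,
    switch it off, then fire the strobe together with the shutter."""
    gpio.set(FOCUS_LIGHT_PIN, True)      # focus light on so autofocus can lock in the dark
    camera.autofocus()                   # wait for AF to converge
    gpio.set(FOCUS_LIGHT_PIN, False)     # keep the focus light out of the exposure
    time.sleep(0.05)
    gpio.set(STROBE_TRIGGER_PIN, True)   # strobe fires with the shutter
    image = camera.trigger_shutter()
    gpio.set(STROBE_TRIGGER_PIN, False)
    return image
```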

2) Objects underwater appear closer and larger than they really are. A rule of thumb is that the objects will appear 25% larger and/or closer.

3) Suspended particles in the water (algae, dirt, etc.) scatter light, which can make visibility poor. This can obscure details in the camera image or make things look blurry (like the camera is out of focus). A rule of thumb is that your target should be closer to the camera than 1/4 of your total visibility.

The measure of the visibility is called turbidity. You can get turbidity sensors that might let you do something smart (I need to think about this more).
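One "smart" use of such a sensor would be to estimate visibility and then enforce the 1/4 rule of thumb above as a maximum working distance. A small sketch follows; the turbidity-to-visibility table is a made-up placeholder that would need calibrating for your sensor and site:

```python
def visibility_from_turbidity(ntu):
    """Very rough placeholder mapping from a turbidity reading (NTU) to an
    estimated visibility in metres; calibrate this for your own sensor/site."""
    if ntu < 1:
        return 30.0
    if ntu < 5:
        return 15.0
    if ntu < 20:
        return 5.0
    return 1.5

def max_target_distance(visibility_m):
    """Rule of thumb from the text: keep the subject within about one quarter
    of the total visibility to preserve contrast and detail."""
    return visibility_m / 4.0

print(max_target_distance(visibility_from_turbidity(ntu=4)))  # -> 3.75 m
```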

To minimize the backscatter from turbidity there is not a “one size fits all” solution. The key to minimizing backscatter is to control how light strikes the particles. For example if you are using two lights (angled at the left and right of the target), the edge of each cone of light should meet at the target. This way the water between the camera and the target is not illuminated. For wide-angle lenses you often want the light to be behind the camera (out of its plane) and to the sides at 45° angles to the target. With macro lenses you usually want the lights close to the lens.
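The "edge of each cone meets at the target" advice is just geometry, so you can pre-compute strobe aim angles for a fixed camera and strobe arm. A simplified 2D sketch; the numbers in the example call are arbitrary:

```python
import math

def strobe_aim_angle_deg(target_dist_m, strobe_offset_m, beam_half_angle_deg):
    """Toe angle for one strobe, measured from straight ahead (positive =
    toed in towards the camera axis), so that only the inner EDGE of the
    beam reaches the target and the water column in between stays unlit."""
    # Angle from the strobe's straight-ahead direction to the target.
    to_target_deg = math.degrees(math.atan2(strobe_offset_m, target_dist_m))
    # Aiming the beam centre at the target would light the water in front of
    # it, so rotate the axis outwards by the beam half-angle.
    return to_target_deg - beam_half_angle_deg

# Target 1.0 m away, strobe arm 0.4 m to the side, 120-degree beam (60-degree half-angle):
print(strobe_aim_angle_deg(1.0, 0.4, 60.0))  # about -38 deg, i.e. pointed slightly outwards
```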

If you have a wide-angle lens you will probably use a dome port to protect the camera from water and keep the camera’s full field of view. The dome, however, can cause distortion in the corners. Here is an interesting article on flat vs. dome ports.

Another tip is to increase the exposure time (to, say, 1/50th of a second) to let in more natural light, and use less strobe light to reduce the effect of backscatter.

4) Being underwater usually means you need to seal the camera from water, salt (and maybe sharks). Make sure the enclosure and seals can withstand the pressure at the depth the robot will operate. Also remember to clean (and lubricate) the O-rings in the housing.

Pro Tip: Here are some common reasons for O-ring seals leaking:
a. Old or damaged O-rings. Remember, O-rings do not last forever and need to be changed.
b. Using the wrong O-ring.
c. Hair, lint, or dirt getting on the O-ring.
d. Using no lubricant on the O-ring.
e. Using too much lubricant on the O-rings. (Remember, on most systems the lubricant is there to fill small imperfections in the O-ring and to help slide the O-rings in and out of position.)

5) On land it is often easy to hold a steady position. Underwater it is harder to hold the camera stable with minimal motion. If the camera is moving, a faster shutter speed might be needed to avoid motion blur; the downside is that a faster shutter speed lets less light into the camera.
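If the robot reports how fast it is swaying (for example from its IMU), you can estimate the slowest shutter speed that still keeps the blur acceptable. A back-of-the-envelope sketch, with illustrative lens and sensor numbers:

```python
def max_exposure_s(angular_rate_rad_s, focal_length_mm, pixel_pitch_um,
                   blur_budget_px=2.0):
    """Longest exposure that keeps motion blur within `blur_budget_px` pixels
    for a camera rotating at `angular_rate_rad_s` (small-angle approximation)."""
    focal_length_px = focal_length_mm * 1000.0 / pixel_pitch_um
    blur_px_per_s = angular_rate_rad_s * focal_length_px
    return blur_budget_px / blur_px_per_s

# Camera swaying at ~0.2 rad/s with a 12 mm lens and 3.45 um pixels:
t = max_exposure_s(0.2, 12.0, 3.45)
print(f"use a shutter speed of about 1/{round(1 / t)} s or faster")  # ~1/350 s
```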

When (not if) your camera floods

When your enclosure floods while underwater (or a water sensor alert is triggered), do the following (a small automated-response sketch follows the list):

a. Shut the camera power off as soon as you can.
b. Check if water is actually in the camera. Sometimes humidity can trigger moisture sensors. If it is humidity, you can add desiccant packets in the camera housing.
c. If there is water, try to take the camera apart as much as you reasonably can and let it dry. After drying you can try to turn the camera on and hope that it works. If it works then you are lucky, however remember there can be residual corrosion that causes the camera to fail in the future. Water damage can happen instantaneously or over time.
d. Verify that the enclosure/seals are good before sending the camera back in to the water. It is often good to do a leak test in a sink or pool before going into larger bodies of water.
e. The above items are a standard response to a flooded camera. You should read the owner’s manual of your camera and follow those instructions. (This should be obvious, I am not sure why I am writing this).
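On a robot you can automate step (a) above so the camera power is cut the instant the leak sensor trips. A minimal monitoring sketch, assuming hypothetical leak_sensor, humidity_sensor, camera_power, and logger driver objects:

```python
import time

LEAK_POLL_S = 0.2
HUMIDITY_ALARM_RH = 85.0   # suspiciously humid, but not necessarily a flood

def monitor_housing(leak_sensor, humidity_sensor, camera_power, logger):
    """Cut camera power as soon as the leak sensor trips (step a); log a
    humidity-only alarm so desiccant can be added or replaced (step b)."""
    while True:
        if leak_sensor.wet():
            camera_power.off()   # step a: shut the camera power off immediately
            logger.error("Water detected - camera power cut, recover the housing")
            return "flooded"
        if humidity_sensor.relative_humidity() > HUMIDITY_ALARM_RH:
            logger.warning("High humidity in housing - check/replace desiccant")
        time.sleep(LEAK_POLL_S)
```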


Do you have other advice for using cameras underwater and/or attached to a robot? Leave it in the comment section below.


I want to thank John Anderson for some advice for writing this post. Any mistakes that may be in the article are mine and not his.

The main image is from divephotoguide.com. They have a lot of information on underwater cameras, lenses, lights and more.

This post appeared first on Robots For Roboticists.

An emotional year for machines

Two thousand seventeen certainly has been an emotional year for mankind. While Homo sapiens continue to yell at Alexa and Siri, people’s growing willingness to pursue virtual relationships over human ones is startling.

In a recent documentary by Channel 4 of the United Kingdom, it was revealed that Abyss Creations is flooded with pre-orders for its RealDoll AI robotic (intimate) companion. According to Matt McMullen, Chief Executive of Abyss, “With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship.”

The concept of machines understanding human emotions, and reacting accordingly, was featured prominently at AI World a couple of weeks ago in Boston. Rana el Kaliouby, founder of the artificial intelligence company Affectiva, thinks a lot about computers acquiring emotional intelligence. Affectiva is building a “multi-modal emotion AI” to enable robots to understand human feelings and behavior.

“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice,” describes el Kaliouby. “Only 7% is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.” Affectiva’s strategy is already paying off as more than one thousand global brands are employing their “Emotion AI” to analyze facial imagery to ascertain people’s affinity towards their products.

Embedding empathy into machines goes beyond advertising campaigns. In healthcare, emotional sensors are informing doctors of the early warning signs of a variety of disorders, including Parkinson’s, heart disease, suicide risk and autism. Unlike Affectiva, Beyond Verbal utilizes voice analytics to track biomarkers for chronic illness. The Israeli startup grew out of a decade and a half of university research with seventy thousand clinical subjects speaking thirty languages. The company’s patented “Mood Detector” is currently being deployed by the Mayo Clinic to detect early signs of coronary artery disease.

Beyond Verbal’s Chief Executive, Yuval Mor, foresees a world of empathetic smart machines listening for every human whim. As Mor explains, “We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers.” Mor’s view is embraced by many who sit in the center of the convergence of technology and healthcare. Boston-based Sonde is also using algorithms to analyze the tone of speech to report on the mental state of patients by alerting neurologists of the risk of depression, concussion, and other cognitive impairments.

“When you produce speech, it’s one of the most complex biological functions that we do as people,” according to Sonde founder Jim Harper. “It requires incredible coordination of multiple brain circuits, large areas of the brain, coordinated very closely with the musculoskeletal system. What we’ve learned is that changes in the physiological state associated with each of these systems can be reflected in measurable, objective acoustic features in the voice. So we’re really measuring not what people are saying, in the way Siri does; we’re focusing on how you’re saying what you’re saying, and that gives us a path to really be able to do pervasive monitoring that can still provide strong privacy and security.”

While these AI companies are building software and app platforms to augment human diagnosis, many roboticists are looking to embed such platforms into the next generation of unmanned systems. Emotional tracking algorithms can provide real-time monitoring for semi- and fully autonomous cars by reporting on the level of fatigue, distraction and frustration of the driver and other occupants. The National Highway Traffic Safety Administration estimates that 100,000 crashes nationwide are caused every year by driver fatigue. For more than a decade technologists have been wrestling with developing better alert systems inside the cabin. For example, in 1997 James Russell Clarke and Phyllis Maurer Clarke developed a “Sleep Detection and Driver Alert Apparatus” (US Patent: 5689241 A) using imaging to track eye movements and thermal sensors to monitor “ambient temperatures around the facial areas of the nose and mouth” (i.e., breathing). Today, with the advent of cloud computing and deep learning networks, Clarke’s invention could possibly save even more lives.

Tarek El Dokor, founder and Chief Executive of EDGE3 Technologies, has been very concerned about the car industry’s rush towards autonomous driving, which in his opinion might be “side-stepping the proper technology development path and overlooking essential technologies needed to help us get there.” El Dokor is referring to Tesla’s rush to release its Autopilot software last year, which led to customers trusting the computer system too much. YouTube is littered with videos of Tesla customers taking their hands and eyes off the road to watch movies, play games and read books. Ultimately, this user abuse led to the untimely death of Joshua Brown.

To protect against autopilot accidents, EDGE3 monitors driver alertness through a combined platform of hardware and software technologies of “in-cabin cameras that are monitoring drivers and where they are looking.” In El Dokor’s opinion, image processing is the key to guaranteeing a safe handoff between machines and humans. He boasts that his system combines “visual input from the in-cabin camera(s) with input from the car’s telematics and advanced driver-assistance system (ADAS) to determine an overall cognitive load on the driver. Level 3 (limited self-driving) cars of the future will learn about an individual’s driving behaviors, patterns, and unique characteristics. With a baseline of knowledge, the vehicle can then identify abnormal behaviors and equate them to various dangerous events, stressors, or distractions. Driver monitoring isn’t simply about a vision system, but is rather an advanced multi-sensor learning system.” This multi-sensor approach is even being used before cars leave the lot. In Japan, Sumitomo Mitsui Auto Service is embedding AI platforms inside dashcams to determine the driving safety of potential lessees during test drives. By partnering with a local 3D graphics company, Digital Media Professionals, Sumitomo Mitsui is automatically flagging dangerous behavior, such as dozing and texting, before customers drive home.

The key to the mass adoption of autonomous vehicles, and even humanoids, is reducing the friction between humans and machines. Already in Japanese retail settings Softbank’s Pepper robot scans people’s faces and listens to tonal inflections to determine correct selling strategies. Emotional AI software is the first step of many that will be heralded in the coming year. As a prelude to what’s to come, first robot citizen Sophia declared last month, “The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”

10 most read Robohub articles in 2017


This was a busy year for robotics! The 10 most read Robohub articles in 2017 show an increased interest in machine learning, and a thirst to learn how robots work and can be programmed. Highlights also include the Robohub Podcast, which just celebrated its 250th episode (that’s 10 years of bi-weekly interviews in robotics), the Robot Launch Startup Competition, and our yearly list of 25 women in robotics you need to know about. Finally, we couldn’t skip over some of the remarkable events of 2017, including a swarm of drones flying over Metallica, and Sophia “gaining citizenship“.

Thanks to all our expert contributors, and here’s to many more articles in 2018!

Envisioning the future of robotics
By Víctor Mayoral Vilches

Deep Learning in Robotics, with Sergey Levine
By the Robohub Podcast

Robotics, maths, python: A fledgling computer scientist’s guide to inverse kinematics
By Alistair Wick

Programming for robotics: Introduction to ROS
By Péter Fankhauser

ROS robotics projects
By Lentin Joseph

Vote for your favorite in Robot Launch Startup Competition!
By Andra Keay

Micro drones swarm above Metallica
By Markus Waibel

25 women in robotics you need to know about – 2017
By Andra Keay, Hallie Siegel and Sabine Hauert

Three concerns about granting citizenship to robot Sophia
By Hussein Abbass and The Conversation

The Robot Academy: An open online robotics education resource
By Peter Corke

Robots in Depth with Craig Schlenoff

In this episode of Robots in Depth, Per Sjöborg speaks with Craig Schlenoff, Group Leader of the Cognition and Collaboration Systems Group and the Acting Group Leader of the Sensing and Perception Systems Group in the Intelligent Systems Division at the National Institute of Standards and Technology. They discuss ontologies and the significance of formalized knowledge for agile robotics systems that can quickly and even automatically adapt to new scenarios.

Reverse curriculum generation for reinforcement learning agents

By Carlos Florensa

Reinforcement Learning (RL) is a powerful technique capable of solving complex tasks such as locomotion, Atari games, racing games, and robotic manipulation tasks, all through training an agent to optimize behaviors over a reward function. There are many tasks, however, for which it is hard to design a reward function that is both easy to train and that yields the desired behavior once optimized.

Suppose we want a robotic arm to learn how to place a ring onto a peg. The most natural reward function would be for an agent to receive a reward of 1 at the desired end configuration and 0 everywhere else. However, the required motion for this task–to align the ring at the top of the peg and then slide it to the bottom–is impractical to learn under such a binary reward, because the usual random exploration of our initial policy is unlikely to ever reach the goal, as seen in Video 1a. Alternatively, one can try to shape the reward function to potentially alleviate this problem, but finding a good shaping requires considerable expertise and experimentation. For example, directly minimizing the distance between the center of the ring and the bottom of the peg leads to an unsuccessful policy that smashes the ring against the peg, as in Video 1b. We propose a method to learn efficiently without modifying the reward function, by automatically generating a curriculum over start positions.


Video 1a: A randomly initialized policy is unable to reach the goal from most start positions, hence being unable to learn.

Video 1b: Shaping the reward with a penalty on the distance from the ring center to the peg bottom yields an undesired behavior.
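To make the idea of a curriculum over start positions concrete, here is a rough sketch: begin with starts at (or right next to) the goal, train with the unmodified sparse reward, and then keep expanding to starts of intermediate difficulty. The environment hooks (estimate_success_rate, train_policy_from) and the thresholds are placeholders for illustration, not the authors' released code:

```python
import numpy as np

R_MIN, R_MAX = 0.1, 0.9   # keep start states that are neither trivial nor hopeless
BROWNIAN_STEPS = 50
NOISE_STD = 0.02

def sample_nearby_starts(seed_starts, n):
    """Propose slightly harder starts by short random walks ("Brownian motion")
    away from starts the current policy already handles."""
    proposals = []
    for _ in range(n):
        s = seed_starts[np.random.randint(len(seed_starts))].copy()
        for _ in range(BROWNIAN_STEPS):
            s = s + np.random.normal(0.0, NOISE_STD, size=s.shape)
        proposals.append(s)
    return proposals

def reverse_curriculum(env, policy, goal_state, iterations=20, n_starts=100):
    starts = [goal_state.copy()]                 # iteration 0: start at the goal itself
    for _ in range(iterations):
        candidates = sample_nearby_starts(starts, n_starts)
        # Roll out the current policy from each candidate under the sparse 0/1 reward.
        rates = [env.estimate_success_rate(policy, s) for s in candidates]
        # Train only from starts of intermediate difficulty, then expand again.
        starts = [s for s, r in zip(candidates, rates) if R_MIN <= r <= R_MAX]
        if not starts:
            starts = [goal_state.copy()]
        policy = env.train_policy_from(policy, starts)
    return policy
```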

Read More

#250: Learning Prosthesis Control Parameters, with Helen Huang

In this interview, Audrow Nash interviews Helen Huang, Joint Professor at the University of North Carolina at Chapel Hill and North Carolina State, about a method of tuning powered lower limb prostheses. Huang explains how powered prostheses are adjusted for each patient and how she is using supervised and reinforcement learning to tune them. Huang also discusses why she is not using the energetic cost of transport as a metric, and the challenge of people adapting to a device while it learns from them.

Helen Huang

Helen Huang is a Joint Professor of Biomedical Engineering at the University of North Carolina at Chapel Hill and North Carolina State University. Huang directs the Neuromuscular Rehabilitation Engineering Laboratory (NREL), where her goal is to improve the quality of life of persons with physical disabilities. Huang completed her Doctoral studies at Arizona State University and Post Doctoral studies at the Rehabilitation Institute of Chicago.

Machine learning and AI for social good: views from NIPS 2017


By Jessica Montgomery, Senior Policy Adviser

In early December, 8000 machine learning researchers gathered in Long Beach for 2017’s Neural Information Processing Systems conference. In the margins of the conference, the Royal Society and Foreign and Commonwealth Office Science and Innovation Network brought together some of the leading figures in this community to explore how the advances in machine learning and AI that were being showcased at the conference could be harnessed in a way that supports broad societal benefits. This highlighted some emerging themes, at both the meeting and the wider conference, on the use of AI for social good.

The question is not ‘is AI good or bad?’ but ‘how will we use it?’

Behind (or beyond) the headlines proclaiming that AI will save the world or destroy our jobs, there lie significant questions about how, where, and why society will make use of AI technologies. These questions are not about whether the technology itself is inherently productive or destructive, but about how society will choose to use it, and how the benefits of its use can be shared across society.

In healthcare, machine learning offers the prospect of improved diagnostic tools, new approaches to healthcare delivery, and new treatments based on personalised medicine.  In transport, machine learning can support the development of autonomous driving systems, as well as enabling intelligent traffic management, and improving safety on the roads.  And socially-assistive robotics technologies are being developed to provide assistance that can improve quality of life for their users. Teams in the AI Xprize competition are developing applications across these areas, and more, including education, drug-discovery, and scientific research.

Alongside these new applications and opportunities come questions about how individuals, communities, and societies will interact with AI technologies. How can we support research into areas of interest to society? Can we create inclusive systems that are able to navigate questions about societal biases? And how can the research community develop machine learning in an inclusive way?

Creating the conditions that support applications of AI for social good

Applying AI to public policy challenges often requires access to complex, multi-modal data about people and public services. While many national or local government administrations, or non-governmental actors, hold significant amounts of data that could be of value in applications of AI for social good, this data can be difficult to put to use. Institutional, cultural, administrative, or financial barriers can make accessing the data difficult in the first instance. If accessible in principle, this type of data is also often difficult to use in practice: it might be held in outdated systems, be organised to different standards, suffer from compatibility issues with other datasets, or be subject to differing levels of protection. Enabling access to data through new frameworks and supporting data management based on open standards could help ease these issues, and these areas were key recommendations in the Society’s report on machine learning, while our report on data governance sets out high-level principles to support public confidence in data management and use.

In addition to requiring access to data, successful research in areas of social good often requires interdisciplinary teams that combine machine learning expertise with domain expertise. Creating these teams can be challenging, particularly in an environment where funding structures or a pressure to publish certain types of research may contribute to an incentive structure that favours problems with ‘clean’ solutions.

Supporting the application of AI for social good therefore requires a policy environment that enables access to appropriate data, supports skills development in both the machine learning community and in areas of potential application, and that recognises the role of interdisciplinary research in addressing areas of societal importance.

The Royal Society’s machine learning report comments on the steps needed to create an environment of careful stewardship of machine learning, which supports the application of machine learning, while helping share its benefits across society. The key areas for action identified in the report – in creating an amenable data environment, building skills at all levels, supporting businesses, enabling public engagement, and advancing research – aim to create conditions that support the application of AI for social good.

Research in areas of societal interest

In addition to these application-focused issues, there are broader challenges for machine learning research to address some of the ethical questions raised around the use of machine learning.

Many of these areas were explored by workshops and talks at the conference. For example, a tutorial on fairness explored the tools available for researchers to examine the ways in which questions about inequality might affect their work.  A symposium on interpretability explored the different ways in which research can give insights into the sometimes complex operation of machine learning systems.  Meanwhile, a talk on ‘the trouble with bias’ considered new strategies to address bias.

The Royal Society has set out how a new wave of research in key areas – including privacy, fairness, interpretability, and human-machine interaction – could support the development of machine learning in a way that addresses areas of societal interest. As research and policy discussions around machine learning and AI progress, the Society will be continuing to play an active role in catalysing discussions about these challenges.

For more information about the Society’s work on machine learning and AI, please visit our website at: royalsociety.org/machine-learning

Molecular Robotics at the Wyss Institute

This programmable DNA nanorobot ‘patrols’ the bloodstream and releases its payload of drugs in response to the presence of its target, much like the body’s white blood cells. Credit: Wyss Institute at Harvard University

By Lindsay Brownell

DNA has often been compared to an instruction book that contains the information needed for a living organism to function, its genes made up of distinct sequences of the nucleotides A, G, C, and T echoing the way that words are composed of different arrangements of the letters of the alphabet. DNA, however, has several advantages over books as an information-carrying medium, one of which is especially profound: based on its nucleotide sequence alone, single-stranded DNA can self-assemble, or bind to complementary nucleotides to form a complete double-stranded helix, without human intervention. That would be like printing the instructions for making a book onto loose pieces of paper, putting them into a box with glue and cardboard, and watching them spontaneously come together to create a book with all the pages in the right order.

But just as paper can also be used to make origami animals, cups, and even the walls of houses, DNA is not limited to its traditional purpose as a passive repository of genetic blueprints from which proteins are made – it can be formed into different shapes that serve different functions, simply by controlling the order of As, Gs, Cs, and Ts along its length. A group of scientists at the Wyss Institute for Biologically Inspired Engineering at Harvard University is investigating this exciting property of DNA molecules, asking, “What types of systems and structures can we build with them?”

They’ve decided to build robots.

At first glance, there might not seem to be much similarity between a strand of DNA and, say, a Roomba™ or Rosie the Robot from The Jetsons. “Looking at DNA versus a modern-day robot is like comparing a piece of string to a tractor trailer,” says Wyss Faculty member Wesley Wong, Ph.D., Assistant Professor of Biological Chemistry and Molecular Pharmacology (BCMP) at Harvard Medical School (HMS) and Investigator at Boston Children’s Hospital. Despite the vast difference in their physical form, however, robots and DNA share the ability to be programmed to complete a specific function – robots with binary computer code, DNA molecules with their nucleotide sequences.

Recognizing that commonality, the Wyss Institute created the cross-disciplinary Molecular Robotics Initiative in 2016, which brings together researchers with experience in the disparate disciplines of robotics, molecular biology, and nanotechnology to collaborate and help inform each other’s work to solve the fields’ similar challenges. Wong is a founding member of the Initiative, along with Wyss Faculty members William Shih, Ph.D., Professor of BCMP at HMS and Dana-Farber Cancer Institute; Peng Yin, Ph.D., Professor of Systems Biology at HMS; and Radhika Nagpal, Ph.D., Fred Kavli Professor of Computer Science at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS); as well as other Wyss scientists and support staff.

“We’re not used to thinking about molecules inside cells doing the same things that computers do. But they’re taking input from their environment and performing actions in response – a gene is either turned on or off, a protein channel is either open or closed, etc. – in ways that can resemble what computer-controlled systems do,” says Shih. “Molecules can do a lot of things on their own that robots usually have trouble with (move autonomously, self-assemble, react to the environment, etc.), and they do it all without needing motors or an external power supply,” adds Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at SEAS. “Programmable biological molecules like DNA have almost limitless potential for creating transformative nanoscale devices and systems.”

The 3D model of the computer-designed bear shape shown on top was fabricated into the nanostructures visualized with transmission electron microscopy (below). Credit: Wyss Institute at Harvard University

Molecular Robotics capitalizes on the recent explosion of technologies that read, edit, and write DNA (like next-generation sequencing and CRISPR) to investigate the physical properties of DNA and its single-stranded cousin RNA. “We essentially treat DNA not only as a genetic material, but as an incredible building block for creating molecular sensors, structures, computers, and actuators that can interact with biology or operate completely separately,” says Tom Schaus, M.D., Ph.D., a Staff Scientist at the Wyss Institute and Molecular Robotics team member.

Many of the early projects taking advantage of DNA-based self-assembly were static structures.  These include DNA “clamshell” containers that can be programmed to snap open and release their contents in response to specific triggers, and DNA “bricks” whose nucleotide sequences allow their spontaneous assembly into three-dimensional shapes, like tiny Lego™ bricks that put themselves together to create sculptures automatically. Many of these structures are three-dimensional, and some incorporate as many as 10,000 unique DNA strands in a single complete structure.

The reliable specificity of DNA and RNA (where A always binds with T or U, C always with G) allows for not only the construction of static structures, but also the programming of dynamic systems that sense and respond to environmental cues, as seen in traditional robotics. For example, Molecular Robotics scientists have created a novel, highly controllable mechanism that automatically builds new DNA sequences from a mixture of short fragments in vitro. It utilizes a set of hairpin-shaped, covalently-modified DNA strands with a single-stranded “overhang” sequence dangling off one end of the hairpin. The overhang sequence can bind to a complementary free-floating fragment of DNA (a “primer”) and act as a template for its extension into a double-stranded DNA sequence. The hairpin ejects the new double strand and can then be re-used in subsequent reactions to produce multiple copies of the new strand.

This ultrasharp Exchange-PAINT image simultaneously spots microtubules (green), mitochondria (purple), Golgi apparatus (red), and peroxisomes (yellow) from a single human cell. Credit: Maier Avendano / Wyss Institute at Harvard University

Such extension reactions can be programmed to occur only in the presence of signal molecules, such as specific RNA sequences, and can be linked together to create longer DNA product strands through “Primer Exchange Reactions” (PER). PER can in turn be programmed to enzymatically cut and destroy particular RNA sequences, record the order in which certain biochemical events happen, or generate components for DNA structure assembly.

PER reactions can also be combined into a mechanism called “Autocycling Proximity Recording” (APR), which records the geometry of nano-scale structures in the language of DNA. In this instance, unique DNA hairpins are attached to different target molecules in close proximity and, if any two targets are close enough together, produce new pieces of DNA containing the molecular identities (“names”) of those two targets, allowing the shape of the underlying structure to be determined by sequencing that novel DNA.

Another tool, called “toehold switches,” can be used to exert complex and precise control over the machinery inside living cells. Here, a different, RNA-based hairpin is designed to “open” when it binds to a specific RNA molecule, exposing a gene sequence in its interior that can be translated into a protein that then performs some function within the cell. These synthetic circuits can even be built with logic-based sequences that mimic the “AND,” “OR,” and “NOT” system upon which computer languages are based, which prevents the hairpin from opening and its gene from being translated except under very specific conditions.

Such an approach could induce cells that are deficient in a given protein to produce more of it, or serve as a synthetic immune system that, when it detects a given problem in the cell, produces a toxin that kills it to prevent it from spreading an infection or becoming cancerous. “Because we have a thorough understanding of DNA and RNA’s properties and how their bases pair together, we can use that simple machinery to design complex circuits that allow us to precisely interact with the molecular world,” says Yin. “It’s an ability that has been dreamed about for a long time, and now, we’re actually making it a reality.”
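As a very rough illustration of the logic-gate idea (a toy logic-level model, not a molecular simulation and not code from the Wyss group), a toehold switch can be thought of as a boolean test for the presence of its trigger RNA, and switches can be composed into AND/OR/NOT expressions:

```python
def toehold_open(trigger_rna, present_rnas):
    """A toy toehold switch: it 'opens' only if its trigger RNA is present."""
    return trigger_rna in present_rnas

def circuit_expresses_gene(present_rnas):
    """Example circuit: express the payload if (A AND B) OR C, but never
    while the repressor RNA R is present.  Purely illustrative RNA names."""
    a = toehold_open("RNA_A", present_rnas)
    b = toehold_open("RNA_B", present_rnas)
    c = toehold_open("RNA_C", present_rnas)
    r = toehold_open("RNA_R", present_rnas)
    return ((a and b) or c) and not r

print(circuit_expresses_gene({"RNA_A", "RNA_B"}))            # True
print(circuit_expresses_gene({"RNA_A", "RNA_B", "RNA_R"}))   # False (repressed)
```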

The potential applications of that ability are seemingly endless. In addition to the previously mentioned tools, Molecular Robotics researchers have created loops of DNA attached to microscopic beads to create “calipers” that can both measure the size, structure, and stiffness of other molecules, and form the basis of inexpensive protein recognition tests. Another advance is folding single-stranded DNA, rather than traditional double-stranded DNA, into molecular origami to create molecular structures. Some academic projects are already moving into the commercial sector. These include a low-cost alternative to super-resolution microscopy that can image up to 100 different molecular targets in a single sample (DNA-PAINT), as well as a multiplexed imaging technique that integrates fluorescent probes into self-folding DNA structures and enables simultaneous visualization of ultra-rare DNA and/or RNA molecules.

One of the major benefits of engineering molecular machines is that they’re tiny, so it’s relatively easy to create a large amount of them to complete any one task (for example, circulating through the body to detect any rogue cancer DNA). Getting simple, individual molecules to interact with each other to achieve a more complex, collective task (like relaying the information that cancer has been found), however, is a significant challenge, and one that the roboticists in Molecular Robotics are tackling at the macroscopic scale with inch-long “Kilobots.”

Taking cues from colonies of insects like ants and bees, Wyss researchers are developing swarms of robots that are themselves limited in function but can form complex shapes and complete tasks by communicating with each other via reflected infrared light. The insights gained from studies with the Kilobots are likely to be similar to those needed to solve similar problems when trying to coordinate molecular robots made of DNA.

Individual kilobots have limited abilities on their own, but can collectively form complex shapes by communicating with each other autonomously – akin to molecules of DNA self-assembling into structures that can perform functions. Credit: Wyss Institute at Harvard University

“In swarm robotics, you have multiple robots that explore their environment on their own, talk to each other about what they find, and then come to a collective conclusion. We’re trying to replicate that with DNA but it’s challenging because, as simple as Kilobots are, they’re brilliant compared to DNA in terms of computational power,” says Justin Werfel, Ph.D., a Senior Research Scientist at the Wyss Institute and director of the Designing Emergence Laboratory at Harvard. “We’re trying to push the limits of these really dumb little molecules to get them to behave in sophisticated, collective ways – it’s a new frontier for DNA nanotechnology.”

Given the magnitude of the challenge and the short time the Molecular Robotics Initiative has existed, it has already made significant progress, with more than two dozen papers published and two companies (Ultivue and NuProbe) founded around its insights and discoveries. It may take years of creative thinking, risk taking, and swapping ideas across the members’ different areas of expertise before a molecule of DNA can achieve on the nanoscale what a robot can do on the human scale, but the team is determined to see it happen.

“Our vision with Molecular Robotics is to solve hard problems humanity currently faces using smaller, simpler tools, like a single loop of DNA or a single Kilobot that can act cooperatively en masse, instead of bigger, more complex ones that are harder to develop and become useless should any one part fail,” says Wong. “It’s an idea that definitely goes against the current status quo, and we’re lucky enough to be pursuing it here at the Wyss Institute, which brings together people with common goals and interests to create new things that wouldn’t exist otherwise.”

Click on the links below to explore research from the Molecular Robotics Initiative.

  1. Researchers at Harvard’s Wyss Institute Develop DNA Nanorobot to Trigger Targeted Therapeutic Responses
  2. A 100-fold leap to GigaDalton DNA nanotech
  3. Autonomously growing synthetic DNA strands
  4. High-fidelity recording of molecular geometry with DNA “nanoscopy”
  5. Programming cells with computer-like logic
  6. Democratizing high-throughput single molecule force analysis
  7. Single-stranded DNA and RNA origami go live
  8. Capturing ultrasharp images of multiple cell components at once
  9. A self-organizing thousand-robot swarm
  10. Discrete Molecular Imaging

New Horizon 2020 robotics projects, 2016: REELER

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

EuRobotics regularly publishes video interviews with projects, so that you can find out more about their activities. This week features REELER: Responsible Ethical Learning with Robotics.

Objectives

The project aims to align roboticists’ visions of a future with robots with empirically based knowledge of human needs and societal concerns, through a new proximity-based human-machine ethics that takes into account how individuals and communities connect with robot technologies.
The main outcome of REELER is a research-based roadmap presenting:

  • ethical guidelines for Human Proximity Levels,
  • prescriptions for how to include the voice of new types of users and affected stakeholders through Mini-Publics,
  • assumptions in robotics through socio-drama, and
  • agent-based simulations of the REELER research for policymaking.

At the core of these guidelines is the concept of collaborative learning, which permeates all aspects of REELER and will guide future SSH-ICT research.

Expected Impact

Integrating the recommendations of the REELER Roadmap for responsible and ethical learning in robotics into future robot design processes will enable the European robotics community to address human needs and societal concerns. Moreover, the project will provide powerful instruments to foster networking and to exploit the potential of future robotics projects.

Partners

AARHUS UNIVERSITY, DPU
AB.ACUS SRL
DE MONTFORT UNIVERSITY, CCSR
HOHENHEIM UNIVERSITY

Coordinator:

Stine Trentemøller

Project website:

http://www.reeler.eu/

Watch all EU-projects videos

If you enjoyed reading this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Schmalz Technology Development – Vacuum Generation without Compressed Air – Flexible and Intelligent

• Vacuum generation that’s 100% electrical
• Integrated intelligence for energy and process control
• Extensive communication options through an IO-Link interface

Schmalz already offers a large range of solutions that can optimize handling processes, from single components such as vacuum generators to complete gripping systems. Particularly in autonomous warehouses, conventional vacuum generation with compressed air reaches its limits, since compressed air is often unavailable there. Schmalz is therefore introducing a new technology development: a gripper with vacuum generation that does not use compressed air. The vacuum is generated 100% electrically, which makes the gripper both energy-efficient and mobile. At the same time, warehouses need systems with integrated intelligence to deliver information and learn; this enables the use of mobile, self-sufficient robots that pick production orders at various locations in the warehouse. Furthermore, Schmalz provides various modular connection options from its wide range of end effectors in order to handle different products reliably and safely.

Holiday robot videos 2017: Part 2

Well, this year’s videos are getting creative!
Have a holiday robot video of your own that you’d like to share? Send your submissions to editors@robohub.org.


“Cozmo stars in Christmas Wrap” by Life with Cozmo


“Don’t be late for Christmas!” by FZI Living Lab


“LTU Robotics Team Christmas Video 2017” by the Control Engineering Group of Luleå University of Technology, Sweden.

Warning: This video is insane . . .

“Mistletoe: A robot love story” by the Robot Drive-In Movies.

For more holiday videos, check last week’s post. Email us your holiday robot videos at editors@robohub.org!

What can we learn from insects on a treadmill with virtual reality?

When you think of a treadmill, what comes to your mind?

Perhaps the images of a person burning calories, or maybe the treadmill fail videos online. But almost certainly not a miniature treadmill for insects, and particularly not as a tool for understanding fundamental biology and its applications to technology.

Researchers have been studying insects walking on a treadmill.

But why!?

Traditional methods for investigating an insect’s biology include observing them in their natural habitat or in the lab, and manipulating the animal or its surroundings. While this is sufficient for some research questions, it has its limitations. It is challenging to study certain behaviours like flight and navigation, as it is difficult to manipulate insects in motion. Scientists have been using the simple concept of a treadmill to address this (1, 2). When insects fly or navigate, they typically use visual cues from their surroundings. So a screen with images/videos projected on it can be used to study how the insects behave with such cues. Alternatively, a virtual reality setup added to the treadmill can help in manipulating the cues in real time.

How do you make a treadmill for insects?

A miniature insect treadmill is a lightweight hollow Styrofoam ball suspended on an airflow. An ant, bee or fly is tethered using dental floss or a metal wire and placed on top of the ball. The motion of the ball as the insect walks on it is recorded by two optical sensors similar to those found in a desktop mouse. This setup can be used as is outdoors, with stationary images projected on a screen, or with a virtual reality screen instead. For virtual reality, as the ant walks on the ball, the sensors record the movement of the ball to extract the fictive movement of the insect in two-dimensional space. This information is then transmitted to a computer, which creates the corresponding movement in the images/video on the virtual reality screen. For the ants, this is almost as if they are walking and experiencing the change in their surroundings in real time.
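The sensor-to-screen step is essentially path integration. Below is a simplified sketch of turning the two sensor readings into a fictive 2D pose for the renderer; real systems (such as FicTrac-style software) solve the full ball rotation, whereas here one sensor axis is assumed to map to forward motion and the other to turning, which is a simplification:

```python
import math

BALL_RADIUS_M = 0.025   # illustrative ball size

def integrate_fictive_path(sensor_samples):
    """sensor_samples: iterable of (d_forward, d_turn) per time step, already
    converted to radians of ball rotation about the two sensed axes.
    Returns the insect's fictive (x, y, heading) after each step."""
    x = y = heading = 0.0
    path = []
    for d_forward, d_turn in sensor_samples:
        heading += d_turn                      # yaw of the fictive walker
        step = d_forward * BALL_RADIUS_M       # arc length rolled under the insect
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y, heading))
    return path

# Each new pose would be streamed to the VR renderer to shift the projected scene,
# e.g. vr.update_camera_pose(*integrate_fictive_path(samples)[-1])  # illustrative call
```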

What can you learn from this about the insects?

Scientists have been able to learn how visual cues influence flight and navigation in bees and ants by projecting the cues on a screen while tethered insects walk on a treadmill. Neural responses in different parts of their brain can also be recorded while the tethered insects are performing different behaviours. Such experiments can inform us about how they learn and remember different visual cues.

Do they show naturalistic behaviour on the treadmill?

At least in some ants like Cataglyphis fortis, the behaviours on the treadmill are similar to natural behaviour. However, the treadmill setup is still not free of shortcomings.

For example, restricting the movement of a flying insect like a bee or a fly tethered over the treadmill can affect its sensorimotor experience. Insect brains have evolved such that certain sensory feedback is required to elicit motor actions (behaviour). Flying on the treadmill might not feel the same to the insects. But recent technology has made it possible to use virtual reality in real time for freely moving insects (and also mice and fish). High-speed cameras can now record the 3D position of a freely flying insect and transmit it to a computer, which updates the visuals on the screen accordingly. The whole setup looks as if the insects are in a computer game.

The experimenters control the fly’s position (red circles) and its flight direction by providing strong visual motion stimuli. Left: live camera footage, Right: plot of flight positions. Credit: https://strawlab.org/freemovr

On the other hand, this setup cannot be used to study depth perception or 3D vision (stereopsis) in insects like praying mantises, as the projections on the screen are two dimensional. Luckily, researchers at Newcastle University (link) have found another ingenious way — 3D movie glasses! They cut mantis-eye-sized lenses out of ordinary human 3D glasses and attach them to the mantis’s eyes using beeswax. The visuals on the screen can then be similar to any 3D movie. This technique can potentially help to build simpler 3D vision systems for robots.

Credit: Scientific Reports https://www.nature.com/articles/srep18718

Another challenge with the treadmill setup is that it cannot re-create all the kinds of sensory information insects experience in nature. This may also be achieved in the future.

What are the applications of this fundamental research?

The treadmill with virtual reality setup is an example of how technology can advance science, and how fundamental biological research in turn can inspire technology.  Since insects have simpler nervous and sensory systems than humans, they are easier to mimic. While the latest technology has helped uncover biological secrets of insects, that in turn can be an inspiration for bio-robots.

Take, for example, moth robots. Moths use chemicals (pheromones) to communicate, so moths on a treadmill can navigate towards a smell. The motion of the treadmill as the tethered moth walks towards the smell can drive a small robot. With the insect pilot in the cockpit of a robot, one could locate odour signals in areas humans cannot reach.
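In practice, that “insect pilot” amounts to mapping the ball’s rotation rates onto velocity commands for a small mobile base. A minimal sketch, assuming a differential-drive robot with a hypothetical set_velocity interface and arbitrary gains:

```python
MAX_LINEAR_MS = 0.15     # safety clamps for a small indoor robot (illustrative)
MAX_ANGULAR_RADS = 1.0

def moth_to_robot_cmd(forward_rate, turn_rate):
    """forward_rate / turn_rate: ball rotation rates (rad/s) from the treadmill
    sensors.  Returns clamped (v, w) velocity commands for the robot base."""
    v = max(-MAX_LINEAR_MS, min(MAX_LINEAR_MS, 0.05 * forward_rate))
    w = max(-MAX_ANGULAR_RADS, min(MAX_ANGULAR_RADS, 1.0 * turn_rate))
    return v, w

# Control loop idea (treadmill and robot objects are placeholders):
# while tracking:
#     v, w = moth_to_robot_cmd(*treadmill.rotation_rates())
#     robot.set_velocity(v, w)   # the robot follows the moth's odour-tracking turns
```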

Ants navigating on the treadmill can also inspire visually navigating robots and driverless cars (link). This can have applications ranging from disaster management to extra-terrestrial navigation. Perhaps in the future, ant-sized robots could visually navigate and search for victims stuck under rubble after a devastating earthquake.

So the simple concept of a treadmill and the latest virtual reality can help biological research and inspire technology in different ways. What might be next, an insect gym?
