
Teaching chores to an artificial agent

A system developed at MIT aims to teach artificial agents a range of chores, including setting the table and making coffee.
Image: MIT CSAIL

By Adam Conner-Simons | Rachel Gordon

For many people, household chores are a dreaded, inescapable part of life that we often put off or do with little care. But what if a robot assistant could help lighten the load?

Recently, computer scientists have been working on teaching machines to do a wider range of tasks around the house. In a new paper spearheaded by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto, researchers demonstrate “VirtualHome,” a system that can simulate detailed household tasks and then have artificial “agents” execute them, opening up the possibility of one day teaching robots to do such tasks.

The team trained the system using nearly 3,000 programs of various activities, which are further broken down into subtasks for the computer to understand. A simple task like “making coffee,” for example, would also include the step “grabbing a cup.” The researchers demonstrated VirtualHome in a 3-D world inspired by the Sims video game.

The team’s artificial agent can execute 1,000 of these interactions in the Sims-style world, with eight different scenes including a living room, kitchen, dining room, bedroom, and home office.

“Describing actions as computer programs has the advantage of providing clear and unambiguous descriptions of all the steps needed to complete a task,” says MIT PhD student Xavier Puig, who was lead author on the paper. “These programs can instruct a robot or a virtual character, and can also be used as a representation for complex tasks with simpler actions.”

The project was co-developed by CSAIL and the University of Toronto alongside researchers from McGill University and the University of Ljubljana. It will be presented at the Computer Vision and Pattern Recognition (CVPR) conference, which takes place this month in Salt Lake City.

Unlike humans, robots need much more explicit instructions to complete even simple tasks; they cannot simply infer and reason their way through the missing steps.

For example, one might tell a human to “switch on the TV and watch it from the sofa.” Here, actions like “grab the remote control” and “sit/lie on sofa” have been omitted, since they’re part of the commonsense knowledge that humans have.

To better demonstrate these kinds of tasks to robots, the descriptions for actions needed to be much more detailed. To do so, the team first collected verbal descriptions of household activities, and then translated them into simple code. A program like this might include steps like: walk to the television, switch on the television, walk to the sofa, sit on the sofa, and watch television.
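
To make this representation concrete, such a program can be pictured as an ordered list of atomic steps, each pairing an action with the object it applies to, which a simulator or robot executes one by one. The short Python sketch below is only an illustration of that idea; the step names and data structure are assumptions, not the actual VirtualHome program format.

# Hypothetical sketch of a household-activity "program" as an ordered list
# of (action, object) steps; the real VirtualHome format differs.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "walk", "switch_on", "sit"
    target: str   # e.g. "television", "sofa"

watch_tv = [
    Step("walk", "television"),
    Step("switch_on", "television"),
    Step("walk", "sofa"),
    Step("sit", "sofa"),
    Step("watch", "television"),
]

def execute(program):
    """Stand-in for the simulator: carry out each step in order."""
    for i, step in enumerate(program, 1):
        print(f"{i}. {step.action} -> {step.target}")

execute(watch_tv)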

Once the programs were created, the team fed them to the VirtualHome 3-D simulator to be turned into videos. Then, a virtual agent would execute the tasks defined by the programs, whether it was watching television, placing a pot on the stove, or turning a toaster on and off.

The end result is not just a system for training robots to do chores, but also a large database of household tasks described using natural language. Companies like Amazon that are working to develop Alexa-like robotic systems at home could eventually use data like these to train their models to do more complex tasks.

The team’s model successfully demonstrated that their agents could learn to reconstruct a program, and therefore perform a task, given either a natural-language description (“pour milk into glass”) or a video demonstration of the activity.

“This line of work could facilitate true robotic personal assistants in the future,” says Qiao Wang, a research assistant in arts, media, and engineering at Arizona State University. “Instead of each task programmed by the manufacturer, the robot can learn tasks just by listening to or watching the specific person it accompanies. This allows the robot to do tasks in a personalized way, or even some day invoke an emotional connection as a result of this personalized learning process.”

In the future, the team hopes to train the robots using actual videos instead of Sims-style simulation videos, which would enable a robot to learn simply by watching a YouTube video. The team is also working on implementing a reward-learning system in which the agent gets positive feedback when it does tasks correctly.

“You can imagine a setting where robots are assisting with chores at home and can eventually anticipate personalized wants and needs, or impending action,” says Puig. “This could be especially helpful as an assistive technology for the elderly, or those who may have limited mobility.”

Surgical technique improves sensation, control of prosthetic limb

Two agonist-antagonist myoneural interface devices (AMIs) were surgically created in the patient’s residual limb: One was electrically linked to the robotic ankle joint, and the other to the robotic subtalar joint.
Image: MIT Media Lab/Biomechatronics group. Original artwork by Stephanie Ku.

By Helen Knight

Humans can accurately sense the position, speed, and torque of their limbs, even with their eyes shut. This sense, known as proprioception, allows humans to precisely control their body movements.

Despite significant improvements to prosthetic devices in recent years, researchers have been unable to provide this essential sensation to people with artificial limbs, limiting their ability to accurately control their movements.

Researchers at the Center for Extreme Bionics at the MIT Media Lab have invented a new neural interface and communication paradigm that is able to send movement commands from the central nervous system to a robotic prosthesis, and relay proprioceptive feedback describing movement of the joint back to the central nervous system in return.

This new paradigm, known as the agonist-antagonist myoneural interface, involves a novel surgical approach to limb amputation in which dynamic muscle relationships are preserved within the amputated limb. The AMI was validated in extensive preclinical experimentation at MIT prior to its first surgical implementation in a human patient at Brigham and Women’s Faulkner Hospital.

In a paper published today in Science Translational Medicine, the researchers describe the first human implementation of the agonist-antagonist myoneural interface (AMI), in a person with below-knee amputation.

The paper represents the first time information on joint position, speed, and torque has been fed from a prosthetic limb into the nervous system, according to senior author and project director Hugh Herr, a professor of media arts and sciences at the MIT Media Lab.

“Our goal is to close the loop between the peripheral nervous system’s muscles and nerves, and the bionic appendage,” says Herr.

To do this, the researchers used the same biological sensors that create the body’s natural proprioceptive sensations.

The AMI consists of two opposing muscle-tendons, known as an agonist and an antagonist, which are surgically connected in series so that when one muscle contracts and shortens — upon either volitional or electrical activation — the other stretches, and vice versa.

This coupled movement enables natural biological sensors within the muscle-tendon to transmit electrical signals to the central nervous system, communicating muscle length, speed, and force information, which is interpreted by the brain as natural joint proprioception. 

This is how muscle-tendon proprioception works naturally in human joints, Herr says.

“Because the muscles have a natural nerve supply, when this agonist-antagonist muscle movement occurs information is sent through the nerve to the brain, enabling the person to feel those muscles moving, both their position, speed, and load,” he says.

By connecting the AMI with electrodes, the researchers can detect electrical pulses from the muscle, or apply electricity to the muscle to cause it to contract.

“When a person is thinking about moving their phantom ankle, the AMI that maps to that bionic ankle is moving back and forth, sending signals through the nerves to the brain, enabling the person with an amputation to actually feel their bionic ankle moving throughout the whole angular range,” Herr says.

Decoding the electrical language of proprioception within nerves is extremely difficult, according to Tyler Clites, first author of the paper and graduate student lead on the project.

“Using this approach, rather than needing to speak that electrical language ourselves, we use these biological sensors to speak the language for us,” Clites says. “These sensors translate mechanical stretch into electrical signals that can be interpreted by the brain as sensations of position, speed, and force.”
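
Conceptually, the interface forms a closed loop: electrodes over the agonist muscle read the wearer’s intent and command the robotic joint, while electrical stimulation of the coupled antagonist stretches its partner and sends proprioceptive feedback back through the nerves. The Python sketch below is only a schematic illustration of that loop; the signal names, gains, and hardware interfaces are hypothetical and are not taken from the paper.

import random  # stand-in for real EMG acquisition hardware

EMG_TO_ANGLE = 40.0   # hypothetical gain from EMG amplitude to ankle angle [deg]
STIM_GAIN = 0.02      # hypothetical gain from ankle angle to stimulation level

def read_agonist_emg():
    """Placeholder for sampling the agonist muscle's electrical activity."""
    return random.uniform(0.0, 1.0)

def command_prosthetic_ankle(angle_deg):
    """Placeholder for sending a position command to the robotic ankle."""
    print(f"ankle -> {angle_deg:5.1f} deg")

def stimulate_antagonist(level):
    """Placeholder for stimulating the antagonist, which stretches the coupled
    agonist and returns a sensation of joint movement to the wearer."""
    print(f"antagonist stimulation -> {level:4.2f}")

for _ in range(5):                           # one pass per control cycle
    emg = read_agonist_emg()                 # intent measured from the residual muscle
    angle = EMG_TO_ANGLE * emg               # decode intent into a joint angle
    command_prosthetic_ankle(angle)          # move the bionic joint
    stimulate_antagonist(STIM_GAIN * angle)  # reflect joint state back to the muscle pair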

The AMI was first implemented surgically in a human patient at Brigham and Women’s Faulkner Hospital, Boston, by Matthew Carty, one of the paper’s authors, a surgeon in the Division of Plastic and Reconstructive Surgery, and an MIT research scientist.

In this operation, two AMIs were constructed in the residual limb at the time of primary below-knee amputation, with one AMI to control the prosthetic ankle joint, and the other to control the prosthetic subtalar joint.

“We knew that in order for us to validate the success of this new approach to amputation, we would need to couple the procedure with a novel prosthesis that could take advantage of the additional capabilities of this new type of residual limb,” Carty says. “Collaboration was critical, as the design of the procedure informed the design of the robotic limb, and vice versa.”

Toward this end, an advanced prosthetic limb was built at MIT and electrically linked to the patient’s peripheral nervous system using electrodes placed over each AMI muscle following the amputation surgery.

The researchers then compared the movement of the AMI patient with that of four people who had undergone a traditional below-knee amputation procedure, using the same advanced prosthetic limb.

They found that the AMI patient had more stable control over movement of the prosthetic device and was able to move more efficiently than those with the conventional amputation. They also found that the AMI patient quickly displayed natural, reflexive behaviors such as extending the toes toward the next step when walking down a set of stairs.

These behaviors are essential to natural human movement and were absent in all of the people who had undergone a traditional amputation.

What’s more, while the patients with conventional amputation reported feeling disconnected from the prosthesis, the AMI patient quickly described feeling that the bionic ankle and foot had become a part of their own body.

“This is pretty significant evidence that the brain and the spinal cord in this patient adopted the prosthetic leg as if it were their biological limb, enabling those biological pathways to become active once again,” Clites says. “We believe proprioception is fundamental to that adoption.”

It is difficult for an individual with a lower limb amputation to gain a sense of embodiment with their artificial limb, according to Daniel Ferris, the Robert W. Adenbaum Professor of Engineering Innovation at the University of Florida, who was not involved in the research.

“This is ground breaking. The increased sense of embodiment by the amputee subject is a powerful result of having better control of and feedback from the bionic limb,” Ferris says. “I expect that we will see individuals with traumatic amputations start to seek out this type of surgery and interface for their prostheses — it could provide a much greater quality of life for amputees.”

The researchers have since carried out the AMI procedure on nine other below-knee amputees and are planning to adapt the technique for those needing above-knee, below-elbow, and above-elbow amputations.

“Previously, humans have used technology in a tool-like fashion,” Herr says. “We are now starting to see a new era of human-device interaction, of full neurological embodiment, in which what we design becomes truly part of us, part of our identity.”

Automating window washing

Three and a half years ago, I stood on the corner of West Street and gasped as two window washers clung to life at the end of a rope a thousand feet above. By the time rescue crews reached the men on the 69th floor of 1 World Trade, they were close to passing out from dangling upside down. Every day, risk-taking men and women hook their bodies to metal scaffolds and ascend to deadly heights for $25 an hour. Ramone Castro, a window washer of three decades, said it best: “It is a very dangerous job. It is not easy going up there. You can replace a machine but not a life.” Castro’s statement sounds like an urgent call to action for robots.

One of the promises of automation is replacing tasks that are too dangerous for humans. Switzerland-based Serbot believes that high-rise facade cleaning is one of those jobs ripe for disruption. In 2010, it was first reported that Serbot had contracted with the city of Dubai to automatically clean its massive glass skyline. Using its GEKKO machine, the Swiss company has demonstrated a performance of over 400 square meters an hour, 15 times faster than a professional washer. GEKKO leverages a unique suction technology that enables the massive Roomba-like device to be suspended from the roof and adhere to the curtain wall regardless of weather conditions or architectural features. Serbot offers both semi- and fully autonomous versions of its GEKKOs, including options for retrofitting existing roof systems. It is unclear how many robots are actually deployed in the marketplace; however, Serbot recently announced the cleaning of FESTO’s architecturally challenging Automation Center in Germany.


According to the press release, “The entire building envelope is cleaned automatically: by a robot, called GEKKO Facade, which sucks on the glass facade. This eliminates important disadvantages of conventional cleaning: no disturbance of the user by cleaning personnel, no risky working in a gondola at high altitude, no additional protection during the cleaning phase, etc.” Serbot further states that its autonomous system was able to work at amazing speeds, cleaning the 8,600-square-meter structure within a couple of days via an intelligent platform that plans a route across the entire glass facade.
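
A system that “plans a route across the entire glass facade” typically follows a coverage path, sweeping back and forth so that every panel is visited once, much like a robotic lawnmower. The Python sketch below generates such a boustrophedon sweep over a grid of facade panels; the grid layout and ordering are illustrative assumptions, not Serbot’s actual planner.

def boustrophedon_path(columns, rows):
    """Return panel coordinates in a back-and-forth (lawnmower) sweep so that
    every panel of a rectangular facade is visited exactly once."""
    path = []
    for col in range(columns):
        order = range(rows) if col % 2 == 0 else range(rows - 1, -1, -1)
        for row in order:
            path.append((col, row))
    return path

# Example: a facade six panels wide and four panels tall.
for panel in boustrophedon_path(6, 4):
    print(panel)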


Parallel to the global trend of urbanization, skyscraper construction is at an all-time high. Demand for glass facade materials and maintenance services is close to surpassing $200 billion worldwide. With New York City at the center of the construction boom, Israeli startup Skyline Robotics recently joined Iconic Labs NYC (ICONYC). This week, I had the opportunity to ask Skyline founder and CEO Yaron Schwarcz about the move. Schwarcz proudly said, “So far we are deployed in Israel only and are working exclusively with one of the top 5 cleaning companies. Joining ICONYC was definitely a step forward, as a rule we only move forward, we believe that ICONIC can and will help us connect with the best investors and help us grow in the NY market.”

While Serbot requires building owners to purchase their proprietary suction cleaning system, Skyline’s machine, called Ozmo, integrates seamlessly with existing equipment. Schwarcz explains, “We use the existing scaffold of the building in contrast to GEKKO’s use of suction. The use of the arms is to copy the human arms which is the only way to fully maintain the entire building and all its complexity. The Ozmo system is not only a window cleaner, it’s a platform for all types of facade maintenance. Ozmo does not need any humans on the rig, never putting people in danger.” Schwarcz further shared with me the results of early case studies in Israel in which Ozmo cleaned an entire vertical glass building in 80 hours, with one supervisor remotely controlling the operation from the ground, adding that it did so with “no breaks.”

While Serbot and Skyline offer an optimistic view of the future, past efforts have been met with skepticism. In a 2014 New York Times article, written days after the two window washers almost fell to their deaths, the paper concluded that “washing windows is something that machines still cannot do as well.” The Times interviewed building exterior consultant Craig S. Caulkins, who stated then, “Robots have problems.” Caulkins said the setback for automation has been the quality of work, citing numerous examples of dirty window corners. “If you are a fastidious owner wanting clean, clean windows so you can take advantage of that very expensive view that you bought, the last thing you want to see is that gray area around the rim of the window,” exclaimed Caulkins. Furthermore, New York City’s window washers are represented by a very active labor union, S.E.I.U. Local 32BJ. The fear of robots replacing their members could lead to citywide protests and strikes. The S.E.I.U. 32BJ press office did not return calls for comment.

High-rise window washing in New York is very much part of the folklore of the Big Apple. One of the best-selling local children’s books, “Window Washer: At Work Above the Clouds,” profiles the former Twin Towers cleaner Roko Camaj. In 1995, Camaj predicted that “Ten years from now, all window washing will probably be done by a machine.” Unfortunately, Camaj never lived to see the innovations of GEKKO and Ozmo, as he perished in the Towers on September 11, 2001.

Automating high-risk professions will be explored further on June 13th @ 6pm in NYC with Democratic Presidential Candidate Andrew Yang and New York Assemblyman Clyde Vanel at the next RobotLab on “The Politics Of Automation” – Reserve Today!

ANYbotics wins ICRA 2018 Robot Launch competition!

The four-legged design of ANYmal allows the robot to conquer difficult terrain such as gravel, sand, and snow. Photo credit: ETH Zurich / Andreas Eggenberger.

ANYbotics led the way in the ICRA 2018 Robot Launch Startup Competition on May 22, 2018 at the Brisbane Conference Center in Australia. Although ANYbotics pitched last out of the 10 startups presenting, they clearly won over the judges and audience. As competition winners, ANYbotics received a $3,000 prize from QUT bluebox, Australia’s robotics accelerator (currently taking applications for 2018!), plus Silicon Valley Robotics membership and mentoring from The Robotics Hub.

ANYbotics is a Swiss startup creating fabulous four-legged robots like ANYmal, along with their core component, ANYdrive, a highly integrated modular robotic joint actuator. Founded in 2016 by a group of ETH Zurich engineers, ANYbotics is a spin-off of the Robotic Systems Lab (RSL) at ETH Zurich.

ANYmal moves and operates autonomously in challenging terrain and interacts safely with the environment. As a multi-purpose robot platform, it is applicable on industrial indoor or outdoor sites for inspection and manipulation tasks, in natural terrain or debris areas for search and rescue tasks, or on stage for animation and entertainment. Its four legs allow the robot to crawl, walk, run, dance, jump, climb, carry — whatever the task requires.

ANYdrive is a highly integrated modular robotic joint actuator that guarantees:

  • very precise, low-impedance torque control
  • high impact robustness
  • safe interaction
  • intermittent energy storage and peak power amplification

Motor, gear, titanium spring, sensors, and motor electronics are incorporated in a compact and sealed (IP67) unit and connected by an EtherCAT and power bus. With ANYdrive joint actuators, any kinematic structure, such as a robot arm or leg, can be built without additional bearings, encoders, or power electronics.

ANYdrive’s innovative design allows for highly dynamic movements and collision maneuvers without damage from impulsive contact forces, and at the same time for highly sensitive, force-controlled interaction with the environment. This is of special interest for robots that interact with humans, such as collaborative and mobile robots.
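
In a series-elastic joint of this kind, torque is usually controlled by measuring how far the built-in spring deflects: output torque equals the spring stiffness times the difference between the gearbox angle and the joint angle, and a feedback loop drives the motor until that torque matches the setpoint. The Python sketch below illustrates the idea generically; the stiffness, gain, and interfaces are assumptions, not ANYdrive’s actual control firmware.

SPRING_K = 80.0   # assumed spring stiffness [Nm/rad]
KP = 2.5          # assumed proportional gain on torque error
DT = 0.001        # assumed 1 kHz control period [s]

def measured_torque(motor_angle, joint_angle):
    """Torque inferred from deflection of the series spring."""
    return SPRING_K * (motor_angle - joint_angle)

def torque_control_step(desired_torque, motor_angle, joint_angle):
    """One control update: command a motor velocity proportional to the torque
    error so the spring winds up until the output torque matches the setpoint."""
    error = desired_torque - measured_torque(motor_angle, joint_angle)
    return motor_angle + KP * error * DT

# Tiny simulation with the joint held still at 0 rad and a 10 Nm setpoint.
motor_angle = 0.0
for _ in range(5000):
    motor_angle = torque_control_step(10.0, motor_angle, 0.0)
print(round(measured_torque(motor_angle, 0.0), 2))  # settles near 10 Nm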

ICRA 2018 finalists and judges: Roland Siegwart from ETH Zurich, Juliana Lim from SG Innovate, Yotam Rosenbaum from QUT bluebox, Martin Duursma from Main Sequence Ventures, and Chris Moehle from The Robotics Hub Fund.

The ICRA 2018 Robot Launch Startup Competition was judged by experienced roboticists, investors, and entrepreneurs. Roland Siegwart is a professor at ETH Zurich’s Autonomous Systems Lab and cofounder of many successful robotics spinouts. Juliana Lim is Head of Talent at SG Innovate, a Singapore venture capital arm specializing in pre-seed, seed, startup, early-stage, and Series A investments in deep technologies, starting with artificial intelligence (AI) and robotics.

Yotam Rosenbaum is the ICT Entrepreneur in Residence at QUT bluebox, building on successful exits from global startups. Martin Duursma is a venture partner in Main Sequence Ventures, Australia’s new innovation fund specializing in AI, robotics and deep tech like biotech, quantum computing and the space industry. Chris Moehle is the managing partner at The Robotics Hub Fund, who may invest up to $250,000 in the overall winner of the Robot Launch Startup Competition 2018.

Organized by Silicon Valley Robotics, the Robot Launch competition is in its fifth year and has seen hundreds of startups from more than 20 countries around the globe. The MC for the evening, Silicon Valley Robotics Director Andra Keay, said, “Some of the best robotics startups come from places like Switzerland or Australia, but to get funding and to grow fast, they usually need to spend some time in Silicon Valley.”

“The Robot Launch competition allows us to reach startups from all over the world and get them in front of top investors. Many of these startups have gone on to win major events and awards like TechCrunch Battlefield and CES Innovation Awards. So we know that robotics is also coming of age.”

As well as ANYbotics, the other nine startups gave great pitches. In order of appearance, they were:

  • Purple Robotics
  • Micromelon Robotics
  • EXGwear
  • HEBI Robotics
  • Abyss Solutions
  • EyeSyght
  • Niska Retail Robotics
  • Aubot
  • Sevensense

Purple Robotics creates drones for work, which fly three times longer than, or carry three times the payload of, existing commercial drones, thanks to their innovative design. They are not standard quadcopters, but they use the same battery technology. Purple Robotics drones are also gust-resistant, providing maximum stability in the air and enabling them to fly closer to structures.

Micromelon creates a seamless integration between visual and text coding, with the ability to translate between the two languages in real time. Students and teachers are able to quickly begin programming the wireless robots. The teacher dashboard and software are designed to work together to assist teachers who may have minimal experience in coding to guide a class of students through the transition. Students can backtrack to blocks, see how the program looks as text, or view both at once, so they are supported throughout the entire journey.

EXGwear is currently developing a “hands-free,” intuitive interaction method in the form of a portable wearable device that is extremely compact, non-obtrusive, and comfortable to wear for long hours, to help disabled people solve their daily interaction problems with the environment. Our first product, EXGbuds, is a customizable earbud-like device based on patent-pending biosensing technology and a machine-learning-enabled app. It can measure eye-movement and facial-expression physiological signals with extremely high accuracy to generate user-specific, actionable commands for seamless interaction with smart IoT and robotic devices.

HEBI Robotics produces Lego-like robotic building blocks. Our platform consists of hardware and software that make it easy to design, build, and program world-class robots quickly. Our hardware platform is robust, flexible, and safe. Our cross-platform software development tools take care of the difficult math required to develop a robot, so that the roboticist can focus on the creative aspects of robot design.

Abyss Solutions delivers key innovations in remotely operated vehicles (ROVs) and sensor technology to collect high-fidelity, multi-modal data comprehensively across underwater inspections. By pushing the state of the art in machine learning and data analytics, accurate and efficient condition assessments can be conducted and used to build an asset database. The database is able to grow over repeat inspections, and the objectivity of the analytics enables automated change tracking. The output is a comprehensive asset representation that can enable efficient risk management for critical infrastructure.

EyeSyght is TV for your fingers. As humans, we use our senses to gather information, analyse the environment around us, and create a mental picture of our surroundings. But what about touch? When we operate our smartphones, tablets, and computers, we interact with a flat piece of glass. Now, through the use of haptic feedback, electrical impulses, and ultrasound, EyeSyght will enable any surface to render shapes, textures, depth, and much more.

Niska Retail Robotics is reimagining retail, starting with ice cream. “Customer demands are shifting away from products and towards services and experiences” (CSIRO, 2017). Niska creates wonderful customer experiences with robot servers scooping out delicious gourmet ice cream for you, 24/7.

Aubot (‘au’ means ‘to meet’ in Japanese, and the name is pronounced “our-bot”) is focused on building robots that help us in our everyday lives. The company was founded in April 2013 by Marita Cheng, Young Australian of the Year 2012. Our first product, Teleport, is a telepresence robot. Teleport will reduce people’s need to travel while allowing them greater freedom to explore new surroundings. In the future, aubot aims to combine Jeva and Teleport to create a telepresence robot with an arm attached.

Sevensense (still based at the ETH Zurich Autonomous Systems Lab) provides a visual localization system tailored to the needs of professional service robots. The use of cameras instead of laser rangefinders enables our product to perform more reliably, particularly in dynamic and geometrically ambiguous environments, and allows for a cost advantage. In addition, we offer market-specific application modules along with the engineering services to successfully apply our product on the customer’s machinery.

We thank all the startups for sharing their pitches with us – the main hall at ICRA was packed and we look forward to hearing from more startups in the next rounds of Robot Launch 2018.

#261: Cozmo, by Anki, with Andrew Neil Stein

In this episode, Abate interviews Andrew Stein from Anki. At Anki, they developed an engaging robot called Cozmo, which packs sophisticated robotic software inside a lifelike, palm-sized robot. Cozmo recognizes people and objects around him and plays games with them. Cozmo is unique in that a large amount of development effort has gone into making his animations and behavior feel natural, in addition to focusing on classical robotic elements such as computer vision and object manipulation.

Andrew Neil Stein

Andrew Stein is the Head of Robotics & AI at Anki, where he began working on the Cozmo project more than four years ago as the team’s first member. He has contributed to several core systems of the product, including vision, cube manipulation, animation streaming, localization, high-level behaviors, and low-level actions. He earned his Ph.D. from the Robotics Institute at Carnegie Mellon University, and his Bachelor’s and Master’s degrees in Electrical and Computer Engineering from the Georgia Institute of Technology.

Nearly 1000 research videos from #ICRA2018

The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s flagship conference and a premier international forum for robotics researchers to present their work. ICRA 2018 is just wrapping up in Brisbane, Australia.

Robohub will be bringing you stories and podcasts in the weeks ahead.

In the meantime, have a look at the #ICRA2018 tweets and nearly 1000 research spotlight videos from the conference!


Garbage-collecting aqua drones and jellyfish filters for cleaner oceans

An aqua drone developed by the WasteShark project can collect litter in harbors before it gets carried out into the open sea. Image credit – WasteShark

By Catherine Collins

The cost of sea litter in the EU has been estimated at up to €630 million per year. It is mostly composed of plastics, which take hundreds of years to break down in nature, and has the potential to affect human health through the food chain because plastic waste is eaten by the fish that we consume.

‘I’m an accidental environmentalist,’ said Richard Hardiman, who runs a project called WASTESHARK. He says that while walking at his local harbour one day he stopped to watch two men struggle to scoop litter out of the sea using a pool net. Their inefficiency bothered Hardiman, and he set about trying to solve the problem. It was only when he delved deeper into the issue that he realised how damaging marine litter, and plastic in particular, can be, he says.

‘I started exploring where this trash goes – ocean gyres (circular currents), junk gyres, and they’re just full of plastic. I’m very glad that we’re now doing something to lessen the effects,’ he said.

Hardiman developed an unmanned robot, an aqua drone that cruises around urban waters such as harbours, marinas and canals, eating up marine litter like a Roomba of the sea. The waste is collected in a basket which the WasteShark then brings back to shore to be emptied, sorted and recycled.

The design of the autonomous drone is modelled on a whale shark, the ocean’s largest known fish. These giant filter feeders swim around with their mouths open and lazily eat whatever crosses their path.

The drone is powered by rechargeable electric batteries, ensuring that it doesn’t pollute the environment through oil spillage or exhaust fumes, and it is relatively quiet, avoiding noise pollution. It produces zero carbon emissions, and the device moves quite slowly, allowing fish and birds simply to swim away when it gets too close for comfort.

‘We’ve tested it in areas of natural beauty and natural parks where we know it doesn’t harm the wildlife,’ said Hardiman. ‘We’re quite fortunate in that, all our research shows that it doesn’t affect the wildlife around.’

WasteShark’s autonomous drone is modelled on a whale shark. Credit – RanMarine Technology

WasteShark is one of a number of new inventions designed to tackle the problem of marine litter. A project called CLAIM is developing five different kinds of technology, one of which is a plasma-based tool called a pyrolyser. 

Useful gas

CLAIM’s pyrolyser will use heat treatment to break down marine litter to a useful gas. Plasma is basically ionised gas, capable of reaching very high temperatures of thousands of degrees. Such heat can break chemical bonds between atoms, converting waste into a type of gas called syngas.

The pyrolyser will be mounted onto a boat collecting floating marine litter – mainly large items of plastic which, if left in the sea, will decay into microplastic – so that the gas can then be used as an eco-friendly fuel to power the boat, or to provide energy for heating in ports.

Dr Nikoleta Bellou of the Hellenic Centre for Marine Research, one of the project coordinators of CLAIM, said: ‘We know that we humans are actually the key drivers for polluting our oceans. Unlike organic material, plastic never disappears in nature and it accumulates in the environment, especially in our oceans. It poses a threat not only to the health of our oceans and to the coasts but to humans, and has social, economic and ecological impacts.’

The researchers chose areas in the Mediterranean and Baltic Seas to act as their case studies throughout the project, and will develop models that can tell scientists which areas are most likely to become litter hotspots. A range of factors influence how littered a beach may be – it’s not only affected by litter louts in the surrounding area but also by circulating winds and currents which can carry litter great distances, dumping the waste on some particular beaches rather than others.
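
One way to picture such a hotspot model is as many virtual litter particles drifting under wind and current fields until they strand on a stretch of coast; counting where particles end up gives a relative ranking of likely hotspots. The Python sketch below is only a toy illustration of that idea with made-up drift fields, not the CLAIM project’s actual model.

import random

random.seed(0)
COAST_BINS = 10                 # a stretch of coast split into 10 segments
strandings = [0] * COAST_BINS

def drift(x, y, steps=400):
    """Advect one particle with a made-up eastward current, random wind gusts,
    and a slow push toward the coast at y = 0; return the landing segment."""
    for _ in range(steps):
        x += 0.02 + random.gauss(0.0, 0.01)
        y -= 0.01 + random.gauss(0.0, 0.005)
        if y <= 0.0:
            return int(x * COAST_BINS) % COAST_BINS
    return None                 # still at sea after the simulated period

for _ in range(1000):           # release particles offshore
    segment = drift(random.random(), random.uniform(0.5, 1.0))
    if segment is not None:
        strandings[segment] += 1

print(strandings)               # larger counts suggest likelier litter hotspots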

CLAIM’s other methods to tackle plastic pollution include a boom – a series of nets criss-crossing a river that catches all the large litter that would otherwise travel to the sea. The nets are then emptied and the waste is collected for treatment with the pyrolyser. There have been problems with booms in the past, when bad weather conditions cause the nets to overload and break, but CLAIM will use automated cameras and other sensors that could alert relevant authorities when the nets are full.

Microplastics

Large plastic pieces that can be scooped out of the water are one thing, but tiny particles known as microplastics that are less than 5mm wide pose a different problem. Scientists on the GoJelly project are using a surprising ingredient to create a filter that prevents microplastics from entering the sea – jellyfish slime.

The filter will be deployed at waste water management plants, a known source of microplastics. The method has already proven to be successful in the lab, and now GoJelly is planning to upscale the biotechnology for industrial use.

Dr Jamileh Javidpour of the GEOMAR Helmholtz Centre for Ocean Research Kiel, who coordinates the project, said: ‘We have to be innovative to stop microplastics from entering the ocean.’

The GoJelly project kills two birds with one stone – tackling the issue of microplastics while simultaneously addressing the problem of jellyfish blooms, where the creatures reproduce in high enough levels to blanket an area of ocean.

Jellyfish are one of the most ancient creatures on the planet, having swum in Earth’s oceans during the time of the dinosaurs. On the whole, due to a decline in natural predators and changes in the environment, they are thriving. When they bloom, jellyfish can endanger swimmers and disrupt fisheries.

Fishermen often throw caught jellyfish back into the sea as a nuisance but, according to Dr Javidpour, jellyfish can be used much more sustainably. Not only can their slime be used to filter out microplastics, they can also be used as feed for aquaculture, for collagen in anti-ageing products, and even in food.

In fact, part of the GoJelly project involves producing a cookbook, showing people how to make delicious dishes from jellyfish. While Europeans may not be used to cooking with jellyfish, in many Asian cultures they are a daily staple. However, Dr Javidpour stresses that the goal is not to replace normal fisheries.

‘We are mainly ecologists, we know the role of jellyfish as part of a healthy ecosystem,’ she said. ‘We don’t want to switch from classical fishery to jellyfish fishery, but it is part of our task to investigate if it is doable, if it is sustainable.’

The research in this article has been funded by the EU.

Fleet of autonomous boats could service some cities, reducing road traffic

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab have designed a fleet of autonomous boats that offer high maneuverability and precise control.
Courtesy of the researchers
By Rob Matheson

The future of transportation in waterway-rich cities such as Amsterdam, Bangkok, and Venice — where canals run alongside and under bustling streets and bridges — may include autonomous boats that ferry goods and people, helping clear up road congestion.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab in the Department of Urban Studies and Planning (DUSP) have taken a step toward that future by designing a fleet of autonomous boats that offer high maneuverability and precise control. The boats can also be rapidly 3-D printed using a low-cost printer, making mass manufacturing more feasible.

The boats could be used to taxi people around and to deliver goods, easing street traffic. In the future, the researchers also envision the driverless boats being adapted to perform city services overnight, instead of during busy daylight hours, further reducing congestion on both roads and canals.

“Imagine shifting some of the infrastructure services that usually take place during the day on the road — deliveries, garbage management, waste management — to the middle of the night, on the water, using a fleet of autonomous boats,” says CSAIL Director Daniela Rus, co-author on a paper describing the technology that’s being presented at this week’s IEEE International Conference on Robotics and Automation.

Moreover, the boats — rectangular 4-by-2-meter hulls equipped with sensors, microcontrollers, GPS modules, and other hardware — could be programmed to self-assemble into floating bridges, concert stages, platforms for food markets, and other structures in a matter of hours. “Again, some of the activities that are usually taking place on land, and that cause disturbance in how the city moves, can be done on a temporary basis on the water,” says Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

The boats could also be equipped with environmental sensors to monitor a city’s waters and gain insight into urban and human health.

Co-authors on the paper are: first author Wei Wang, a joint postdoc in CSAIL and the Senseable City Lab; Luis A. Mateos and Shinkyu Park, both DUSP postdocs; Pietro Leoni, a research fellow, and Fábio Duarte, a research scientist, both in DUSP and the Senseable City Lab; Banti Gheneti, a graduate student in the Department of Electrical Engineering and Computer Science; and Carlo Ratti, a principal investigator and professor of the practice in the DUSP and director of the MIT Senseable City Lab.

Better design and control

The work was conducted as part of the “Roboat” project, a collaboration between the MIT Senseable City Lab and the Amsterdam Institute for Advanced Metropolitan Solutions (AMS). In 2016, as part of the project, the researchers tested a prototype that cruised around the city’s canals, moving forward, backward, and laterally along a preprogrammed path.

The ICRA paper details several important new innovations: a rapid fabrication technique, a more efficient and agile design, and advanced trajectory-tracking algorithms that improve control, precision docking and latching, and other tasks. 

To make the boats, the researchers 3-D-printed a rectangular hull with a commercial printer, producing 16 separate sections that were spliced together. Printing took around 60 hours. The completed hull was then sealed by adhering several layers of fiberglass.

Integrated onto the hull are a power supply, Wi-Fi antenna, GPS, and a minicomputer and microcontroller. For precise positioning, the researchers incorporated an indoor ultrasound beacon system and outdoor real-time kinematic GPS modules, which allow for centimeter-level localization, as well as an inertial measurement unit (IMU) module that monitors the boat’s yaw and angular velocity, among other metrics.
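
A common way to combine slow, absolute position fixes (from the RTK GPS or the ultrasound beacons) with fast inertial data is a complementary filter: dead-reckon between fixes and nudge the estimate toward each new fix with a small gain. The Python sketch below shows that idea for a single position axis; the gains, rates, and structure are illustrative assumptions rather than the estimator used in the paper.

ALPHA = 0.2   # assumed correction gain applied whenever a position fix arrives
DT = 0.05     # assumed 20 Hz update period [s]

def fuse(position, velocity, fix=None):
    """Predict with the velocity estimate, then correct toward the fix if present."""
    position += velocity * DT            # prediction from inertial/odometry data
    if fix is not None:
        position += ALPHA * (fix - position)
    return position

# Toy run: the boat moves at 0.5 m/s and a fix arrives every 10th cycle.
position, velocity = 0.0, 0.5
for step in range(100):
    fix = 0.5 * step * DT if step % 10 == 0 else None
    position = fuse(position, velocity, fix)
print(round(position, 2))                # estimated position after 5 seconds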

The boat has a rectangular shape, instead of the traditional kayak or catamaran shapes, to allow the vessel to move sideways and to attach itself to other boats when assembling other structures. Another simple yet effective design element was thruster placement. Four thrusters are positioned one at the center of each side, instead of at the four corners, generating forward and backward forces. This makes the boat more agile and efficient, the researchers say.
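
With one thruster centered on each of the four sides, mapping a desired forward force, sideways force, and turning moment onto the four thrusters becomes a small linear allocation problem: the side pair shares the surge force, the bow and stern pair share the sway force, and each pair contributes half of the yaw moment. The Python sketch below shows one plausible allocation for that layout; the hull dimensions and sign conventions are assumptions based on the description, not the authors’ exact controller.

HALF_LENGTH = 2.0   # assumed half-length of the 4-by-2-meter hull [m]
HALF_WIDTH = 1.0    # assumed half-width of the hull [m]

def allocate(surge, sway, yaw):
    """Split desired surge/sway forces [N] and yaw moment [Nm] across four
    thrusters, one centered on each side of the rectangular hull."""
    left = surge / 2 - yaw / (4 * HALF_WIDTH)
    right = surge / 2 + yaw / (4 * HALF_WIDTH)
    bow = sway / 2 + yaw / (4 * HALF_LENGTH)
    stern = sway / 2 - yaw / (4 * HALF_LENGTH)
    return left, right, bow, stern

# Example: push forward with 10 N while applying a 2 Nm turning moment.
print(allocate(10.0, 0.0, 2.0))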

The team also developed a method that enables the boat to track its position and orientation more quickly and accurately. To do so, they developed an efficient version of a nonlinear model predictive control (NMPC) algorithm, generally used to control and navigate robots within various constraints.

The NMPC and similar algorithms have been used to control autonomous boats before, but typically those algorithms are tested only in simulation or don’t account for the dynamics of the boat. The researchers instead incorporated into the algorithm simplified nonlinear mathematical models that account for a few known parameters, such as the drag of the boat, centrifugal and Coriolis forces, and added mass due to accelerating or decelerating in water. They also used an identification algorithm that identifies any remaining unknown parameters as the boat is trained on a path.
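
Model predictive control of this kind repeatedly solves a short-horizon optimization: given a simplified dynamics model, find the thrust sequence whose predicted trajectory stays closest to the reference path, apply only the first thrust, then re-solve at the next step. The Python sketch below shows that receding-horizon loop for a toy one-dimensional boat with linear drag; the model, horizon, weights, and solver are illustrative assumptions, not the NMPC formulation in the paper.

import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.2, 10        # assumed 0.2 s steps and a 10-step lookahead
MASS, DRAG = 30.0, 5.0       # toy hull mass [kg] and linear drag [N s/m]

def rollout(state, thrusts):
    """Predict future positions from a thrust sequence with a 1-D drag model."""
    pos, vel = state
    positions = []
    for u in thrusts:
        vel += (u - DRAG * vel) / MASS * DT
        pos += vel * DT
        positions.append(pos)
    return np.array(positions)

def mpc_step(state, reference):
    """Solve the finite-horizon tracking problem and return the first thrust."""
    def cost(thrusts):
        error = rollout(state, thrusts) - reference
        return np.sum(error ** 2) + 1e-3 * np.sum(thrusts ** 2)
    result = minimize(cost, np.zeros(HORIZON), method="L-BFGS-B",
                      bounds=[(-20.0, 20.0)] * HORIZON)
    return result.x[0]

# Track a reference that advances at 0.5 m/s, re-planning every step.
state = [0.0, 0.0]           # position [m], velocity [m/s]
for k in range(20):
    reference = 0.5 * DT * np.arange(k + 1, k + 1 + HORIZON)
    u = mpc_step(state, reference)
    state[1] += (u - DRAG * state[1]) / MASS * DT
    state[0] += state[1] * DT
print([round(s, 2) for s in state])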

Finally, the researchers used an efficient predictive-control platform to run their algorithm, which can rapidly determine upcoming actions and increases the algorithm’s speed by two orders of magnitude over similar systems. While other algorithms execute in about 100 milliseconds, the researchers’ algorithm takes less than 1 millisecond.

Testing the waters

To demonstrate the control algorithm’s efficacy, the researchers deployed a smaller prototype of the boat along preplanned paths in a swimming pool and in the Charles River. Over the course of 10 test runs, the researchers observed average tracking errors — in positioning and orientation — smaller than tracking errors of traditional control algorithms.

That accuracy is thanks, in part, to the boat’s onboard GPS and IMU modules, which determine position and direction, respectively, down to the centimeter. The NMPC algorithm crunches the data from those modules and weighs various metrics to steer the boat true. The algorithm is implemented in a controller computer and regulates each thruster individually, updating every 0.2 seconds.

“The controller considers the boat dynamics, current state of the boat, thrust constraints, and reference position for the coming several seconds, to optimize how the boat drives on the path,” Wang says. “We can then find optimal force for the thrusters that can take the boat back to the path and minimize errors.”

The innovations in design and fabrication, as well as faster and more precise control algorithms, point toward feasible driverless boats used for transportation, docking, and self-assembling into platforms, the researchers say.

A next step for the work is developing adaptive controllers to account for changes in mass and drag of the boat when transporting people and goods. The researchers are also refining the controller to account for wave disturbances and stronger currents.

“We actually found that the Charles River has much more current than in the canals in Amsterdam,” Wang says. “But there will be a lot of boats moving around, and big boats will bring big currents, so we still have to consider this.”

The work was supported by a grant from AMS.
