
Classroom robotics: Training teachers to code

*All images credit: ROBBO

Thirty teachers arrived, excited to learn. They rolled up their sleeves and placed laptops and Robot kits on the floor. The room filled with excitement (and laughter!) as everyone tried out different ways of building their programs. The results were hilarious: a robot inspired by Darth Vader, a robot that asked everyone to turn the lights off when it was too bright in the room, and a robot that tricked the teacher into leaving the classroom during an exam.

Not bad for a day of “work!”

Training like this is what we’re all about at ROBBO. ROBBO is a fun and simple way for absolutely anyone to get introduced to the world of robotics and coding. As part of one of our many projects, we organized a training weekend with the single purpose of introducing teachers to programming and robotics. The teachers started with simple exercises in RobboScratch, a visual programming environment: moving the character, chaining multiple commands into sequences, and learning the advantages of the infinite loop when writing programs.
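The “infinite loop” lesson is easy to show outside the block-based editor, too. Here is a minimal Python sketch of the kind of program the teachers built, with a mocked-up light sensor standing in for real hardware (the sensor and message functions are invented for illustration, not ROBBO’s actual API):

```python
def read_light_sensor():
    """Stand-in for a real sensor read; returns brightness on a 0-100 scale."""
    return 80  # pretend the room is bright


def run(cycles=3):
    """The RobboScratch 'forever' block, expressed as a loop.

    Without the loop the brightness check would run exactly once and the
    program would end; looping lets the robot keep reacting as the room
    changes. The loop is bounded here only so the sketch terminates.
    """
    messages = []
    for _ in range(cycles):
        if read_light_sensor() > 60:
            messages.append("Too bright! Please turn the lights off.")
    return messages


print(run())
```

In RobboScratch the same structure is a forever block wrapping an if block; the point of the exercise is that the reactive behaviour comes from the loop, not from the check itself.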

So, what do we mean by classroom robotics? Our educational robotics line consists of two different robots: the Robot kit and the Lab. Both are ideal for learning programming and robotics, as well as problem-solving, mathematics, and physics, while working in interactive teams. The Lab includes a microphone, LED lights, a light sensor, and a slider, and is great for experimenting with elements such as sound, light, and numeric values. Our other robot, the Robot kit, is equipped with a motor and is a fun way to explore everyday technology using a touch sensor, proximity sensor, light sensor, line sensors, and an LED light. Our robots are programmed in the visual programming environment RobboScratch, an adapted version of the Scratch language developed at MIT.

In our earlier example, teachers were divided into separate workshops, working in pairs or teams of three. We believe it is important to communicate and discuss with others to better understand different programs and to come up with alternative solutions when a program doesn’t work in the desired way. The workshops are all based on exercises from our pedagogical guide, and teachers were given a copy of the guide for their own use. The guide provides instructions and multiple exercise cards (with solutions!) and is free to download here (http://robbo.world/support/).

Our teaching guide is for anyone who wants to learn the basics of programming with the help of ROBBO™ robotics and RobboScratch. Our pedagogical guide is a comprehensive educational tool with instructions, exercise cards and ideas for creating the ultimate learning experience. It has been developed together with Innokas Network at the University of Helsinki and Finnish teachers and students. The majority of the teachers that participated in the training had only limited knowledge of Scratch or Scratch Junior and, therefore, we started from the beginning.

The pedagogical guide includes an introduction to RobboScratch, the Lab, and the Robot kit, as well as up to 28 exercise cards to help you along the way. The exercises are designed to develop the necessary programming skills step by step, teaching children to think logically as a software developer would, which is also useful in many everyday situations. These skills include, in particular, the ability to understand the whole, to split a problem into smaller parts, and to develop a simple program to perform an operation. In the initial exercises, students make a program from a predefined model, but as the practice progresses, they get more and more space for their own ideas.

By developing new skills, users are encouraged to plan and develop innovations in robotics. The goal of the training is to learn to understand and use technology to invent something new. As the final assignment of the teachers’ training, we asked the teachers to form teams of four and come up with a small prank using the different capabilities and sensors of the Robot kit, the Lab, or both robots at once.

If you’d like to learn more about ROBBO or download our free guide, visit our website: http://robbo.world/support/

Comments from teachers:

“The teaching guide is a great support when learning coding. And I can just hand out these ready-made exercise cards to my students as well!”

“The exercises in the guide are good for understanding the different possibilities you have with the robots, because when you start doing an exercise you come up with more ideas on how to develop a more complicated program.”

“The robots emphasized practicality in the learning process. In addition to programming, ROBBO teaches environmental studies and all-around useful skills, in particular when the exercises of the pedagogical guide are being utilized.”

If you’d like to learn more about classroom robotics, check out these articles:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Living and working with robots: Live coverage of #ERF2017

Over 800 leading scientists, companies, and policymakers working in robotics will convene at the European Robotics Forum (#ERF2017) in Edinburgh, 22-24 March. This year’s theme is “Living and Working With Robots” with a focus on applications in manufacturing, disaster relief, agriculture, healthcare, assistive living, education, and mining.

The 3-day programme features keynotes, panel discussions, workshops, and plenty of robots roaming the exhibit floor.

We’ll be updating this post regularly with live tweets and videos. You can also follow all the Robohub coverage here.

Engineers design “tree-on-a-chip”

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants.

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing small-scale hydraulic robots that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.
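The osmotic driving force described above is substantial. As a back-of-envelope check (our own, not a calculation from the paper), the van ’t Hoff relation for a dilute solution, pi = c·R·T, gives the pressure difference a sugar concentration gradient can sustain:

```python
R = 8.314  # molar gas constant, J/(mol*K)


def osmotic_pressure(conc_mol_per_litre, temp_K=298.0):
    """van 't Hoff relation for a dilute solution: pi = c * R * T, in pascals."""
    c = conc_mol_per_litre * 1000.0  # convert mol/L to mol/m^3
    return c * R * temp_K


# A 0.5 M sugar difference across the xylem-phloem membrane at room temperature:
pi = osmotic_pressure(0.5)
print(f"{pi / 1e6:.2f} MPa")  # ~1.24 MPa, roughly 12 atmospheres
```

That is far more pressure than a tiny mechanical pump of comparable size could easily deliver, which is why a passive osmotic pump is an attractive way to drive fluid through a chip.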

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
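Comtet’s qualitative argument, that a sugar source is what turns a transient pump into a steady one, can be caricatured in a few lines. This is a toy model of our own, not the model from the paper: flow is taken to be proportional to the phloem sugar level, and the flow in turn flushes sugar away.

```python
def final_flow(source_rate, washout=1.0, steps=5000, dt=0.01):
    """Euler-integrate a toy phloem sugar balance and return the final flow.

    d(sugar)/dt = source_rate - washout * flow, with flow = sugar.
    With no source, the sugar (and hence the flow) decays to zero;
    with a leaf-like source, the flow settles at source_rate / washout.
    """
    sugar = 1.0  # initial sugar loading in the phloem channel
    for _ in range(steps):
        flow = sugar  # osmotic flow tracks the sugar level
        sugar += (source_rate - washout * flow) * dt
    return sugar  # final flow equals the remaining sugar


print(final_flow(0.0))  # no sugar source: pumping dies away, as in earlier chips
print(final_flow(0.4))  # constant source: flow settles at a steady value
```

The cartoon reproduces the qualitative result: without replenishment the gradient washes out and pumping stops, while a constant source holds the system at a steady operating point.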

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency.

Worm-inspired material strengthens, changes shape in response to its environment

The Nereis virens worm inspired new research out of the MIT Laboratory for Atomistic and Molecular Mechanics. Its jaw is made of soft organic material, but is as strong as harder materials such as human dentin. Photo: Alexander Semenov/Wikimedia Commons

A new material that naturally adapts to changing environments was inspired by the strength, stability, and mechanical performance of the jaw of a marine worm. The protein material, which was designed and modeled by researchers from the Laboratory for Atomistic and Molecular Mechanics (LAMM) in the Department of Civil and Environmental Engineering (CEE), and synthesized in collaboration with the Air Force Research Lab (AFRL) at Wright-Patterson Air Force Base, Ohio, expands and contracts based on changing pH levels and ion concentrations. It was developed by studying how the jaw of Nereis virens, a sand worm, forms and adapts in different environments.

The resulting pH- and ion-sensitive material is able to respond and react to its environment. Understanding this naturally-occurring process can be particularly helpful for active control of the motion or deformation of actuators for soft robotics and sensors without using external power supply or complex electronic controlling devices. It could also be used to build autonomous structures.

“The ability of dramatically altering the material properties, by changing its hierarchical structure starting at the chemical level, offers exciting new opportunities to tune the material, and to build upon the natural material design towards new engineering applications,” wrote Markus J. Buehler, the McAfee Professor of Engineering, head of CEE, and senior author of the paper.

The research, recently published in ACS Nano, shows that depending on the ions and pH levels in the environment, the protein material expands and contracts into different geometric patterns. When the conditions change again, the material reverts to its original shape. This makes it particularly useful for smart composite materials with tunable mechanics, and for self-powered robotics that use pH and ion conditions to change material stiffness or generate functional deformations.

Finding inspiration in the strong, stable jaw of a marine worm

In order to create bio-inspired materials that can be used for soft robotics, sensors, and other applications, the engineers and scientists at LAMM and AFRL first needed to understand how these materials form in the Nereis worm, and how they ultimately behave in various environments. That meant developing a model that spans length scales from the atomic level up and can predict the material’s behavior. The model helps to fully explain the Nereis jaw and its exceptional strength.

“Working with AFRL gave us the opportunity to pair our atomistic simulations with experiments,” said CEE research scientist Francisco Martin-Martinez. AFRL experimentally synthesized a hydrogel, a gel-like material made mostly of water, which is composed of recombinant Nvjp-1 protein responsible for the structural stability and impressive mechanical performance of the Nereis jaw. The hydrogel was used to test how the protein shrinks and changes behavior based on pH and ions in the environment.

The Nereis jaw is mostly made of organic matter, meaning it is a soft protein material with a consistency similar to gelatin. In spite of this, its hardness, reported to range between 0.4 and 0.8 gigapascals (GPa), is similar to that of harder materials like human dentin. “It’s quite remarkable that this soft protein material, with a consistency akin to Jell-O, can be as strong as calcified minerals that are found in human dentin and harder materials such as bones,” Buehler said.

At MIT, the researchers looked at the makeup of the Nereis jaw on a molecular scale to see what makes the jaw so strong and adaptive. At this scale, the metal-coordinated crosslinks, the presence of metal in its molecular structure, provide a molecular network that makes the material stronger and at the same time make the molecular bond more dynamic, and ultimately able to respond to changing conditions. At the macroscopic scale, these dynamic metal-protein bonds result in an expansion/contraction behavior.

Combining the protein structural studies from AFRL with the molecular understanding from LAMM, Buehler, Martin-Martinez, CEE Research Scientist Zhao Qin, and former PhD student Chia-Ching Chou ’15, created a multiscale model that is able to predict the mechanical behavior of materials that contain this protein in various environments. “These atomistic simulations help us to visualize the atomic arrangements and molecular conformations that underlay the mechanical performance of these materials,” Martin-Martinez said.

Specifically, using this model the research team was able to design, test, and visualize how different molecular networks change and adapt to various pH levels, taking into account the biological and mechanical properties.

By looking at the molecular and biological makeup of the Nereis virens jaw and using the predictive model of the mechanical behavior of the resulting protein material, the LAMM researchers were able to understand the protein material at different scales more fully, and to build a comprehensive picture of how such protein materials form and behave at different pH settings. This understanding guides new material designs for soft robots and sensors.

Identifying the link between environmental properties and movement in the material

The predictive model explains how the pH-sensitive material changes shape and behavior, which the researchers used to design new pH-responsive geometric structures. Depending on the original geometric shape of the protein material and the properties of its surroundings, the LAMM researchers found that the material either spirals or takes a Cypraea shell-like shape when the pH levels change. These are only some examples of the potential this new material holds for developing soft robots, sensors, and autonomous structures.

Using the predictive model, the research team found that the material not only changes form, but also reverts to its original shape when the pH levels change back. At the molecular level, histidine amino acids present in the protein bind strongly to ions in the environment. This very local chemical interaction between amino acids and metal ions affects the overall conformation of the protein at a larger scale. When environmental conditions change, the histidine-metal interactions change accordingly, which alters the protein conformation and, in turn, the material response.
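The switch-like character of the histidine response can be seen from the Henderson-Hasselbalch equation (a standard textbook relation, not taken from the paper). Histidine’s imidazole side chain has a pKa of roughly 6.0 for the free amino acid, though the local protein environment can shift it, and only the deprotonated form coordinates metal ions well:

```python
def protonated_fraction(pH, pKa=6.0):
    """Henderson-Hasselbalch: fraction of histidine side chains protonated.

    pKa ~ 6.0 is a typical value for free histidine; in a folded protein
    the effective pKa can differ.
    """
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))


for pH in (5.0, 6.0, 7.4):
    print(f"pH {pH}: {protonated_fraction(pH):.2f} protonated")
```

Moving from pH 5 to pH 7.4 takes the protonated fraction from about 0.91 down to about 0.04, so a modest pH shift flips most side chains between a metal-binding and a non-binding state, consistent with the conformational switching the researchers describe.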

“Changing the pH or changing the ions is like flipping a switch. You switch it on or off, depending on what environment you select, and the hydrogel expands or contracts,” said Martin-Martinez.

LAMM found that at the molecular level, the structure of the protein material is strengthened when the environment contains zinc ions and certain pH levels. This creates more stable metal-coordinated crosslinks in the material’s molecular structure, which makes the molecules more dynamic and flexible.

This insight into the material’s design and its flexibility is extremely useful for environments with changing pH levels. Its shape-changing response to changing acidity could be used in soft robotics. “Most soft robotics require a power supply to drive the motion and complex electronic devices for control. Our work toward designing multifunctional materials may provide another pathway to directly control the material property and deformation without electronic devices,” said Qin.

By studying and modeling the molecular makeup and the behavior of the primary protein responsible for the mechanical properties ideal for Nereis jaw performance, the LAMM researchers are able to link environmental properties to movement in the material and have a more comprehensive understanding of the strength of the Nereis jaw.

The research was funded by the Air Force Office of Scientific Research and the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) for the simulations.

Living and working with robots: European Robotics Forum to focus on robotics markets and future of work

Over 800 leading scientists, companies, and policymakers working in robotics will convene at the European Robotics Forum (#ERF2017) in Edinburgh, 22-24 March. This year’s theme is “Living and Working With Robots” with a focus on applications in manufacturing, disaster relief, agriculture, healthcare, assistive living, education, and mining.

The 3-day programme features keynotes, panel discussions, workshops, and plenty of robots roaming the exhibit floor. Visitors may encounter a humanoid from Pal Robotics, a bartender robot from KUKA, Shadow’s human-like hands, or the latest state-of-the-art robots from European research. Success stories from Horizon 2020, the European Union’s framework programme for research and innovation, and FP7 European projects will be on display.

Dr Cécile Huet, Deputy Head of the European Commission’s Robotics & Artificial Intelligence Unit, said, “A set of EU projects will demonstrate the broad impact of the EU funding programme in robotics: from progress in foundational research in robot learning, to touch sensing for a new dimension in intuitive human-robot cooperation, to inspection in the oil-and-gas industry, security, care, manufacturing for SMEs, and the vast applications enabled by progress in autonomous drone navigation.”

Reinhard Lafrenz, Secretary General of euRobotics said, “A rise in sales in robotics is driving the industry forward, and it’s not just benefiting companies who sell robots, but also SMEs and larger industries that use robots to increase their productivity and adopt new ways of thinking about their business. Around 80 robotics start-ups were created last year in Europe, which is truly remarkable. At euRobotics, we nurture the robotics industry ecosystem in Europe; keep an eye out for the Tech Transfer award and the Entrepreneurship award we’ll be giving out at ERF.”

Projects presented will include:

  • FUTURA – Focused Ultrasound Therapy Using Robotic Approaches
  • PETROBOT – Use cases for inspection robots opening up the oil-, gas- and petrochemical markets
  • sFly – Swarm of Micro Flying Robots
  • SMErobotics – The European Robotics Initiative for Strengthening the Competitiveness of SMEs in Manufacturing by Integrating aspects of Cognitive Systems
  • STRANDS – Spatio-Temporal Representations and Activities For Cognitive Control in Long-Term Scenarios
  • WEARHAP – WEARable HAPtics for Humans and Robots
  • Xperience – Robots Bootstrapped through Learning from Experience

The increased use of Artificial Intelligence and Machine Learning in robotics will be highlighted in two keynote presentations. Raia Hadsell, Senior Research Scientist at DeepMind will focus on deep learning, and strategies to make robots that can continuously learn and improve over time. Stan Boland, CEO of FiveAI, will talk about his company’s aim to accelerate the arrival of fully autonomous vehicles.

Professor David Lane, ERF2017 General Chair and Director of the Edinburgh Centre for Robotics, said,  “We’re delighted this year to have two invited keynotes of outstanding quality and relevance from the UK, representing both research and disruptive industrial application of robotics and artificial intelligence. EURobotics and its members are committed to the innovation that translates technology from research to new products and services. New industries are being created, with robotics providing the essential arms, legs and sensors that bring big data and artificial intelligence out of the laboratory and into the real world.”

Throughout ERF2017, emphasis will be given to the impact of robots on society and the economy. Keith Brown MSP, Cabinet Secretary for Economy, Jobs and Fair Work, who will open the event, said, “The European Robotics Forum provides an opportunity for Scotland to showcase our world-leading research and expertise in robotics, artificial intelligence and human-robot interaction. This event will shine a light on some of the outstanding developments being pioneered and demonstrates Scotland’s vital role in this globally significant area.”

In discussing robots and society, Dr Patricia A. Vargas, ERF2017 General Chair and Director of the Robotics Laboratory at Heriot-Watt University, said, “As robots gradually move to our homes and workplace, we must make sure they are fully ethical. A potential morality code for robots should include human responsibilities, and take into account how humans can interact with robots in a safe way. The European Robotics Forum is the ideal place to drive these discussions.”

Ultimately, the forum aims to understand how robots can benefit small and medium-sized businesses, and how links between industry and academia can be improved to better exploit the strength of European robotics and AI research. As robots start leaving the lab to enter our home and work environments, it becomes increasingly important to understand how they will best work alongside human co-workers and users. Issues of policy, the law, and ethics will be debated during dedicated workshops.

Dr Katrin Lohan, General Chair and Deputy Director of the Robotics Laboratory at Heriot-Watt University, said, “It is important to integrate robotics into the workflow so that it supports, rather than disrupts, human workers. The potential of natural interaction interfaces and non-verbal communication cues needs to be further explored. The synergies between robots and human workers could make all the difference for small and medium-sized businesses, and the European Robotics Forum, which brings together the industry and academic communities, is the ideal place to discuss this.”

______________________

Confirmed keynote speakers include:
Keith Brown, Cabinet Secretary for the Economy, Jobs and Fair Work, Member of the Scottish Parliament
Raia Hadsell, Senior Research Scientist at DeepMind
Stan Boland, CEO of FiveAI

The full programme can be found here.

Dates: 22 – 24 March
Venue: EICC, The Exchange, 150 Morrison St., EH3 8EE Edinburgh, Scotland
Participants: 800+ participants expected
Website: http://www.erf2017.eu/

Press Passes:
Journalists may request free press badges, or support with interviews, by emailing publicity.chairs@erf2017.eu. Please see the website for additional information.

Organisers
The European Robotics Forum is organised by euRobotics under SPARC, the Public-Private partnership for Robotics in Europe. This year’s conference is hosted by the Edinburgh Centre for Robotics.

About euRobotics and SPARC
euRobotics is a non-profit organisation based in Brussels with the objective to make robotics beneficial for Europe’s economy and society.  With more than 250 member organisations, euRobotics also provides the European Robotics Community with a legal entity to engage in a public/private partnership with the European Commission, named SPARC.

SPARC, the public-private partnership (PPP) between the European Commission and euRobotics, is a European initiative to maintain and extend Europe’s leadership in civilian robotics. Its aim is to strategically position European robotics in the world thereby securing major benefits for the European economy and the society at large.

SPARC is the largest research and innovation programme in civilian robotics in the world, with 700 million euro in funding from the European Commission between 2014 and 2020, tripled by European industry to yield a total investment of 2.1 billion euro. SPARC will stimulate an ever more vibrant and effective robotics community that collaborates in the successful development of technology transfer and commercial exploitation.

www.eu-robotics.net
www.eu-robotics.net/sparc

Press contact details:

Sabine Hauert, Robohub President
Sabine.Hauert@robohub.org

OR

Kassie Perlongo, Managing Editor
Kassie.Perlongo@robohub.org

RoboThespian stars in UK play Spillikin, a love story

In a poignant play traveling throughout the UK, a robot is co-star and companion to the wife of its (now deceased) builder, a woman developing early Alzheimer’s. The play explores very human themes of love, death, and disease, all handled extremely sensitively, with RoboThespian playing a large role.

Jon Welch, the writer and director, said of the play:

“It’s a story about a robot maker. All of his life he builds robots, but he develops degenerative illness in mid-life and realizes he’s not going to live to remain a companion to his wife. His wife, by now, is developing early Alzheimer’s, so he builds his final creation, his final robot to be a companion to his wife.”

The robot comes from Engineered Arts, a 12-year-old UK company that develops an ever-expanding range of humanoid and semi-humanoid robots featuring natural, human-like movement and advanced social behaviours. RoboThespian, Socibot and Byrun are their most prominent robot creations.

“We have pre-programmed every single thing the robot says and every single thing the robot does — all the moves. There are nearly 400 separate cues, but they are made up of other files, all stuck together, so there are probably a couple of thousand cues in reality. So the robot will always say the same thing and move the same way, depending on which cue is triggered at what particular time.”
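A cue system like the one Welch describes can be sketched as a simple lookup table: each named cue is an ordered list of clip files, and each file expands into primitive actions, which is how a few hundred cues fan out into a couple of thousand playable items. All the names below are invented for illustration; the show’s actual assets and software are not public.

```python
# Each clip file expands into primitive robot actions (names hypothetical).
clips = {
    "wave_arm.seq": ["raise_arm", "wave", "lower_arm"],
    "greet.wav":    ["play_audio:greet"],
    "nod.seq":      ["nod"],
}

# A cue is just an ordered list of clip files stuck together.
cues = {
    "CUE_017_greeting": ["wave_arm.seq", "greet.wav", "nod.seq"],
}


def expand(cue_name):
    """Deterministic playback: the same cue always yields the same actions."""
    return [action for clip in cues[cue_name] for action in clips[clip]]


print(expand("CUE_017_greeting"))
```

Because expansion is deterministic, triggering the same cue on any night of the run produces exactly the same speech and movement, which is what keeps the robot in sync with the live actor.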

This promotional video for the play is well worth watching:

The Drone Center’s Weekly Roundup: 3/20/17

The U.S. Army deployed a company of MQ-1C Gray Eagle drones to Kunsan Air Base in South Korea. Credit: Staff Sgt. Christopher Calvert/U.S. Army

March 13, 2017 – March 19, 2017

At the Center for the Study of the Drone

We spoke to Rolling Stone about the implications of recent advances in swarming drone technology for the future of warfare.

News

A U.S. airstrike in Syria involving U.S. MQ-9 Reaper drones may have resulted in the deaths of noncombatants. According to the U.K.-based Syrian Observatory for Human Rights, the strike, which reportedly hit a mosque in Jinah, killed at least 46 people. In a statement to reporters, a Pentagon spokesperson said that U.S. aircraft had not targeted the mosque, but rather al-Qaeda fighters at a community center nearby. (Washington Post)

The Wall Street Journal has reported that the Trump administration has given the CIA greater latitude to order drone strikes. If confirmed to be true, the policy shift would appear to reverse restrictions placed by the Obama administration on the intelligence agency’s role in strikes, and may reopen a disagreement with the Department of Defense over the CIA’s authority to carry out strike operations.

The U.S. Army deployed an MQ-1C Gray Eagle surveillance and strike drone unit to Kunsan Air Base in South Korea. The Gray Eagle company will be assigned to the 2nd Combat Aviation Brigade, 2nd Aviation Regiment. (AIN Online)

Canada announced new rules for recreational drone users, including a flight ceiling of 295 feet and a prohibition against flying near airports. Infractions could result in fines of over $2,000. In a statement, Transport Minister Marc Garneau said that the measures were aimed at preventing an accident involving a drone and a manned aircraft. (ABC News)

Commentary, Analysis, and Art

The U.S. Senate Committee on Commerce, Science, and Transportation held a hearing on integrating drones into the national airspace. (UAS Magazine)

At the New York Times, Rachel Nuwer takes a closer look at the benefits and challenges of  using drones to fight poachers.

The New York Times Editorial Board argues that the Trump administration should not loosen the rules of engagement for strikes and counterterrorism operations in Yemen and Somalia.

At Lawfare, Robert Chesney considers the possible consequences of the Trump administration’s reported decision to allow the CIA to order drone strikes.

At Recode, Johana Bhuiyan writes that Uber’s self-driving vehicle technology is struggling to meet expectations.

The Australian Transport Safety Bureau released a report in which it found that there was a 75 percent rise in the number of reported close encounters between drones and manned aircraft between 2012 and 2016. (PerthNow)  

Drone manufacturer DJI released a paper in which it argues that drones have saved 59 lives over the past several years. (Drone360)

At Breaking Defense, Sydney J. Freedberg Jr. looks at how automation and robotics figure into the U.S. Army’s plans for its next generation battle tank.

At DefenseNews, Meghann Myers examines the different ways that the U.S. Army is looking to protect soldiers from drones.

At the Verge, Andrew Liptak looks at how one U.S. ally used a $3 million Patriot missile to shoot down a $200 drone.

At CNBC, Michelle Castillo writes that drone racing is turning into a lucrative profession for some racers.

At Real Clear Defense, Jon Blatt explains “why drones still play second fiddle to fighters.”

At the New York Times’ Lens blog, photographer Josh Haner discusses how drones can contribute to storytelling.

At Wired, photographer Aydın Büyüktaş shares how he uses a drone, 3-D rendering, and Photoshop to create curved landscapes of the American West.

Meanwhile, at TechRepublic, Ant Pruitt offers a step-by-step guide to aerial photography for aspiring drone photographers.

Know Your Drone

U.K. firm Windhorse Aerospace revealed new details about its edible humanitarian drones, which will likely be made of compressed vegetable honeycomb and salami. (The Verge)

Online retail giant Amazon has been granted two patents for its proposed delivery drone system: an adjustable landing gear system and a propeller system with adjustable wingtips. (CNBC)

Meanwhile, Amazon displayed two of its Prime Air delivery drones at the South by Southwest  festival in Texas, the first time the systems had been displayed publicly. (Fortune)

Drone maker QuadH2O unveiled the HexH2O Pro, a waterproof commercial drone. (Unmanned Systems Technology)

Russian defense firm Kalashnikov is planning to build a 20-ton armed unmanned ground vehicle. (Popular Mechanics)

Defense firm BAE is once again displaying its Armed Robotic Combat Vehicle, a weaponized unmanned ground vehicle that it developed for the U.S. Army’s cancelled Future Combat Systems program. (Defense News)

Singapore’s Air Force has announced that its Heron 1 surveillance and reconnaissance drone has reached full operational capability. (IHS Jane’s 360)

Researchers at Georgia Tech are developing a user-friendly interface that makes it easy to control robotic arms. (IEEE Spectrum)

China Daily reported that China Aerospace Science and Industry Corporation, a state-owned company, is developing drones capable of evading radar detection. (IHS Jane’s 360)

The Israeli military is set to begin operational tests of the Elbit Systems’ Skylark 3, a surveillance and reconnaissance drone. (FlightGlobal)

Defense firm Israel Aerospace Industries unveiled two small electro-optical sensors designed for use on surveillance and reconnaissance drones. (Shephard Media)

Police in Wuhan, China are testing a counter-drone jamming gun. (South China Morning Post)

A software upgrade to the U.S. Navy’s Boeing P-8 maritime surveillance aircraft will enable it to work with unmanned systems. (Defense Systems)

Drones at Work

New Zealand firm Drone Technologies conducted the country’s first beyond-line-of-sight flight of a drone to inspect transmission lines and towers in the Rimutaka Ranges. (Stuff)

The Cecil County Sheriff’s Office in Maryland used a drone to discover a trove of stolen heavy machinery. (ABC2 News)

A Skylark 1 drone operated by the Israel Defense Forces crashed during a flight in Gaza. (Jerusalem Post)

The FAA has granted the Grand Forks County Sheriff’s Office in North Dakota a waiver to conduct nighttime drone operations. (Bemidji Pioneer)

Industry Intel

China-based drone manufacturer Yuneec announced that it is laying off an undisclosed number of staff at its North America office. (MarketWatch)

Defunct drone startup Lily Robotics told customers that it does not have a timeline for refunding preorders of its cancelled selfie drone. (Recode)

The Defense Advanced Research Projects Agency awarded Dynetics and General Atomics Aeronautical Systems phase two contracts for the Gremlins low-cost, reusable drone program. (Shephard Media)

The U.S. Air Force will reportedly award General Atomics Aeronautical Systems contracts for upgrading the MQ-9 Reaper Block 5 systems to an extended range configuration. (IHS Jane’s 360)

The National Oceanic and Atmospheric Administration awarded Aerial Imaging Solutions a $61,850 contract for three hexacopter drone systems. (FBO)

The U.S. Geological Survey awarded Rock House Products International a $13,011 contract for a thermal imaging system for an unmanned aircraft. (FBO)

The U.S. Navy awarded Northrop Grumman Systems a $3.6 million contract for the installation and flight testing of the Selex ES Osprey 30 RADAR for the MQ-8C Fire Scout drone. (FBO)

The U.S. Navy announced that it will award Boeing Insitu a $112,842 foreign military sales contract for spare parts for the ScanEagle drone for Kenya. (FBO)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Japan’s World Robot Summit posts challenges for teams

Japan is holding a huge robot celebration, in Tokyo in 2018 and in Aichi and Fukushima in 2020, hosted by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). It is a commercial robotics Expo and a series of robotics Challenges, with the goal of bringing together experts from around the world to advance human-focused robotics.

The World Robot Summit website launched on March 2, 2017. The results of tenders for the competitions’ standard robot platforms will be announced soon, and the first trials for competition teams should happen in summer 2017.

There are a total of 8 challenges that fall into 4 categories: Industrial Robotics, Service Robotics, Disaster Robotics and Junior.

Industrial: Assembly Challenge – quick and accurate assembly of model products containing the technical components required to assemble industrial products and other goods.

Service: Partner Robot Challenge – setting tasks equivalent to housework and making robots that complete such tasks – utilizing a standard robot platform.

Service: Automation of Retail Work Challenge – making robots that complete retail tasks, e.g. stocking and replenishing shelves with multiple types of products such as foods, interacting with customers and staff, and cleaning restrooms.

Disaster: Plant Disaster Prevention Challenge – inspecting and maintaining infrastructure to set standards, e.g. opening and closing valves, exchanging consumable supplies, and searching for disaster victims.

Disaster: Tunnel Disaster Response and Recovery Challenge – collecting information and providing emergency response in the event of a tunnel disaster, e.g. saving lives and removing vehicles from the tunnel.

Disaster: Standard Disaster Robotics Challenge – assessing the standard performance levels required in disaster prevention and response, e.g. mobility, sensing, information collection, wireless communication, remote control, on-site deployment, and durability.

Junior (aged 19 or younger): School Robot Challenge – making robots to complete tasks that might be useful in a school environment – utilizing a standard robot platform.

Junior (aged 19 or younger): Home Robot Challenge – setting tasks equivalent to housework and making robots that complete such tasks.

The World Robot Summit, Challenge, Expo and Symposiums are looking for potential teams and major sponsors. 

For more information, you can email: Wrs@keieiken.co.jp

Robots Podcast #230: bots_alive, with Bradley Knox



In this episode, Audrow Nash interviews Bradley Knox, founder of bots_alive. Knox speaks about an add-on to a Hexbug, a six-legged robotic toy, that makes the bot behave more like a character. They discuss the novel way Knox uses machine learning to create a sense of character, the limitations of technology in emulating living creatures, and how the bots_alive robot was built within those limitations.

Brad Knox

Dr. Bradley Knox is the founder of bots_alive. He researched human-robot interaction, interactive machine learning, and artificial intelligence at the MIT Media Lab and at UT Austin. At MIT, he designed and taught Interactive Machine Learning. He has won two best paper awards at major robotics and AI conferences, was awarded best dissertation from UT Austin’s Computer Science Department, and was named to IEEE’s AI’s 10 to Watch in 2013.

Bosch and Nvidia partner to develop AI for self-driving cars

Amongst all the activity in autonomous vehicle joint ventures, new R&D facilities, strategic acquisitions (such as Mobileye being acquired by Intel), and booming startup funding, two big players in the industry, NVIDIA and Bosch, are partnering to develop an AI self-driving car supercomputer.

Bosch CEO Dr. Volkmar Denner announced the partnership during his keynote address at Bosch Connected World in Berlin.

“Automated driving makes roads safer, and artificial intelligence is the key to making that happen,” said Denner. “We are making the car smart. We are teaching the car how to maneuver through road traffic by itself.”

The Bosch AI car computer will use NVIDIA DRIVE PX technology and its upcoming AI car superchip, advertised as the world’s first single-chip processor designed to achieve Level 4 autonomous driving (see ADAS chart). This unprecedented level of performance is necessary to handle the massive amount of computation self-driving vehicles require: running deep neural nets to sense their surroundings, understanding the 3D environment, localizing themselves on an HD map, predicting the behavior and positions of other objects, and computing vehicle dynamics and a safe path forward.

Source: Frost & Sullivan; VDS Automotive SYS Konferenz 2014

Essentially, the NVIDIA platform enables vehicles to be trained on the complexities of driving, operated autonomously, and updated over the air with new features and capabilities. And Bosch, which is one of the world’s largest auto parts makers, has the Tier 1 credentials to mass-produce this AI-enabled supercomputer for a good portion of the auto industry.

“Self-driving cars is a challenge that can finally be solved with recent breakthroughs in deep learning and artificial intelligence,” said Jen-Hsun Huang, founder and CEO, NVIDIA. “Using DRIVE PX AI car computer, Bosch will build automotive-grade systems for the mass production of autonomous cars. Together we will realize a future where autonomous vehicles make mobility safe and accessible to all.”

Nvidia is also partnering with automakers Audi and Mercedes-Benz.

Bottom line:

“This is the kind of strategic tie-up that lets both partners do what they do best – Nvidia can focus on developing the core AI supercomputing tech, and Bosch can provide relationships and sales operations that offer true scale and reach,” says Darrell Etherington for TechCrunch.

Security for multirobot systems

Researchers including MIT professor Daniela Rus (left) and research scientist Stephanie Gil (right) have developed a technique for preventing malicious hackers from commandeering robot teams’ communication networks. To verify the theoretical predictions, the researchers implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter. Image: M. Scott Brauer.

Distributed planning, communication, and control algorithms for autonomous robots make up a major area of research in computer science. But in the literature on multirobot systems, security has gotten relatively short shrift.

In the latest issue of the journal Autonomous Robots, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and their colleagues present a new technique for preventing malicious hackers from commandeering robot teams’ communication networks. The technique could provide an added layer of security in systems that encrypt communications, or an alternative in circumstances in which encryption is impractical.

“The robotics community has focused on making multirobot systems autonomous and increasingly more capable by developing the science of autonomy. In some sense we have not done enough about systems-level issues like cybersecurity and privacy,” says Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper.

“But when we deploy multirobot systems in real applications, we expose them to all the issues that current computer systems are exposed to,” she adds. “If you take over a computer system, you can make it release private data — and you can do a lot of other bad things. A cybersecurity attack on a robot has all the perils of attacks on computer systems, plus the robot could be controlled to take potentially damaging action in the physical world. So in some sense there is even more urgency that we think about this problem.”

Identity theft

Most planning algorithms in multirobot systems rely on some kind of voting procedure to determine a course of action. Each robot makes a recommendation based on its own limited, local observations, and the recommendations are aggregated to yield a final decision.

A natural way for a hacker to infiltrate a multirobot system would be to impersonate a large number of robots on the network and cast enough spurious votes to tip the collective decision, a technique called “spoofing.” The researchers’ new system analyzes the distinctive ways in which robots’ wireless transmissions interact with the environment, to assign each of them its own radio “fingerprint.” If the system identifies multiple votes as coming from the same transmitter, it can discount them as probably fraudulent.

“There are two ways to think of it,” says Stephanie Gil, a research scientist in Rus’ Distributed Robotics Lab and a co-author on the new paper. “In some cases cryptography is too difficult to implement in a decentralized form. Perhaps you just don’t have that central key authority that you can secure, and you have agents continually entering or exiting the network, so that a key-passing scheme becomes much more challenging to implement. In that case, we can still provide protection.

“And in case you can implement a cryptographic scheme, then if one of the agents with the key gets compromised, we can still provide  protection by mitigating and even quantifying the maximum amount of damage that can be done by the adversary.”

Hold your ground

In their paper, the researchers consider a problem known as “coverage,” in which robots position themselves to distribute some service across a geographic area — communication links, monitoring, or the like. In this case, each robot’s “vote” is simply its report of its position, which the other robots use to determine their own.

The paper includes a theoretical analysis that compares the results of a common coverage algorithm under normal circumstances and the results produced when the new system is actively thwarting a spoofing attack. Even when 75 percent of the robots in the system have been infiltrated by such an attack, the robots’ positions are within 3 centimeters of what they should be. To verify the theoretical predictions, the researchers also implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter.
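To make the coverage setting concrete, here is a minimal sketch, not the algorithm from the paper: robots on a one-dimensional segment repeatedly move to the center of the region of reported positions they are closest to (a Lloyd-style iteration), which spreads them evenly across the segment. A cluster of spoofed position reports would drag these regions toward the attacker, which is exactly the kind of skew the fingerprinting defense bounds.

```python
def coverage_step(positions, lo=0.0, hi=10.0):
    """One iteration of 1-D Lloyd-style coverage over the segment [lo, hi]."""
    p = sorted(positions)
    new = []
    for i, x in enumerate(p):
        # Each robot's "cell" runs from the midpoint with its left neighbor
        # to the midpoint with its right neighbor (segment ends at the edges).
        left = lo if i == 0 else (p[i - 1] + x) / 2
        right = hi if i == len(p) - 1 else (x + p[i + 1]) / 2
        new.append((left + right) / 2)   # move to the cell's center
    return new

positions = [1.0, 1.5, 9.0]
for _ in range(60):
    positions = coverage_step(positions)
# positions converge toward even spacing over [0, 10]
```

Each robot only needs its neighbors’ reported positions, which is why a handful of convincing spurious reports is enough to distort the whole configuration.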

“This generalizes naturally to other types of algorithms beyond coverage,” Rus says.

The new system grew out of an earlier project involving Rus, Gil, Dina Katabi — who is the other Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT — and Swarun Kumar, who earned master’s and doctoral degrees at MIT before moving to Carnegie Mellon University. That project sought to use Wi-Fi signals to determine transmitters’ locations and to repair ad hoc communication networks. On the new paper, the same quartet of researchers is joined by MIT Lincoln Laboratory’s Mark Mazumder.

Typically, radio-based location determination requires an array of receiving antennas. A radio signal traveling through the air reaches each of the antennas at a slightly different time, a difference that shows up in the phase of the received signals, or the alignment of the crests and troughs of their electromagnetic waves. From this phase information, it’s possible to determine the direction from which the signal arrived.
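As a rough sketch of that geometry (with illustrative numbers, not the paper’s setup): a signal arriving at angle θ off broadside travels an extra d·sin θ to reach the second antenna, which appears as a phase shift of 2π·d·sin θ/λ; inverting that relation recovers the direction.

```python
import math

# Illustrative two-antenna direction finding; spacing and frequency are
# assumptions chosen to keep the arithmetic unambiguous.
def angle_of_arrival(phase_diff_rad, spacing_m, freq_hz=2.4e9):
    wavelength = 3e8 / freq_hz   # ~12.5 cm for 2.4 GHz Wi-Fi
    # extra path length = spacing * sin(theta) = phase_diff * wavelength / (2*pi)
    return math.degrees(math.asin(phase_diff_rad * wavelength
                                  / (2 * math.pi * spacing_m)))

# A source 30 degrees off broadside, antennas 6 cm apart:
phase = 2 * math.pi * 0.06 * math.sin(math.radians(30)) / (3e8 / 2.4e9)
print(round(angle_of_arrival(phase, 0.06), 3))   # 30.0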

Space vs. time

A bank of antennas, however, is too bulky for an autonomous helicopter to ferry around. The MIT researchers found a way to make accurate location measurements using only two antennas, spaced about 8 inches apart. Those antennas must move through space in order to simulate measurements from multiple antennas. That’s a requirement that autonomous robots meet easily. In the experiments reported in the new paper, for instance, the autonomous helicopter hovered in place and rotated around its axis in order to make its measurements.

When a Wi-Fi transmitter broadcasts a signal, some of it travels in a direct path toward the receiver, but much of it bounces off of obstacles in the environment, arriving at the receiver from different directions. For location determination, that’s a problem, but for radio fingerprinting, it’s an advantage: The different energies of signals arriving from different directions give each transmitter a distinctive profile.

There’s still some room for error in the receiver’s measurements, however, so the researchers’ new system doesn’t completely ignore probably fraudulent transmissions. Instead, it discounts them in proportion to its certainty that they have the same source. The new paper’s theoretical analysis shows that, for a range of reasonable assumptions about measurement ambiguities, the system will thwart spoofing attacks without unduly punishing valid transmissions that happen to have similar fingerprints.
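As a toy illustration of that discounting idea (our sketch, not the authors’ exact weighting scheme), treat each fingerprint as a vector of per-direction signal energies and divide each vote’s weight by the number of near-identical fingerprints, so a spoofed cluster collectively counts as roughly one voter:

```python
def similarity(fp_a, fp_b):
    """Cosine similarity between two fingerprint vectors."""
    dot = sum(a * b for a, b in zip(fp_a, fp_b))
    na = sum(a * a for a in fp_a) ** 0.5
    nb = sum(b * b for b in fp_b) ** 0.5
    return dot / (na * nb)

def discounted_weights(fingerprints, threshold=0.95):
    """Weight each vote by 1 / (number of near-duplicate fingerprints)."""
    weights = []
    for fi in fingerprints:
        dup = sum(1 for fj in fingerprints
                  if similarity(fi, fj) > threshold)   # count includes itself
        weights.append(1.0 / dup)
    return weights

votes = [10.0, 10.1, 9.9, 2.0]             # e.g. reported positions
fps = [[1.0, 0.0]] * 3 + [[0.0, 1.0]]      # three spoofed, one genuine
w = discounted_weights(fps)
est = sum(wi * v for wi, v in zip(w, votes)) / sum(w)
# est is ~6.0: the spoofed cluster counts once, vs. a naive mean of 8.0
```

Because the discount is proportional rather than all-or-nothing, two honest robots that happen to have similar fingerprints lose only a fraction of their influence instead of being dropped outright.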

“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” says David Hsu, a professor of computer science at the National University of Singapore. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”

If you enjoyed this article from CSAIL, you might also be interested in:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Collaborating machines and avoiding soil compression

Image: Swarmfarm

Soil compression can be a serious problem, but it isn’t always, or in all ways, a bad thing. For example, impressions made by hoofed animals, so long as they only cover a minor fraction of the soil surface, create spaces in which water can accumulate and help it percolate into the soil more effectively, avoiding erosion runoff.

The linear depressions made by wheels rolling across the surface are more problematic because they create channels that can accelerate the concentration of what would otherwise be evenly distributed rainfall, turning it into a destructive force. This is far less serious when those wheels follow the contour of the land rather than running up and down slopes.

Taking this one step further, if it is possible for wheeled machines to always follow the same tracks, the compression is localized and the majority of the land area remains unaffected. If those tracks are filled with some material through which water can percolate but which impedes the accumulation of energy in downhill flows, the damage is limited to the sacrifice of the portion of the overall land area dedicated to those tracks and the creation of compression zones beneath them. Those zones may produce boggy conditions on the uphill sides of the tracks, which may or may not be a bad thing, depending on what one is trying to grow there.

Source: vinbot.eu

(I should note at this point that such tracks, when they run on the contour, are reminiscent of the ‘swales’ used in permaculture and regenerative agriculture.)

Tractors with GPS guidance are capable of running their wheels over the same tracks with each pass, but the need for traction, so they can apply towing force to implements running through the soil, means that those tracks will constitute a significant percentage of the overall area. Machines, such as dedicated sprayers, with narrower wheels that can be spread more widely apart, create tracks which occupy far less of the total land area, but they are not built for traction, and using them in place of tractors for all field operations would require a very different approach to farming.

It is possible to get away from machine-caused soil compression altogether, using either aerial machines (drones) or machines which are supported by or suspended from fixed structures, like posts or rails.

Small drones are much like hummingbirds in that they create little disturbance, but they are also limited in the types of operations they can perform by their inability to carry much weight or exert significant force. They’re fine for pollination but you wouldn’t be able to use them to uproot weeds with tenacious roots or to harvest watermelons or pumpkins.

On the other hand, fixed structures and the machines that are supported by or suspended from them have a significant up-front cost. In the case of equipment suspended from beams or gantries spanning between rails and supported from wheeled trucks which are themselves supported by rails, there is a tradeoff between the spacing of the rails and the strength/stiffness required in the gantry. Center-pivot arrangements also have such a tradeoff, but they use a central pivot in place of one rail (or wheel track), and it’s common for them to have several points of support spaced along the beam, requiring several concentric rails or wheel tracks.

Strictly speaking, there’s no particular advantage in having rail-based systems follow the contour of the land since they leave no tracks at all. Center-pivot systems using wheels that run directly on the soil rather than rail are best used on nearly flat ground since their round tracks necessarily run downhill over part of their circumference. In any rail-based system, the “rail” might be part of the mobile unit rather than part of the fixed infrastructure, drawing support from posts spaced closely enough that there were always at least two beneath it. However, this would preclude using trough-shaped rails to deliver water for irrigation.

Since the time of expensive machines is precious, it’s best to avoid burdening them with operations that can be handled by small, inexpensive drones, and the ideal arrangement is probably a combination of small drones, a smaller number of larger drones with some carrying capacity, light on-ground devices that put little pressure on the soil, and more substantial machines supported or suspended from fixed infrastructure, whether rail, center-pivot, or something else. Livestock (chickens, for example), outfitted with light wearable devices, might also be part of the mix.

The small drones, being more numerous, will be the best source of raw data, which can be used to optimize the operation of the larger drones, on-ground devices, and the machines mounted on fixed infrastructure, although too much centralized control would not be efficient. Each device should be capable of continuing to do useful work even when it loses network connection, and peer-to-peer connections will be more appropriate than running everything through a central hub in some circumstances.

Bonirob, an agricultural robot. Source: Bosch

This is essentially a problem in complex swarm engineering, complex because of the variety of devices involved. Solving it in a way that creates a multi-device platform capable of following rules, carrying out plans, and recognizing anomalous conditions is the all-important first step in enabling the kind of robotics that can then go on to enable regenerative practices in farming (and land management in general).

If you enjoyed this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.


Envisioning the future of robotics

Image: Ryan Etter

Robotics is said to be the next technological revolution. Many seem to agree that robots will have a tremendous impact over the following years, and some are heavily betting on it. Companies are investing billions buying other companies, and public authorities are discussing legal frameworks to enable a coherent growth of robotics.

Understanding where the field of robotics is heading is more than mere guesswork. While much public concern focuses on the potential societal issues that will arise with the advent of robots, in this article, we present a review of some of the most relevant milestones that happened in robotics over the last decades. We also offer our insights on feasible technologies we might expect in the near future.

Copyright © Acutronic Robotics 2017. All Rights Reserved.

Pre-robots and first manipulators

What’s the origin of robots? To figure it out, we’ll need to go back quite a few decades, to when various conflicts motivated the technological growth that eventually enabled companies to build the first digitally controlled mechanical arms. One of the first and best-documented robots was UNIMATE (considered by many to be the first industrial robot): a programmable machine, funded by General Motors, used to create a production line staffed only by robots. UNIMATE helped improve industrial production at the time, motivating other companies and research centers to actively dedicate resources to robotics, which boosted growth in the field.

Sensorized robots

Sensors were not typically included in robots until the 1970s. Starting in 1968, a second generation of robots emerged that integrated sensors. These robots were able to react to their environment and respond appropriately to varying scenarios.

Relevant investments were observed during this period. Industrial players worldwide were attracted by the advantage that robots promised.

Worldwide industrial robots: the Era of Robots

Many consider that the Era of Robots started in 1980. Billions of dollars were invested by companies all around the world to automate basic tasks in their assembly lines. Sales of industrial robots grew 80% over the previous years’.

Key technologies appeared within these years: General internet access was extended in 1980; Ethernet became a standard in 1983 (IEEE 802.3); the Linux kernel was announced in 1991; and soon after that real-time patches started appearing on top of Linux.

The robots created between 1980 and 1999 belong to what we call the third generation of robots: robots that were re-programmable and included dedicated controllers. Robots populated many industrial sectors and were used for a wide variety of activities: painting, soldering, moving, assembling, etc.

By the end of the 90s, companies started thinking about robots beyond the industrial sphere. Several companies created promising concepts that would inspire future roboticists. Among the robots created within this period, we highlight two:

  1. The first LEGO Mindstorms kit (1998): a set of 717 pieces, including LEGO bricks, motors, gears, different sensors, and an RCX Brick with an embedded microprocessor, for constructing various robots from the exact same parts. The kit allowed anyone to learn basic robotics principles, and creative projects have appeared over the years showing the potential of interchangeable hardware in robotics. Within a few years, the LEGO Mindstorms kit became the most successful project involving robot part interchangeability.
  2. Sony’s AIBO (1999): the world’s first entertainment robot, widely used for research and development. Sony offered robotics to everyone in the form of a $1,500 robot that included a distributed hardware and software architecture. The OPEN-R architecture involved the use of modular hardware components — e.g. appendages that could be easily removed and replaced to customize the shape and function of each robot — and modular software components that could be interchanged to modify behavior and movement patterns. OPEN-R inspired future robotic frameworks and minimized the need for programming individual movements or responses.

Integration effort was identified as one of the main issues within robotics, particularly for industrial robots. A common infrastructure typically reduces integration effort by providing an environment in which components can be connected and made to interoperate. Each infrastructure-supported component is optimized for such integration at its conception, and the infrastructure handles the integration effort. Components can then come from different manufacturers, yet, supported by a common infrastructure, they will still interoperate.

Sony’s AIBO and LEGO’s Mindstorms kit were built upon this principle, and both represented common infrastructures. Even though they came from the consumer side of robotics, one could argue that their success was strongly related to the fact that both products made use of interchangeable hardware and software modules. The use of a common infrastructure proved to be one of the key advantages of these technologies, however those concepts were never translated to industrial environments. Instead, each manufacturer, in an attempt to dominate the market, started creating their own “robot programming languages”.
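To make the common-infrastructure idea concrete, here is a deliberately simplified sketch (all names are hypothetical, not any real robot standard): if every vendor implements one shared component interface, a controller written against that interface works with any vendor’s hardware, which is the interoperability that proprietary “robot programming languages” prevent.

```python
from abc import ABC, abstractmethod

class RobotComponent(ABC):
    """Hypothetical vendor-neutral interface for interchangeable components."""

    @abstractmethod
    def describe(self) -> dict:
        """Report capabilities in a shared, vendor-neutral format."""

    @abstractmethod
    def execute(self, command: str) -> bool:
        """Run a command drawn from a shared vocabulary; True on success."""

class VendorAGripper(RobotComponent):
    """One vendor's implementation; any other vendor's would plug in the same way."""
    def describe(self):
        return {"type": "gripper", "vendor": "A", "commands": ["open", "close"]}

    def execute(self, command):
        return command in ("open", "close")

# A controller written against RobotComponent, with no vendor-specific code:
def calibrate(component: RobotComponent) -> bool:
    info = component.describe()
    return all(component.execute(c) for c in info["commands"])
```

The point is not the particular methods but the inversion: the integration effort moves from every peripheral maker supporting every protocol into a single interface each vendor implements once.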

The dawn of smart robots

Starting from the year 2000, we observed a new generation of robot technologies. The so-called fourth generation of robots consisted of more intelligent robots that included advanced computers to reason and learn (to some extent, at least) and more sophisticated sensors that helped controllers adapt more effectively to different circumstances.

Among the technologies that appeared in this period, we highlight the Player Project (2000, formerly the Player/Stage Project), the Gazebo simulator (2004) and the Robot Operating System (2007). Moreover, relevant hardware platforms appeared during these years. Single Board Computers (SBCs), like the Raspberry Pi, enabled millions of users all around the world to create robots easily.

The boost of bio-inspired artificial intelligence

Artificial intelligence, and particularly neural networks, also rose to prominence in this period. While a lot of the important work on neural networks happened in the ’80s and ’90s, computers did not have enough computational power at the time, and datasets weren’t big enough to be useful in practical applications. As a result, neural networks practically disappeared in the first decade of the 21st century. However, starting with speech recognition in 2009, neural networks regained popularity and began delivering good results in fields such as computer vision (2012) and machine translation (2014). Over the last few years, we’ve seen these techniques translated to robotics for tasks such as robotic grasping, and in the coming years we expect them to have more and more impact on robotics.

What happened to industrial robots?

Relevant key technologies have also emerged from the industrial robotics landscape (e.g.: EtherCAT). However, except for the appearance of the first so-called collaborative robots, the progress within the field of industrial robotics has significantly slowed down when compared to previous decades. Several groups have identified this fact and written about it with conflicting opinions. Below, we summarize some of the most relevant points encountered while reviewing previous work:

  • The industrial robot industry: is it only a supplier industry?
    For some, the industrial robot industry is a supplier industry: it supplies components and systems to larger industries, such as manufacturing. These groups argue that the manufacturing industry is dominated by the PLC, motion control and communication suppliers which, together with the big customers, set the standards. Industrial robots therefore need to adapt and speak factory languages (PROFINET, EtherCAT, Modbus TCP, EtherNet/IP, CANopen, DeviceNet, etc.), which may differ from factory to factory.
  • Lack of collaboration and standardized interfaces in industry
    To date, each industrial robot manufacturer’s business model is, to some degree, about locking customers into its system and controllers. Typically, one encounters the following when working with an industrial robot: a) each robot company has its own proprietary programming language, b) programs can’t be ported from one manufacturer to another, c) communication protocols differ, and d) logical, mechanical and electrical interfaces are not standardized across the industry. As a result, most makers of robotic peripherals have to support many different protocols, which consumes development time that could otherwise go into product functionality.
  • Competing by obscuring vs opening new markets?
    The closed attitude of most industrial robot companies is typically justified by the existing competition. Such an attitude leads to a lack of understanding between different manufacturers. An interesting approach would be for manufacturers to agree on a common infrastructure. Such an infrastructure could define a set of electrical and logical interfaces (leaving the mechanical ones aside, due to the variability of robots across industries) that would allow industrial robot companies to produce robots and components that interoperate, can be exchanged, and eventually open up new markets. It would also lead to a competitive environment where manufacturers compete on demonstrated features, rather than the typical obscured environment where only some are allowed to participate.
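The interface fragmentation described above can be made concrete with a small sketch. The vendor command dialects below are invented purely for illustration; the point is that application code written once against a common interface still needs a hand-written adapter for every vendor underneath it.

```python
# Sketch of the interoperability problem: each vendor exposes its own
# command dialect, so integrators must write per-vendor adapters to a
# common interface. Both "dialects" below are hypothetical.

class RobotArm:
    """Vendor-neutral interface an integrator would like to program against."""
    def move_joint(self, joint, degrees):
        raise NotImplementedError

class VendorAAdapter(RobotArm):
    def move_joint(self, joint, degrees):
        return f"MOVJ J{joint} {degrees}DEG"      # hypothetical dialect A

class VendorBAdapter(RobotArm):
    def move_joint(self, joint, degrees):
        return f"PTP AXIS[{joint}]={degrees}"     # hypothetical dialect B

def home_all(arm, n_joints=6):
    # Application code is written once against the common interface...
    return [arm.move_joint(j, 0) for j in range(1, n_joints + 1)]

# ...but without an industry-wide standard, every new robot brand means
# writing, testing and maintaining yet another adapter.
print(home_all(VendorAAdapter())[0])  # MOVJ J1 0DEG
print(home_all(VendorBAdapter())[0])  # PTP AXIS[1]=0
```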

The Hardware Robot Operating System (H-ROS)

For robots to enter new and different fields, it seems reasonable that they need to adapt to the environment itself. This was highlighted above for industrial robotics, where robots have to be fluent in factory languages. One could argue the same for service robots (e.g. household robots that will need to interface with dishwashers, washing machines, media servers, etc.), medical robots and many other areas of robotics. Such reasoning led to the creation of the Hardware Robot Operating System (H-ROS), a vendor-agnostic hardware and software infrastructure for creating robot components that interoperate and can be exchanged between robots. H-ROS builds on top of ROS, which is used to define a set of standardized logical interfaces that each physical robot component must meet in order to be H-ROS compliant.

H-ROS facilitates a fast way of building robots, letting users choose the best component for each use case from a common robot marketplace. It accommodates different environments (industrial, professional, medical, …) where variables such as timing constraints are critical. Building or extending robots is simplified to the point of plugging H-ROS-compliant components together. The user simply programs the cognition part (i.e. the brain) of the robot and develops their own use cases, without facing the complexity of integrating disparate technologies and hardware interfaces.
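The component-marketplace idea can be sketched roughly as follows. The interface names and fields here are assumptions made for the sketch, not the actual H-ROS specification; what matters is that a component declaring a standardized interface can be swapped for any other compliant component without integration work.

```python
# Toy sketch of interchangeable, interface-compliant robot components.
# Interface identifiers are illustrative assumptions, not the H-ROS spec.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    interface: str        # e.g. "range_sensor/v1", "wheel_motor/v1"

class Robot:
    def __init__(self):
        self.slots = {}

    def attach(self, slot_interface, component):
        # A slot accepts any component that speaks its declared interface.
        if component.interface != slot_interface:
            raise ValueError(f"{component.name} does not speak {slot_interface}")
        self.slots[slot_interface] = component

robot = Robot()
robot.attach("range_sensor/v1", Component("lidar_vendor_x", "range_sensor/v1"))
# Swapping in another vendor's compliant sensor needs no integration work:
robot.attach("range_sensor/v1", Component("sonar_vendor_y", "range_sensor/v1"))
print(robot.slots["range_sensor/v1"].name)  # sonar_vendor_y
```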

The future ahead

With the latest AI results being translated to robotics, and recent investments in the field, there is high anticipation for the near future of robotics.

As Melonee Wise nicely put it in a recent interview, there are still not that many things you can do with a robot whose bill of materials is $1,000–5,000 (which is what most people would pay on an individual basis for a robot). Hardware is still a limiting factor, and our team strongly believes that a common infrastructure, such as H-ROS, will foster an environment where robot hardware and software can evolve.

The list presented below summarizes, according to our judgement, some of the most technically feasible future robotic technologies to appear.

Acknowledgments

This review was funded and supported by Acutronic Robotics, a firm focused on the development of next-generation robot solutions for a range of clients.

The authors would also like to thank the Erle Robotics and the Acutronic groups for their support and help.

Choreographing automated cars could save time, money and lives

If you take humans out of the driving seat, could traffic jams, accidents and high fuel bills become a thing of the past? As cars become more automated and connected, attention is turning to how to best choreograph the interaction between the tens or hundreds of automated vehicles that will one day share the same segment of Europe’s road network.

It is one of the most keenly studied fields in transport – how to make sure that automated cars get to their destinations safely and efficiently. But the prospect of having a multitude of vehicles taking decisions while interacting on Europe’s roads is leading researchers to design new traffic management systems suitable for an era of connected transport.

The idea is to ensure that traffic flows as smoothly and efficiently as possible, potentially avoiding the jams and delays caused by human behaviour.

‘Travelling distances and time gaps between vehicles are crucial,’ said Professor Markos Papageorgiou, head of the Dynamic Systems & Simulation Laboratory at the Technical University of Crete, Greece. ‘It is also important to consider things such as how vehicles decide which lane to drive in.’

Prof. Papageorgiou’s TRAMAN21 project, funded by the EU’s European Research Council, is studying ways to manage the behaviour of individual vehicles, as well as highway control systems.

For example, the researchers have been looking at how adaptive cruise control (ACC) could improve traffic flows. ACC is a ‘smart’ system that speeds up and slows down a car as necessary to keep up with the one in front. Highway control systems using ACC to adjust time gaps between cars could help to reduce congestion.

‘It may be possible to have a traffic control system that looks at the traffic situation and recommends or even orders ACC cars to adopt a shorter time gap from the car in front,’ Prof. Papageorgiou said.

‘So during a peak period, or if you are near a bottleneck, the system could work out a gap that helps you avoid the congestion and gives higher flow and higher capacity at the time and place where this is needed.’
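The time-gap idea Prof. Papageorgiou describes can be sketched with a constant-time-gap spacing policy, in which the follower targets a gap of d0 + T·v behind the car in front. The controller gains and parameters below are illustrative assumptions, not values from TRAMAN21, but the simulation shows the effect he mentions: ordering ACC cars to adopt a shorter time gap shrinks the steady-state spacing and so raises capacity.

```python
# Minimal constant-time-gap ACC sketch (parameters are illustrative).
def acc_step(gap, v_follow, v_lead, time_gap,
             d0=2.0, k_gap=0.2, k_vel=0.5, dt=0.1):
    """One control step: close the gap error and match the lead car's speed."""
    desired_gap = d0 + time_gap * v_follow   # spacing grows with own speed
    accel = k_gap * (gap - desired_gap) + k_vel * (v_lead - v_follow)
    v_follow = max(0.0, v_follow + accel * dt)
    gap += (v_lead - v_follow) * dt
    return gap, v_follow

# A shorter commanded time gap lets the follower settle closer to the
# lead vehicle, i.e. more vehicles fit on the same stretch of road.
for time_gap in (1.8, 1.0):
    gap, v = 60.0, 25.0              # metres, m/s; lead holds 25 m/s
    for _ in range(2000):
        gap, v = acc_step(gap, v, 25.0, time_gap)
    print(f"time gap {time_gap} s -> steady gap {gap:.1f} m")
    # -> 47.0 m for 1.8 s, 27.0 m for 1.0 s (d0 + time_gap * 25)
```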

Variable speed limits

TRAMAN21, which runs until 2018, has been conducting tests on a highway near Melbourne, Australia, and is currently using variable speed limits to intervene actively in traffic and improve flows.

An active traffic management system of this kind could help even when relatively few vehicles on the highway have sophisticated automation. But Prof. Papageorgiou believes that self-driving vehicle systems must be robust enough to communicate with each other even where there are no overall traffic control systems.

‘Schools of fish and flocks of birds do not have central controls, and the individuals base their movement on the information from their own senses and the behaviour of their neighbours,’ Prof. Papageorgiou said.

‘In theory this could also work in traffic flow, but there is a lot of work to be done if this is to be perfected. Nature has had a long head-start.’
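The decentralised idea in the fish-and-birds analogy can be sketched as a simple consensus rule: each car adjusts only to its immediate neighbours, with no central controller, yet the whole stream settles on a common speed. The parameters and speeds below are purely illustrative.

```python
# Decentralised speed consensus: no central controller, local rules only.
def step(speeds, k=0.3):
    """Each car nudges its speed toward the average of its neighbours."""
    new = []
    for i, v in enumerate(speeds):
        neighbours = speeds[max(0, i - 1):i] + speeds[i + 1:i + 2]
        target = sum(neighbours) / len(neighbours)
        new.append(v + k * (target - v))
    return new

speeds = [20.0, 31.0, 24.0, 28.0, 22.0]   # m/s, initially uneven
for _ in range(200):
    speeds = step(speeds)
print([round(v, 1) for v in speeds])       # all five settle at a common speed
```

As the quote suggests, making this robust on real roads is far harder than in a toy model: real flocking must cope with noisy sensing, delays and safety constraints that this sketch ignores.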

One way of managing traffic flow is platooning – scheduling trucks to meet up and drive in convoy on the highway. Magnus Adolfson from Swedish truckmaker Scania AB, who coordinated the EU-funded COMPANION project, says that platooning – which has already been demonstrated on Europe’s roads – can also reduce fuel costs and accidents.

The three-year project tested different combinations of distances between trucks, speeds and unexpected disruptions or stoppages.

Fuel savings

In tests with three-vehicle platoons, researchers achieved fuel savings of 5%. And by keeping radio contact with each other, the trucks can also reduce the risk of accidents.

‘About 90 percent of road accidents are caused by driver error, and this system, particularly by taking speed out of the driver’s control, can make it safer than driving with an actual driver,’ Adolfson said.

The COMPANION project also showed the benefits of close communication between vehicles to reduce the likelihood of braking too hard and causing traffic jams further back.
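Why communicated braking matters can be seen in a back-of-the-envelope simulation: a follower that receives the leader's braking message almost at once preserves most of its gap, while one waiting on human reaction time starts a full second later and eats through the entire spacing. All numbers below are illustrative assumptions, not COMPANION results.

```python
# Effect of reaction delay on the minimum gap during an emergency stop.
# A negative result means the follower would have hit the leader.
def min_gap(reaction_delay, v0=25.0, decel=4.0, gap0=15.0, dt=0.01):
    """Smallest gap while both trucks brake from v0 to rest; the follower
    starts braking reaction_delay seconds after the leader."""
    t, v_lead, v_follow, gap = 0.0, v0, v0, gap0
    smallest = gap0
    while v_lead > 0 or v_follow > 0:
        v_lead = max(0.0, v_lead - decel * dt)
        if t >= reaction_delay:                  # follower reacts late
            v_follow = max(0.0, v_follow - decel * dt)
        gap += (v_lead - v_follow) * dt
        smallest = min(smallest, gap)
        t += dt
    return smallest

print(f"gap with 1.0 s human reaction: {min_gap(1.0):.1f} m")   # -10.0 (crash)
print(f"gap with 0.1 s radio message:  {min_gap(0.1):.1f} m")   #  12.5
```

The gap lost is roughly the initial speed times the reaction delay, which is why shaving reaction time via vehicle-to-vehicle messages makes close-formation platooning viable at all.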

‘There is enough evidence to show that using such a system can have a noticeable impact, so it would be good to get it into production as soon as possible,’ Adolfson said. The researchers have extended their collaboration to working with the Swedish authorities on possible implementation.

Rutger Beekelaar, a project manager at Dutch-based research organisation TNO, says that researchers need to demonstrate how automated cars can work safely together in order to increase their popularity.

‘Collaboration is essential to ensure vehicles can work together,’ he said. ‘We believe that in the near future, there will be more and more automation in traffic, in cars and trucks. But automated driving is not widely accepted yet.’

To tackle this, Beekelaar led a group of researchers in the EU-funded i-GAME project, which developed technology that uses wireless communication to help manage and control automated vehicles.

They demonstrated these systems in highway conditions at the 2016 Grand Cooperative Driving Challenge in Helmond, in the Netherlands, which put groups of real vehicles through their paces to demonstrate cooperation: safely negotiating an intersection crossing and merging with another column of traffic.

Beekelaar says that their technology is now being used in other European research projects, but that researchers, auto manufacturers, policymakers and road authorities still need to work together to develop protocols, systems and standards, along with extra efforts to address cybersecurity, ethics and, in particular, public acceptance.

Three years on: An update from Leka, Robot Launch winner

Nearly three years ago, Leka won the Grand Prize at the 2014 Robot Launch competition for their robotic toy set on changing the way children with developmental disorders learn, play and progress. Leka will be the first interactive tool for children with developmental disorders that is available for direct purchase by the public. Designed for use in the home and not limited to a therapist’s office, Leka makes communication between therapists, parents and children easier, more efficient and more accessible through its monitoring platform. Leka’s co-founder and CEO, Ladislas de Toldi, writes about Leka’s progress since the Robot Launch competition and where the company is headed in the next year.

Since winning the Robot Launch competition in 2014, Leka has made immense progress and is well on its way to getting into the hands of exceptional children around the globe.

2016 was a big year for us; Leka was accepted into the 2016 class of the Sprint Accelerator Program, powered by Techstars, in Kansas City, MO. The whole team picked up and moved from Paris, France to the United States for a couple of months to work together as a team and create the best version of Leka possible.

Techstars was for us the opportunity to really test the US Special Education Market. We came to the program with two goals in mind: to build a strong community around our project in Kansas City and the area, and to launch our crowdfunding campaign on Indiegogo.

The program gave us an amazing support system to connect with people in the Autism community in the area and to push ourselves to build the best crowdfunding campaign targeting special education.

We’re incredibly humbled to say we succeeded in both: Kansas City is going to be our home base in the US, thanks to all the partnerships we now have with public schools and organizations.

Near the end of our accelerator program in May 2016, we launched our Indiegogo campaign to raise funds for Leka’s development and manufacturing, and ended up raising more than 150 percent of our total fundraising goal. We had backers from all over the world, including the United States, France, Israel, Australia and Uganda! As of today, we have raised more than $145k on Indiegogo, with more than 300 units preordered.

In July, the entire Leka team moved back to Paris to finalize the hardware development of Leka and kick off the manufacturing process. Although the journey has been full of challenges, we are thrilled with the progress we have made on Leka and the impact it can make on the lives of children.

This past fall, we partnered with Bourgogne Service Electronics (BSE) for manufacturing. BSE is a French company, and we’re working extremely closely with them on Leka’s design. Two of our team members, Alex and Gareth, recently worked with BSE to finalize the design and create what we consider to be Leka’s heart: an electronic card that brings Leka’s lights, movements and LCD screen to life.

We were also able to integrate proximity sensors into Leka, so that it can tell where children are touching it, which will enable better analytics and progress monitoring in the future.

We have had quite a few exciting opportunities in the past year at industry events as well! We attended the Techstars alumni conference FounderCon, in Cincinnati, OH, and CES Unveiled in Paris in the Fall. We then had the opportunity to present Leka in front of some amazing industry professionals at the Wall Street Journal’s WSJ.D Live in Laguna Beach, CA. But most exciting was CES in Las Vegas this past January, and the announcements we made at the show.

At CES, we were finally able to unveil our newest industrial-grade prototypes with the autonomous features we’ve been working toward for the past three years. With Leka’s new fully integrated sensors, children can play with the robotic toy on their own, making it much more humanlike and interactive. These new features allow Leka to better help children understand social cues and improve their interpersonal skills.

At CES we also introduced Leka’s full motor integration, vibration and color capabilities, and the digital screen. Leka’s true emotions can finally show!

In the six months between our Indiegogo campaign and CES Las Vegas, we were able to make immense improvements to Leka and pour our hearts into a product we believe will change lives for exceptional children and their families. We’re currently developing our next industrial prototype so we can make Leka even better, and we’re aiming to begin shipping in Fall 2017. We can’t wait to show you the final product!

*All photos credit: Leka

About Leka
Leka is a robotic smart toy set on changing the way children with developmental disorders learn, play and progress. Available for direct purchase online through InDemand, Leka is an interactive tool designed to make communication between therapists, parents and children easier, more efficient and more accessible. Working with and adapting to each child’s own needs and abilities, Leka is able to provide vital feedback to parents and therapists on a child’s progress and growth.

Founded in France with more than two years in R&D, the company recently completed its tenure in the 2016 Sprint Accelerator program, powered by Techstars, and is rapidly growing. Leka expects to begin shipping units to Indiegogo backers in Fall 2017.

For more information, please visit http://leka.io.

