News


August 2017 fundings, acquisitions, IPOs and failures


August fundings totaled $369 million, but the number of August transactions, seven, was down from previous months; July and June, for example, had 19 fundings each. Acquisitions, on the other hand, remained steady, with a big one pending: Snap has been negotiating all month to acquire Chinese drone startup Zero Zero Robotics for around $150 million.

Fundings

  1. Auris Medical Robotics, the Silicon Valley startup headed by Dr. Frederic H. Moll, who previously co-founded Hansen Medical and Intuitive Surgical, raised $280 million in a Series D round led by Coatue Management that included earlier investors Mithril Capital Management, Lux Capital, and Highland Capital. Auris has raised a total of $530 million and is developing targeted, minimally invasive robotic-assisted therapies that treat only the diseased cells in order to prevent the progression of a patient’s illness. Lung cancer is the first disease they are targeting.
  2. Oryx Vision, an Israeli startup, raised $50 million in a round led by Third Point Ventures and WRV with participation by Union Tech Ventures. They join existing investors Bessemer Venture Partners, Maniv Mobility, and Trucks VC, a firm focused on the future of transportation. The company has raised a total of $67 million to date. Oryx is developing a LiDAR for self-driving automobiles that uses microscopic antennas to detect light frequencies. The tiny antennas are made of silicon, which allows thousands of them to be packed into one sensor, lowering the cost of LiDAR ranging. The payoff is increased range and sensitivity for an autonomous vehicle that needs to know exactly what surrounds it and what those things are doing; the sensor can also see through fog and is not blinded by bright sunlight.
  3. TuSimple, a Chinese startup developing driverless technologies for the trucking industry, raised $20 million in a Series B funding round led by Nvidia with participation by Sina. Nvidia will own a 3% stake in TuSimple, while the startup will support the development of Nvidia’s artificial intelligence computing platform for self-driving vehicles, Drive PX 2.
  4. Atlas Dynamics, a Latvian/Israeli drone startup, raised $8 million from investment groups in Israel and in Asia. The 3-rotor Atlas Pro drone operates autonomously with interchangeable payloads and offers 55 minutes of flight time.
  5. Common Sense Robotics, an Israeli warehouse fulfillment robotics startup, raised $6 million from Aleph VC and Innovation Endeavors. Common Sense is developing small, automated urban spaces that combine the benefits of local distribution with the economics of automated fulfillment. In big cities these ‘micro-centers’ would receive, stock, and package merchandise from participating vendors based on predictive algorithms. Vendors would then arrange last-mile delivery solutions.
  6. Sky-Futures, a London-based startup providing drone-based industrial inspection services, raised $4 million from Japanese giant Mitsui & Co. The announcement came as part of Theresa May’s just-concluded trip to Japan. Sky-Futures and Mitsui plan to provide inspections and other services to Mitsui’s clients across a range of sectors. Mitsui, a trading, investment and service company, has 139 offices in 66 countries.
  7. Ambient Intelligence Technology, a Japanese underwater drone manufacturer spun off from the University of Tsukuba, raised $1.93 million from Beyond Next Ventures, Mitsui Sumitomo Insurance Venture Capital, SMBC Venture Capital, and Freebit Investment. Ambient’s ROVs can operate autonomously for prolonged periods at depths of up to 300 meters.

Acquisitions

  1. DuPont Pioneer has acquired farm management software platform startup Granular for $300 million. San Francisco-based Granular’s farm management software helps farmers run more profitable businesses by enabling them to manage their operations, analyze their financials for each of their fields in real time, and create reports for third parties like landowners and banks. Last year Granular partnered with American Farm Bureau Insurance Services to streamline crop insurance data collection and reporting, and it also has a cross-marketing arrangement with Deere.
  2. L3 Technologies acquired Massachusetts-based OceanServer Technology for an undisclosed amount. “OceanServer Technology positions L3 to support the U.S. Navy’s vision for the tactical employment of UUVs. This acquisition also enhances our technological capabilities and strengthens our position in growth areas where we see compelling opportunity,” said Michael T. Strianese, L3’s Chairman and CEO. “As a leading innovator and developer of UUVs, OceanServer Technology provides L3 with a new growth platform that is aligned with the U.S. Navy’s priorities.”
  3. KB Medical SA, a Swiss medical robotics startup, was acquired by Globus Medical, a musculoskeletal solutions manufacturer, for an undisclosed amount. This is the second acquisition of a robotics startup by Globus; it acquired Excelsius Robotics in 2014. “The addition of KB Medical will enable Globus Medical to accelerate, enhance and expand our product portfolio in imaging, navigation and robotics. KB Medical’s experienced team of technology development professionals, its strong IP portfolio, and shared philosophy for robotic solutions in medicine strengthen Globus Medical’s position in this strategic area,” said Dave Demski, president of Emerging Technologies at Globus Medical.
  4. Jenoptik, a Germany-based laser components manufacturer of vision systems for automation and robotics, acquired Michigan-based Five Lakes Automation, an integrator and manufacturer of robotic material handling systems, for an undisclosed amount.
  5. Honeybee Robotics, the Brooklyn-based robotic space systems provider, was acquired by Ensign-Bickford for an undisclosed amount. Ensign-Bickford is a privately held 181-year-old contractor and supplier of space launch vehicles and systems. “The timing is great,” said Kiel Davis, President of Honeybee Robotics. “Honeybee has a range of new spacecraft motion control and robotics products coming to market. And EBI has the experience and resources to help us scale up and optimize our production operations so that we can meet the needs of our customers today and in the near future.”

IPOs

  1. Duke Robotics, a Florida- and Israel-based developer of advanced robotic systems that provide troops with aerial support, along with other technologies developed in Israel, has filed and been qualified for a stock offering of up to $15 million under SEC Tier II Reg A+, which allows anyone, not just wealthy investors, to purchase stock from approved equity crowdfunding offers.

Failures

  1. C&R Robotics (KR)
  2. EZ-Robotics (CN)

Robots won’t steal our jobs if we put workers at center of AI revolution


Future robots will work side by side with humans, just as they do today.
Credit: AP Photo/John Minchillo

by Thomas Kochan, MIT Sloan School of Management and Lee Dyer, Cornell University

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Lessons from history

There is no question coming technologies like AI will eliminate some jobs, as did those of the past.

The invention of the steam engine was supposed to reduce the number of manufacturing workers. Instead, their ranks soared.
Credit: Lewis Hine

More than half of the American workforce was involved in farming in the 1890s, back when it was a physically demanding, labor-intensive industry. Today, thanks to mechanization and the use of sophisticated data analytics to manage crops and cattle, fewer than 2 percent of American workers are in agriculture, yet their output is significantly higher.

But new technologies will also create new jobs. After steam engines replaced water wheels as the source of power in manufacturing in the 1800s, the sector expanded sevenfold, from 1.2 million jobs in 1830 to 8.3 million by 1910. Similarly, many feared that the ATM’s emergence in the early 1970s would replace bank tellers. Yet even though the machines are now ubiquitous, there are actually more tellers today doing a wider variety of customer service tasks.

So trying to predict whether a new wave of technologies will create more jobs than it will destroy is not worth the effort, and even the experts are split 50-50.

It’s particularly pointless given that perhaps fewer than 5 percent of current occupations are likely to disappear entirely in the next decade, according to a detailed study by McKinsey.

Instead, let’s focus on the changes they’ll make to how people work.

It’s about tasks, not jobs

To understand why, it’s helpful to think of a job as made up of a collection of tasks that can be carried out in different ways when supported by new technologies.

And in turn, the tasks performed by different workers – colleagues, managers and many others – can also be rearranged in ways that make the best use of technologies to get the work accomplished. Job design specialists call these “work systems.”

One of the McKinsey study’s key findings was that about a third of the tasks performed in 60 percent of today’s jobs are likely to be eliminated or altered significantly by coming technologies. In other words, the vast majority of our jobs will still be there, but what we do on a daily basis will change drastically.

To date, robotics and other digital technologies have had their biggest effects on mostly routine tasks like spell-checking and those that are dangerous, dirty or hard, such as lifting heavy tires onto a wheel on an assembly line. Advances in AI and machine learning will significantly expand the array of tasks and occupations affected.

Creating an integrated strategy

We have been exploring these issues for years as part of our ongoing discussions on how to remake labor for the 21st century. In our recently published book, “Shaping the Future of Work: A Handbook for Change and a New Social Contract,” we describe why society needs an integrated strategy to gain control over how future technologies will affect work.

And that strategy starts with helping define the problems humans want new technologies to solve. We shouldn’t be leaving this solely to their inventors.

Fortunately, some engineers and AI experts are recognizing that the end users of a new technology must have a central role in guiding its design to specify which problems they’re trying to solve.

The second step is ensuring that these technologies are designed alongside the work systems with which they will be paired. A so-called simultaneous design process produces better results for both the companies and their workers compared with a sequential strategy – typical today – which involves designing a technology and only later considering the impact on a workforce.

An excellent illustration of simultaneous design is how Toyota handled the introduction of robotics onto its assembly lines in the 1980s. Unlike rivals such as General Motors that followed a sequential strategy, the Japanese automaker redesigned its work systems at the same time, which allowed it to get the most out of the new technologies and its employees. Importantly, Toyota solicited ideas for improving operations directly from workers.

In doing so, Toyota achieved higher productivity and quality in its plants than competitors like GM that invested heavily in stand-alone automation before they began to alter work systems.

Similarly, businesses that tweaked their work systems in concert with investing in IT in the 1990s outperformed those that didn’t. And health care companies like Kaiser Permanente and others learned the same lesson as they introduced electronic medical records over the past decade.

Each example demonstrates that the introduction of a new technology does more than just eliminate jobs. If managed well, it can change how work is done in ways that can both increase productivity and the level of service by augmenting the tasks humans do.

Worker wisdom

But the process doesn’t end there. Companies need to invest in continuous training so their workers are ready to help influence, use and adapt to technological changes. That’s the third step in getting the most out of new technologies.

And it needs to begin before they are introduced. The important part of this is that workers need to learn what some are calling “hybrid” skills: a combination of technical knowledge of the new technology with aptitudes for communications and problem-solving.

Companies whose workers have these skills will have the best chance of getting the biggest return on their technology investments. It is not surprising that these hybrid skills are now in high and growing demand and command good salaries.

None of this is to deny that some jobs will be eliminated and some workers will be displaced. So the final element of an integrated strategy must be to help those displaced find new jobs and compensate those unable to do so for the losses endured. Ford and the United Auto Workers, for example, offered generous early retirement benefits and cash severance payments in addition to retraining assistance when the company downsized from 2007 to 2010.

Examples like this will need to become the norm in the years ahead. Failure to treat displaced workers equitably will only widen the gaps between winners and losers in the future economy that are now already all too apparent.

In sum, companies that engage their workforce when they design and implement new technologies will be best positioned to manage the coming AI revolution. By respecting the fact that today’s workers, like those before them, understand their jobs and the many tasks they entail better than anyone, these companies will be better able to “give wisdom to the machines.”

Thomas Kochan, Professor of Management, MIT Sloan School of Management and Lee Dyer, Professor Emeritus of Human Resource Studies and Research Fellow, Center for Advanced Human Resource Studies (CAHRS), Cornell University

This article was originally published on The Conversation. Read the original article.

Robot learns to follow orders like Alexa

ComText allows robots to understand contextual commands such as, “Pick up the box I put down.”
Photo: Tom Buehler/MIT CSAIL

by Adam Conner-Simons & Rachel Gordon

Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.

For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost. Picking it up means being able to see and identify objects, understand commands, recognize that the “it” in question is the tool you put down, go back in time to remember the moment when you put down the tool, and distinguish the tool you put down from other ones of similar shapes and sizes.

Recently researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making this type of request easier: In a new paper, they present an Alexa-like system that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments. They’ve dubbed the system “ComText,” for “commands in context.”

The toolbox situation above was among the types of tasks that ComText can handle. If you tell the system that “the tool I put down is my tool,” it adds that fact to its knowledge base. You can then update the robot with more information about other objects and have it execute a range of tasks like picking up different sets of objects based on different commands.

“Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors,” says CSAIL postdoc Rohan Paul, one of the lead authors of the paper. “This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”

The team tested ComText on Baxter, a two-armed humanoid robot developed by Rethink Robotics, the company founded by former CSAIL director Rodney Brooks.

The project was co-led by research scientist Andrei Barbu, alongside research scientist Sue Felshin, senior research scientist Boris Katz, and Professor Nicholas Roy. They presented the paper at last week’s International Joint Conference on Artificial Intelligence (IJCAI) in Australia.

How it works

Things like dates, birthdays, and facts are forms of “declarative memory.” There are two kinds of declarative memory: semantic memory, which is based on general facts like “the sky is blue,” and episodic memory, which is based on personal facts, like remembering what happened at a party.

Most approaches to robot learning have focused only on semantic memory, which obviously leaves a big knowledge gap about events or facts that may be relevant context for future actions. ComText, meanwhile, can observe a range of visuals and natural language to glean “episodic memory” about an object’s size, shape, position, type and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning and respond to commands.
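
To make the distinction concrete, here is a minimal sketch, in Python, of how a knowledge base might pair semantic facts with time-stamped episodic events to resolve a command like “pick up the tool I put down.” This is not the authors’ implementation; all names and structures are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import time

@dataclass
class Event:
    """One episodic observation: who did what to which object, and when."""
    timestamp: float
    actor: str
    action: str
    obj: str

@dataclass
class KnowledgeBase:
    # Semantic memory: general facts, e.g. {"sky": "blue"}
    semantic: dict = field(default_factory=dict)
    # Episodic memory: a time-ordered log of observed events
    episodic: List[Event] = field(default_factory=list)

    def observe(self, actor: str, action: str, obj: str) -> None:
        self.episodic.append(Event(time.time(), actor, action, obj))

    def resolve(self, actor: str, action: str) -> Optional[str]:
        """Return the most recent object that 'actor' applied 'action' to."""
        for event in reversed(self.episodic):
            if event.actor == actor and event.action == action:
                return event.obj
        return None

kb = KnowledgeBase()
kb.observe("you", "put_down", "torque_wrench")
kb.observe("you", "put_down", "screwdriver")
# "Pick up the tool I put down" resolves against the latest matching event:
print(kb.resolve("you", "put_down"))  # -> screwdriver
```

The episodic log is what lets the robot “go back in time”: the same query machinery could also store ownership facts such as “the tool I put down is my tool.”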

“The main contribution is this idea that robots should have different kinds of memory, just like people,” says Barbu. “We have the first mathematical formulation to address this issue, and we’re exploring how these two types of memory play and work off of each other.”

With ComText, Baxter was successful in executing the right command about 90 percent of the time. In the future, the team hopes to enable robots to understand more complicated information, such as multi-step commands, the intent of actions, and using properties about objects to interact with them more naturally.

For example, if you tell a robot that one box on a table has crackers, and one box has sugar, and then ask the robot to “pick up the snack,” the hope is that the robot could deduce that sugar is a raw material and therefore unlikely to be somebody’s “snack.”

By creating much less constrained interactions, this line of research could enable better communications for a range of robotic systems, from self-driving cars to household helpers.

“This work is a nice step towards building robots that can interact much more naturally with people,” says Luke Zettlemoyer, an associate professor of computer science at the University of Washington who was not involved in the research. “In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask.”

The work was funded, in part, by the Toyota Research Institute, the National Science Foundation, the Robotics Collaborative Technology Alliance of the U.S. Army, and the Air Force Research Laboratory.

New soft robots really suck: Vacuum-powered systems empower diverse capabilities

Recent advances in soft robotics have seen the development of soft pneumatic actuators (SPAs) to ensure that all parts of the robot are soft, including the functional parts. These SPAs have traditionally used increased pressure in parts of the actuator to initiate movement, but today a team from NCCR Robotics and RRL, EPFL publishes a new kind of SPA, one that uses vacuum, in Science Robotics.

The new vacuum-powered Soft Pneumatic Actuator (V-SPA) is soft, lightweight and very easy to fabricate. By using foam and coating it with layers of silicone rubber, the team has created an actuator that can be made from off-the-shelf parts without the need for molds – in fact, it takes just two hours to manufacture the V-SPA.

Once produced, the actuators were combined into plug-and-play “V-SPA Modules” which created a simplified design of soft pneumatic robots with many degrees of freedom. In fact, the team created reconfigurable, modular robots using these modules, where every function of the robot was powered by a single shared vacuum source, enabling many different types of capabilities, such as multiple forms of ground locomotion, vertical climbing, object manipulation and stiffness changing.

To test the new modular robot, the team added a suction arm that used the vacuum pump to pick up and move a series of objects, turning suction on when an object should be carried and letting the arm refill with air when the object should be released. Further validation came from attaching suction cups to the robot and using it to climb a vertical window, and from having the robot walk with a number of different gaits (either wave-like, as in a snake, or rolling).

By creating a soft, lightweight actuator that can move in any direction the team hope to enable a new generation of truly soft, compliant robots that can interact safely with the humans that use them.

 

 

Reference

M. A. Robertson and J. Paik, “New soft robots really suck: Vacuum-powered systems empower diverse capabilities,” Science Robotics. DOI: 10.1126/scirobotics.aan6357

Industrial robots in China up, up and away!

China has rapidly become a global leader in robotics and automation. Its 2016 annual sales of industrial robots reached the highest level ever recorded for any single country: 87,000 units (up 27% from 2015). China’s stock of industrial robots, at 340,000 units, is now also the highest in the world, and Chinese robot manufacturers increased their market share to 31% (up 120% from 2015).

The International Federation of Robotics (IFR), which provided these figures, forecasts that “from 2018 to 2020, a sales increase between 15 and 20 percent on average per year is possible for industrial robots.” And these projections don’t include service robots for professional and B2B use, or for personal use such as toys, drones, mobile gofers, guides, home assistants, and consumer products like robotic vacuums and floor and window cleaners.

Outlook for 2017

According to a report released by the China Robot Industry Alliance (CRIA) at the big World Robot Conference in Beijing in August, and reported by China Daily, China’s industrial robot market is expected to reach $4.22 billion in 2017, representing more than 110,000 new industrial robots.

At the same press conference, CRIA also reported that China’s service robot market will reach $1.32 billion this year, up 28 percent from 2015.

Outlook to 2020

The main drivers for the growth of the use of industrial robots in China are the electrical and electronics industry followed by general handling, welding and the auto industry. This broad and expanding demand is expected to continue as major contract manufacturers start and/or continue to automate their production. A further driving factor is China’s growing consumer market for all kinds of consumer goods.

According to the ten-year national plan “Made in China 2025,” the Chinese government wants to transform China from a low-cost labor-intensive manufacturing giant into a technology-based world manufacturing power. The plan includes strengthening Chinese robot suppliers and further increasing their market shares in China and abroad.

Shanzhai

Shenzhen is the Silicon Valley of technology and hardware for China. Things get made FAST. All kinds of ‘things.’ The can-make attitude in Shenzhen is being duplicated around China, so it is important to know what goes on there: why it happens in Shenzhen, why it happens so fast, and what people there think about patents, intellectual property and Western companies.

Another driver of China’s relentless push toward automation and robotics is this factoid: in 2016, China’s mobile payments hit $5.5 trillion, roughly 50 times the size of America’s $112 billion market, according to consulting firm iResearch. Chinese consumers are adopting cashless and e-commerce methods at a rate significantly faster than the rest of the world.

WIRED Video produced an hour-long documentary describing the process, the people, and ‘Shanzhai,’ the evolving philosophy of copycat manufacturing. The film attempts to put a positive spin on patent avoidance (what many Westerners call stealing) and on producing quickly for adequate, rather than massive, profits. It is a worthwhile and very informative investment of an hour of your time.

New robot rolls with the rules of pedestrian conduct


by Jennifer Chu
Engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing general codes of pedestrian conduct.
Credit: MIT

Just as drivers observe the rules of the road, most pedestrians follow certain social codes when navigating a hallway or a crowded thoroughfare: Keep to the right, pass on the left, maintain a respectable berth, and be ready to weave or change course to avoid oncoming obstacles while keeping up a steady walking pace.

Now engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.

In drive tests performed inside MIT’s Stata Center, the robot, which resembles a knee-high kiosk on wheels, successfully avoided collisions while keeping up with the average flow of pedestrians. The researchers have detailed their robotic design in a paper that they will present at the IEEE Conference on Intelligent Robots and Systems in September.

“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” says Yu Fan “Steven” Chen, who led the work as a former MIT graduate student and is the lead author of the study. “For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals.”

Chen’s co-authors are graduate student Michael Everett, former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT.

Social drive

In order for a robot to make its way autonomously through a heavily trafficked environment, it must solve four main challenges: localization (knowing where it is in the world), perception (recognizing its surroundings), motion planning (identifying the optimal path to a given destination), and control (physically executing its desired path).

Chen and his colleagues used standard approaches to solve the problems of localization and perception. For the latter, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For the problem of localization, they used open-source algorithms to map the robot’s environment and determine its position. To control the robot, they employed standard methods used to drive autonomous ground vehicles.

“The part of the field that we thought we needed to innovate on was motion planning,” Everett says. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”

That’s a tricky problem, particularly in pedestrian-heavy environments, where individual paths are often difficult to predict. As a solution, roboticists sometimes take a trajectory-based approach, in which they program a robot to compute an optimal path that accounts for everyone’s desired trajectories. These trajectories must be inferred from sensor data, because people don’t explicitly tell the robot where they are trying to go. 

“But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett says. “So that approach is not very realistic, especially if you want to drive faster.”

Others have used faster, “reactive-based” approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions.

The problem with reactive-based approaches, Everett says, is the unpredictability of human nature — people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, such robots tend to collide with people or look like they are being pushed around by avoiding people excessively.

 “The knock on robots in real situations is that they might be too cautious or aggressive,” Everett says. “People don’t find them to fit into the socially accepted rules, like giving people enough space or driving at acceptable speeds, and they get more in the way than they help.”

Training days

The team found a way around such limitations, enabling the robot to adapt to unpredictable pedestrian behavior while continuously moving with the flow and following typical social codes of pedestrian conduct.

They used reinforcement learning, a type of machine learning approach, in which they performed computer simulations to train a robot to take certain paths, given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, in which they encouraged the robot in simulations to pass on the right, and penalized the robot when it passed on the left.
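
As a rough illustration of how such norms can be folded into offline training, here is a sketch of a reward function of the kind described above. The terms and weights are invented for illustration and are not taken from the paper.

```python
def social_reward(reached_goal: bool, collided: bool,
                  passed_on: str, min_clearance: float) -> float:
    """Toy reward shaping for socially aware navigation: reward progress,
    penalize collisions and crowding, and encode the pass-on-the-right norm."""
    reward = 0.0
    if reached_goal:
        reward += 1.0
    if collided:
        reward -= 0.25
    if min_clearance < 0.2:        # metres; discourage crowding pedestrians
        reward -= 0.1
    if passed_on == "right":       # the simulations encouraged this...
        reward += 0.05
    elif passed_on == "left":      # ...and penalized this
        reward -= 0.05
    return reward

# Example: a step in which the robot passed a pedestrian on the left
print(social_reward(False, False, "left", 0.5))  # -> -0.05
```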

“We want it to be traveling naturally among people and not be intrusive,” Everett says. “We want it to be following the same rules as everyone else.”

The advantage to reinforcement learning is that the researchers can perform these training scenarios, which take extensive time and computing power, offline. Once the robot is trained in simulation, the researchers can program it to carry out the optimal paths, identified in the simulations, when the robot recognizes a similar scenario in the real world.

The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second, without pausing to reprogram its route.

“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” Everett says. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
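
A minimal sketch of that sense-choose-act cycle is below; the perception, policy, and drive interfaces are placeholder callables, not the MIT code.

```python
import time

CONTROL_PERIOD = 0.1  # seconds: the robot re-decides ten times per second
CRUISE_SPEED = 1.2    # m/s, a typical pedestrian walking speed

def run(sense, choose_velocity, command, steps=100):
    """Sense, pick a velocity, execute it for one tick, and repeat.
    'sense' returns an observation, 'choose_velocity' is the policy
    trained offline, and 'command' drives the wheels."""
    for _ in range(steps):
        start = time.monotonic()
        observation = sense()                    # lidar, depth camera, webcams
        v, omega = choose_velocity(observation)  # forward and angular speed
        command(v, omega)                        # execute for ~0.1 s
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, CONTROL_PERIOD - elapsed))
```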

Crowd control

Everett and his colleagues test-drove the robot in the busy, winding halls of MIT’s Stata Center, where the robot was able to drive autonomously for 20 minutes at a time. It rolled smoothly with the pedestrian flow, generally keeping to the right of hallways, occasionally passing people on the left, and avoiding any collisions.

“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”

Everett says going forward, he plans to explore how robots might handle crowds in a pedestrian environment.

“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” Everett says. “There may be a social rule of, ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”

This research was funded by Ford Motor Company.  

The need for robotics standards

Last week I was talking to a lead engineer at a Singapore company that is building a benchmarking system for robot solutions. Having seen my presentation at ROSCon 2016 about robot benchmarking, he asked me how I would benchmark solutions that are not ROS-compatible. I said that I wouldn’t dedicate time to benchmarking solutions that are not ROS-based. Instead, I suggested, I would use that time to polish the ROS-based benchmarking and encourage vendors to adopt that middleware in their products.

Benchmarks are necessary and they need standards

Benchmarks are necessary to improve any field. With a benchmark, different solutions to a single problem can be compared, and from that comparison a direction for improvement can be traced. Currently, robotics lacks such a benchmarking system.

I strongly believe that to create a benchmark for robotics we need a standard at the level of programming.

By having a standard at the level of programming, manufacturers can build their own hardware solutions at will, as long as they are programmable with the programming standard. That is the approach taken by devices that can be plugged into a computer. Manufacturers create the product on their own terms, and then provide a Windows driver that allows any computer in the world (that runs Windows) to communicate with the product. Once this computer-to-product communication is made, you can create programs that compare the same type of devices from different manufacturers for performance, quality, noise, whatever your benchmark is trying to compare.

You see? Different types of devices, different types of hardware. But all of them can be compared through the same benchmarking system that relies on the Windows standard.
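
In code, the idea might look like the hypothetical harness below: as long as every vendor’s device exposes the same programming interface, one benchmark can score them all. The `execute` method is an assumed standard contract, not a real API.

```python
import time

def benchmark(device, task, trials: int = 10) -> float:
    """Run the same task on any device that speaks the shared interface
    and return the average completion time in seconds."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        device.execute(task)  # the standard entry point every vendor implements
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Any vendor's product can then be compared by the same harness:
# for arm in (vendor_a_arm, vendor_b_arm):
#     print(type(arm).__name__, benchmark(arm, pick_and_place_task))
```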

Software development for robots also needs standards

Standards are not only required for comparing solutions; they also speed up robotics development. With a robotics standard, developers can concentrate on building solutions that do not have to be re-implemented whenever the robot hardware changes. Indeed, given the middleware structure, developers can dissociate themselves from the hardware enough to spend almost 100% of their time in the software realm, while still developing code for robots.

We need the same type of standard for robotics: a kind of operating system that allows us to compare different robotics solutions. We need the Windows of PCs, the Android of phones, the CAN of buses…


A few standard proposals and a winner

But you already know that. I’m not the first one to state this. Actually, many people have already tried to create such a standard. Some examples include Player, ROS, YARP, OROCOS, Urbi, MIRA or JdE Robot, to name a few.

Personally, I don’t care which standard is used. It could be ROS, it could be YARP, or it could be one that has not been created yet. The only thing I really care about is that we adopt a standard as soon as possible.

And it looks like the developers have decided. Robotics developers prefer ROS as their common middleware to program robots.

No other robotics middleware has seen such broad adoption. Some data:

                                                     ROS        YARP        OROCOS
Number of Google pages                               243,000    37,000      42,000
Citations of the paper describing the middleware     3,638      463         563
Alexa ranking                                        14,118     1,505,000   668,293

Note 1: Only showing the current big three players.

Note 2: Very simple comparison. Difficult to compare in other terms since data is not available.

Note 3: Data measured in August 2017. May vary at the time you are reading this. Links provided on the numbers themselves, so you can check yourself.

This is not only the feeling that we, roboticists, have. The numbers also indicate that ROS is becoming the standard for robotics programming.


Why ROS?

The question, then, is why ROS has emerged on top of all the other contenders. None of them is worse than ROS in terms of features; indeed, every one of the other middlewares has some feature that outperforms ROS. If that is so, why, or how, has ROS achieved the status of becoming the standard?

A simple answer from my point of view: excellent learning tutorials and debugging tools.


Here is a video where Leila Takayama, an early developer of ROS, explains when she realized that the key to having ROS used worldwide would be providing tools that simplify the reuse of ROS code. None of the other projects has such a clear and structured set of tutorials. In addition, few of the other middlewares provide debugging tools for their packages. The lack of these two essential elements keeps new people from adopting those middlewares (even if I understand why the developers of OROCOS and YARP have not provided them… who wants to write tutorials or build debugging tools? Nobody!).

 

It is not only about tutorials and debugging tools, though. The ROS creators also provide a good package-management system. As a result, developers worldwide can use each other’s packages in a (relatively) easy way. This created an explosion in available ROS packages, providing almost anything off the shelf for your brand-new ROSified robot.

Now, the impressive rate at which contributions are being made to the ROS ecosystem makes its growth almost unstoppable.


 

What about companies?

At the beginning, ROS was mostly used by students at universities. However, as ROS becomes more mature and the number of packages increases, companies are realizing that adopting ROS is also good for them, because they will be able to use code developed by others. On top of that, it will be easier for them to hire new engineers who already know the middleware (otherwise, they would need to teach newcomers their own middleware).

As a result, many companies have jumped onto the ROS train, developing their products from scratch to be ROS-compatible. Examples include Robotnik, Fetch Robotics, Savioke, Pal Robotics, Yujin Robots, The Construct, Rapyuta Robotics, Erle Robotics, Shadow Robot and Clearpath, to name a few of the sponsors of the next ROSCon. By creating ROS-compatible products, they decreased their development time by several orders of magnitude.

To take things further, two Spanish companies have revolutionised the standardization of robotics products using the ROS middleware. On one side, Robotnik has created the ROS Components shop, where anyone can buy ROS-compatible devices, from mobile bases to sensors and actuators. On the other side, Erle Robotics (now Acutronic Robotics) is developing Hardware ROS. H-ROS is a standardized software and hardware infrastructure for easily creating reusable and reconfigurable robot hardware parts. ROS is enabling hardware standardization too, but this time driven by companies, not research! That must mean something…


Finally, it looks like industrial robot manufacturers have understood the value that a standard can provide to their business. Even if they do not make their industrial robots ROS-enabled from the start, they are adopting ROS-Industrial, a flavour of ROS that allows them to ROSify their industrial robots and re-use all the software created for manipulators in the ROS ecosystem.

But are all companies jumping onto the ROS train? Not all of them!

Some companies, like Jibo, Aldebaran and Google, still do not rely on ROS for their robot programming. Some of them rely on their own middleware created before ROS existed (that is the case for Aldebaran). Others are creating their own middleware from scratch. Their reasons: they do not believe ROS is good enough, they have already created a middleware, or they do not want their products to depend on someone else’s middleware. Those companies have fair reasons to go their own way. But will that keep them competitive? (If we judge from history – mobile phones, VCRs – the answer may be no.)

So is ROS the standard for programming robots?

It is still too soon to answer that question. ROS looks like it is becoming the standard, but many things can change. It is unlikely that another middleware will take the current title from ROS, but it may happen. There could be a new player that wipes ROS off the map (maybe Google will release its middleware to the public, as it did with Android, and take the sector by storm?).

Still, ROS has its problems, like a lack of security and the instability of some important packages. Even though the OSRF group is working hard to build a better ROS (for instance, ROS 2 is in beta with many improvements at its roots), hard work is still required on some basic things (like the ROS controllers for real robots).


Given those numbers, at The Construct we believe that ROS IS THE STANDARD (that is why we are devoted to creating the best ROS learning tutorials in the world). Indeed, it was thanks to this standardization that two Barcelona students were able to create an autonomous robot product for coffee shops in only three months, with zero prior knowledge of robotics (see the Barista robot).

This is the future, and it is good. In this future, thanks to standards, almost anyone will be able to design, build and program their own robotics product, similar to how PC shops assemble computers today.

So my advice, as I said to the Singapore engineer, is to bet on ROS. Right now, it is the best option for a robotics standard.

 

Long-term control of brain-computer interfaces by users with locked-in syndrome

Using brain-computer interfaces (BCIs) to give people with locked-in syndrome reliable communication and control capabilities has long been a futuristic trope of medical dramas and sci-fi. A team from NCCR Robotics and CNBI, EPFL has recently published a paper detailing a step towards bringing this technique into the everyday lives of those affected by extreme paralysis.

BCIs measure brainwaves using sensors placed outside the head. With careful training and calibration, these brainwaves can be used to infer the intention of the person they are recorded from. However, one of the challenges of using BCIs in everyday life is that BCI performance varies over time. This issue is particularly important for motor-restricted end-users, who usually suffer even higher fluctuations in their brain signals and in the resulting performance. One approach to tackling this issue is shared control, which has so far mostly relied on predefined settings that provide a fixed level of assistance to the user.

The team tackled the issue of performance variation by developing a system capable of dynamically matching the user’s evolving capabilities with the appropriate level of assistance. The key element of this adaptive shared control framework is to incorporate the user’s brain state and signal reliability while the user is trying to deliver a BCI command.

The team tested their novel strategy with one person with incomplete locked-in syndrome, multiple times over the course of a year. The person was asked to imagine moving the right hand to trigger a “right” command, and the left hand for a “left” command, to control an avatar in a computer game. The team demonstrated how adaptive shared control can exploit an estimate of BCI performance (in terms of command delivery time) to adjust the level of assistance online by regulating the game’s speed. Remarkably, the results showed stable performance over several months without recalibration of the BCI classifier or the performance estimator.
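
The adaptive element can be pictured with a small sketch: estimate current performance from recent command delivery times and map it to an assistance level that throttles the game’s speed. The thresholds and the linear mapping below are invented for illustration; they are not the published estimator.

```python
def assistance_level(delivery_times, fast=2.0, slow=6.0):
    """Map recent BCI command delivery times (seconds) to assistance in [0, 1]:
    quick, reliable commands -> little help; slow, hesitant commands -> more."""
    avg = sum(delivery_times) / len(delivery_times)
    if avg <= fast:
        return 0.0
    if avg >= slow:
        return 1.0
    return (avg - fast) / (slow - fast)

def game_speed(base_speed, assist):
    """Assistance is delivered by slowing the game, giving the user more time."""
    return base_speed * (1.0 - 0.5 * assist)

print(game_speed(1.0, assistance_level([3.0, 4.0, 5.0])))  # -> 0.75
```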

This work marks the first time that this design has been successfully tested with an end-user with incomplete locked-in syndrome, and it replicates the results of earlier tests with able-bodied subjects.

 

Reference:

S. Saeedi, R. Chavarriaga and J. del R. Millán, “Long-Term Stable Control of Motor-Imagery BCI by a Locked-In User Through Adaptive Assistance,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 4, pp. 380-391.

Udacity Robotics video series: Interview with Jillian Ogle from Let’s Robot


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Jillian Ogle, Founder and CEO of Let’s Robot. Jillian is a video game designer turned roboticist attempting to combine games and robotics into a unique user experience.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

EU’s future cyber-farms to utilise drones, robots and sensors

Farmers could protect the environment and cut down on fertiliser use with swarms of drones. Image credit – ‘Aerial View – Landschaft Markgräflerland’ by Taxiarchos228 is licenced under CC 3.0 unported
by Anthony King

Bee-based maths is helping teach swarms of drones to find weeds, while robotic mowers keep hedgerows in shape.

‘We observe the behaviour of bees. We gain knowledge of how the bees solve problems and with this we obtain rules of interaction that can be adapted to tell us how the robot swarms should work together,’ said Vito Trianni at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council.

Honeybees, for example, run on an algorithm to allow them to choose the best nest site, even though no bee knows the full picture.

Trianni runs an EU-funded research project known as SAGA, which is using the power of robotic groupthink to keep crops weed free.

‘We can use low-cost robots and low-cost cameras. They can even be prone to error, but thanks to the cooperation they will be able to generate precise maps at centimetre scales,’ said Trianni.

‘They will initially spread over the field to inspect it at low resolution, but will then decide on areas that require more focus,’ said Trianni. ‘They can gather together in small groups closer to the ground.’

Importantly the drones make these decisions themselves, as a group.

Next spring, a swarm of the quadcopters will be released over a sugar beet field. They will stay in radio contact with each other and use algorithms learnt from the bees to cooperate and put together a map of weeds. This will then allow for targeted spraying of weeds or their mechanical removal on organic farms.
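
A hedged sketch of that coarse-to-fine logic follows; the data structures are invented and nothing here comes from the SAGA code. Each drone sweeps unvisited cells at altitude, and once enough low-resolution sightings flag a cell, drones converge on it for centimetre-scale imaging.

```python
import random
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Cell:
    center: Tuple[float, float]
    observations: List[bool] = field(default_factory=list)  # True = weed suspected

def needs_focus(cell: Cell, threshold: float = 0.3) -> bool:
    """A cell deserves a close pass if enough coarse sightings flagged weeds."""
    if not cell.observations:
        return False
    return sum(cell.observations) / len(cell.observations) >= threshold

def plan_next(drone_pos: Tuple[float, float], cells: List[Cell]) -> Optional[Cell]:
    """Phase 1: sweep unvisited cells at altitude.
    Phase 2: gather over flagged cells and descend for detailed imaging."""
    def dist2(c: Cell) -> float:
        return (c.center[0] - drone_pos[0]) ** 2 + (c.center[1] - drone_pos[1]) ** 2
    flagged = [c for c in cells if needs_focus(c)]
    if flagged:
        return min(flagged, key=dist2)  # join the nearest hotspot
    unvisited = [c for c in cells if not c.observations]
    return random.choice(unvisited) if unvisited else None
```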

Today the most common way to control weeds is to spray entire fields with herbicide chemicals. Smarter spraying will save farmers money, but it will also lower the risk of resistance developing to the agrichemicals. And there will be an environmental benefit from spraying less herbicides.

Co-ops

Swarms of drones for mapping crop fields offer a service to farmers, while farm co-ops could even buy swarms themselves.

‘There is no need to fly them every day over your field, so it is possible to share the technology between multiple farmers,’ said Trianni. A co-op might buy 20 to 30 drones, but adjust the size of the swarm to the farm.

The drones are 1.5 kilos in weight and fly for around 20-30 minutes. For large fields, the drone swarms could operate in relay teams, with drones landing and being replaced by others.

It’s the kind of technology that is ideally suited to today’s large-scale farms, as is another remote technology that combines on-the-ground sensor information with satellite data to tell farmers how much nitrogen or water their fields need.

Wheat harvested from a field in Boigneville, 100 km south of Paris, France, in August this year will have been grown with the benefit of this data, as part of a pilot being run by an EU-funded project known as IOF2020, which involves over 70 partners and around 200 researchers.

‘Sensors are costing less and less, so at the end of the project we hope to have something farmers or farm cooperatives can deploy in their fields,’ explained Florence Leprince, a plant scientist at Arvalis – Institut du végétal, the French arable farming institute which is running the wheat experiment.

‘This will allow farmers to be more precise and not overuse nitrogen or water,’ said Florence Leprince of Arvalis – Institut du végétal, France.

Adding too much nitrogen to a crop field costs farmers money, but it also has a negative environmental impact. Surplus nitrogen leaches from soils and into rivers and lakes, causing pollution.

The sensor data is needed because satellite pictures can indicate how much nitrogen is in a crop, but not how much is in the soil. The sensors will help add that detail, in a way that farmers will find easy to use.

It’s a similar story for the robotic hedge trimmer being developed by a separate group of researchers. All the farmer or groundskeeper needs to do is mark which hedge needs trimming.

‘The user will sketch the garden, though not too accurately,’ said Bob Fisher, computer vision scientist at Edinburgh University, UK, and coordinator of the EU-funded TrimBot2020 project. ‘The robot will go into the garden and come back with a tidied-up sketch map. At that point, the user can say go trim that hedge, or mark what’s needed on the map.’

This autumn will see the arm and the robot base assembled together, while the self-driving bot will be set off around the garden next spring.

More info:
SAGA (part of ECHORD Plus Plus)
IOF2020
TrimBot2020

Talking Machines: Data Science Africa, with Dina Machuve

In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institution of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the Data Science Africa conference and workshop.


See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Custom robots in a matter of minutes

Interactive Robogami enables the fabrication of a wide range of robot designs. Photo: MIT CSAIL

Even as robots become increasingly common, they remain incredibly difficult to make. From designing and modeling to fabricating and testing, the process is slow and costly: Even one small change can mean days or weeks of rethinking and revising important hardware.

But what if there were a way to let non-experts craft different robotic designs — in one sitting?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are getting closer to doing exactly that. In a new paper, they present a system called “Interactive Robogami” that lets you design a robot in minutes, and then 3-D print and assemble it in as little as four hours.
 

One of the key features of the system is that it allows designers to determine both the robot’s movement (“gait”) and shape (“geometry”), a capability that’s often separated in design systems.

“Designing robots usually requires expertise that only mechanical engineers and roboticists have,” says PhD student and co-lead author Adriana Schulz. “What’s exciting here is that we’ve created a tool that allows a casual user to design their own robot by giving them this expert knowledge.”

The paper, which is being published in the new issue of the International Journal of Robotics Research, was co-led by PhD graduate Cynthia Sung alongside MIT professors Wojciech Matusik and Daniela Rus.

The other co-authors include PhD student Andrew Spielberg, former master’s student Wei Zhao, former undergraduate Robin Cheng, and Columbia University professor Eitan Grinspun. (Sung is now an assistant professor at the University of Pennsylvania.)

How it works

3-D printing has transformed the way that people can turn ideas into real objects, allowing users to move away from more traditional manufacturing. Despite these developments, current design tools still have space and motion limitations, and there’s a steep learning curve to understanding the various nuances.

Interactive Robogami aims to be much more intuitive. It uses simulations and interactive feedback with algorithms for design composition, allowing users to focus on high-level conceptual design. Users can choose from a library of over 50 different bodies, wheels, legs, and “peripherals,” as well as a selection of different steps (“gaits”).

Importantly, the system is able to guarantee that a design is actually possible, analyzing factors such as speed and stability to make suggestions and ensure that, for example, the user doesn’t create a robot so top-heavy that it can’t move without tipping over.

Once designed, the robot is then fabricated. The team’s origami-inspired “3-D print and fold” technique involves printing the design as flat faces connected at joints, and then folding the design into the final shape, combining the most effective parts of 2-D and 3-D printing.  

“3-D printing lets you print complex, rigid structures, while 2-D fabrication gives you lightweight but strong structures that can be produced quickly,” Sung says. “By 3-D printing 2-D patterns, we can leverage these advantages to develop strong, complex designs with lightweight materials.”

Results

To test the system, the team used eight subjects who were given 20 minutes of training and asked to perform two tasks.

One task involved creating a mobile, stable car design in just 10 minutes. In a second task, users were given a robot design and asked to create a trajectory to navigate the robot through an obstacle course in the least amount of travel time.

The team fabricated a total of six robots, each of which took 10 to 15 minutes to design, three to seven hours to print and 30 to 90 minutes to assemble. The team found that their 3-D print-and-fold method reduced printing time by 73 percent and the amount of material used by 70 percent. The robots also demonstrated a wide range of movement, like using single legs to walk, using different step sequences, and using legs and wheels simultaneously.

“You can quickly design a robot that you can print out, and that will help you do these tasks very quickly, easily, and cheaply,” says Sung. “It’s lowering the barrier to have everyone design and create their own robots.”

Rus hopes people will be able to incorporate robots to help with everyday tasks, and that similar systems with rapid printing technologies will enable large-scale customization and production of robots.

“These tools enable new approaches to teaching computational thinking and creating,” says Rus. “Students can not only learn by coding and making their own robots, but by bringing to life conceptual ideas about what their robots can actually do.”

While the current version focuses on designs that can walk, the team hopes that in the future, the robots can take flight. Another goal is to have the user be able to go into the system and define the behavior of the robot in terms of tasks it can perform.

“This tool enables rapid exploration of dynamic robots at an early stage in the design process,” says Moritz Bächer, a research scientist and head of the computational design and manufacturing group at Disney Research. “The expert defines the building blocks, with constraints and composition rules, and paves the way for non-experts to make complex robotic systems. This system will likely inspire follow-up work targeting the computational design of even more intricate robots.”

This research was supported by the National Science Foundation’s Expeditions in Computing program.
