
5G fast and ultra-low latency robot control demonstrated

SoftBank and Huawei jointly demonstrated various use cases for their forthcoming 5G network. 5G commercial services, which will provide ultra-high throughput of over 800 Mbps with ultra-low-latency transmission of less than 2 ms, are expected to roll out in 2020 in Japan and Korea and in 2021-2023 in China, Europe and the U.S.

5G will (we hope) be able to handle the massive growth of IoT devices and their streaming data. With 5G technology, getting and staying connected will get easier. You’ll still need a robust network provider, but your devices will learn to do things like sync or pair automatically.

When 5G comes online, around 50 billion “things” will be connected and that number will be growing exponentially. Think of self-driving cars that have capabilities to communicate with traffic lights, smart city sensor systems, savvy home appliances, industrial automation systems, connected health innovations, personal drones, robots and more.

“5G will make the internet of things more effective, more efficient from a spectral efficiency standpoint,” said an Intel spokesperson. “Each IoT device and network will use exactly and only what it needs and when it needs it, as opposed to just what’s available.”

In the SoftBank and Huawei robot demonstration, a robotic arm played air hockey against a human. A camera installed above the air hockey table detected the puck’s position to calculate its trajectory. That data was streamed to the cloud, and the calculated result was then forwarded to the robotic arm control server to control the arm. In the demonstration, the robotic arm was able to return shots from the human player on various trajectories at competition speed, i.e., with no noticeable latency from camera to cloud to controller to robot arm.
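The pipeline described above is easiest to reason about as a latency budget. In this sketch, only the sub-2 ms 5G link time is taken from the article; every other stage time is an assumption for illustration:

```python
# Back-of-envelope latency budget for the camera -> cloud -> controller -> arm
# loop. Only the sub-2 ms 5G link figure comes from the article; the other
# stage times are invented for illustration.
stages_ms = {
    "camera capture + puck detection": 8.0,
    "uplink to cloud (5G)": 2.0,
    "cloud trajectory calculation": 5.0,
    "downlink to arm control server (5G)": 2.0,
    "arm controller command cycle": 3.0,
}
total_ms = sum(stages_ms.values())
print(f"end-to-end: {total_ms:.1f} ms")

# At 10 m/s, a puck moves this far while the loop completes:
print(f"puck travel during one loop: {10.0 * total_ms / 1000.0:.2f} m")
```

Even a 20 ms loop leaves a 10 m/s puck 20 cm past where it was observed, which is why every stage, not just the radio link, has to be fast.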

Other demonstrations by SoftBank and Huawei included real-time ultra-high-definition camera data compressed, streamed and then displayed on a UHD monitor; immersive video scenery captured by 180-degree four-lens cameras, uploaded and then downloaded to smartphones and tablets; remote rendering by a cloud GPU server; and the robot demo. Each demo was oriented to a different industry, e.g., tele-health, tele-education, VR, AR, CAD overlays at a remote (construction) site, and the robot example, which can apply to factory automation and vehicle-to-vehicle communication.

Other vendors have also demonstrated 5G use cases. Ericsson and BMW tracked a connected car at 105 mph, and Verizon used 5G wireless to livestream the Indianapolis Motor Speedway in VR and hi-res 4K 360° video.

5G is coming!

Batteries for Drones

Batteries provide essential power to the motors, receivers and controllers. For multirotors, the most commonly used batteries are Lithium Polymer (LiPo) types, as their energy density is high. Usually 3-4 cell batteries are used, with capacities of up to around 5000 mAh (milliampere-hours). To understand what mAh means, consider this example: a 3000 mAh battery will last three times longer than a 1000 mAh battery at the same load. Think of the current draw (in amperes) and time as analogous to speed and time: speed × time = distance. Here the “distance” is the mAh capacity; in other words, it is how far you can go for so many hours at a certain speed. As the speed (the current you draw) increases, the run time decreases, because the mAh capacity (the distance) is fixed.

The advantage of LiPo batteries is that they can discharge at a much faster rate than a normal battery. It is recommended to buy a few sets of batteries so that, while one charges in the recharger, you can fly with another and do not have to wait. Some intelligent batteries on newer models have sensors and can calculate their distance from you versus the amount of power needed to return. Safety note: lithium batteries can catch fire, and you must check the battery manufacturer’s requirements for safe usage.
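As a rough sketch of the capacity arithmetic above (all numbers are illustrative; real flight times depend on weight, wind and battery health), hours of run time are capacity in amp-hours divided by average current draw:

```python
# Rough flight-time estimate from battery capacity and average current draw.
# The 80% usable fraction reflects that LiPo packs should not be fully
# drained; all numbers here are illustrative, not from any specific drone.

def flight_time_minutes(capacity_mah: float, avg_draw_a: float,
                        usable_fraction: float = 0.8) -> float:
    """Capacity (mAh) converted to amp-hours, divided by draw (A), gives hours."""
    usable_ah = (capacity_mah / 1000.0) * usable_fraction
    return (usable_ah / avg_draw_a) * 60.0

# A 5000 mAh pack at a 20 A average draw: (5 Ah * 0.8 / 20 A) * 60 = 12 minutes
print(flight_time_minutes(5000, 20))
```

Doubling the draw halves the time, which is the “fixed distance” intuition from the paragraph above.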


This post was originally written by RoboticMagazine.com and displaying without our permission is not allowed.

Teleoperating robots with virtual reality


by Rachel Gordon
Consisting of a headset and hand controllers, CSAIL’s new VR system enables users to teleoperate a robot using an Oculus Rift headset.
Photo: Jason Dorfman/MIT CSAIL

Certain industries have traditionally not had the luxury of telecommuting. Many manufacturing jobs, for example, require a physical presence to operate machinery.

But what if such jobs could be done remotely? Last week researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a virtual reality (VR) system that lets you teleoperate a robot using an Oculus Rift headset.

The system embeds the user in a VR control room with multiple sensor displays, making it feel like they’re inside the robot’s head. By using hand controllers, users can match their movements to the robot’s movements to complete various tasks.

“A system like this could eventually help humans supervise robots from a distance,” says CSAIL postdoc Jeffrey Lipton, who was the lead author on a related paper about the system. “By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.”

The researchers even imagine that such a system could help employ increasing numbers of jobless video-gamers by “gamifying” manufacturing positions.

The team used the Baxter humanoid robot from Rethink Robotics, but said that it can work on other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with CSAIL Director Daniela Rus and researcher Aidan Fay. They presented the paper at the recent IEEE/RSJ International Conference on Intelligent Robots and Systems in Vancouver.

There have traditionally been two main approaches to using VR for teleoperation.

In a direct model, the user’s vision is directly coupled to the robot’s state. With these systems, a delayed signal could lead to nausea and headaches, and the user’s viewpoint is limited to one perspective.

In a cyber-physical model, the user is separate from the robot. The user interacts with a virtual copy of the robot and the environment. This requires much more data, and specialized spaces.

The CSAIL team’s system is halfway between these two methods. It solves the delay problem, since the user is constantly receiving visual feedback from the virtual world. It also solves the cyber-physical issue of being distinct from the robot: Once a user puts on the headset and logs into the system, they’ll feel as if they’re inside Baxter’s head.

The system mimics the homunculus model of the mind — the idea that there’s a small human inside our brains controlling our actions, viewing the images we see, and understanding them for us. While it’s a peculiar idea for humans, for robots it fits: Inside the robot is a human in a virtual control room, seeing through its eyes and controlling its actions.

Using Oculus’ controllers, users can interact with controls that appear in the virtual space to open and close the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm.

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
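As a concrete (and deliberately simplified) sketch of that chain of mappings, treat each frame change as a uniform scale plus a translation; the actual system composes richer calibrated transforms, and every number below is an invented assumption:

```python
# Sketch of the two-stage mapping described above: a hand position in the
# user's frame is mapped into the virtual control room, then into the robot's
# frame. The scales and offsets are made-up values, not CSAIL's calibration.

def map_point(point, scale, offset):
    """Uniform scale plus translation between two coordinate frames."""
    return tuple(scale * c + o for c, o in zip(point, offset))

def human_to_robot(hand_xyz):
    # human frame -> virtual control room (here: just a vertical shift)
    virtual = map_point(hand_xyz, scale=1.0, offset=(0.0, 0.0, -0.2))
    # virtual control room -> robot frame (shrink slightly, shift to the arm)
    return map_point(virtual, scale=0.9, offset=(0.35, 0.0, 0.1))

print(human_to_robot((0.2, 0.1, 1.0)))
```

Composing the two maps is what gives the operator the sense of co-location: moving a hand in the real room moves the marker in the virtual room and the gripper in the robot’s workspace consistently.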

The system is also more flexible compared to previous systems that require many resources. Other systems might extract 2-D information from each camera, build out a full 3-D model of the environment, and then process and redisplay the data. In contrast, the CSAIL team’s approach bypasses all of that by simply taking the 2-D images that are displayed to each eye. (The human brain does the rest by automatically inferring the 3-D information.) 

To test the system, the team first teleoperated Baxter to do simple tasks like picking up screws or stapling wires. They then had the test users teleoperate the robot to pick up and stack blocks.

Users successfully completed the tasks at a much higher rate compared to the direct model. Unsurprisingly, users with gaming experience had much more ease with the system.

Tested against current state-of-the-art systems, CSAIL’s system was better at grasping objects 95 percent of the time and 57 percent faster at doing tasks. The team also showed that the system could pilot the robot from hundreds of miles away; testing included controlling Baxter at MIT from a hotel’s wireless network in Washington.

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team eventually wants to focus on making the system more scalable, with many users and different types of robots that can be compatible with current automation technologies.

The project was funded, in part, by the Boeing Company and the National Science Foundation.

Industrial cleaning equipment maker Nilfisk goes public

Copyright: Nilfisk

Nilfisk Holding A/S of Denmark began trading on Nasdaq under the symbol NLFSK after being spun off from NKT A/S, a Danish conglomerate. Nilfisk is one of the world’s leading suppliers of professional cleaning equipment, with a strong brand and a vision for growth in robotics.

Nilfisk expects that 10% of their revenue will come from autonomous machines within the next 5-7 years. In that pursuit, Blue Ocean Robotics and Nilfisk recently announced a strategic partnership to develop a portfolio of intelligent cleaning machines and robots to add to the Nilfisk line of industrial cleaners.

According to Hans Henrik Lund, CEO of Nilfisk,

“We estimate that approximately 70 percent of the cost of professional cleaning goes to labor. At the same time, the cleaning industry is one of the industries with the highest employee turnover. We therefore experience a significant need among our customers to introduce autonomous machines that can solve standardized cleaning tasks so that cleaning operators can be used for other assignments. We have a clear strategy to develop our product portfolio in partnership with highly-specialized technology companies that are the best in their field. We already have good experiences with this, and we are looking forward to starting this partnership with Blue Ocean Robotics, which complements our other partnerships very well.”

Preliminary Q3 2017 financial results report revenue of approx. EUR 253m ($300 million), a gain of approx. 3.4% over Q3 2016. The EBITDA margin was approx. 11.7% for the first nine months of 2017.

Nilfisk competitors include Tennant, Karcher, Vector Technologies, Sumitomo, Discovery Robotics, ICE / Brain Corp, and Taski Intellibot to name just a few.

National Robot Safety Conference 2017

I had the opportunity to attend the National Robot Safety Conference for Industrial Robots today in Pittsburgh, PA (USA). Today was the first day of a three-day conference. While I mostly cover technical content on this site, I felt that this was an important conference to attend, since safety and safety standards are becoming more and more important in robot system design. This conference focused specifically on industrial robots. That means the standards discussed were not directly related to self-driving cars, personal robotics, or space robots (you still don’t want to crash into a Martian and start an intergalactic war).

In this post I will go into a bit of detail on the presentations from the first day. Part of the reason I wanted to attend the first day was to hear the overview and introductory talks that formed a base for the rest of the sessions.

The day started out with some Standards Bingo. Lucky for us the conference organizers provided a list of standards terms, abbreviations, codes, and titles (see link below). For somebody (like myself) who does not work with industrial robot safety standards every day, when people start rattling off safety standard numbers it can get confusing very fast.

Quick, what is ISO 10218-1:2011 or IEC 60204-1:2016? For those who do not know (me included), those are “Robots and robotic devices — Safety requirements for industrial robots — Part 1: Robots” and “Safety of machinery — Electrical equipment of machines — Part 1: General requirements.”

Click here for a post with a guide to relevant safety standards, Abbreviations, Codes & Titles.

The next talk was from Carla Silver at Merck & Company Inc. She covered what safety team members need to remember to be successful and introduced Carla’s Top Five List.

  1. Do not assume you know everything about the safety of a piece of equipment!
  2. Do not assume that the equipment vendor has provided all the information or understands the hazards of the equipment.
  3. Do not assume that the vendor has built and installed the equipment to meet all safety regulations.
  4. Be a “part of the process”: make sure to involve the entire team (including health and safety people).
  5. Pursue continuous education.

I think those 5 items are a good list for life in general.

The prior talk set the stage for why safety can be tricky and the amount of work it takes to stay up to date.

Certified Robot Integrator is a certification (and a way to make money) from the Robotic Industries Association (RIA) that helps supply people trained to fill the safety role while integrating and designing new robot systems.

According to Bob Doyle, the RIA Director of Communications, RIA certified robot integrators must understand current industry safety standards and undergo an on-site audit in order to get certified. Every two years they need to recertify; part of the recertification is having an RIA auditor perform a site visit. When recertifying, integrators are expected to know the current standards. I was happy to hear about the two-year recertification, given how much robotics technology changes over two years.

A bit unrelated, but A3 is the umbrella association for the Robotic Industries Association (RIA) as well as Advancing Vision & Imaging (AIA) and the Motion Control & Motor Association (MCMA). Bob mentioned that the AIA and MCMA certifications are standalone from the RIA Certified Integrator program. However, they are both growing as a way to train industrial engineers for those applications. Both the AIA and MCMA certifications are vendor-agnostic with respect to the technology used. There are currently several hundred people with the AIA certification. The MCMA certification was just released earlier this year and has several dozen people certified. Bob said that there are several companies that now require at least one team member on a project to have the above certifications.

The next talk got into the details of robot system integrators and best practices, in particular risk assessments. Risk assessment is a relatively new part of the integration process but has a strong focus in the current program. Risk assessments are important due to the number of potential safety hazards and the different types of interactions a user might have with the robot system. The risk assessment helps guide the design as well as how users should interact with the robot. The responsibility to perform this risk assessment lies with the robot integrator, not directly with the manufacturer or end user.

One thing that I heard that surprised me was that many integrators do not share the risk assessment with the end user, since it is considered proprietary to that integrator. However, one participant said that you can often get them to discuss it in a meeting or over the phone; they just will not hand over the documents.

After a short coffee break we moved on to discussing some of the standards in detail: in particular R15.06, the industrial robot safety standard; the proposed R15.08 standard for industrial mobile robot safety; and the R15.606 collaborative robot safety standard. Here are a few notes that I took:

Types of Standards

  • A – Basic concepts — e.g., guidance to assess risk
  • B – Generic safety standards — e.g., safety distances, interlocks, etc.
  • C – Machine-specific — e.g., from the vendor for a particular robot

Type C standards overrule type A & B standards.

Parts of a Standard

  • Normative – required; uses the language of “shall”
  • Informative – recommended or advisory; uses the language of “should” or “can”. Notes in standards are considered informative.

Key Terms for Safety Standards

  • Industrial robot – a robot manipulator of at least 3 DOF and its controller
  • Robot system – the industrial robot with its end effector, work piece and peripheral equipment (such as a conveyor)
  • Robot cell – the robot system with its safeguarded spaces, including the physical barriers

A case study was presented of a three-robot system in a single cell and how it was designed to meet safety standards.

R15.06 is all about “keeping people safe by keeping them away from the robot system”. This obviously does not work for mobile robots that move around people and collaborative robots. For that the proposed R15.08 standard for mobile robots and the R15.606 standard for collaborative robots are needed.

R15.08, which is expected to be ratified as a standard in 2019, looks at things like mobile robots, manipulators on mobile robots, and manipulators working while the mobile base is also in motion. Among other things, the current standard draft says that if an obstacle is detected, the primary mode is for the robot to stop; however, dynamic replanning will be allowed.

For R15.606 they are trying to get rid of the term collaborative robot (a robot designed for direct interaction with a human) and instead think about systems in terms of their application. For example:

…a robotic application where an operator may interact directly with a robot system without relying on perimeter safeguards for protection in pre-determined, low-risk tasks…


After all the talk about standards we spent a bit of time looking at various case studies that were very illuminating for designing industrial robotic systems, and some of the problems that can occur.

One thing unrelated, but funny since this was a safety conference, was a person sitting near the back of the room who pulled a roll of packing tape out of their backpack to tape over their laptop power cable that ran across the floor.

I hope you found this interesting. This was the 29th annual national robot safety meeting (really, I did not realize we had been using robots in industry for that long). If you want to find out more about safety and how it affects your work and robots make sure to attend next year.


I would like to thank RIA for giving me a media pass to attend this event.

Why engineering schools globally need more creative women


At McMaster University, 40 per cent of assistant professors in engineering are now women and the school is working hard to make the profession more equitable for women.
(Shutterstock)

Engineers are good at solving problems. We make bridges safer, computers faster and engines more efficient. Today, the profession is working on an especially thorny problem: gender equity in higher education.

While other fields of study continue to make significant advances towards gender equity, engineering schools are still struggling to pull their numbers of women students past the 20 per cent threshold.

This week, McMaster University is hosting a conference for more than 150 deans of engineering from schools around the world. One of the major issues we’re discussing at this Global Engineering Deans Council Conference is the gender imbalance that remains a challenge across the field.

We are making progress, but we need a breakthrough.

Cultivating interest in children

Our increasingly automated, mechanized world requires more engineers than ever, and demand for them is expected to grow. And the largest pool of under-utilized talent is right here: the women who would make great engineers, but choose other careers.

Why don’t they choose engineering? Some turn away as early as Grade 6. Research shows that this is the point when many girls simply turn off math and science, even though they have performed as well as their male classmates until that point.

We must reach kids before this juncture to show them how useful engineering is to everyday life. We need to show them how easy and interesting it is to write computer code and build apps, to help them use technology to build things and solve problems.

Robotics camps and classes can introduce girls to the creative dimensions of engineering at a young age.
(Shutterstock)

Some say women are just not interested in engineering. Once, they said women were not capable of succeeding in engineering. Clearly that was untrue, and so now we are trying to correct the idea that they are not interested in engineering simply because they are women.

A profession of ambiguity and creativity

Could it be the way engineering has portrayed itself? For too long, engineering has presented itself as a field that recruits top brains from the abstract realms of mathematics and science and shapes them into problem-solvers.

Engineering might seem more attractive to everyone, women and men, if instead it presented itself as a profession of creative, helpful problem-solvers who use math and science as some of their tools.

Engineers don’t solve only cut-and-dried problems. They also solve ambiguous problems, where there is no single solution. Five groups of engineers who tackle the same problem can come up with five different applicable solutions. Hence, it is crucial that we convey that engineering problems are ambiguous and that their solutions demand creativity. Doing so will transmit a more compelling message to women and men alike.

Replacing an antiquated culture

We must also critically examine the culture of engineering. I have learned through numerous conversations with women that the male-centric culture of engineering often puts them off. On average, they also earn less than their male colleagues do.

Despite sincere efforts, a stubborn nub of resistance remains in the broader engineering culture that is antithetical to women’s point of view. It is certainly not universal, but in the corners where it prevails, it is tiresome and antiquated. This old culture is even apparent in the structure of the engineering building where I work. It was designed in the 1950s and bathroom spaces for men outnumber those for women four to one. Does that send a message that old ways are changing?

Women who might think about engineering look at faculty leaders and still see mainly grey-haired men. We are working on that. At McMaster, as a result of deliberate recruitment, 40 per cent of our assistant professors of engineering are now women. As they advance, our senior ranks will move closer to a true balance.

Engineering is still associated for many with male-dominated domains.
(Shutterstock)

At McMaster we are also working to remove the barrier that biology unfairly places in the career paths of women faculty members, by making sure they are not indirectly penalized for taking parental and other life-event leaves. We are ensuring there are resources available so their research continues in their absence, so they do not fall behind because they are having children, and so they can step directly back into their teaching and research careers after their parental leaves.

Harnessing diverse viewpoints

This is not only about fairness, though. Engineering needs women for another simpler, larger reason: Because solving problems needs creativity. And creativity demands a diversity of viewpoints.

Without input from women, engineers would have access to only half the total pool of creativity, constraining their ability to solve problems and limiting the applicability of the solutions they do reach.

Only when the body of engineers truly reflects the society it serves — in terms of age, ethnicity, religion, physical ability, sexuality and gender — can it most effectively serve the needs of that society. Only then will it understand all the communities it is serving, harness the widest variety of viewpoints and generate prosperity for all.

Ishwar K. Puri, Dean of Engineering, McMaster University

This article was originally published on The Conversation. Read the original article.

Robocar-only highways are not quite so nice an idea as expected

Recently Madrona Ventures, in partnership with Craig Mundie (former Microsoft CTO), released a white paper proposing an autonomous vehicle corridor between Seattle and Vancouver on I-5 and BC Highway 99. While there are some useful ideas in it, the basic concept contains some misconceptions about traffic management, infrastructure planning, and robocars.

Carpool lanes are hard

The proposal starts with a call for allowing robocars in the carpool lanes, and then moving to having a robocar only lane. Eventually it moves to more lanes being robocar only, and finally the whole highway. Generally I have (mostly) avoided too much talk of the all-robocar road because there are so many barriers to this that it remains very far in the future. This proposal wants to make it happen sooner, which is not necessarily bad, but it sure is difficult.

Carpool lanes are poorly understood, even by some transportation planners. For optimum traffic flow, you want to keep every lane at near capacity, but not over it. If you have a carpool lane at half-capacity, you have a serious waste of resources, because the vast majority (around 90%) of the carpools are “natural carpools” that would exist regardless of the lane perk. They are things like couples or parents with children. A half-empty carpool lane makes traffic worse for everybody but the carpoolers, for whom the trip does improve.

That’s why carpool lanes will often let in electric cars, and why “high occupancy toll” lanes let in solo drivers willing to pay a price. In particular with the HOT lane, you can set the price so you get just enough cars in the carpool lane to make it efficient, but no more.
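The HOT-lane pricing described above is essentially a feedback loop on measured flow. A toy proportional version, where the target flow, gain and price bounds are all invented illustrative numbers, might look like:

```python
# Toy proportional controller for a HOT-lane toll: raise the price when
# measured flow exceeds the lane's target, lower it when flow falls short.
# Target flow, gain and price bounds are invented illustrative numbers.

def update_toll(toll, flow_veh_per_hr, target=1600, gain=0.005,
                lo=0.50, hi=15.00):
    toll += gain * (flow_veh_per_hr - target)
    return min(hi, max(lo, toll))  # clamp to the posted price range

toll = 2.00
for flow in (1900, 1750, 1600, 1450):
    toll = update_toll(toll, flow)
    print(f"flow {flow} veh/hr -> toll ${toll:.2f}")
```

Real systems use more sophisticated demand models, but the goal is the same as in the text: keep the lane just at capacity, never far below it.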

(It is not, of course, this simple, as sometimes carpool lanes jam up because people are scared of driving next to slow moving regular lanes, and merging is problematic. Putting a barrier in helps sometimes but can also hurt. An all-robocar lane would avoid these problems, and that is a big plus.)

Letting robocars into the carpool lane can be a good idea, if you have room. If you have to push electric cars out, that may not be the best public goal, but it is a decision a highway authority could make. (If the robocars are electric, which many will be, it’s OK.)

The transition, however, from “robocars allowed” to “robocars only” for the lane is very difficult. Because you do indeed have a decent number of carpools (even if only 10% are induced) you have to kick them out at some point to grow robocar capacity. You can’t have a switch day without causing more traffic congestion for some time after it. If you are willing to build a whole new lane (as is normal for carpool creation) you can do it, but only by wasting a lot of the new lane at first.

Robocar packing

Many are attracted to the idea that robocars can follow more closely behind another vehicle if they have faster reaction times. They also have the dream that the cars will be talking to one another, so they can form platoons that follow even more closely. Inter-car communication (V2V) creates too much computer security risk to be likely, though some still dream of a magic solution which will make it safe to have 1,500 kg robots exchanging complex messages with every car they randomly encounter on the road. Slightly closer following is still possible without it.

Platooning has a number of issues. It was at first popular as an idea because the lead car could be human driven. You didn’t have to solve the whole driving problem to make a platoon. Later experiments showed a number of problems, however.

  • If not in a fully dedicated lane, other drivers keep trying to fit themselves into the gaps in a platoon, unless they are super-close
  • When cars are close, they throw up stones from the road, constantly cracking windshields, destroying a car’s finish, and in some experiments, destroying the radiator!
  • Any failure can be catastrophic, since multiple cars will be unable to avoid being in the accident.
  • Fuel savings of workable following distances are around 10%. Nice, but not exciting.

To have platoons, you need cars designed with stone-shields or some other technique to stop stones from being thrown. You need a more secure (perhaps optical rather than radio) protocol for communication of only the simplest information, such as when brakes are being hit. And you must reach a safety level where the prospect of chain accidents is no longer frightening.

In any event, the benefits of packing are not binary. Rather, in a lane that is 90% robocars and 10% human, you get 90% of the benefit of a 100% robocar lane. There is no magic special benefit you get at 100% as far as packing is concerned. This is even true to some degree with the problems of erratic human drivers. Humans will brake for no good reason, and this causes traffic jams. Research shows that just a small fraction of robocars on the road who will react properly enough to this are enough to stop this from causing major traffic jams. There is actually a diminishing return from having more robocars. Traffic flow does need some gaps in it to absorb braking events, and while you could get away with fewer in an all robocar road, I am not sure that is wise. As long as you have a modest buffer, robocars trailing a human who brakes for no reason can absorb it and restore the flow as soon as the human speeds up again.

Going faster

There is a big benefit to all-robocar lanes if you are willing to allow the cars in that lane to drive much faster. That’s something that can’t happen in a mixed lane. The white paper makes only one brief mention of that benefit.

Other than this, the cars don’t get any great benefit from grouping. I mean, anybody would prefer to drive with robocars, which should drive more safely and more regularly. They won’t block the lane the way human drivers do. They will tailgate you (perhaps uncomfortably so) but they will only do so when it’s safe. They could cluster together to enjoy this benefit on their own, without any need for regulations.

The danger of robocar-only lanes

One of the biggest reasons to be wary of robocar only lanes is that while this proposal does not say it, most proposals have been put forward in the belief that robocars are not safe enough to mix with regular traffic. That is true today for the prototypes, but all teams plan to make vehicles which do meet that safety goal before they ship.

Many dedicated lane proposals have essentially called for robocar operation only in the dedicated lanes, and manual driving is required in other lanes. If you declare that the vehicles are not safe without a special lane, you turn them into vehicles with a very limited domain of operation. Since the creation of new dedicated lanes will be a very long (decades long) process, it’s an incredible damper on the deployment of the technology. “Keep those things in their own special lanes” means delay those things by decades.

The white paper does not advocate this. But there is a danger that the concept will be co-opted by those who do. As long as the benefits are minor, why take that risk?

Do we need it?

In general, any plan that calls for infrastructure change or political change is risky because of the time scales involved. It is quite common for governmental authorities to draft plans that take many years or decades to solve things software teams will solve in months or even, at the basic level, in hours. We want to be always sure that there is not a software solution before we start the long and high-momentum path of infrastructure change, even change as simple as repainting.

Most of the benefits that come from all-robocar highway lanes arrive without mandating them. The ability to drive at greater speed is the main one that doesn't. All this happens everywhere, without planning or political difficulty. Banning human drivers from lanes is going to be politically difficult. Banning them from the main artery would be even harder.

For great speed, I actually think that airplanes and potentially the hyperloop provide interesting answers, at least for trips of more than 150 miles. The white paper makes a very common poor assumption — that other technologies will stand still as we move to 2040. I know this is not true. I have big hopes for better aviation, including electric planes, robotic planes and most of all, better airports that create a seamless transfer from robocar to aircraft entirely unlike the nightmare we have built today.

On the ground, while I am not a fan of existing rail technology, new technologies like hyperloop are just starting to show some promise. If it can be built, hyperloop will be faster and more energy efficient, and through the use of smaller pods rather than long trains, offer travel without a schedule.

On the plus side, a plan for robocar only lanes is not a grand one. If you can sell it politically, you don’t need to build much infrastructure. It’s just some signs and new paint.

Some other uses for all-robocar lanes

Once density is high enough, I think all-robocar lanes could be useful as barriers on a highway with dynamic lane assignment. To do this, you would just have a big wide stretch of pavement, and depending on traffic demand, allocate lanes to a direction. The problem is the interface lane. We may not want human drivers to drive at 75mph with other cars going the other way just 4 feet away. Robocars, however, could drive exclusively in the two border lanes, and do it safely. They would also drive a little off-center to create a larger buffer to avoid the wind-shake of passing close. No trucks in these lanes!

In an ideal situation, you would get a lot more capacity by paving over the shoulders and median to do this. With no median, though, you still have a risk of runaway cars (even robocars) crossing into oncoming traffic. A simpler solution would be to do this on existing highways. If you have a 6 lane highway, you could allocate 4 lanes one way and 2 the other, but insist that the two border lanes be robocars only, if we trust them. A breakdown by a robocar going in the counter-direction at high speed could still be an issue. Of course, this is how undivided highways are, but they have lower speeds and traffic flow.
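One way to picture the dynamic-assignment scheme is with a small sketch. The function below is purely illustrative (the lane counts and rules are my own assumptions, not from any real deployment): it splits a roadway by directional demand and flags the two interface lanes as robocar-only, since only robocars are trusted to run directly beside opposing traffic.

```python
def assign_lanes(total_lanes: int, lanes_northbound: int):
    """Assign directions on a dynamically-allocated roadway.

    The two lanes adjacent to the directional interface are restricted
    to robocars, which can safely drive next to oncoming traffic.
    """
    assert 1 <= lanes_northbound < total_lanes
    lanes = []
    for i in range(total_lanes):
        direction = "north" if i < lanes_northbound else "south"
        # Border lanes: the last northbound lane and the first southbound lane.
        robocar_only = i in (lanes_northbound - 1, lanes_northbound)
        lanes.append({"lane": i, "direction": direction,
                      "robocar_only": robocar_only})
    return lanes

# A 6-lane highway in the morning peak: 4 lanes north, 2 south,
# with lanes 3 and 4 (the interface pair) reserved for robocars.
layout = assign_lanes(6, 4)
```

In the evening the same controller could simply be called with `assign_lanes(6, 2)` to flip the allocation.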

GM accepts all liability in robocars, and other news

General Motors announced this week that they would “take full responsibility” if a crash takes place during an autonomous driving trip. This follows a pledge to do the same made some time ago by Daimler, Google and Volvo and possibly others.

What’s interesting is that they don’t add the caveat “if the system is at fault.” Of course, if the system is not at fault, they can get payment from the other driver, and so it’s still OK to tell the passenger or owner that GM takes responsibility.

GM is moving on a rapid timetable with the technology they bought with Cruise not too long ago. In fact, rumours of a sooner than expected release actually shot their stock up a bit this week.

Even to this day I still see articles which ask the question, “who is liable in an accident?” and then leave it unanswered, as though the answer were unknown or hard to figure out. It never was. There was never any doubt that the creators of these vehicles would take responsibility for any accidents they cause. Even if they tried not to, the liability would fall to them in the court system. People have been slow to say it because lawyers always advise clients, “never say in advance that you will take liability for something!” That is generally good advice, but pointless here, and the message of responsibility makes customers feel better. Would you get into a taxi if you knew you would be liable if the driver crashed?

Senate bill

In other news this week, a Senate panel passed its own version of the House bill deregulating robocars. Notable was the exclusion of trucks, at the request of the Teamsters. I have predicted since this all began that the Teamsters would eventually bring their influence to bear on automated trucking. They will slow things down, but it’s a battle they won’t win. Truck accidents kill 4,000 people every year, and truck driving is a grueling, boring profession whose annual turnover sometimes exceeds 100%. At that rate, if they introduced all-automated truck fleets today, it would be a very long time before somebody who actually wanted a trucking job lost it to automation. Indeed, even in the mostly automated world there will still be routes and tasks best served by humans, and they will be served by those humans who want them.

Actually, this new-world trucking will be a much nicer job. It will be safer, and nobody will drive the long-haul cross-country routes that grind you with boredom, take you away from your home and family for a week or more while you eat bad food and sleep in cheap motels or the back of your rig.

Uber

Speaking of trucking, while I have not been commenting much on the Waymo/Uber lawsuit because of my inside knowledge, and the personalities don’t bear too much on the future of the technology, it certainly has been getting fast and furious.

You can read the due diligence report Uber had prepared before buying Otto, and a Wired article which starts with a silly headline but has some real information as well.

Other items

Luminar, the young 1.5 micron LIDAR startup, has announced that Toyota will use their LIDARs.

Lyft has added Ford, along with Google to its partner list. Since Lyft did a $500M investment deal with GM, it’s clear they don’t want to stick with just one player, even for that sum. Google may have larger sums — it does seem clear that the once happy partnership of Uber and Google is over.

Baidu announced a 10 billion Yuan investment fund for self-driving startups.

Rumours suggest Waymo may expand their Phoenix pilot to a real self-driving taxi service for the public sooner than expected.

What is “bank angle” of a drone?

Bank angle is the angle between the aircraft’s lateral (wing-to-wing) axis and the horizontal plane; in other words, it measures how far the drone has rolled about its longitudinal axis. When the drone flies perfectly level, the bank angle is zero. When it rolls to turn, the bank angle increases. A plane flying straight has a bank angle of zero, and in a left or right turn it has a bank angle greater than zero.
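For a coordinated level turn, the required bank angle follows from balancing the centripetal and gravitational forces: tan(φ) = v² / (g·r), a standard flight-mechanics relation. A quick sketch (the speed and radius below are example values, not from the article):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bank_angle_deg(speed_ms: float, turn_radius_m: float) -> float:
    """Bank angle for a coordinated level turn: tan(phi) = v^2 / (g * r)."""
    return math.degrees(math.atan2(speed_ms ** 2, G * turn_radius_m))

# A drone at 10 m/s flying a 20 m radius turn banks about 27 degrees;
# as the turn radius grows toward straight flight, the angle approaches zero.
phi = bank_angle_deg(10.0, 20.0)
```

The faster the drone or the tighter the turn, the steeper the bank angle it must hold.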


This post was originally written by RoboticMagazine.com.

25 women in robotics you need to know about – 2017

Ada Lovelace Day on October 10 2017 is a day to celebrate the achievements of women in technology and there was no shortage of women to feature on Robohub’s annual Ada Lovelace Day “25 women in robotics you need to know about” list. (If you don’t see someone you expected then they’ll probably be on next year’s list, or on our first four lists from 2013, 2014, 2015, 2016 – please read them too!)

This year we are featuring women from all over the world, including early stage entrepreneurs, seasoned business women, investors, inventors, makers, educators, and organizers; we also feature early career researchers, established academics, senior scientists and politicians. The unifying characteristic of all these women is their inspirational story, their enthusiasm, their fearlessness, their vision, ambition, and accomplishments. Every year we’re inspired and hope that you are too.

It’s been a roller coaster year of tough headlines for tech diversity … In February, engineer Susan Fowler wrote a blog post, “Reflecting on one very, very strange year at Uber.” For some it was a wake-up call about sexual harassment in tech culture; for others it was just public confirmation of what was already well known. A series of high-profile mea culpas from male investors and CEOs ensued; then James Damore was fired from Google after implying that biological differences — not sexism — lie behind the gender gap.

It seems negative, but the publicity around bias, harassment and lack of diversity does provide public vindication for women like Susan Fowler, Tracy Chou, Erica Joy Baker and Ellen Pao who took stands against sexism and suffered for it. We’re now starting to see some positive outcomes. For example, Ellen Pao has just released a book, Reset, about her experience suing a prominent venture capital firm for bias and says, “My lawsuit failed. Others won’t.”

This year, Ellen Pao, Tracy Chou and Erica Joy Baker joined other women fighting sexism and racism in the tech industry by starting Project Include, a non-profit that uses data and advocacy to accelerate diversity and inclusion solutions in the tech industry. Tracy Chou was also named one of MIT Technology Review’s Innovators Under 35, alongside some 25 Women in Robotics alumni – Angela Schoellig [2013] and Anca Dragan [2016].

Women in robotics still face challenges, even danger, such as the threats Stella Uzochukwu-Denis and her fearless female robotics students face from Boko Haram extremists. And we all face the relentless lack of diversity and general apathy about the gender gap in our daily workplaces.

And yet robotics itself faces huge challenges. We are a very small segment of the very rich tech industry and robotics startups struggle to attract great talent. We have an opportunity to improve our diversity hiring practices to gain more recruits as well as increasing our internal innovation capacity, something that Linda Pouliot of Dishcraft writes about with elegance. As Pouliot notes, if you’re a robotics startup looking to hire, your personal network is your biggest asset — yet another reason for women in robotics to know about each other and to network, like with the Women in Robotics organization.

Speaking of networks, we’re biased towards the countries and careers that we know well. It’s a challenge to provide a representative sample of the wide range of jobs around the world that women are doing in robotics. Perhaps you can help us for next time with more nominations from other regions? Email nominations@womeninrobotics.org with suggestions.

Without further ado, here are 25 women in robotics you should know about (in alphabetical order) for 2017. Enjoy!

Muyinatu Bell
Assistant Professor at Johns Hopkins
Muyinatu A. Lediju Bell is the director of the Photoacoustic and Ultrasonic Systems Engineering (PULSE) Lab, a highly interdisciplinary research program to engineer and deploy innovative biomedical imaging systems that address unmet clinical needs in neurosurgical navigation, cardiovascular disease, women’s health, cancer detection and treatment. Before Johns Hopkins, she obtained a PhD in Biomedical Engineering from Duke University and spent a year abroad at the Institute of Cancer Research and Royal Marsden Hospital in the UK. Dr Bell is also the recipient of the NIH K99/R00 Pathway to Independence Award and was named one of MIT Technology Review’s 35 Innovators Under 35.

 

Jeanette Bohg
Assistant Professor at Stanford and Guest Researcher at Max Planck Institute for Intelligent Systems
Jeannette Bohg is an Assistant Professor in Computer Science at Stanford and Guest Researcher at the Autonomous Motion Department of MPI. Her research focuses on perception for autonomous robotic manipulation and grasping, and she is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning. Before joining the Autonomous Motion lab in January 2012, she was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. Her thesis on Multi-modal scene understanding for Robotic Grasping was performed under the supervision of Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University in Dresden where she received her Master in Art and Technology and her Diploma in Computer Science, respectively.

 

 

Maria Chiara Carozza
Professor of Biorobotics at Sant’Anna School of Advanced Studies (SSSUP)
After graduating in Physics at the University of Pisa and obtaining a PhD in Engineering, Maria Chiara Carozza became Professor of Biorobotics. She was Director of the Research Department, Coordinator of the SSSUP Laboratory ARTS and elected Rector of SSSUP in 2007. As well as being involved in many EU and multinational projects such as CYBERLEGS, ROBOCASA, WAY, CogLaboration, Nanobiotouch, Evryon, SmartHand, Neurobotics, RobotCub and CyberHand, she is also active in politics. She was Minister of Education, University and Research in the Letta Government, developing a national research program, and remains active in the Italian Parliament. Recipient of many awards, Dr Carozza has published more than 80 ISI publications and 130 papers, holds 15 patents and is active in international conferences and professional societies. Her primary interests remain improving conditions for all in society through bioengineering, HRI, humanoid robotics, intelligent environments, prosthetics, tactile sensors and artificial skin.

 

Helen Chan Wolf
Original Shakey Team at SRI International
Helen Chan Wolf joined the SRI AI Group in 1966 and worked on Shakey, the world’s first mobile autonomous robot. In 2017 Shakey was honored by an IEEE Milestone. Shakey was the first robot to embody artificial intelligence: to perceive its surroundings, deduce facts, make a plan to achieve a goal, navigate from place to place, monitor execution of the plan, and improve through learning. Wolf’s job was to work with the images and extract coordinates for Shakey. Her research papers included scene analysis, image matching and map-guided interpretation of remotely sensed images. She was also one of the pioneers of automated facial recognition.

 

Neha Chaudhry
Founder of Walk to Beat / Bristol Robotics Lab Incubator
After studying a Masters degree in Marketing at UWE Bristol, product design engineer Neha Chaudhry went on to develop the award-winning Walk to Beat. Inspired by her late grandad, who suffered from Parkinson’s for 8 years, her product is a robotic walking stick with an innovative technology that gives out pulses in the handle – it’s discreet and looks good, so people feel empowered instead of disabled. She has won five prizes for her work, including three awards for entrepreneurship and the Entrepreneurship Award at the European Robotics Forum.

 

Sonia Chernova
Assistant Professor at the School of Interactive Computing, Georgia Tech
Sonia Chernova is the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing at Georgia Tech. She received her Ph.D. and B.S. degrees in Computer Science from Carnegie Mellon University, and held positions as a Postdoctoral Associate at the MIT Media Lab and as Assistant Professor at Worcester Polytechnic Institute prior to joining Georgia Tech. She directs the Robot Autonomy and Interactive Learning (RAIL) lab, working on developing robots that are able to effectively operate in human environments. Her research interests span robotics and artificial intelligence, including semantic reasoning, adjustable autonomy, human computation and cloud robotics.

 

Maartje De Graaf
Postdoctoral Research Associate, Cognitive, Linguistic and Psychological Sciences, Brown University
Maartje De Graaf joined Brown’s Humanity Centered Robotics Initiative in 2017 with a Rubicon grant from the Netherlands Organization for Scientific Research (NWO) to investigate the underlying psychological and cognitive processes of how people explain robot behaviors, and whether and how these processes differentiate from how people explain human behaviors. Before starting at Brown University, she was a postdoctoral researcher at the Department of Communication Science, University of Twente, The Netherlands. She has a Bachelor of Business Administration in Communication Management, a Master of Science in Communication Studies and a PhD in Human-Robot Interaction.

 

Kay Firth-Butterfield
Project Head for AI and Machine Learning at World Economic Forum / Executive Committee Vice-Chair for IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems / CoFounder of AI Austin
Kay Firth-Butterfield is a Barrister and Judge who works on the societal impact of AI and robotics. She is also a Distinguished Scholar of the Robert E Strauss Center at the University of Texas, where she cofounded the Consortium for Law and Policy of Artificial Intelligence and Robotics. She is the former Chief Officer of the Lucid.ai Ethics Advisory Panel and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems. Additionally, she is a Partner in the Cognitive Finance group and an adjunct Professor of Law. She advises governments, think tanks, businesses, inter-governmental bodies and non-profits about artificial intelligence, law and policy.

 

Gabby Frierson aka RoboGabby
Student at Cane Bay Middle School
Gabby is a young middle schooler who posts about building and programming robots as “RoboGabby”. Her goal is to attract more young girls like herself to exploring STEM. Gabby shares tutorials on VEX IQ, ROBOTC, Robot Virtual Worlds, Python and Java, and is currently shooting some new tutorials. Her sheros are Katherine Johnson and Ayanna Howard, who have proved that girls of color, or just girls in general, can be into STEM, robotics and more.

 

Frances Gabe
VALE: 1915-2016. Inventor and roboticist
Frances Gabe was a renowned inventor, and a woman ahead of her time. Daughter of a builder, she was happier on the building site than in school, “which moved too slow for me”. As an adult she took issue with housework. “Housework is a thankless, unending job,” she told The Ottawa Citizen in 1996. “It’s a nerve-twangling bore. Who wants it? Nobody!” Touring the US speaking to women’s groups, she self-funded and, over 15 years, built her prototype house, where she lived for most of her life. She patented 68 different inventions, perhaps most cleverly her in-situ dishwashing drawer and clothes-laundering cupboards. But by the time she died in December 2016, aged 101, few people remembered her passion for automating ‘women’s work’, let alone celebrated her as the world’s first self-taught female roboticist.

 

Simone Giertz aka Queen of Shitty Robots
Inventor, Youtuber and DIY Astronaut
Simone Giertz started building robots as a child; however, it wasn’t the career she had planned, which ranged from studying physics in Stockholm to being an MMA sports journalist and working on Sweden’s website. She started a YouTube channel for her comedy sketches and ended up showing off her ‘shitty robots’ and blowing up the internet. In an interview with Paper she describes how she got tired of being too serious and started to enjoy everything that she did. Now Simone is in San Francisco as a part-time host of Tested and continuing her own YouTube channel. You can support her on Patreon.

 

Suzanne Gildert
CoFounder & CSO of Kindred.AI
Suzanne Gildert is co-founder and CSO of Kindred AI, building personal robots that use machine learning to recognize patterns and make decisions. She oversees the design and engineering of the company’s human-like robots and is responsible for the development of cognitive architectures that allow these robots to learn about themselves and their environments. Before founding Kindred, Suzanne worked as a physicist at D-Wave, designing and building superconducting quantum processors, and as a researcher in quantum artificial intelligence software applications. She received her PhD in experimental physics from the University of Birmingham and likes science outreach, retro tech art, coffee, cats, electronic music and extreme lifelogging. She is a published author of a book of art and poetry.

 

Raia Hadsell
Research Scientist at Google DeepMind
Raia Hadsell joined DeepMind in London in early 2014, to extend her research interests in robotics, neural networks, and real world learning systems. After an undergraduate degree in religion and philosophy from Reed College, Raia did a computer science PhD with Yann LeCun, at NYU, focused on machine learning using Siamese neural nets (often called a ‘triplet loss’ today) and on deep learning for mobile robots in the wild. Her thesis, ‘Learning Long-range vision for offroad robots’, was awarded the Outstanding Dissertation award in 2009. She spent a post-doc at CMU Robotics Institute, working with Drew Bagnell and Martial Hebert, and then became a research scientist at SRI International, at the Vision and Robotics group in Princeton, NJ. Her current work focuses on a number of fundamental challenges in AGI, including continual and transfer learning, deep reinforcement learning, and neural models of navigation.

 

Sarah Hensley
MIT EECS Angle Undergraduate Research and Innovation Scholar at MIT & NASA
Sarah Hensley is in the SuperUROP program at MIT which combines her undergraduate and masters EE studies with “real world research” at the Jet Propulsion Lab and the DARPA Robotics Challenge. Sarah is continuing to work on evaluating the force and torque control capabilities of Valkyrie’s series elastic actuators, in readiness for space-related tasks such as opening airlock hatches, attaching and removing power cables, repairing equipment, and retrieving samples.

 

Anjali Jaiprakash
Advance QLD Research Fellow, Australian Centre for Robotic Vision, QUT
Anjali Jaiprakash is a life sciences researcher who embraces novel technologies to solve medical challenges. She has experience in the fields of medical robotics, medical devices, orthopaedics, trauma, and bone and cartilage biology, with research in hospital and clinical settings. Anjali is the core scientist for two research teams: developing vision and control systems for robotic knee arthroscopy, and developing a universal retinal diagnostic system. She was also a finalist for Imperial College London’s 2016 Best Project Award and recipient of the 2017 Tall Poppy Science Award from the Australian Institute of Policy and Science.

 

Leslie P Kaelbling
Panasonic Professor of Computer Science and Engineering and Research Director of CSAIL at MIT
Leslie Kaelbling has previously held positions at Brown University, the Artificial Intelligence Center of SRI International, and at Teleos Research. She received an A.B. in Philosophy in 1983 and a Ph.D. in Computer Science in 1990, both from Stanford University. Prof. Kaelbling has done substantial research on designing situated agents, mobile robotics, reinforcement learning, and decision-theoretic planning. In 2000 she founded the Journal of Machine Learning Research, where she currently serves as editor-in-chief. Prof. Kaelbling is an NSF Presidential Faculty Fellow, a former member of the AAAI Executive Council, the 1997 recipient of the IJCAI Computers and Thought Award, a trustee of IJCAII and a fellow of the AAAI.

 

Valery Komissarova
Hardware VC at Grishin Robotics
Valery Komissarova is a robotics investor with Grishin Robotics. Prior to that, she oversaw the internal and external relations at the internet company Mail.Ru Group, which is the biggest player in Eastern Europe, for 4 years, navigating the company’s communication policy through numerous M&As and IPOs as well as fast growth from 300 employees to 3,000. She has an extensive technological background in software engineering and systems architecture and has written books and articles about topics ranging from developing drivers to information security. Valery studied international business and management at Bournemouth University, and she also has a diploma from the Chartered Institute of Public Relations and Certificate in IR of the Investor Relations Society UK.

 

Sharon (Soon Bok) Lee
CEO of Robot of the Future
The first product from Korean startup Robot Of The Future is Windowmate – a robotic window cleaner. CEO Sharon (Soon Bok) Lee founded the company in mid-2014, developed the IP and prototypes, and was selected by the Korean Government for a Silicon Valley startup program. Since then, Sharon has been rolling out a global sales campaign, starting with Japan and then moving to Europe, with use cases both residential, for high-density apartment living, and commercial. Sharon brings lengthy experience as a technology manager and CEO to Robot of the Future and was awarded the 2015 VIP ASIA Award for CEOs.

Wanxi Liu
Systems Analyst at Intuitive Surgical and Robotics Blogger
Wanxi Liu graduated from Stanford with a master’s in Mechanical Engineering in June 2015 and is currently working at Intuitive Surgical as a Systems Analyst (Control/Robotics Engineer). She did her undergraduate degree in Optical Engineering, but her strong interest in personal assistance, service and medical robots led her to developing robotic simulations, haptics applications, and mechatronic system design. She also writes regular robotics blogs: “For those of you who are interested in robotics, read Chinese, and use WeChat – search for official account ROBOTICS and you’ll find all the interesting articles I wrote about various aspects of robots. Hit Follow if you like them!”

 

Linda Pouliot
CoFounder of Neato & Dishcraft Robotics
Linda Pouliot is a serial entrepreneur with deep expertise in robotics, product management, operations and manufacturing. In 2004 she co-founded Neato Robotics and was VP Product Management and Operations, leading the design, development and manufacturing of Neato’s laser-guided vacuum cleaner. The company is now the number two player globally in consumer robotic vacuums. After Neato, Linda became the Chief Operating Officer of Adiri (acquired by ReliaBrand), where she oversaw the redesign and manufacturing of the international award-winning Adiri bottle. She then co-founded the game advertising platform Mahoot. Linda is currently the Founder/CEO of Dishcraft Robotics.

 

Julie Schoenfeld
Founder & CEO of Strobe
Julie Schoenfeld is a serial entrepreneur, and Founder and CEO of Strobe Inc., a technology company that develops laser imaging for self-driving cars. Recently acquired by GM for an undisclosed amount, Strobe will be folded into GM’s self-driving subsidiary Cruise Automation. Schoenfeld has been CEO of four other companies in her career and is adept at raising venture capital and navigating acquisitions. Her first company, Net Effect, was acquired by Ask Jeeves for $288 million in stock. More recently she helped Perfect Market navigate its acquisition by Taboola.

 

Catherine Simon
President and Founder of Innorobo / InnoEcho
Catherine Simon is the President and Founder of Innorobo, one of Europe’s key events dedicated to the service robotics sector, which brings together robotics companies, laboratories, start-ups, inventors, SMEs and funding providers in order to drive innovation. She also founded InnoEcho, a business strategy consultancy for the new technologies sector. Innorobo began as a regional show in Lyon, France, and recently moved to Paris to reflect its growth; the 2017 Innorobo event ran over three days and attracted 170 exhibitors and over 7,000 visitors.

 

Raquel Urtasun
Assistant Professor at University of Toronto, Head of Uber ATG, Co-Founder of Vector Institute for AI
Raquel Urtasun is the Head of Uber ATG Toronto. She is also an Associate Professor in the Department of Computer Science at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision and a co-founder of the Vector Institute for AI. She is a world leading expert in machine perception for self-driving cars, and her research interests include machine learning, computer vision, robotics and remote sensing.

 

Stella Uzochukwu-Denis
Program Coordinator at Odyssey Educational Foundation
Stella Uzochukwu-Denis is an electrical engineer and the founder of The Odyssey Educational Foundation, a Nigerian NGO that provides STEM education and robotics experiences to school children in Abuja – a region of Nigeria where militant attacks have kept hundreds of thousands of children out of school in recent years. The foundation’s main goal is to encourage children, and girls in particular, to pursue careers in science and technology. The foundation has trained well over 450 school-age girls since its launch in 2013. “My ultimate goal is to ensure that kids become college-ready, career-ready and world-ready.”

 

Aimee van Wynsberghe
Co-Founder of Foundation for Responsible Robotics, Assistant Professor at Delft University of Technology
Aimee van Wynsberghe is assistant professor of ethics and technology at Delft University of Technology in the Netherlands. She is co-founder and president of the Foundation for Responsible Robotics. She is also a member of the 4TU center for ethics and technology where she heads the robotics task force. With the help of an NWO personal research grant she is researching how we can responsibly design service robots. Her past research looked at evaluating and designing care robots.

Do you have a story to tell about how visibility helped your robotics career? Would you like to nominate someone for next year’s list? Do you want to help organize Women in Robotics events or join the Women in Robotics network? We’d love to hear from you. Know of any great women in robotics who should be on this list next year? Check the lists from our previous years (2013, 2014, 2015 and 2016), and feel free to leave your nominations in the comments section below, or email us at nominations [at] womeninrobotics.org.


Everything’s bigger in China

Recent news about the growth of Chinese robotics and related AI indicates just how massive their investments are and how well they are paying off. For example, 90% of the personal robots on display at the IFA consumer electronics trade show, held in Berlin in September, were developed and manufactured by Chinese companies.

Further, Preqin reported that Q3 venture-backed deals totaled $49 billion. Included in the top 10 deals were Uber competitor Grab raising $2 billion from SoftBank and Didi Chuxing, Alibaba’s $1.1 bn investment in the eBay-like Tokopedia, and its $0.8 bn investment in Cainiao (see below). Half of the top 10 were in Asia; only three were for US-based companies.

Three Chinese companies stand out with Texas-size robotics-related activity: Midea/Kuka is planning to sell 50-55% of its annual $3 bn output in China by 2020; Siasun’s robots are exported to 30+ countries; and Alibaba is investing $15 billion over five years in internal logistics for their growing e-commerce business.

Alibaba (BABA:NYSE)

Amazon take note: China’s largest smart warehouse is manned by mobile robots moving shelves to picking and packing stations — and they look amazingly similar to Amazon’s Kiva robots.

Alibaba is emulating Amazon by putting robots into the logistics warehouses it operates for sorting, picking and moving applications. It has done so through its investment in logistics company Cainiao, and through similar investments in local startups Geek+ and Quicktron, both of which make Kiva-like mobile robots and provide extensive network and traffic management software for e-commerce distribution centers. Cainiao currently executes 57 million deliveries a day. Alibaba, which had owned 47% of Cainiao, has invested a further $807 million to increase its stake to 51%. Alibaba’s goal for Cainiao is to deliver anywhere in China within 24 hours and anywhere in the world within 72.

Warehousing robots aren’t Alibaba’s only play. They are also investing in service robots through their joint venture with SoftBank Robotics and Foxconn and also augmented reality big-data-driven logistics navigation and picking solutions as well as other types of AGVs for towing, moving and sorting pallets, boxed goods and shelves.

In addition to the Cainiao investment, Alibaba also invested $1.1 billion in PT Tokopedia, a large eBay-like service covering Indonesia. Overall, Alibaba has committed $15 billion over the next five years to build out a global logistics network.

Midea Group (000333:SHE)


Midea, China’s 4th largest consumer products manufacturer, and the country’s biggest maker of air conditioners, refrigerators and appliances, has a masterplan to revamp itself into China’s leading robot manufacturer.

  • Last year, for around $4.5 billion, they acquired the world’s 4th largest robot manufacturer, Germany-based Kuka AG.
  • At their air conditioner plant, Midea has deployed 800 robots and replaced 24,000 workers in their quest to improve quality and reduce costs.
  • In another factory, Midea engineers have set up six robots that produce and assemble a remote control device every seven seconds with 100% quality.
  • Early this year they set up an alliance with Israel-based advanced motion control and automation systems company Servotronix.
  • Then they invested another $1.5 billion in a new factory in southern China to manufacture and assemble service and industrial robots (7,000 and 2,000 per year respectively).
  • These robots will be for sale as well as for internal use and the goal is that by 2025, 17,000 industrial robots will be produced at that factory in addition to Kuka’s goals at Kuka’s separate facilities.
  • Kuka plans to sell 50-55% of its annual output ($3 bn+) in China by 2020.
  • Midea is doubling the number of research engineers working on product development and AI. Research projects include robotic bartenders, consumer food processors and industrial-grade food production robots.

Midea’s investments and strategic alliances underscore their ambition to lead in automation and robotics within China and, later, globally.

Siasun Robot & Automation (300024:SHE)

According to The Wall Street Journal, Siasun’s 2016 revenue was $2.02 bn, 20.47% greater than FY 2015. Forbes rates Siasun #20 on its Innovative Growth Companies list, with a market cap of $5.1 bn and 2,500 employees.

Siasun focuses on four verticals: advanced manufacturing equipment, rail transit automation, autonomous energy equipment and advanced robotics (across all divisions). In addition to fixed and mobile industrial robots, Siasun has a line of clean room robots and a new collaborative robot. They also have an extensive line of mobile robots for material handling, warehouses, restaurants, public spaces and indoor cleaning and security. Online retailer JD.com has teamed up with Siasun to automate JD’s logistic network and JD says that it also plans to develop delivery drones and driverless vehicles.

Qu Daokui, president of Siasun, said the company is looking to invest in robot technology in Europe and the United States, with acquisitions starting from at least $1 billion. “We are interested in companies that have state-of-the-art technologies or have a key presence in the industry chain,” Qu said recently at the 2017 World Robot Conference in Beijing.

Currently, the Shenyang-based company’s industrial robots and other products are exported to more than 30 countries and regions. Moreover, two-thirds of Siasun’s customers are foreign companies. According to China Daily, Siasun robots are at work in Ford and General Motors auto plants in the U.S.

Last year, Siasun teamed up with Israeli companies and universities in a China-Israel robot research institute in Guangzhou, where they are jointly working on artificial intelligence, which Qu said is of great importance to robots, giving them “wings”.

Bottom Line

Many critics and pundits warn that the free-flowing incentives China has been giving to effect its five-year plans and Made in China 2025 program have produced fraud, false figures and unknown results. They worry about overcapacity and that many of the new companies involved in robotics are just in it to get the subsidies and tax breaks.

Nevertheless, the three companies profiled above attest to the fact that China’s overall goal to become a high-tech maker and user of robotics and AI is working… and working BIG. Texas BIG.

Robotic bugs train insects to be helpers

Robots help ants with daily chores so they can be accepted into the colony. Image credit – Dr Bertrand Collignon

by Aisling Irwin

Tiny mobile robots are learning to work with insects in the hope the creatures’ sensitive antennae and ability to squeeze into small spaces can be put to use serving humans.

With a soft electronic whirr, a rather unusual looking ant trundles along behind a column of its arthropod comrades as they march off to fetch some food.

While the little insects begin ferrying tiny globules of sugar back home, their mechanical companion bustles forward to effortlessly pick up the entire container and carry it back to the nest.

It is a dramatic demonstration of how robots can be introduced and accepted into insect societies.

But the research, which is being conducted as part of the EU-funded CyBioSys project, could be an important step towards using robots to subtly control, or work alongside, animals or humans.

‘The idea is to be able to solve (a) problem with a better solution than they (the robots and insects) can produce individually,’ said Dr Bertrand Collignon, who is leading the research at the École Polytechnique Fédérale de Lausanne, in Switzerland.

The robots, which ‘live’ with the ants, pick up signs that food has been discovered through a camera mounted inside the nest. The camera alerts the robots when it detects an increasing number of ants departing – a sign that food has been found.

The robots – reprogrammed off-the-shelf Thymio bots managed by simple Raspberry Pi computers – then use sensors to follow the columns of exiting ants. Once the ants have led their robotic counterparts to their discovery, the robots take over, using their superior muscle power to lug it home.
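The recruitment detector described above can be sketched in a few lines. The function name, window sizes and threshold below are illustrative assumptions, not details from the CyBioSys project: the idea is simply to fire when the recent rate of departures from the nest exceeds the longer-term baseline by some factor.

```python
from collections import deque

def make_departure_trigger(window=60, threshold=2.0):
    """Sketch of a departure-rate trigger (parameters are assumptions).

    Call the returned function once per time step with the number of ants
    seen leaving the nest; it returns True when the recent departure rate
    exceeds `threshold` times the longer-term average rate.
    """
    recent = deque(maxlen=window)        # short sliding window
    history = deque(maxlen=window * 10)  # longer baseline window

    def update(departures_this_step):
        recent.append(departures_this_step)
        history.append(departures_this_step)
        baseline = sum(history) / len(history)
        current = sum(recent) / len(recent)
        # avoid firing when no departures have been seen at all
        return baseline > 0 and current > threshold * baseline

    return update
```

A steady trickle of scouts keeps the trigger quiet; a sustained surge of departures, as happens when food is found and recruitment starts, pushes the short-window rate past the baseline and fires it.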

Dr Collignon described this as a ‘cyber-biological system’, which improves both on the natural order, and on what robots could achieve on their own. By getting ants and robots to collaborate, each community plays to its strengths, he says.

‘The ants are good at exploring the environment very efficiently, with many scouts patrolling the vicinity of the nest at the same time,’ said Dr Collignon, who is a Marie Skłodowska-Curie action fellow. ‘But individual ants are not able to transport large amounts of food and some can get lost between the food and the nest.’

‘By getting ants and robots to collaborate, each community plays to its strengths.’

Dr Bertrand Collignon, École Polytechnique Fédérale de Lausanne, Switzerland

Robots are like pack animals in comparison, carrying an order of magnitude more food than an ant can, and accomplishing in a few minutes what would have taken the ants hours.

Dr Collignon believes it is the first project to consider an insect swarm as a biosensor and then embed in a robot the ability to extract data from the colony.

But he also believes this research could be combined with other work teaching robots to communicate with animals. Instead of relying on top-down instructions — like a shepherd dog herding sheep — this would work by subtly influencing them from a position as one of the group.

As many social insects such as ants and bees can form aggressive colonies that normally do not respond well to outsiders, influencing them from within may offer a new approach.

In a previous EU-funded project, LEURRE, a team pioneered the creation of small mobile robots that could interact with cockroaches and influence their collective behaviour.

When kept in a pen together, cockroaches will gradually gather under the same dark shelter. They achieve this simply by following two rules: stay close to other cockroaches, and head for somewhere dark.

But when the researchers released small robots into the pen programmed with slightly different rules — stay close to other cockroaches but prefer a lighter refuge — in time the cockroaches, along with the robots, gathered in the lighter shelter instead.
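The two-rule dynamic can be illustrated with a toy agent model. This is a hedged sketch, not the published LEURRE model: the shelter labels, preference values and update rule are all invented for illustration. Each agent repeatedly decides whether to stay in its current shelter, with the probability of staying rising both with the shelter’s occupancy (rule 1: stay close to others) and with the agent’s own brightness preference (rule 2).

```python
import random

def run_colony(n_roaches=20, n_robots=5, steps=2000, seed=1):
    """Toy two-rule shelter model (illustrative parameters).

    Shelters: 0 = dark, 1 = light. Roaches prefer dark, robots prefer
    light. Returns the final list of (kind, shelter) agent states.
    """
    rng = random.Random(seed)
    # preference[shelter] in (0, 1): higher means happier to stay there
    roach_pref = {0: 0.9, 1: 0.6}
    robot_pref = {0: 0.6, 1: 0.9}
    agents = [("roach", rng.choice([0, 1])) for _ in range(n_roaches)] + \
             [("robot", rng.choice([0, 1])) for _ in range(n_robots)]
    total = len(agents)
    for _ in range(steps):
        i = rng.randrange(total)
        kind, shelter = agents[i]
        occupancy = sum(1 for _, s in agents if s == shelter) / total
        pref = (roach_pref if kind == "roach" else robot_pref)[shelter]
        # rule 1 (social attraction) x rule 2 (brightness preference)
        p_stay = pref * (0.5 + 0.5 * occupancy)
        if rng.random() > p_stay:
            agents[i] = (kind, 1 - shelter)  # switch shelters
    return agents
```

With enough robots biased toward the light shelter, the social-attraction term can drag the whole mixed group there, which is the qualitative effect the LEURRE experiments demonstrated.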

Dr Collignon believes that the two types of robotic work – collaboration and communication – could find applications in search and rescue, exploring environments too dangerous or inaccessible for humans. Eventually, small animals could be used to get into restricted environments such as collapsed buildings.

Integrating artificial systems, such as robots, into more natural ones – such as a warehouse full of chickens – could lead to new ways of controlling animal behaviour on farms. One example might be preventing deadly mass panics amongst intensively reared animals by using robots that can detect the early signs of an impending stampede and avert it by behaving differently.

‘The first step is to be able to track what natural agents are doing and react appropriately to that,’ he said. ‘That’s already a tricky thing. Once you have sensed what nature is doing, you can then interact. The robotic agent can do what it has been designed for and then act on the system.’


More info
CyBioSys

What is an Airspeed Sensor

An airspeed sensor measures the speed of a drone relative to the air by measuring the pressure difference between the ram air entering a pitot tube and the surrounding static air. Sensors are usually sold together with a pitot tube and connection cables, and are recommended for advanced users only, as they add an extra layer of control and tuning. The pitot tube takes in air and transmits its pressure to the sensor through rubber tubing; the sensor connects to the flight controller through a four-wire I2C cable. Airspeed varies with the square root of the measured pressure difference. A drone’s airspeed is different from its speed relative to the ground, and ground speed is what matters when calculating flight time over a distance. For example, an aircraft moving through the air at 200 km/h into a 5 km/h headwind has a ground speed of 195 km/h – the speed at which its shadow moves across the ground. When airspeed is corrected for pressure and temperature, true airspeed is obtained: the true speed at which the aircraft moves through the air that surrounds it.
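The two relationships above – airspeed from the pitot pressure difference, and ground speed from airspeed and headwind – can be sketched as follows. The function names are illustrative, and the density default assumes standard sea-level air; a flight controller would also apply the pressure and temperature corrections mentioned above.

```python
import math

def airspeed_from_pitot(delta_p_pa, air_density=1.225):
    """Airspeed (m/s) from pitot differential pressure (Pa).

    From Bernoulli: delta_p = 0.5 * rho * v**2, so v = sqrt(2*delta_p/rho).
    Default density is standard sea-level air, 1.225 kg/m^3; negative
    readings (sensor noise at rest) are clamped to zero.
    """
    return math.sqrt(2.0 * max(delta_p_pa, 0.0) / air_density)

def ground_speed(airspeed, headwind):
    """Ground speed = airspeed minus headwind (use a negative headwind
    for a tailwind). Units just need to match, e.g. km/h."""
    return airspeed - headwind
```

Using the article’s example, `ground_speed(200, 5)` gives the 195 km/h figure, and a differential pressure of 245 Pa corresponds to roughly 20 m/s of airspeed at sea level.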

 

Ask, discuss anything about robots and drones in our forums

See our Robot Book

This post was originally written by RoboticMagazine.com and displaying without our permission is not allowed.

The post What is an Airspeed Sensor appeared first on Roboticmagazine.

Aerial manipulator for contact inspection selected for innovation prize

The European project AEROARMS is one of 10 innovations selected to compete for the Innovation Radar Prize in the “Industrial & Enabling Tech” category. AEROARMS aims to develop the first UAV robotic system with multiple arms and advanced manipulation capabilities for industrial inspection and maintenance.

The selected innovation is a torque-free contact device for integration into multi-rotor platforms, developed by the CATEC research center within the AEROARMS project. It enables drones to perform inspections that require contact, such as ultrasonic testing – a major step toward drones that can not only “see” from the air but also “touch and feel”. The drone uses a tilted-rotor configuration that allows very precise movements, together with a contact device that decouples and dampens external perturbations (wind, forces while touching) from the aerial platform.

AEROARMS is an ongoing H2020 project with a budget of more than €5.7 million and nine partners from five countries: the University of Seville (coordinator of AEROARMS), CATEC, the Technical University of Catalonia, the German DLR Institute of Robotics and Mechatronics, the companies TÜV NORD Systems GmbH and Elektra UAS GmbH, the French Centre National de la Recherche Scientifique, the Italian Consorzio C.R.E.A.T.E., and the Swiss companies ALSTOM Inspection Robotics and Sensima Inspection. The project will finish in 2019.

September 2017 fundings, acquisitions and IPOs

26 different startups were funded to the tune of $507 million in September, up from $369 million in August. Six acquisitions were reported during the month, including Deere’s acquisition of California-based Blue River Technology for $305 million. And Restoration Robotics will begin trading on NASDAQ in early October following its IPO.

Fundings

  • LeddarTech, the Canadian developer of sensors and LiDAR distancing systems for ADAS and other mobile systems, raised $101 million in a Series C funding led by Osram with participation by Delphi, Magneti Marelli, Integrated Device Technology, Fonds de solidarité FTQ, BDC Capital and GO Capital. This round of funding will allow LeddarTech to enhance its ASIC development efforts, expand its R&D team, and accelerate ongoing LiDAR development programs with select Tier-1 automotive customers for rapid market deployment.
  • Innoviz Technologies, the Israeli solid-state LiDAR startup, raised $65 million in a Series B funding. Delphi Automotive PLC and Magna International participated in the round, along with additional new investors including 360 Capital Partners, Glory Ventures, Naver and others. All Series A investors also participated in the round.
  • Roobo, the Chinese startup and manufacturer of the Domgy consumer robot, raised $53 million in a Series B round led by Seven Seas Partners and IFlyTek, a Chinese developer of self-driving technologies, speech recognition for human-machine and human-human communication and related software and chips.
  • JingChi, a Sunnyvale self-driving car vision systems startup, raised $52 million in a seed round. Although the lead investor was Qiming Venture Partners, the company did not disclose the identity of any additional investors in the round.
  • Five AI, a Bristol, UK self-driving technology and ride-sharing startup, raised $35 million in a Series A funding round led by Lakestar Capital, with Amadeus Capital Partners, Notion Capital and Kindred (which all previously invested in its seed round) also participating.
  • Airobotics, the Israeli autonomous drone platform for the mining, utilities and gas industry, raised $32.5 million in a series C funding round led by BlueRun Ventures. With the funding, Airobotics is starting a new Homeland Security and Defense division, as well as the “Airobotics Safe Cities” initiative, which uses fully automated drones to perform emergency operations in cities.
  • Cambridge Medical Robotics, a UK startup developing a next-generation robotic surgical system closed a Series A funding round of $26 million from Watrium and existing investors Cambridge Innovation Capital, LGT Global Invest, Escala Capital and ABB Technology Ventures.
  • Kinova Robotics,  a Canadian provider of robotics for the disabled, has raised $20 million to transition into three new areas of service robotics: collaborative robots for inspection and pick and place operations, manipulators for mobile platforms, and medical robots for research and therapies. Funding came from four major contributors, including lead investor Fonds Manufacturier Québécois; and KTB Network (South Korea), Foxconn (Taiwan); and BDC Capital (Canada).
  • Humatics, a Cambridge, Mass.-based developer of sensors, software, and control systems that enable robots to work within human environments, raised $18 million in a Series A funding. Fontinalis Partners led the round, and was joined by investors including Airbus Ventures, Lockheed Martin Ventures, Intact Ventures, Tectonic Ventures, Presidio Ventures, Blue Ivy Ventures, Ray Stata, and Andy Youmans.
  • Lighthouse AI, a Silicon Valley startup developing a deep learning, 3D sensing, interactive home assistant, raised $17 million (in May) led by Eclipse, Felicis Ventures, Andy Rubin’s Playground Ventures, SignalFire and StartX. Their new home security device can accurately distinguish between adults, children, pets and objects, known and unknown faces, and actions, and can report on and play back what it finds.
  • Tonbo Imaging, an Indian defense vision systems startup, raised $17 million in a Series B funding round led by Walden Riverwood Ventures with Artiman Ventures, Edelweiss, and Qualcomm Ventures.
  • Drive.AI, a Silicon Valley self-driving startup, raised another $15 million (after their $50 million Series B round earlier this year) from Grab, an Uber rival Asian on-demand transportation and mobile payments platform, and unnamed others. Drive CEO Sameep Tandon said: “We look at Singapore as a technological juggernaut. When innovations happen in the region, basically they start in Singapore and then move out to other places within the region, whether it’s Indonesia, Vietnam or China. What’s also really interesting to us about Singapore is they have this sort of existential problem here – for them autonomous driving is not a matter of ‘if,’ it’s a matter of ‘when.’”
  • Ushr Inc., a Livonia, Mich.-based startup developing high-definition mapping technology and software for autonomous and semi-autonomous vehicles, raised $10 million in a Series A funding round led by Forte Ventures and including EnerTech Capital, Emerald Technology Ventures, and GM Ventures.
  • Agrible, an Illinois startup offering a suite of software tools for connected farmers, raised $9.7 million of a $15.7 million Series B round of funding led by Maumee Ventures, iSelect Fund, and existing investors Flyover Capital, Archer Daniels Midland, and Serra Ventures.
  • Bonsai AI, a Berkeley, CA AI startup, raised $7.6 million (in May) in a Series A round led by Microsoft Ventures and NEA, with participation from Samsung, Siemens, and ABB Technology Ventures.
  • Metawave, a Palo Alto self-driving perception spin-off from PARC, raised $7 million in seed funding. Backers included Khosla Ventures, Motus Ventures, and Thyra Global Management.
  • Ori Systems, a Boston startup with a novel interior space robotic furniture system, raised $6 million in a Series A funding round led by Khosla Ventures.
  • Specim Spectral Imaging, the Finnish company providing imaging systems to Zen Robotics for waste sorting and management, raised $4.2 million from Bocap SME Achievers Fund II Ky.
  • OpenSpace, a San Francisco machine vision startup, raised $3 million in seed funding. Lux Capital led the round, and was joined by investors including Foundation Capital, National Science Foundation, the Box Group, AngelList, Goldcrest, Sterling Capital and Comet Labs.
  • Furhat Robotics, the Swedish startup developing social robots, raised $2.5 million in a seed funding round from Balderton Capital and LocalGlobe. The company is currently working with Swedish public services as well as companies like Honda, Intel, Merck, Toyota, and KPMG to develop apps on the platform. For example, a Swedish employment agency is using the conversational robot to prepare people for job interviews and to train teachers; Honda is using Furhat to develop a conversational tool for the elderly in a smart home setting; and KPMG is designing a Furhat-enabled financial advisor interface. A recent Forbes article reports that both Disney and Intel are customers of this 50-person startup.

  • Reactive Robotics, a Munich startup developing rehab robotics for hospitals with ICUs for mechanically ventilated, neurological or trauma patients, raised an amount estimated to be around $2.5 million led by MTIP MedTech Innovation Partners AG, High-Tech Gründerfonds, Bayern Kapital, TQ-Group, and Dr. Doll Holding GmbH. Reactive Robotics said it expects to deliver its first clinical test product by the first quarter of 2018.
  • Betterview, a San Francisco-based software startup that can analyze detailed aerial footage captured by drones, raised $2 million. Compound Venture Capital led the round, and was joined by investors Maiden Re, 645 Ventures, Arab Angel, Winklevoss Capital, Chestnut Street Ventures, Pierre Valade, Haystack and MetaProp.
  • Sea Machines Robotics, a Boston startup developing unmanned marine systems, raised $1.5 million (in May) in a round led by Connecticut-based LaunchCapital with participation from Cambridge-based venture capital firm Accomplice, Techstars, LDV Capital, and the Geekdom Fund. Sea Machines provides software and hardware to turn existing boats into autonomous vehicles.

Fundings (amount unknown)

  • SharkNinja, a home products distributor, raised an undisclosed sum from CDH Investments, a large private equity fund, which said it purchased “a significant equity interest.” SharkNinja launched a Roomba-like robot vacuum to their line of products — at half the price of iRobot’s Roomba. Analysts are saying that SharkNinja “is a credible threat to iRobot” given its knack for marketing, as well as engineering high-quality products at value price points — two strengths that helped it successfully take market share from Dyson in recent years in the upright-vacuum market.
  • Acutronic Robotics, a Swiss company providing multi-axis motion simulators, has received Series A funding from the Sony Innovation Fund. No financial details were given. Funds will be used to enable Acutronic to accelerate the development of their Hardware Robot Operating System (H-ROS), to compete with ROS-I and legacy software from robot manufacturers. “H-ROS aims to change the landscape of robotics by creating an ecosystem where hardware components can be reused among different robots, regardless of the original manufacturer. We strongly believe that the future of robotics will be about modular robots that can be easily repaired and reconfigured. H-ROS aims to shape this future.”
  • Ocean Aero, a San Diego unmanned marine systems startup, raised an undisclosed amount from Lockheed Martin Ventures. “Ocean Aero represents the next generation of environmentally powered, autonomous ocean systems. Our investment will allow us to better respond to customers’ maritime needs with technology solutions for a diverse set of missions,” said Chris Moran, ED and GM of Lockheed Martin Ventures.

Acquisitions

  • John Deere, the farm equipment manufacturer, acquired Blue River Technology, a Silicon Valley AI and farm equipment startup, for $305 million. Blue River has honed their See & Spray and Sense & Decide devices to analyze every plant in a field and apply herbicides only to weeds and overly crowded plants needing thinning, thereby dramatically reducing the amount of chemicals used. Their robots are towed behind a tractor similar to conventional spraying equipment, but Blue River’s towed implements have onboard cameras that use machine-learning software to distinguish between crops and weeds, and automated sprayers to target and spray the unwanted plants. Further, Blue River devices have a second set of cameras to automatically check its work as it operates and to gather data on the tens of thousands of plants in each field so that its analytics software can continue improving the devices and the process. Daniel Theobald, Founder and Chief Innovation Officer at Vecna, a Cambridge, MA provider of mobile robots, said: “It’s a smart move by Deere. They realize the time window in which ag industry execs will continue to buy dumb equipment is rapidly coming to a close. The race to automate is on and traditional equipment manufacturers who don’t embrace automation will face extinction. Agriculture is ripe for the benefits that robotics has to offer. Automation allows farmers to decrease water use, reduce the use of pesticides and other methods that are no longer sustainable, and helps solve ever worsening labor shortages.”
  • OMRON, the Japanese company that acquired robot maker Adept Technology last year, has just acquired Microscan Systems, the Renton, WA-based barcode reading and machine vision systems company, for $157 million. Microscan was a wholly owned subsidiary of UK-based Spectris Plc.
  • Neato Robotics, the California maker of home robot vacuums, was acquired by German appliance maker Vorwerk. Financial terms were not disclosed. Vorwerk invested in Neato back in 2010 but now has completely acquired Neato outright and fully owns its business and technology, which could help the international operation expand into the growing robotic vacuum industry.
  • Siemens, the German conglomerate, acquired Tass International for an undisclosed amount. Tass develops software that simulates traffic scenarios, validates autonomous driving and replicates ADAS (advanced driver assistance systems) in crash testing. It has 200 employees and annual revenue of around $32 million.
  • Precision Planting, a developer and reseller of mechanical, monitoring and control systems for precision ag applications, was acquired by AGCO, a global manufacturer and distributor of ag equipment, for an undisclosed amount. Precision Planting was a subsidiary of The Climate Corporation (a subsidiary of Monsanto).
  • Nabors Industries, an oil and gas drilling company, has acquired Robotic Drilling Systems, a Norwegian provider of a system for unmanned drill-floor operations. No figures were disclosed regarding the transaction.

IPOs

  • Restoration Robotics, a Silicon Valley FDA-approved robotic hair transplant startup, has filed to be listed on NASDAQ under the symbol HAIR. They plan to offer 3.125 million shares priced at around $8 per share — a $25 million IPO. It is expected to price during the week of October 9, 2017. If that price holds, it would establish a market value of $225 million for the company.