Archive 30.10.2017


Automated Ball Return System For Driving Ranges

Automated Managed Services roll out their upgraded automated ball return system, which washes golf balls and transports them back to the dispenser

Established in late 2013, Automated Managed Services (AMS) have been offering driving range robots as an outfield maintenance solution for golf facilities. Their growing success continues to reshape the idea of what golf maintenance should look like, as they roll out their newly redesigned ball return system across new and existing AMS locations.

The automated ball return system washes golf balls and transports them back to the dispenser. It works in conjunction with the robot ball picker that collects the balls out on the outfield. Once the robot is full, it returns to its base and drops the balls into the return system. The process is fully automatic: from the moment the balls are collected to the moment they are returned to the dispenser, no human interaction is involved.

The design itself consists of a stainless steel ball drop zone shaped like half a diamond. This is installed into the ground and is what the robot drops the balls into. The half-diamond shape funnels the balls towards the centre, where a slider at the base of the drop zone container moves back and forth. With each stroke, balls drop into a U-bend-shaped cage that allows debris such as small stones to fall away, leaving the balls to roll into a connected green transportation pipe, where compressed air pushes them along to the ball dispenser. Water is introduced during this transportation, cleaning the balls en route. The return system is controlled via a control panel, usually located alongside the ball dispenser unit together with the air compressor for the transportation pipe.

The design and development of the new system was undertaken by AMS owner Philip Sear and his technical director Sam Daybell. Philip had this to say about the ball return system:

“Research and development are a key component of our technology infrastructure, so we always strive to improve our products and services. With this in mind, the new design is definitely more efficient in processing the balls and returning them to the dispenser. An example of this can be seen in the modification of how we use water in the system: we decided to only introduce water into the transportation pipe, after previously also having it in the ball drop zone itself. This ensures water is used more resourcefully while the balls are still cleaned effectively. Overall we are very pleased with the new design, as it continues our sustainability in offering a solution that streamlines resources and is cost-effective for our clients.”

The new return system is currently being installed at FourAshes Golf Centre in Solihull, which has been using robot technology at its facility for the past four years. It is also part of a new installation being undertaken at Grimsby Golf Club, and was installed at High Legh Golf Club in Knutsford.

About AMS Robot Technology
Automated Managed Services provides golf ball and grass management for driving range facilities, designed to help streamline resources, reduce costs and improve the overall health of golf driving range outfields. If you would like more information about AMS’s Outfield Robots, please contact:

Natalie St Hill
Tel: 01462 676 222
natalie@automatedmanagedservices.com
www.automatedmanagedservices.com


Can artificial intelligence learn to scare us?

Just in time for Halloween, a research team from the MIT Media Lab’s Scalable Cooperation group has introduced Shelley: the world’s first artificial intelligence-human horror story collaboration.

Shelley, named for English writer Mary Shelley — best known as the author of “Frankenstein; or, The Modern Prometheus” — is a deep-learning powered artificial intelligence (AI) system that was trained on over 140,000 horror stories from Reddit’s infamous r/nosleep subreddit. She lives on Twitter, where every hour @shelley_ai tweets out the beginning of a new horror story and the hashtag #yourturn to invite a human collaborator. Anyone is welcome to reply to the tweet with the next part of the story; Shelley then replies with the part after that, and so on. The results are weird, fun, and unpredictable horror stories that represent both creativity and collaboration — traits that explore the limits of artificial intelligence and machine learning.

“Shelley is a combination of a multi-layer recurrent neural network and an online learning algorithm that learns from crowd’s feedback over time,” explains Pinar Yanardhag, the project’s lead researcher. “The more collaboration Shelley gets from people, the more and scarier stories she will write.”

Shelley starts stories based on the AI’s own learning dataset, but she responds directly to additions to the story from human contributors — which, in turn, adds to her knowledge base. Each completed story is then collected on the Shelley project website.
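For readers curious about the mechanics, here is a toy character-level recurrent network in PyTorch, sketching the general kind of model described above. This is our illustration only, not Shelley’s actual architecture: the real system was trained on the r/nosleep corpus and adds an online learning component that this toy omits, and the tiny corpus below is a placeholder.

    # Toy character-level LSTM text generator (illustrative, not Shelley).
    import torch
    import torch.nn as nn

    corpus = "it was a dark and stormy night. the door creaked open by itself. "
    chars = sorted(set(corpus))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=128, layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.LSTM(hidden, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            out, state = self.rnn(self.embed(x), state)
            return self.head(out), state

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    loss_fn = nn.CrossEntropyLoss()

    data = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)
    for step in range(200):  # tiny training loop: predict each next character
        logits, _ = model(data[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    # Sample a "story opening" one character at a time.
    x, state, out = data[:, :1], None, []
    for _ in range(100):
        logits, state = model(x, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        x = torch.multinomial(probs, 1)
        out.append(itos[x.item()])
    print("".join(out))

Shelley’s #yourturn loop would wrap a model like this: each human reply is appended to the running text and fed back in as context, and, in Shelley’s case, also used to update the model over time.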

“Shelley’s creative mind has no boundaries,” the research team says. “She writes stories about a pregnant man who woke up in a hospital, a mouth on the floor with a calm smile, an entire haunted town, a faceless man on the mirror. Anything is possible!”

One final note on Shelley: The AI was trained on a subreddit filled with adult content, and the researchers have limited control over her — so parents beware.

Robohub Podcast #246: Smart Swarms, with Vijay Kumar

In this episode, Jack Rasiel interviews Vijay Kumar, Professor and Dean of Engineering at the University of Pennsylvania. Kumar discusses the guiding ideas behind his research on micro unmanned aerial vehicles, gives his thoughts on the future of robotics in the lab and field, and speaks about setting realistic expectations for robotics technology.

Vijay Kumar

Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania.

Dr. Kumar received his Bachelor of Technology degree from the Indian Institute of Technology, Kanpur and his Ph.D. from The Ohio State University in 1987. He has been on the Faculty in the Department of Mechanical Engineering and Applied Mechanics with a secondary appointment in the Department of Computer and Information Science at the University of Pennsylvania since 1987. In his time at the university, Dr. Kumar has held numerous positions including director of the GRASP Laboratory, Chairman of the Department of Mechanical Engineering and Applied Mechanics, and Deputy Dean for Education in the School of Engineering and Applied Science. From 2012 to 2013, he served as the assistant director of robotics and cyber physical systems at the White House Office of Science and Technology Policy.

Congress’ automated driving bills are both more and less than they seem

Bills being considered by Congress deserve our attention—but not our full attention. To wit: When it comes to safety-related regulation of automated driving, existing law is at least as important as the bills currently in Congress (HB 3388 and SB 1885). Understanding why involves examining all the ways that the developer of an automated driving system might deploy its system in accordance with federal law as well as all the ways that governments might regulate that system. And this examination reveals some critical surprises.

As automated driving systems get closer to public deployment, their developers are closely evaluating how the full set of Federal Motor Vehicle Safety Standards (FMVSS) will apply to these systems and to the vehicles on which they are installed. Rather than specifying a comprehensive regulatory framework, these standards impose requirements on only some automotive features and functions. Furthermore, manufacturers of vehicles and of components thereof self-certify that their products comply with these standards. In other words, unlike its European counterparts (and a small number of federal agencies overseeing products deemed more dangerous than motor vehicles), the National Highway Traffic Safety Administration (NHTSA) does not prospectively approve most of the products it regulates.

There are at least seven (!) ways that the developer of an automated driving system could conceivably navigate this regulatory regime.

First, the developer might design its automated driving system to comply with a restrictive interpretation of the FMVSS. The attendant vehicle would likely have conventional braking and steering mechanisms as well as other accoutrements for an ordinary human driver. (These conventional mechanisms could be usable, as on a vehicle with only part-time automation, or they might be provided solely for compliance.) NHTSA implied this approach in its 2016 correspondence with Google, while another part of the US Department of Transportation even highlighted those specific FMVSS provisions that a developer would need to design around. Once the developer self-certifies that its system in fact complies with the FMVSS, it can market it.

Second, the developer might ask NHTSA to clarify the agency’s understanding of these provisions with a view toward obtaining a more accommodating interpretation. Previously—and, more to the point, under the previous administration—NHTSA was somewhat restrictive in its interpretation, but a new chief counsel might reach a different conclusion about whether and how the existing standards apply to automated driving. In that case, the developer could again simply self-certify that its system indeed complies with the FMVSS.

Third, the developer might petition NHTSA to amend the FMVSS to more clearly address (or expressly abstain from addressing) automated driving systems. This rulemaking process would be lengthy (measured in years rather than months), but a favorable result would give the developer even more confidence in self-certifying its system.

Fourth, the developer could lobby Congress to shorten this process—or preordain the result—by expressly accommodating automated driving systems in a statute rather than in an agency rule. This is not, by the way, what the bills currently in Congress would do.

Fifth, the developer could request that NHTSA exempt some of its vehicles from portions of the FMVSS. This exemption process, which is prospective approval by another name, requires the applicant to demonstrate that the safety level of its feature or vehicle “at least equals the safety level of the standard.” Under existing law, the developer could exempt no more than 2,500 new vehicles per year. Notably, however, this could include heavy trucks as well as passenger cars.

Sixth, the developer could initially deploy its vehicles “solely for purposes of testing or evaluation” without self-certifying that those vehicles comply with the FMVSS. Although this exception is available only to established automotive manufacturers, a new or recent entrant could partner with or outright buy one of the companies in that category. Many kinds of large-scale pilot and demonstration projects could be plausibly described as “testing or evaluation,” particularly by companies that are comfortable losing money (or comfortable describing their services as “beta”) for years on end.

Seventh, the developer could ignore the FMVSS altogether. Under federal law, “a person may not manufacture for sale, sell, offer for sale, introduce or deliver for introduction in interstate commerce, or import into the United States, any [noncomplying] motor vehicle or motor vehicle equipment.” But under the plain meaning of this provision (and a related definition of “interstate commerce”), a developer could operate a fleet of vehicles equipped with its own automated driving system within a state without certifying that those vehicles comply with the FMVSS.

This is the background law against which Congress might legislate—and against which its bills should be evaluated.

Both bills would dramatically expand the number of exemptions that NHTSA could grant to each manufacturer, eventually reaching 100,000 per year in the House version. Some critics of the bills have suggested that this would give manufacturers free rein to deploy tens of thousands of automated vehicles without any prior approval.

But considering this provision in context provides two key insights. First, automated driving developers may already be able to lawfully deploy tens of thousands of their vehicles without any prior approval—by designing them to comply with the FMVSS, by claiming testing or evaluation, or by deploying an in-state service. Second, the exemption process gives NHTSA far more power than it otherwise has: The applicant must convince the agency to affirmatively permit it to market its system.

Both bills would also require the manufacturer of an automated driving system to submit a “safety evaluation report” to NHTSA that “describes how the manufacturer is addressing the safety of such vehicle or system.” This requirement would formalize the safety assessment letters that NHTSA encouraged in its 2016 and 2017 automated vehicle policies. These three frameworks all evoke my earlier proposal for what I call the “public safety case,” wherein an automated driving developer tells the rest of us what they are doing, why they think it is reasonably safe, and why we should believe them.

Unsurprisingly, I think this is a fine idea. It encourages innovation in safety assurance and regulation, informs regulators, and—if disclosure is meaningful—helps educate the public at large. Congress could strengthen these provisions as currently drafted, and it could give NHTSA the resources needed to effectively engage with these reports. Regardless, in evaluating the bills, it is important to understand that these provisions increase rather than decrease what an automated driving system developer must do under federal law. They are an addition rather than an alternative to each of the seven pathways described above.

Both bills would also exclude heavy trucks and buses from their definitions of automated vehicle. This exclusion, added at the behest of labor groups concerned about the eventual implications of commercial truck automation, means that NHTSA cannot exempt tens of thousands of heavy vehicles per manufacturer from a safety standard. But each truck manufacturer can still seek to exempt up to 2,500 vehicles per year—if such an exemption is even required. And, depending on how language relating to the safety evaluation reports is interpreted, this exemption might even relieve automated truck manufacturers of the obligation to submit these reports.

Finally, these bills largely preserve NHTSA’s existing regulatory authority—and that authority involves much more than making rules and granting exemptions to those rules. Crucially, the agency can conduct investigations and pursue recalls—even if a vehicle fully complies with the applicable FMVSS. This is because ensuring motor vehicle safety requires more than satisfying specific safety standards. And this broader definition of safety—“the performance of a motor vehicle or motor vehicle equipment in a way that protects the public against unreasonable risk of accidents occurring because of the design, construction, or performance of a motor vehicle, and against unreasonable risk of death or injury in an accident, and includes nonoperational safety of a motor vehicle”—gives NHTSA great power.

States and the municipalities within them also play an important role in regulating road safety—and my next post considers the effect of the Senate bill in particular on this state and local authority.

New RoboBee flies, dives, swims, and explodes out of the water

The new, hybrid RoboBee can fly, dive into water, swim, propel itself back out of the water, and safely land. The RoboBee is retrofitted with four buoyant outriggers and a central gas collection chamber. Once the RoboBee swims to the surface, an electrolytic plate in the chamber converts water into oxyhydrogen, a combustible gas fuel. Credit: Wyss Institute at Harvard University

By Leah Burrows

We’ve seen RoboBees that can fly, stick to walls, and dive into water. Now, get ready for a hybrid RoboBee that can fly, dive into water, swim, propel itself back out of water, and safely land.

New floating devices allow this multipurpose air-water microrobot to stabilize on the water’s surface before an internal combustion system ignites to propel it back into the air.

This latest-generation RoboBee, which is 1,000 times lighter than any previous aerial-to-aquatic robot, could be used for numerous applications, from search-and-rescue operations to environmental monitoring and biological studies.

The research is described in Science Robotics. It was led by a team of scientists from the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). 

“This is the first microrobot capable of repeatedly moving in and through complex environments,” says Yufeng Chen, Ph.D., currently a Postdoctoral Fellow at the Wyss Institute who was a graduate student in the Microrobotics Lab at SEAS when the research was conducted and is the first author of the paper. “We designed new mechanisms that allow the vehicle to directly transition from water to air, something that is beyond what nature can achieve in the insect world.”

Designing a millimeter-sized robot that moves in and out of water poses numerous challenges. First, water is 1,000 times denser than air, so the robot’s wing flapping speed must differ widely between the two media. If the flapping frequency is too low, the RoboBee can’t fly. If it’s too high, the wing will snap off in the water.

By combining theoretical modeling and experimental data, the researchers found the Goldilocks combination of wing size and flapping rate, scaling the design to allow the bee to operate repeatedly in both air and water. Using this multimodal locomotion strategy, the robot flaps its wings at 220 to 300 hertz in air and 9 to 13 hertz in water.

Another major challenge the team had to address: at the millimeter scale, the water’s surface might as well be a brick wall. Surface tension is more than 10 times the weight of the RoboBee and three times its maximum lift. Previous research demonstrated how impact and sharp edges can break the surface tension of water to facilitate the RoboBee’s entry, but the question remained: How does it get back out again?
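A back-of-the-envelope scaling argument (our illustration, using textbook values rather than figures from the paper) shows why the surface is such an obstacle at this scale. Weight grows with volume, while the surface tension force grows only with the contact length L:

    W \sim \rho g L^{3}, \qquad F_{\sigma} \sim \sigma L
    \quad\Longrightarrow\quad \frac{F_{\sigma}}{W} \sim \frac{\sigma}{\rho g L^{2}}

With water’s surface tension \sigma \approx 0.072 N/m, this ratio is negligible at human scale but reaches order ten once L shrinks toward a millimetre, which is why the surface behaves like a wall for the RoboBee.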

To solve that problem, the researchers retrofitted the RoboBee with four buoyant outriggers — essentially robotic floaties — and a central gas collection chamber. Once the RoboBee swims to the surface, an electrolytic plate in the chamber converts water into oxyhydrogen, a combustible gas fuel.

“Because the RoboBee has a limited payload capacity, it cannot carry its own fuel, so we had to come up with a creative solution to exploit resources from the environment,” says Elizabeth Farrell Helbling, graduate student in the Microrobotics Lab and co-author of the paper. “Surface tension is something that we have to overcome to get out of the water, but is also a tool that we can utilize during the gas collection process.”

The gas increases the robot’s buoyancy, pushing the wings out of the water, and the floaties stabilize the RoboBee on the water’s surface. From there, a tiny, novel sparker inside the chamber ignites the gas, propelling the RoboBee out of the water. The robot is designed to passively stabilize in air, so that it always lands on its feet.

“By modifying the vehicle design, we are now able to lift more than three times the payload of the previous RoboBee,” says Chen. “This additional payload capacity allowed us to carry the additional devices, including the gas chamber, the electrolytic plates, sparker, and buoyant outriggers, bringing the total weight of the hybrid robot to 175 milligrams, about 90 mg heavier than previous designs. We hope that our work investigating tradeoffs like weight and surface tension can inspire future multi-functional microrobots – ones that can move on complex terrains and perform a variety of tasks.”

Because of the lack of onboard sensors and limitations in the current motion-tracking system, the RoboBee cannot yet fly immediately upon propulsion out of the water, but the team hopes to change that in future research.

“The RoboBee represents a platform where forces are different than what we – at human scale – are used to experiencing,” says Wyss Core Faculty Member Robert Wood, Ph.D., who is also the Charles River Professor of Engineering and Applied Sciences at Harvard and senior author of the paper. “While flying the robot feels as if it is treading water; while swimming it feels like it is surrounded by molasses. The force from surface tension feels like an impenetrable wall. These small robots give us the opportunity to explore these non-intuitive phenomena in a very rich way.”

The paper was co-authored by Hongqiang Wang, Ph.D., Postdoctoral Fellow at the Wyss Institute and SEAS; Noah Jafferis, Ph.D., Postdoctoral Fellow at the Wyss Institute; Raphael Zufferey, Postgraduate Researcher at Imperial College, London; Aaron Ong, Mechanical Engineer at the University of California, San Diego and former member of the Microrobotics Lab; Kevin Ma, Ph.D., Postdoctoral Fellow at the Wyss Institute; Nicholas Gravish, Ph.D., Assistant Professor at the University of California, San Diego and former member of the Microrobotics Lab; Pakpong Chirarattananon, Ph.D., Assistant Professor at the City University of Hong Kong and former member of the Microrobotics Lab; and Mirko Kovac, Ph.D., Senior Lecturer at Imperial College, London and former member of the Microrobotics Lab and Wyss Institute. It was supported by the National Science Foundation and the Wyss Institute for Biologically Inspired Engineering.

Overview of the International Conference on Robot Ethics and Safety Standards – with survey on autonomous cars

The International Conference on Robot Ethics and Safety Standards (ICRESS-2017) took place in Lisbon, Portugal, from 20th to 21st October 2017. Maria Isabel Aldinhas Ferreira and João Silva Sequeira coordinated the conference with the aim of creating a vibrant multidisciplinary discussion around the pressing safety, ethical, legal and societal issues raised by the rapid introduction of robotic technology into many environments.

There were several fascinating keynote presentations. Mathias Scheutz’s inaugural speech highlighted the need for robots to act in ways that would be perceived as applying moral principles and judgement. It was refreshing to see that we could potentially have autonomous robots that arrive at appropriate decisions, ones an external observer would judge as “right” or “wrong”.

On the other hand, Rodolphe Gélin provided the perspective of robot manufacturers, and showed how difficult the issues of safety have become. The public’s expectations of robots seem to go beyond those for other conventional machines. The discussion was very diverse: some suggested that schemes similar to licensing would be required to qualify humans to operate robots (as they re-train or re-program them), and other schemes for insurance and liability were proposed.

Professional bodies, experts and standards were discussed in the other two keynote presentations: Raja Chatila spoke from the perspective of the IEEE Global AI Ethics Initiative, and Gurvinder Singh Virk from that of several ISO robot standardisation groups.

The conference also hosted a panel discussion, which debated interesting issues such as the challenges posed by the proliferation of drones among the general public. This topic has characteristics different from many other problems societies have faced with the introduction of new technologies. Drones can be 3D printed from many designs with potentially no liability for the designer, they can be operated with virtually no complex training, and they can be controlled from distances long enough that recovering the drone would not be sufficient to trace the operator or owner. Their cameras and data recording can potentially be used in ways some would consider privacy breaches, and they could compete for the airspace of already-operating commercial aviation. It seems unclear which regulations apply and which bodies should intervene, and even then, how to enforce them. Would something similar happen when the public acquires pet robots or artificial companions?

The presentations of accepted papers raised many issues, including the difficulty of creating legal foundations for liability schemes and for the responsibilities attributed to machines and operators. One particular point was that, for specific tasks, computers already do significantly better than the average person (examples are driving a car and negotiating a curve). Another challenge is that humans will regularly be in close proximity to robots in manufacturing and office environments, bringing many new potential risks.

If the vibrant discussions at the conference led to one conclusion, it is that the challenges are emerging much more rapidly than the answers.

We’ve also just launched a survey on the software behaviours an autonomous car should have when faced with difficult decisions. Just click here. Participants may win a prize; participation is completely voluntary and anonymous.

Robocars will make traffic worse before it gets better

Many websites paint a very positive picture of the robocar future. And it is positive, but far from perfect. One problem I worry about in the short term is the way robocars are going to make traffic worse before they get a chance to make it better.

The goal of all robocars is to make car travel more pleasant and convenient, and eventually cheaper. You can’t make something better and cheaper without increasing demand for it, and that means more traffic.

This is particularly true for the early-generation pre-robocar vehicles in the plans of many major automakers. One of the first products these companies have released is sometimes called the “traffic jam assist.” This is a self-driving system that only works at low speed in a traffic jam.

It turns out that’s easy to do, effectively a solved problem. Low speed is inherently easier, and the highway is a simple driving environment without pedestrians, cyclists, intersections or cars going the other way. When you are boxed in with other cars in a jam, all you have to do is go with the flow; the other cars tell you where you need to go. Things can get complex when you reach whatever is blocking the road and causing the jam, but handing off to a human at low speeds is also fairly doable.

These products will be widely available soon, and they will make traffic jams much more pleasant. Which means there might be more of them.

I don’t have a 9 to 5 job, so I avoid travel in rush hour when I can. If somebody suggests we meet somewhere at 9am, I try to push it to 9:30 or 10. If I had a traffic jam assist car, I would be more willing to take the meeting at 9. When on the way, if I encountered a traffic jam, I would just think, “Ah, I can get some email done.”

After the traffic jam assist systems come highway systems that allow you to take your eyes off the road for an extended time. They arrive pretty soon, too. These will encourage slightly longer commutes, which means more traffic, and also changes to real estate values. The corporate-run commuter buses from Google, Yahoo and many other tech companies in the SF Bay Area have already done that, making people decide they want to live in San Francisco and work an hour’s bus ride away in Silicon Valley. The buses don’t make traffic worse, but those doing this in private cars will.

Is it all doom?

Fortunately, some factors will counter a general trend to worse traffic, particularly as full real robocars arrive, the ones that can come unmanned to pick you up and drop you off.

  • As robocars reduce accident levels, that will reduce one of the major causes of traffic congestion.
  • Robocars don’t need to slow down and stare at accidents or other unusual things on the road, which also causes congestion.
  • Robocars won’t overcompensate on “sags” (dips) in the road. This overcompensation on sags is the cause of almost half the traffic congestion on Japanese highways.
  • Robocars look like they’ll be mainly electric. That doesn’t do much about traffic, but it does help with emissions.
  • Short-haul “last mile” robocars can actually make the use of trains, buses and carpools vastly more convenient.
  • Having only a few cars which drive more regularly, even something as simple as a good quality adaptive cruise control, actually does a lot to reduce congestion.
  • The rise of single person half-width vehicles promises a capacity increase, since when two find one another on the road, they can share the lane.
  • While it won’t happen in the early days, eventually robocars will follow the car in front of them with a shorter gap if they have a faster reaction time. This increases highway capacity (a back-of-the-envelope calculation follows this list).
  • Early robocars won’t generate a lot of carpooling, but it will pick up fairly soon (see below.)
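To put numbers on the reaction-time point above (a back-of-the-envelope illustration with our own assumed values, not data from any study), per-lane highway flow is roughly

    q = \frac{v}{v\,\tau + \ell}

where v is speed, \tau the following headway (reaction time) and \ell the effective vehicle length. At v = 30 m/s with \ell = 5 m, a human-like \tau = 1.5 s gives q = 30/50 = 0.6 vehicles per second per lane (about 2,160 per hour), while a robotic \tau = 0.5 s gives 30/20 = 1.5 vehicles per second (about 5,400 per hour), roughly a 2.5x gain in capacity.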

What not to worry about

There are a few nightmare situations people have talked about that probably won’t happen. Today, a lot of urban driving involves hunting for parking. If we do things right, robocars won’t ever hunt for parking. They (and you) will be able to make an online query for available space at the best price and go directly to it. But they’ll do that after they drop you off, and they don’t need to park super close to your destination the way you do. To incorporate city spaces into this market, a technology upgrade will be needed, and that may take some time, but private spaces can get in the game quickly.

What also won’t happen is people telling their car to drive around rather than park, to save money. Operating a car today costs about $20/hour, which is vastly more than any hourly priced parking, so nobody is going to do that to save money unless there is literally no parking for many miles. (Yes, there are parking lots that cost more than $20, but that’s because they sell you many hours or a whole day and don’t want a lot of in and out traffic. Robocars will be the most polite parking customers around, hiding valet-style at the back of the lot and leaving when you tell them.)

Another common worry is that people will send their cars on long errands unmanned: mom might take the car downtown, send it all the way back for dad to do a later commute, then have it head out again to pick up the kids at school. While that’s not impossible, it’s actually not going to be the cheap or efficient thing to do. Thanks to robotaxis, we’re going to start thinking of cars as devices that wear out by the mile, not by the year, and all their costs will be by the mile except parking and $2 of financing per day. All this unmanned operation will almost double the cost of using the car, and a robotic taxi service (a robocar Uber) will be a much better deal.

There will be empty car moves, of course. But it should not amount to more than 15% of total miles. In New York, taxis are vacant of a passenger for 38% of miles, but that’s because they cruise around all day looking for fares. When you only move when summoned, the rate is much better.

And then it gets better

After this “winter” of increased traffic congestion, the outlook gets better. Aside from the factors listed above, in the long term we get the potential for several big things to increase road capacity.

The earliest is dynamic carpooling, as you see with services like UberPool and LyftLines. After all, if you look at a rush-hour highway, you see that most of the seats going by are empty. Tools which can fill these seats can increase the capacity of the roads close to three times just with the cars that are moving today.

The next is advanced robocar transit. The ability to make an ad-hoc, on-demand transit system that combines vans and buses with last mile single person vehicles in theory allows almost arbitrary capacity on the roads. At peak hours, heavy use of vans and buses to carry people on the common segments of their routes could result in a 10-fold (or even more) increase in capacity, which is more than enough to handle our needs for decades to come.

Next after that is dynamic adaptation of roads. In a system where cities can change the direction of roads on demand, you can get more than a doubling of capacity when you combine it with repurposing of street parking. On key routes, street parking can be reserved only for robocars prior to rush hour, and then those cars can be told they must leave when rush hour begins. (Chances are they want to leave to serve passengers anyway.) Now your road has seriously increased capacity, and if it’s also converted to one-way in the peak direction, you could almost quadruple it.

The final step does not directly involve robocars, since it requires all cars, not just robocars, to carry a smartphone and participate. This is the use of smart, internet-based road metering. With complete metering, you never get more cars trying to use a road segment than it has the capacity to handle, and so you very rarely get traffic congestion. You also don’t get induced demand greater than the capacity, solving the bane of transportation planners.

Humanoids 2017 photo contest – vote here

For the first time, Humanoids 2017 is hosting a photo contest. We received 39 photos from robotics laboratories and institutes all over the world, showing humanoids in serious or funny contexts.

A jury composed of Erico Guizzo (IEEE Spectrum), Sabine Hauert (Robohub) and Giorgio Metta (Humanoids 2017 Awards Chair) will select the winning photos from among those that receive the most likes on social media.

The idea of using these media is to increase public awareness of, and interest in, humanoids and robotics, as the photos can be easily shared and reach people outside the research and robotics fields.

You can see and vote for all the photos on Facebook or below by liking the photos (make sure you look at all of them!). More information about the competition can be found here.

How to start with self-driving cars using ROS

Self-driving cars are inevitable.

In recent years, self-driving cars have become a priority for automotive companies. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo are investing in autonomous driving research. Also, many new companies have appeared in the autonomous cars industry: Drive.ai, Cruise, nuTonomy, Waymo to name a few (read this post for a list of 260 companies involved in the self-driving industry).

The rapid development of this field has prompted a large demand for autonomous car engineers. Among the skills required, knowing how to program with ROS is becoming an important one. You just have to visit the robotics-worldwide list to see the large number of job offers for working or researching in autonomous cars that demand knowledge of ROS.

Why ROS is interesting for autonomous cars

Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the required tools to easily access sensor data, process it, and generate an appropriate response for the motors and other actuators of the robot. The whole ROS system has been designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).
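To give a flavour of what that looks like in practice, here is a minimal ROS node written in Python. Take it as a sketch: the topic names /scan and /cmd_vel are common conventions rather than anything mandated by ROS, and the half-metre stopping threshold is arbitrary.

    # Minimal ROS node: read a laser scan, publish a velocity command.
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    pub = None  # set in main, used by the callback

    def on_scan(scan):
        # Drive forward, but stop if the closest obstacle is under 0.5 m.
        cmd = Twist()
        cmd.linear.x = 0.0 if min(scan.ranges) < 0.5 else 0.3
        pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("simple_driver")
        pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, on_scan)
        rospy.spin()  # hand control to the ROS event loop

Because nodes only agree on message types and topic names, the same program works whether the scan comes from a small wheeled robot, a simulator, or a full-size car.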

Due to these characteristics, ROS is a perfect tool for self-driving cars. After all, an autonomous vehicle is just another type of robot, so the same types of programs can be used to control it. ROS is interesting because:

1. There is a lot of code for autonomous cars already created. Autonomous cars require algorithms that can build a map, localize the robot using lidars or GPS, plan paths along maps, avoid obstacles, and process point clouds or camera data to extract information, among other things. Many algorithms designed for the navigation of wheeled robots are almost directly applicable to autonomous cars, and since those algorithms are already available in ROS, self-driving cars can use them off the shelf (see the goal-sending sketch below).

2. Visualization tools are already available. ROS has created a suite of graphical tools that allow the easy recording and visualization of data captured by the sensors, and representation of the status of the vehicle in a comprehensive manner. Also, it provides a simple way to create additional visualizations required for particular needs. This is tremendously useful when developing the control software and trying to debug the code.

3. It is relatively simple to start an autonomous car project with ROS onboard. You can start right now with a simple robot equipped with a pair of wheels, a camera, a laser scanner, and the ROS navigation stack, and be set up in a few hours. That can serve as a basis for understanding how the whole thing works. Then you can move to more professional setups, for example buying a car that is already prepared for autonomous car experiments, with full ROS support (like the Dataspeed Inc. Lincoln MKZ DBW kit).

Self-driving car companies have identified those advantages and have started to use ROS in their developments. Examples of companies using ROS include BMW (watch their presentation at ROSCON 2015), Bosch and nuTonomy.
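As a concrete taste of point 1 above, this is roughly what using the off-the-shelf navigation stack looks like from Python: a node sends a goal pose to move_base (the standard ROS navigation interface) and waits for the planner to drive there. The "map" frame and the 2-metre goal are illustrative values.

    # Send one navigation goal to the standard ROS navigation stack.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("send_goal")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 2.0     # two metres ahead in the map frame
    goal.target_pose.pose.orientation.w = 1.0  # unit quaternion: no rotation

    client.send_goal(goal)
    client.wait_for_result()  # mapping, planning and obstacle avoidance run underneath
    print(client.get_state())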

Weak points of using ROS

ROS is not all nice and good. At present, ROS presents two important drawbacks for autonomous vehicles:

1. Single point of failure. All ROS applications rely on a software component called the roscore. That component, provided by ROS itself, is in charge of handling all coordination between the different parts of the ROS application. If the component fails, then the whole ROS system goes down. This implies that it does not matter how well your ROS application has been constructed. If roscore dies, your application dies.

2. ROS is not secure. The current version of ROS does not implement any security mechanism to prevent third parties from getting into the ROS network and reading the communication between nodes. This implies that anybody with access to the car’s network can get into the ROS messaging and hijack the car’s behavior.

Both drawbacks are expected to be solved in the newest version of ROS, ROS 2. Open Robotics, the creators of ROS, have recently released a second beta of ROS 2, which can be tested here. A release version is expected by the end of 2017.

In any case, we believe that the ROS-based path to self-driving vehicles is the way to go. That is why we propose a low-budget learning path for becoming a self-driving car engineer, based on the ROS framework.

Our low-cost path to becoming a self-driving car engineer

Step 1
The first thing you need to do is learn ROS. ROS is quite a complex framework, and mastering it requires dedication and effort. Watch the following video for a list of the five best methods to learn ROS. Learning basic ROS will help you understand how to create programs with the framework, and how to reuse programs made by others.

Step 2
Next, you need to get familiar with the basic concepts of robot navigation with ROS. Learning how the ROS navigation stack works will give you the basic concepts of navigation, like mapping, path planning and sensor fusion. There is no better way to learn this than taking the ROS Navigation in 5 days course developed by Robot Ignite Academy (disclaimer: this is provided by my company, The Construct).

Step 3
The third step is to learn the basic ROS applications for autonomous cars: how to use the sensors available in any standard autonomous car, how to navigate using a GPS, how to generate an obstacle detection algorithm based on the sensor data, and how to interface ROS with the CAN bus protocol used throughout the automotive industry.

The following video tutorial is ideal for starting to learn ROS applied to autonomous vehicles from zero. The course teaches how to program a car with ROS for autonomous navigation using an autonomous car simulation. The video is available for free, but if you want to get the most out of it, we recommend doing the exercises at the same time by enrolling in the Robot Ignite Academy.

Step 4
After the basic ROS for autonomous cars course, you should learn more advanced subjects like obstacle and traffic signal identification, road following, and coordination of vehicles at crossroads. For that purpose, our recommendation is the Duckietown project at MIT. The project provides complete instructions for physically building a small-scale town, with lanes, traffic lights and traffic signs, in which to practice algorithms in the real world (even if at a small scale). It also provides instructions for building the autonomous cars that populate the town. The cars are based on differential drives and use a single camera for sensing, which is how they achieve a very low cost (around $100 per car).

Image by Duckietown project

Due to the low monetary requirements and the good hands-on experience it offers, the Duckietown project is ideal for starting to practice autonomous car concepts like vision-based line following, detecting other cars, and traffic-signal-based behavior. And if your budget is below even that cost, you can use a Gazebo simulation of Duckietown and still practice most of the content.
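As a hint of what vision-based line following involves, the sketch below uses OpenCV to find the centroid of yellow lane markings in a single image and turn it into a steering error. This is a deliberately naive illustration, not Duckietown's actual pipeline; the file name and HSV thresholds are placeholders.

    # Toy lane detector: threshold yellow markings, compute steering error.
    import cv2
    import numpy as np

    frame = cv2.imread("road.png")  # placeholder input image
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([20, 100, 100])   # assumed "yellow" band in HSV
    upper = np.array([35, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]          # centroid column of the lane pixels
        error = cx - frame.shape[1] / 2   # positive: lane is right of centre
        print("steering error (pixels):", error)

A real controller would feed that error into a steering loop (for example a PID) at every camera frame.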

Step 5
Then, if you really want to go pro, you need to practice with real-life data. For that purpose, we propose you install and learn from the Autoware project. This project provides real data obtained from real cars on real streets, by means of ROS bags. ROS bags are logs containing data captured from sensors, which ROS programs can consume as if they were connected to the real car. By using those bags, you will be able to test algorithms as if you had an autonomous car to practice with (the only limitation is that the data is always the same, restricted to the situation that occurred when it was recorded).

Image by the Autoware project
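Concretely, replaying recorded data from Python looks roughly like this; the bag file name and topic are placeholders, not actual Autoware artifacts.

    # Read messages back from a ROS bag as if they came from live sensors.
    import rosbag

    bag = rosbag.Bag("drive_log.bag")
    for topic, msg, stamp in bag.read_messages(topics=["/velodyne_points"]):
        # Each message arrives exactly as the real car recorded it, so the
        # same processing code can run offline, repeatably.
        print(stamp.to_sec(), topic, type(msg).__name__)
    bag.close()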

The Autoware project is an impressively large project that, apart from the ROS bags, provides multiple state-of-the-art algorithms for localization, mapping, and obstacle detection and identification using deep learning. It is a little complex and huge, but definitely worth studying for a deeper understanding of ROS with autonomous vehicles. I recommend watching the Autoware ROSCON 2017 presentation for an overview of the system (it will be available in October 2017).

Step 6
The final step is to start implementing your own ROS algorithms for autonomous cars and testing them in different, realistic situations. The previous steps provided you with real-life data, but the bags were limited to the situations in which they were recorded. Now it is time to test your algorithms in different situations. You can use existing algorithms in a mix of all the steps above, but at some point you will find that those implementations lack things required for your goals. You will have to start developing your own algorithms, and you will need lots of tests. For this purpose, one of the best options is to use a Gazebo simulation of an autonomous car as a testbed for your ROS algorithms. Recently, Open Robotics released a simulation of cars for the Gazebo 8 simulator.

Image by Open Robotics

That ROS-based simulation contains a Prius car model, together with a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars, which you can use to practice and create your own self-driving car algorithms. With the simulation, you can put the car in as many different situations as you want, check whether your algorithm works in those situations, and repeat as many times as needed until it does.
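Interfacing with the simulated car is ordinary ROS programming; a sketch of listening to its sensors is below. The exact topic names in the Open Robotics demo may differ from the illustrative ones used here.

    # Listen to the simulated car's lidar and camera streams.
    import rospy
    from sensor_msgs.msg import PointCloud2, Image

    def on_cloud(cloud):
        rospy.loginfo("lidar cloud: %d x %d points", cloud.height, cloud.width)

    def on_image(img):
        rospy.loginfo("camera frame: %d x %d", img.width, img.height)

    rospy.init_node("prius_listener")
    rospy.Subscriber("/prius/center_laser/scan", PointCloud2, on_cloud)  # assumed topic name
    rospy.Subscriber("/prius/front_camera/image_raw", Image, on_image)   # assumed topic name
    rospy.spin()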

Conclusion

Autonomous driving is an exciting subject with demand for experienced engineers increasing year after year. ROS is one of the best options to quickly jump into the subject. So learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented here a full path to learn ROS for autonomous vehicles while keeping the budget low. Now it is your turn to make the effort and learn. Money is not an excuse anymore. Go for it!

Join the Robohub community!

As you know, Robohub is a non-profit dedicated to connecting the robotics community to the public. Over nearly a decade we’ve produced more than 200 podcasts and helped thousands of roboticists communicate about their work through videos and blog posts.

Our website Robohub.org provides free high-quality information, and is seen as a top blog in robotics, with nearly 1.5M pageviews every year and 20k followers on social media (Facebook, Twitter).

If you have a story you would like to share (news, tutorials, papers, conference summaries), please send it to editors@robohub.org and we’ll do our best to help you reach a wide audience.

In addition, we’re currently growing our community of volunteers. If you’re interested in blogging, video/podcasting, moderating discussions, curating news, covering conferences, or helping with sustainability of our non-profit, we would love to hear from you!

Just fill in this very short form.

By joining the community, you’ll be part of a grassroots international organisation. You’ll learn about robotics from the top people in the field, travel to conferences, and improve your communication skills. More importantly, you’ll be helping us make sure robotics is portrayed in a high-quality manner to the public.

And thanks to all those who have already joined the community, supported us, or sent us their news!
