Three concerns about granting citizenship to robot Sophia

Citizen Sophia. Flickr/AI for GOOD Global Summit, CC BY

I was surprised to hear that a robot named Sophia was granted citizenship by the Kingdom of Saudi Arabia.

The announcement last week followed the Kingdom’s commitment of US$500 billion to build a new city powered by robotics and renewables.

Citizenship, one of the most honourable statuses a human being can hold, with everything it brings, has been given to a machine. As a professor who works daily on making AI and autonomous systems more trustworthy, I don’t believe human society is ready yet for citizen robots.

To grant a robot citizenship is a declaration of trust in a technology that I believe is not yet trustworthy. It brings social and ethical concerns that we as humans are not yet ready to manage.

Who is Sophia?

Sophia is a robot developed by the Hong Kong-based company Hanson Robotics. Sophia has a female face that can display emotions. Sophia speaks English. Sophia makes jokes. You could have a reasonably intelligent conversation with Sophia.

Sophia’s creator is Dr David Hanson, a 2007 PhD graduate from the University of Texas.

Sophia is reminiscent of “Johnny 5”, the robot from the 1986 movie Short Circuit, who went on to become a US citizen in its sequel. But Johnny 5 was a mere idea, something dreamt up by comic science fiction writers S. S. Wilson and Brent Maddock.

Did the writers imagine that in around 30 years their fiction would become a reality?

Risk to citizenship

Citizenship – in my opinion, the most honourable status a country grants to its people – is facing an existential risk.

As a researcher who advocates for designing autonomous systems that are trustworthy, I know the technology is not ready yet.

We have many challenges that we need to overcome before we can truly trust these systems. For example, we don’t yet have reliable mechanisms to assure us that these intelligent systems will always behave ethically and in accordance with our moral values, or to protect us against them taking a wrong action with catastrophic consequences.

Here are three reasons I think it is a premature decision to grant Sophia citizenship.

1. Defining identity

Citizenship is granted to a unique identity.

Each of us humans possesses a unique signature that distinguishes us from any other human. When we get through customs without talking to a human, our identity is established automatically from an image of our face, iris and fingerprint. One of my PhD students establishes human identity by analysing brain waves.

What gives Sophia her identity? Her MAC address? A barcode, a unique skin mark, an audio mark in her voice, an electromagnetic signature similar to human brain waves?

These and other technological identity management protocols are all possible, but they do not establish Sophia’s identity – they can only establish hardware identity. What then is Sophia’s identity?
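
The distinction matters in practice. As a sketch of the point above, hashing hardware identifiers such as a MAC address and a serial number yields a perfectly stable machine fingerprint, yet it names a chassis rather than a person (the identifiers below are invented for illustration):

```python
import hashlib

def hardware_identity(mac: str, serial: str) -> str:
    """Derive a stable fingerprint from hardware identifiers."""
    return hashlib.sha256(f"{mac}:{serial}".encode()).hexdigest()[:16]

print(hardware_identity("00:1B:44:11:3A:B7", "HANSON-0042"))
# Replace the main board and the "identity" changes with it;
# nothing about human identity works that way.
```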

To me, identity is a multidimensional construct. It sits at the intersection of who we are biologically and cognitively, and of every experience, culture and environment we have encountered. It’s not clear where Sophia fits in this description.

2. Legal rights

For the purposes of this article, let’s assume that Sophia the citizen robot is able to vote. But who is making the decision on voting day – Sophia or the manufacturer?

Presumably Sophia the citizen is also “liable” to pay income tax, because Sophia has a legal identity independent of its creator, the company.

Sophia must also have the same right to equal protection under the law as other citizens.

Consider this hypothetical scenario: a policeman sees Sophia and a woman each being attacked by a person. That policeman can only protect one of them: who should it be? Is it right if the policeman chooses Sophia because Sophia moves on wheels and has no skills for self-defence?

Today, the artificial intelligence (AI) community is still debating what principles should govern the design and use of AI, let alone what the laws should be.

The most recent list proposes 23 principles known as the Asilomar AI Principles. Examples of these include: Failure Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).

3. Social rights

Let’s talk about relationships and reproduction.

As a citizen, will Sophia, the humanoid emotional robot, be allowed to “marry” or “breed” if Sophia chooses to? Students from North Dakota State University have taken steps to create a robot that self-replicates using 3D printing technologies.

If more robots join Sophia as citizens of the world, perhaps they too could claim their rights to self-replicate into other robots. These robots would also become citizens. With no resource constraints on how many children each of these robots could have, they could easily exceed the human population of a nation.

As voting citizens, these robots could create societal change. Laws might change, and suddenly humans could find themselves in a place they hadn’t imagined.

The Conversation

This article was originally published on The Conversation. Read the original article.

October 2017 fundings, acquisitions and IPOs

Twenty-eight different startups were funded in October, cumulatively raising $862 million, up from $507 million in September. Three of the top four fundings went to startups involved in self-driving. Five further startups raised smaller amounts for self-driving applications or components, and two of the six acquisitions were also in that space.

Six acquisitions were reported during the month, including Delphi Automotive’s purchase of nuTonomy for $450 million and Boeing’s acquisition of 550-employee Aurora Flight Sciences.

On the IPO front, Altair Engineering raised $156 million and Restoration Robotics raised $25 million when both went live on the NASDAQ stock exchange this month.

Fundings

  • Mapbox, a Washington, DC and San Francisco provider of nav systems for car companies and others involved in autonomous vehicles, raised $164 million in a Series C round led by the SoftBank Vision Fund, with participation from existing investors including Foundry Group, DFJ Growth, DBL Partners, and Thrive Capital. “Location data is central and mission critical to the development of the world’s most exciting technologies,” said Rajeev Misra, who helps oversee SoftBank’s Vision Fund.
  • Element AI, a Canadian startup providing learning platform solutions for self-driving and advanced manufacturing, raised CAD $135 million (around US $105 million) in a Series A round (in June) led by Data Collective, a Silicon Valley venture capital firm, with participation by Fidelity Investments Canada, National Bank of Canada, Intel Capital, and Real Ventures.
  • Ninebot, the Chinese consumer products company that bought out Segway and raised $80 million in 2015, raised another $100 million in a Series C round from the SDIC Fund Management Co. and the China Mobile Fund.
  • Horizon Robotics, another Chinese startup, raised $100 million in a Series A round led by Intel Capital with participation by Wu Capital, Morningside Venture Capital, Linear Venture, Hillhouse Capital and Harvest Investments. Horizon is developing autopilot systems for self-driving vehicles, self-navigating consumer devices, and neural network chips. Wendell Brooks, Intel SVP and president of Intel Capital, said: “By 2020, every autonomous vehicle on the road will create 4 TB of data per day. A million self-driving cars will create the same amount of data every day as 3 billion people. As Intel transitions to a data company, Intel Capital is actively investing in startups across the technology spectrum that can help expand the data ecosystem and pathfind important new technologies.” (A quick check of the arithmetic in that quote appears after this list.)
  • Innoviz Technologies, an Israel-based developer of LiDAR sensing technology for autonomous vehicles, raised $73 million in Series B funding. Investors include Samsung Catalyst and SoftBank Ventures Korea.
  • Zume Pizza, the Silicon Valley robotic pizza making startup, raised $48 million in a Series B funding. Investors in the round were not detailed. Zume is already delivering pizzas in Silicon Valley. It uses an assembly line of robots to flatten dough into circles, spread sauce and cheese, and slide the pies into and out of an 800 degree oven. Pizzas finish cooking in ovens inside delivery trucks.
  • Momenta AI, a Beijing autonomous driving tech startup using machine vision (rather than LiDAR), raised $46 million in a Series B round led by NIO Capital, Sequoia Capital China, Hillhouse Capital and Cathay Innovation Fund.
  • Wonder Workshop, previously named Play-i, a Silicon Valley and Chinese educational robot startup, raised $41 million in a Series C round from a series of investors including Tencent, TAL Education Group, MindWorks Ventures, Madrona Venture Group, Softbank Korea, VTRON Group, TCL Capital, Sinovation Ventures, Bright Success, WI Harper, and CRV. Wonder Workshop’s Dot and Dash robots are in use by thousands of student groups and schools around the world. “We founded Wonder Workshop to provide all children — girls and boys of all ages — with the skills needed to succeed in the future economy. This round of financing will allow us to continue on our mission to inspire the inventors of tomorrow,” said Vikas Gupta, CEO.
  • FogHorn Systems, a Silicon Valley smart manufacturing software startup, raised $30 million in a Series B round led by Intel Capital and Saudi Aramco Energy Ventures with new investor Honeywell Ventures and all previous investors participating, including Series A investors March Capital Partners, GE, Dell Technologies Capital, Robert Bosch Venture Capital, Yokogawa Electric Corporation, Darling Ventures and seed investor The Hive.
  • Nanotronics Imaging, an Ohio testing solutions provider, raised $30 million in a Series D funding led by Investment Corp of Dubai and Peter Thiel’s Founders Fund.
  • Wandercraft, a French rehabilitation exoskeleton startup, raised $17.8 million in a Series B round from XAnge, Innovation Capital, Idinvest Partners, Cemag Invest and BPIFrance.
  • Ever AI, a San Francisco startup developing facial recognition, announced that it had raised $16 million in a Series B funding led by Icon Ventures with participation from Felicis Ventures and Khosla Ventures. On the same day SoftBank announced its intention to use Ever AI’s facial recognition platform as a new feature for its Pepper robot.
  • Built Robotics, a San Francisco startup developing a self-driving kit for construction equipment – a self-driving excavator – raised $15 million in a Series A round led by NEA (New Enterprise Associates) with participation by Founders Fund, Lemnos and angel investors including Eric Stromberg, Maria Thomas, Carl Bass, Edward Lando and Justin Kan.
  • Veo Robotics, a Cambridge, MA-based vision systems startup, raised $12 million in a Series A funding. Lux Capital and GV led the round, and were joined by unnamed investors including Next47.
  • Riverfield Surgical Robot Lab, a Japanese startup, raised $10 million in a Series B round led by Toray Engineering and including SBI Investment, Jafco and Beyond Next Ventures.
  • Beijing Beehive Agriculture Technology Co. raised $9.4 million in an A funding round led by Tendence Capital and other unnamed sources. The funding marks the company’s second financing round after it raised around $5 million from e-commerce giant JD.com Inc. and others in its pre-A funding.
  • Titan Medical, a Canadian robotic single-port surgery device developer, raised $9.1 million: $2.6 million by floating 13.4 million common shares in a private placement to more than a dozen robotic surgeons in the US and Canada and an additional $6.5 million from the early exercise of purchase warrants for 42.6 million common shares.
  • Robart, an Austria-based developer of AI and navigation intelligence for autonomous consumer robots, raised $7.2 million in a Series B funding. CM-CIC Innovation led the round, and was joined by Innovacom, Robert Bosch Venture Capital and SEB Alliance.
  • Nileworks, a Japanese drone crop spraying startup, raised $7.1 million from a group of Japanese investors including public-private partnership the Innovation Network Corporation of Japan, agricultural chemical maker Kumiai Chemical Industry Co., Sumitomo Corporation and its subsidiary Sumitomo Chemical Co., the Japanese National Federation of Agricultural Co-operative Associations, and The Norinchukin Bank. When the product goes on sale in 2019 the company will target rice farmers in Japan.
  • Impossible Objects, an Illinois provider of 3D printing tech, raised $6.4 million in a Series A funding led by OCA Ventures and joined by IDEA Fund Partners, Mason Avenue Investments, Huizenga Capital Management and Inflection Equity Partners.
  • AeroFarms, the indoor vertical farming startup whose $34 million raise was reported earlier this year, rounded out its $40 million Series D funding with $6 million from Ikea Group and chef David Chang of the Momofuku Group. AeroFarms just built its 9th indoor farm in Newark, NJ.
  • Blickfeld, a Munich-based LiDAR maker for autonomous driving, raised $4.25 million in seed funding. Investors include Unternehmertum Venture Capital Partners, High-Tech Gruenderfonds, Fluxunit – OSRAM Ventures and Tengelmann Ventures.
  • Realtime Robotics, a Boston motion planning and control startup, raised $2 million in seed funding from SPARX Group, Scrum Ventures, and Toyota AI Ventures.
  • Vitae Industries, a Rhode Island pharma dispensing robot maker, raised $1.8 million in seed funding from Lerer Hippeau Ventures and Slater Technology Fund. Other investors in the round included Techstars, BoxGroup, Compound and Founder Collective.
  • Acutronic Robotics, a Swiss startup that last year acquired Spanish component maker Erle Robotics, raised an undisclosed amount from Sony in a Series A funding round. Sony will also adopt Acutronic’s Hardware Robot Operating System (H-ROS), for use in its own robotics division. Sony’s strategic use of the H-ROS platform in its own operations, and DARPA’s prior investment, suggest there’s a lot of interest in H-ROS for unifying legacy robotic systems from old-line robot providers.
  • Bharati Robotic Systems, a Pune, India-based industrial robotic cleaning startup, raised an undisclosed amount of funding from its existing investors – Society For Innovation and Entrepreneurship (SINE, IITB Incubator), and other angel investors.
  • IUVO, an Italian exoskeleton and wearable prosthetics spin-off from the Scuola Superiore Sant’Anna, has received a joint investment from robot manufacturer Comau and Össur, a global provider of non-invasive orthopedics. No financial amounts were provided however Comau and Össur will now hold a majority share of IUVO. “This joint venture represents a key step toward the creation of wearable robotic exoskeletons that can enhance human mobility and quality of life,” emphasized Mauro Fenzi, CEO of Comau. “By uniting the know-how and enabling technologies of the various partners, we are in a unique position to extend the use of robotics beyond manufacturing and toward a truly progressive global reality. I believe the differentiating factor of a project like IUVO is the combination of Comau’s automation skills and Össur’s extensive experience in bionics and bracing to enable the production of products, such as the exoskeletons, and to be able to demonstrate the benefits of robotics”.
  • Ultimaker, a manufacturer of professional desktop 3D printers and employer of over 300, raised an undisclosed amount from NPM Capital, a Benelux investment company.
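
As promised above, here is a minimal sanity check of the data arithmetic in the Intel quote; the per-person figure is the implied assumption, not a number Intel states:

```python
av_data_per_day_tb = 4                  # TB per autonomous vehicle per day
fleet = 1_000_000                       # one million self-driving cars
people = 3_000_000_000                  # three billion people

fleet_data_tb = av_data_per_day_tb * fleet
per_person_gb = fleet_data_tb * 1e12 / people / 1e9
print(f"implied data per person: {per_person_gb:.2f} GB/day")  # ~1.33 GB/day
```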

Acquisitions

  • Delphi Automotive, a UK Tier 1 automotive supplier, acquired nuTonomy, a Boston self-driving ride sharing startup, for $450 million. nuTonomy, an MIT spin-off with operations in Boston and Singapore and funding from Ford, has grown to 100 employees, including 70 engineers and scientists. The acquisition will double Delphi’s autonomous driving applications team.
  • HTI Cybernetics, a Michigan industrial robotics integrator and contract manufacturer, has been acquired by Chongqing Nanshang Investment Group for around $50 million. HTI provides robotic welding systems to the auto industry and also has a contract welding services facility in Mexico.
  • Ridecell, a San Francisco mobility platform provider of car sharing, ride sharing and autonomous vehicle software, has acquired Auro Robotics, a Silicon Valley self-driving vehicle startup with shuttles operating on the Santa Clara University campus, for an undisclosed amount, which The Information estimates to be around $20 million.
  • Applied Automation, a UK manufacturer of automation and control equipment, is expanding to become an integrator of industrial and collaborative robots and, through the acquisition of PTG Precision Engineers, has gained talented engineering manpower to augment its sales and integration efforts. PTG is located across the street from Applied. No financial details about the acquisition were provided by either party.
  • General Motors acquired Pasadena-based Strobe, a vision systems startup developing an optical micro-oscillator for LiDAR timing, navigation and sensing applications, for an undisclosed amount. Strobe will join the Cruise Automation self-driving group.
  • Boeing is acquiring Aurora Flight Sciences, a 550 employee Virginia-based UAS provider, for an undisclosed amount. “Since its inception, Aurora has been focused on the development of innovative aircraft that leverage autonomy to make aircraft smarter,” said John Langford, Aurora Flight Sciences founder and chief executive officer. “As an integral part of Boeing, our pioneered technologies of long-endurance aircraft, robotic co-pilots, and autonomous electric VTOLs will be transitioned into world-class products for the global infrastructure.”

IPOs

  • Restoration Robotics, a San Jose, Calif.-based company focused on robotics that assist doctors in hair transplant procedures, raised $25 million in an upsized offering of 3.6 million shares priced at $7. In 2016, the company posted revenue of $15.6 million and a loss of $21.8 million. HAIR is now listed on the NASDAQ stock exchange.
  • Altair Engineering, a Troy, Mich.-based engineering software maker, raised $156 million in an IPO of 12 million shares at $13. The stock (ALTR) is now trading on Nasdaq. Altair develops simulation and design software for industrial applications, automobiles, consumer goods and all types of robotics.
  • Nilfisk Holdings, a Danish manufacturer of industrial cleaning machines including a new line of autonomous cleaners, was spun off from NKT A/S, a Danish conglomerate, and went public on the NASDAQ Copenhagen exchange as NLFSK. Financial details were not disclosed.

Brain surgery: The robot efficacy test?

An analysis by Stanford researchers shows that the use of robot-assisted surgery to remove kidneys wasn’t always more cost-effective than using traditional laparoscopic methods.
Master Video/Shutterstock

The internet hummed last week with reports that “Humans Still Make Better Surgeons Than Robots.” Stanford University Medical Center set off the tweetstorm with its seemingly scathing report on robotic surgery. Reading the research, which covered 24,000 patients with kidney cancer, I concluded that the problem lay with humans overcharging patients rather than with any flaw in the technology. In fact, the study praised robotic surgery for complicated procedures and suggested the fault lay with hospitals unnecessarily pushing robotic surgery for simple operations over conventional methods, which led to “increases in operating times and cost.”

Dr. Benjamin Chung, the author of the report, stated that the expenses were due to either “the time needed for robotic operating room setup” or the surgeon’s “learning curve” with the new technology. Chung defended the use of robotic surgery by claiming that “surgical robots are helpful because they offer more dexterity than traditional laparoscopic instrumentation and use a three-dimensional, high-resolution camera to visualize and magnify the operating field. Some procedures, such as the removal of the prostate or the removal of just a portion of the kidney, require a high degree of delicate maneuvering and extensive internal suturing that render the robot’s assistance invaluable.”

Chung’s concern was due to the dramatic increase in hospitals selling robotic-assisted surgeries to patients rather than more traditional methods for kidney removals. “Although the laparoscopic procedure has been standard care for a radical nephrectomy for many years, we saw an increase in the use of robotic-assisted approaches, and by 2015 these had surpassed the number of conventional laparoscopic procedures,” explains Chung. “We found that, although there was no statistical difference in outcome or length of hospital stay, the robotic-assisted surgeries cost more and had a higher probability of prolonged operative time.”

The dexterity and precision of robotic instruments have been proven in live operating theaters for years, as well as in a multitude of concept videos on the internet of fruit being autonomously stitched up. Dr. Joan Savall, also of Stanford, developed a robotic system that is even capable of performing (unmanned) brain surgery on a live fly. For years, medical students have been ripping the heads off drosophila with tweezers in the hopes of learning more about the insect’s anatomy. Instead, Savall’s machine gently follows the fly using computer vision to precisely target its thorax; literally a moving bullseye the size of a period. The robot is so careful that the insect is unfazed and flies off after the procedure. Clearly, the robot is quicker and more exacting than even the most careful surgeon. According to the journal Nature Methods, the system can operate on 100 flies an hour.

Last week, Dr. Dennis Fowler of Columbia University and CEO of Platform Imaging said that he imagines a future in which the surgeon programs the robot to finish the procedure and stitch up the patient. He said senior surgeons already pass such mundane tasks to their medical students, ‘so why not a robot?’ Platform Imaging is an innovative startup that aims to reduce the amount of personnel or equipment a hospital needs when performing laparoscopic surgeries. Long-term, it plans to add snake robots to its flexible camera to give surgeons the greatest possible maneuverability. In addition to the obvious health benefits to the patient, robotic surgeries like Dr. Fowler’s will reduce the number of workplace injuries to laparoscopic surgeons. According to a University of Maryland study, 87% of surgeons who perform laparoscopic procedures complain of eye strain, hand, neck, back and leg pain, headaches, finger calluses, disc problems, shoulder muscle spasms and carpal tunnel syndrome. Many times these injuries are so debilitating that they lead to early retirement. The author of the report, Dr. Adrian Park, explains: “In laparoscopic surgery, we are very limited in our degrees of movement, but in open surgery we have a big incision, we put our hands in, we’re directly connected with the target anatomy. With laparoscopic surgery, we operate by looking at a video screen, often keeping our neck and posture in an awkward position for hours. Also, we’re standing for extended periods of time with our shoulders up and our arms out, holding and maneuvering long instruments through tiny, fixed ports.” In Dr. Fowler’s view, robotic surgery is a game changer because it expands the longevity of a physician’s career.

At the Children’s National Health System in Washington, D.C., the Smart Tissue Autonomous Robot (STAR) provided a sneak peek at the future of surgery. Using advanced 3D imaging systems and precise force-sensing instruments, STAR was able to autonomously stitch up soft tissue samples of a living pig with sub-millimeter accuracy, far greater than that of even the most precise human surgeons. According to the study published in the journal Science Translational Medicine, there are 45 million soft tissue surgeries performed each year in the United States.

Dr. Peter Kim, STAR’s creator, says: “Imagine that you need a surgery, or your loved one needs a surgery. Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?” Dr. Kim adds, “Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit.”

“Now driverless cars are coming into our lives,” explains Dr. Kim. “It started with self-parking, then a technology that tells you not to go into the wrong lane. Soon you have a car that can drive by itself.” Similarly, Dr. Kim and Dr. Fowler envision a time in the near future when surgical robots could go from assisting humans to being overseen by humans. Dr. Kim says they may one day take over. After all, Dr. Kim has “programmed the best surgeon’s techniques, based on consensus and physics, into the machine.”

The idea of full autonomy in the operating room and on the road raises a litany of ethical concerns, such as the acceptable failure rate of machines. The value proposition for self-driving cars is very clear – road safety. In 2015, there were approximately 35,000 road fatalities in the US; self-driving cars promise to reduce that figure dramatically. What remains unclear is the new acceptable rate of fatalities for machines. Professor Amnon Shashua, of Hebrew University and founder of Mobileye, has struggled with this dilemma for years. “If you drop 35,000 fatalities down to 10,000 – even though from a rational point of view it sounds like a good thing, society will not live with that many people killed by a computer,” explains Dr. Shashua. While everyone would agree that zero failure is the most desired outcome, in reality, Shashua says, “this will never happen.” He elaborates, “What you need to show is that the probability of an accident drops by two to three orders of magnitude. If you drop [35,000 fatalities] down to 200, and those 200 are because of computer errors, then society will accept these robotic cars.”
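
Shashua’s “orders of magnitude” framing is easy to make concrete. A minimal check of the arithmetic, using the figures quoted above:

```python
import math

baseline = 35_000   # approximate annual US road fatalities, 2015
rational = 10_000   # fewer deaths, but socially unacceptable per Shashua
accepted = 200      # his estimate of what society would tolerate

print(f"{math.log10(baseline / rational):.2f} orders of magnitude")  # ~0.54
print(f"{math.log10(baseline / accepted):.2f} orders of magnitude")  # ~2.24
```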

Dr. Iyad Rahwan of MIT is much more to the point: “If we cannot engender trust in the new system, we risk the entire autonomous vehicle enterprise.” According to his research, “Most people want to live in a world where cars will minimize casualties. But everybody wants their own car to protect them at all costs.” Dr. Rahwan is referring to the old trolley problem – does the machine save its driver or the pedestrian when faced with a choice? Dr. Rahwan declares, “This is a big social dilemma. Who will buy a car that is programmed to kill them in some instances? Who will insure such a car?” Last year at the Paris Motor Show, Christoph von Hugo of Mercedes-Benz emphatically answered: “If you know you can save at least one person, at least save that one. Save the one in the car.”

The ethics of unmanned systems and more will be discussed at the next RobotLab forum on “The Future of Autonomous Cars” with Steve Girsky, formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC, RSVP.

The senate’s automated driving bill could squash state authority

My previous post on the House and Senate automated driving bills (HB 3388 and SB 1885) concluded by noting that, in addition to the federal government, states and the municipalities within them also play an important role in regulating road safety. These functions include, among others, designing and maintaining roads, setting and enforcing traffic laws, licensing and punishing drivers, registering and inspecting vehicles, requiring and regulating automotive insurance, and enabling victims to recover from the drivers or manufacturers responsible for their injuries.

Unfortunately, the Senate bill could preempt many of these functions. The House bill contains modest preemption language and a savings clause that admirably tries to clarify the line between federal and state roles. The Senate bill, in contrast, currently contains a breathtakingly broad preemption provision that was proposed in committee markup by, curiously, a Democratic senator.

(I say “currently” for two reasons. First, a single text of the bill is not available online; only the original text plus the marked-up texts for the Senate Commerce Committee’s amendments to that original have been posted. Second, whereas HB 3388 has passed the full House, SB 1885 is still making its way through the Senate.)

Under one of these amendments to the Senate bill, “[n]o State or political subdivision of a State may adopt, maintain, or enforce any law, rule, or standard regulating the design, construction, or performance of a highly automated vehicle or automated driving system with respect to any of the safety evaluation report subject areas.” These areas are system safety, data recording, cybersecurity, human-machine interface, crashworthiness, capabilities, post-crash behavior, accounting for applicable laws, and automation function.

A savings provision like the one in the House bill was in the original Senate bill but apparently dropped in committee.

A plain reading of this language suggests that all kinds of state and local laws would be void in the context of automated driving. Restrictions on what kind of data can be collected by motor vehicles? Fine for conventional driving, but preempted for automated driving. Penalties for speeding? Fine for conventional driving, but preempted for automated driving. Deregistration of an unsafe vehicle? Same.

The Senate language could have an even more subtly dramatic effect on state personal injury law. Under existing federal law, FMVSS compliance “does not exempt a person from liability at common law.” (The U.S. Supreme Court has fabulously muddied what this provision actually means by, in two cases, reaching essentially opposite conclusions about whether a jury could find a manufacturer liable under state law for injuries caused by a vehicle design that was consistent with applicable FMVSS.)

The Senate bill preserves this statutory language (whatever it means) and even adds a second sentence providing that “nothing” in the automated driving preemption section “shall exempt a person from liability at common law or under a State statute authorizing a civil remedy for damages or other monetary relief.”

Although this would seem to reinforce the power of a jury to determine what is reasonable in a civil suit, the Senate bill makes this second sentence “subject to” the breathtakingly broad preemption language described above. On its plain meaning, this language accordingly restricts rather than respects state tort and product liability law.

This is confusing (whether intentionally or unintentionally), so consider a stylized illustration:

1) You may not use the television.

2) Subject to (1), you may watch The Simpsons.

This language probably bars you from watching The Simpsons (at least on the television). If the intent were instead to permit you to do so, the language would be:

1) You may not use the television.

2) Notwithstanding (1), you may watch The Simpsons.

The amendment as proposed could have said “notwithstanding” instead of “subject to.” It did not.

I do not know the intent of the senators who voted for this automated driving bill and for this amendment to it. They may have intended a result other than the one suggested by their language. Indeed, they may have even addressed these issues without recording the result in the documents subsequently released. If so, they should make these changes, or they should make their changes public.

And if not, everyone from Congress to City Hall should consider what this massive preemption would mean.

Happy Halloween!

Happy Halloween everyone! Here’s a selection of this year’s robot videos and tweets to get you in the mood.


Automated Ball Return System For Driving Ranges

Automated Managed Services roll out their upgraded automated ball return system, which washes golf balls and transports them back to the dispenser

Established in late 2013, Automated Managed Services (AMS) have been offering driving range robots as an outfield maintenance solution to golf facilities. Their increasing success continues to reshape the idea of what golf maintenance should look like, as they roll out their newly redesigned ball return system across new and existing AMS locations.

The automated ball return system is responsible for washing golf balls and transporting them back to the dispenser. It works in conjunction with the robot ball picker that collects the balls out on the outfield. Once the robot is full, it returns to its base and drops the balls into the return system. The process is fully automatic: from the time the balls are collected to the time they are back at the dispenser, no human interaction is involved.

The design itself consists of a stainless steel ball drop zone shaped like half a diamond. This is installed into the ground and is what the robot drops the balls into. The half-diamond shape funnels the balls towards the centre. At the base of the drop zone container is a slider that moves back and forth; with each stroke, balls drop into a u-bend-shaped cage that allows debris such as small stones to fall away, leaving the balls to roll into a connected green transportation pipe, where compressed air pushes them back to the ball dispenser. During this transportation, water is introduced and the balls are cleaned. The return system is controlled via a control panel, usually located alongside the ball dispenser unit together with the air compressor for the transportation pipe.
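
For readers who think in code, the cycle described above amounts to a fixed pipeline of stages. This is purely an illustrative sketch of that sequence, not AMS’s control software; the stage names are invented:

```python
from enum import Enum, auto

class Stage(Enum):
    DROP_ZONE = auto()    # robot empties balls into the half-diamond
    SLIDER = auto()       # oscillating slider meters the balls out
    DEBRIS_CAGE = auto()  # u-bend cage lets small stones fall away
    TRANSPORT = auto()    # compressed air pushes balls up the pipe
    WASH = auto()         # water added in the pipe cleans the balls
    DISPENSER = auto()    # balls arrive back at the dispenser

def run_cycle(ball_count: int) -> None:
    """Trace one batch of balls through every stage, hands-free."""
    for stage in Stage:
        print(f"{ball_count} balls -> {stage.name}")

run_cycle(150)
```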

The design and development of the new system were undertaken by AMS owner Philip Sear and technical director Sam Daybell. Philip had this to say about the ball return system:

“Research and development are a key component of our technology infrastructure, so we always strive to improve our products and services. With this in mind, the new design is definitely more efficient in processing the balls and returning them to the dispenser. An example of this can be seen in the modification of how we use water in the system: we decided to only introduce water into the transportation pipe, after previously also having it in the ball drop zone itself. This ensures water is used more resourcefully and the balls are cleaned effectively. Overall we are very pleased with the new design, as it continues our sustainability in offering a solution that streamlines resources and is cost-effective for our clients.”

The new return system is currently being installed at Four Ashes Golf Centre in Solihull, which has been utilising robot technology at its facility for the past four years. It is also part of a new installation being undertaken at Grimsby Golf Club, and was installed at High Legh Golf Club in Knutsford.

About AMS Robot Technology
Automated Managed Services provides golf ball and grass management for driving range facilities, designed to help streamline resources, reduce costs and improve the overall health of golf driving range outfields. If you would like more information about AMS’s Outfield Robots, please contact:

Natalie St Hill
Tel: 01462 676 222
natalie@automatedmanagedservices.com
www.automatedmanagedservices.com


Can artificial intelligence learn to scare us?

Just in time for Halloween, a research team from the MIT Media Lab’s Scalable Cooperation group has introduced Shelley: the world’s first artificial intelligence-human horror story collaboration.

Shelley, named for English writer Mary Shelley — best known as the author of “Frankenstein; or, The Modern Prometheus” — is a deep-learning powered artificial intelligence (AI) system that was trained on over 140,000 horror stories from Reddit’s infamous r/nosleep subreddit. She lives on Twitter, where every hour, @shelley_ai tweets out the beginning of a new horror story and the hashtag #yourturn to invite a human collaborator. Anyone is welcome to reply to the tweet with the next part of the story; Shelley then replies with the part after that, and so on. The results are weird, fun, and unpredictable horror stories that represent both creativity and collaboration — traits that explore the limits of artificial intelligence and machine learning.

“Shelley is a combination of a multi-layer recurrent neural network and an online learning algorithm that learns from crowd’s feedback over time,” explains Pinar Yanardag, the project’s lead researcher. “The more collaboration Shelley gets from people, the more and scarier stories she will write.”

Shelley starts stories based on the AI’s own learning dataset, but she responds directly to additions to the story from human contributors — which, in turn, adds to her knowledge base. Each completed story is then collected on the Shelley project website.
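
To make the architecture concrete: a character-level recurrent language model of the kind the team describes can be sketched in a few lines. This is an illustrative toy, not Shelley’s actual code; the model size, corpus, and sampling temperature are all assumptions:

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Multi-layer recurrent network that predicts the next character."""
    def __init__(self, vocab_size: int, hidden: int = 256, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

@torch.no_grad()
def sample(model, stoi, itos, seed: str, length: int = 140, temp: float = 0.8):
    """Generate a tweet-sized story continuation from a seed string."""
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    out, state = model(idx)
    chars = list(seed)
    for _ in range(length):
        probs = torch.softmax(out[0, -1] / temp, dim=-1)
        nxt = torch.multinomial(probs, 1)
        chars.append(itos[nxt.item()])
        out, state = model(nxt.view(1, 1), state)
    return "".join(chars)

# Toy usage; the real system trains on the 140,000 r/nosleep stories.
corpus = "the house was quiet. too quiet. something moved upstairs."
stoi = {c: i for i, c in enumerate(sorted(set(corpus)))}
itos = {i: c for c, i in stoi.items()}
print(sample(CharRNN(len(stoi)), stoi, itos, seed="the house"))  # untrained: gibberish
```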

“Shelley’s creative mind has no boundaries,” the research team says. “She writes stories about a pregnant man who woke up in a hospital, a mouth on the floor with a calm smile, an entire haunted town, a faceless man on the mirror – anything is possible!”

One final note on Shelley: The AI was trained on a subreddit filled with adult content, and the researchers have limited control over her — so parents beware.

Robohub Podcast #246: Smart Swarms, with Vijay Kumar

In this episode, Jack Rasiel interviews Vijay Kumar, Professor and Dean of Engineering at the University of Pennsylvania. Kumar discusses the guiding ideas behind his research on micro unmanned aerial vehicles, gives his thoughts on the future of robotics in the lab and field, and speaks about setting realistic expectations for robotics technology.


Vijay Kumar

Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania.

Dr. Kumar received his Bachelor of Technology degree from the Indian Institute of Technology, Kanpur and his Ph.D. from The Ohio State University in 1987. He has been on the Faculty in the Department of Mechanical Engineering and Applied Mechanics with a secondary appointment in the Department of Computer and Information Science at the University of Pennsylvania since 1987. In his time at the university, Dr. Kumar has held numerous positions including director of the GRASP Laboratory, Chairman of the Department of Mechanical Engineering and Applied Mechanics, and Deputy Dean for Education in the School of Engineering and Applied Science. From 2012 to 2013, he served as the assistant director of robotics and cyber physical systems at the White House Office of Science and Technology Policy.


Congress’ automated driving bills are both more and less than they seem

Bills being considered by Congress deserve our attention—but not our full attention. To wit: When it comes to safety-related regulation of automated driving, existing law is at least as important as the bills currently in Congress (HB 3388 and SB 1885). Understanding why involves examining all the ways that the developer of an automated driving system might deploy its system in accordance with federal law as well as all the ways that governments might regulate that system. And this examination reveals some critical surprises.

As automated driving systems get closer to public deployment, their developers are closely evaluating how the full set of Federal Motor Vehicle Safety Standards (FMVSS) will apply to these systems and to the vehicles on which they are installed. Rather than specifying a comprehensive regulatory framework, these standards impose requirements on only some automotive features and functions. Furthermore, manufacturers of vehicles and of components thereof self-certify that their products comply with these standards. In other words, unlike its European counterparts (and a small number of federal agencies overseeing products deemed more dangerous than motor vehicles), the National Highway Traffic Safety Administration (NHTSA) does not prospectively approve most of the products it regulates.

There are at least seven (!) ways that the developer of an automated driving system could conceivably navigate this regulatory regime.

First, the developer might design its automated driving system to comply with a restrictive interpretation of the FMVSS. The attendant vehicle would likely have conventional braking and steering mechanisms as well as other accoutrements for an ordinary human driver. (These conventional mechanisms could be usable, as on a vehicle with only part-time automation, or they might be provided solely for compliance.) NHTSA implied this approach in its 2016 correspondence with Google, while another part of the US Department of Transportation even highlighted those specific FMVSS provisions that a developer would need to design around. Once the developer self-certifies that its system in fact complies with the FMVSS, it can market it.

Second, the developer might ask NHTSA to clarify the agency’s understanding of these provisions with a view toward obtaining a more accommodating interpretation. Previously—and, more to the point, under the previous administration—NHTSA was somewhat restrictive in its interpretation, but a new chief counsel might reach a different conclusion about whether and how the existing standards apply to automated driving. In that case, the developer could again simply self-certify that its system indeed complies with the FMVSS.

Third, the developer might petition NHTSA to amend the FMVSS to more clearly address (or expressly abstain from addressing) automated driving systems. This rulemaking process would be lengthy (measured in years rather than months), but a favorable result would give the developer even more confidence in self-certifying its system.

Fourth, the developer could lobby Congress to shorten this process—or preordain the result—by expressly accommodating automated driving systems in a statute rather than in an agency rule. This is not, by the way, what the bills currently in Congress would do.

Fifth, the developer could request that NHTSA exempt some of its vehicles from portions of the FMVSS. This exemption process, which is prospective approval by another name, requires the applicant to demonstrate that the safety level of its feature or vehicle “at least equals the safety level of the standard.” Under existing law, the developer could exempt no more than 2,500 new vehicles per year. Notably, however, this could include heavy trucks as well as passenger cars.

Sixth, the developer could initially deploy its vehicles “solely for purposes of testing or evaluation” without self-certifying that those vehicles comply with the FMVSS. Although this exception is available only to established automotive manufacturers, a new or recent entrant could partner with or outright buy one of the companies in that category. Many kinds of large-scale pilot and demonstration projects could be plausibly described as “testing or evaluation,” particularly by companies that are comfortable losing money (or comfortable describing their services as “beta”) for years on end.

Seventh, the developer could ignore the FMVSS altogether. Under federal law, “a person may not manufacture for sale, sell, offer for sale, introduce or deliver for introduction in interstate commerce, or import into the United States, any [noncomplying] motor vehicle or motor vehicle equipment.” But under the plain meaning of this provision (and a related definition of “interstate commerce”), a developer could operate a fleet of vehicles equipped with its own automated driving system within a state without certifying that those vehicles comply with the FMVSS.

This is the background law against which Congress might legislate—and against which its bills should be evaluated.

Both bills would dramatically expand the number of exemptions that NHTSA could grant to each manufacturer, eventually reaching 100,000 per year in the House version. Some critics of the bills have suggested that this would give free rein to manufactures to deploy tens of thousands of automated vehicles without any prior approval.

But considering this provision in context provides two key insights. First, automated driving developers may already be able to lawfully deploy tens of thousands of their vehicles without any prior approval—by designing them to comply with the FMVSS, by claiming testing or evaluation, or by deploying an in-state service. Second, the exemption process gives NHTSA far more power than it otherwise has: The applicant must convince the agency to affirmatively permit it to market its system.

Both bills would also require the manufacturer of an automated driving system to submit a “safety evaluation report” to NHTSA that “describes how the manufacturer is addressing the safety of such vehicle or system.” This requirement would formalize the safety assessment letters that NHTSA encouraged in its 2016 and 2017 automated vehicle policies. These three frameworks all evoke my earlier proposal for what I call the “public safety case,” wherein an automated driving developer tells the rest of us what they are doing, why they think it is reasonably safe, and why we should believe them.

Unsurprisingly, I think this is a fine idea. It encourages innovation in safety assurance and regulation, informs regulators, and—if disclosure is meaningful—helps educate the public at large. Congress could strengthen these provisions as currently drafted, and it could give NHTSA the resources needed to effectively engage with these reports. Regardless, in evaluating the bills, it is important to understand that these provisions increase rather than decrease what an automated driving system developer must do under federal law. They are an addition rather than an alternative to each of the seven pathways described above.

Both bills would also exclude heavy trucks and buses from their definitions of automated vehicle. This exclusion, added at the behest of labor groups concerned about the eventual implications of commercial truck automation, means that NHTSA cannot exempt tens of thousands of heavy vehicles per manufacturer from a safety standard. But each truck manufacturer can still seek to exempt up to 2,500 vehicles per year—if such an exemption is even required. And, depending on how language relating to the safety evaluation reports is interpreted, this exemption might even relieve automated truck manufacturers of the obligation to submit these reports.

Finally, these bills largely preserve NHTSA’s existing regulatory authority—and that authority involves much more than making rules and granting exemptions to those rules. Crucially, the agency can conduct investigations and pursue recalls—even if a vehicle fully complies with the applicable FMVSS. This is because ensuring motor vehicle safety requires more than satisfying specific safety standards. And this broader definition of safety—“the performance of a motor vehicle or motor vehicle equipment in a way that protects the public against unreasonable risk of accidents occurring because of the design, construction, or performance of a motor vehicle, and against unreasonable risk of death or injury in an accident, and includes nonoperational safety of a motor vehicle”—gives NHTSA great power.

States and the municipalities within them also play an important role in regulating road safety—and my next post considers the effect of the Senate bill in particular on this state and local authority.

New RoboBee flies, dives, swims, and explodes out of the water

New, hybrid RoboBee can fly, dive into water, swim, propel itself back out of water, and safely land. The RoboBee is retrofitted with four buoyant outriggers and a central gas collection chamber. Once the RoboBee swims to the surface, an electrolytic plate in the chamber converts water into oxyhydrogen, a combustible gas fuel. Credit: Wyss Institute at Harvard University

By Leah Burrows

We’ve seen RoboBees that can fly, stick to walls, and dive into water. Now, get ready for a hybrid RoboBee that can fly, dive into water, swim, propel itself back out of water, and safely land.

New floating devices allow this multipurpose air-water microrobot to stabilize on the water’s surface before an internal combustion system ignites to propel it back into the air.

This latest-generation RoboBee, which is 1,000 times lighter than any previous aerial-to-aquatic robot, could be used for numerous applications, from search-and-rescue operations to environmental monitoring and biological studies.

The research is described in Science Robotics. It was led by a team of scientists from the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). 

“This is the first microrobot capable of repeatedly moving in and through complex environments,” says Yufeng Chen, Ph.D., currently a Postdoctoral Fellow at the Wyss Institute who was a graduate student in the Microrobotics Lab at SEAS when the research was conducted and is the first author of the paper. “We designed new mechanisms that allow the vehicle to directly transition from water to air, something that is beyond what nature can achieve in the insect world.”

Designing a millimeter-sized robot that moves in and out of water has numerous challenges. First, water is 1,000 times denser than air, so the robot’s wing flapping speed will vary widely between the two mediums. If the flapping frequency is too low, the RoboBee can’t fly. If it’s too high, the wing will snap off in the water.

By combining theoretical modeling and experimental data, the researchers found the Goldilocks combination of wing size and flapping rate, scaling the design to allow the bee to operate repeatedly in both air and water. Using this multimodal locomotive strategy, the robot flaps its wings at 220 to 300 hertz in air and nine to 13 hertz in water.
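
A rough scaling argument makes these numbers plausible. If flapping force grows roughly with fluid density times frequency squared (my simplifying assumption, not the team’s model), then keeping forces comparable across media means dividing the aerial frequency by the square root of the density ratio:

```python
import math

rho_ratio = 1000.0       # water is ~1,000x denser than air (stated above)
f_air = (220 + 300) / 2  # mid-range aerial flapping frequency, Hz
f_water = f_air / math.sqrt(rho_ratio)
print(f"predicted aquatic flapping frequency: {f_water:.1f} Hz")
# ~8.2 Hz, close to the nine to 13 Hz the researchers report.
```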

Another major challenge the team had to address: at the millimeter scale, the water’s surface might as well be a brick wall. Surface tension is more than 10 times the weight of the RoboBee and three times its maximum lift. Previous research demonstrated how impact and sharp edges can break the surface tension of water to facilitate the RoboBee’s entry, but the question remained: How does it get back out again?

To solve that problem, the researchers retrofitted the RoboBee with four buoyant outriggers — essentially robotic floaties — and a central gas collection chamber. Once the RoboBee swims to the surface, an electrolytic plate in the chamber converts water into oxyhydrogen, a combustible gas fuel.
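
As a hedged back-of-the-envelope on the electrolysis step: Faraday’s law fixes how much oxyhydrogen a given electric charge can generate (two electron charges per H2 molecule, one O2 for every two H2). The current and duration below are invented for illustration; the paper does not give them here:

```python
F = 96_485.0  # Faraday constant, C/mol

def oxyhydrogen_ml(current_a: float, seconds: float,
                   temp_k: float = 293.0, pressure_pa: float = 101_325.0) -> float:
    """Volume of oxyhydrogen (2:1 H2/O2 mix) produced, in milliliters."""
    charge = current_a * seconds
    mol_h2 = charge / (2 * F)  # two electrons per H2 molecule
    mol_o2 = mol_h2 / 2        # stoichiometry of 2 H2O -> 2 H2 + O2
    R = 8.314                  # ideal gas constant, J/(mol*K)
    return (mol_h2 + mol_o2) * R * temp_k / pressure_pa * 1e6

print(f"{oxyhydrogen_ml(0.02, 600):.2f} mL")  # e.g. 20 mA for 10 min: ~2.2 mL
```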

“Because the RoboBee has a limited payload capacity, it cannot carry its own fuel, so we had to come up with a creative solution to exploit resources from the environment,” says Elizabeth Farrell Helbling, graduate student in the Microrobotics Lab and co-author of the paper. “Surface tension is something that we have to overcome to get out of the water, but is also a tool that we can utilize during the gas collection process.”

The gas increases the robot’s buoyancy, pushing the wings out of the water, and the floaties stabilize the RoboBee on the water’s surface. From there, a tiny, novel sparker inside the chamber ignites the gas, propelling the RoboBee out of the water. The robot is designed to passively stabilize in air, so that it always lands on its feet.

“By modifying the vehicle design, we are now able to lift more than three times the payload of the previous RoboBee,” says Chen. “This additional payload capacity allowed us to carry the additional devices including the gas chamber, the electrolytic plates, sparker, and buoyant outriggers, bringing the total weight of the hybrid robot to 175 milligrams, about 90 mg heavier than previous designs. We hope that our work investigating tradeoffs like weight and surface tension can inspire future multi-functional microrobots – ones that can move on complex terrains and perform a variety of tasks.”

Because of the lack of onboard sensors and limitations in the current motion-tracking system, the RoboBee cannot yet fly immediately upon propulsion out of water but the team hopes to change that in future research.

“The RoboBee represents a platform where forces are different than what we – at human scale – are used to experiencing,” says Wyss Core Faculty Member Robert Wood, Ph.D., who is also the Charles River Professor of Engineering and Applied Sciences at Harvard and senior author of the paper. “While flying the robot feels as if it is treading water; while swimming it feels like it is surrounded by molasses. The force from surface tension feels like an impenetrable wall. These small robots give us the opportunity to explore these non-intuitive phenomena in a very rich way.”

The paper was co-authored by Hongqiang Wang, Ph.D., Postdoctoral Fellow at the Wyss Institute and SEAS; Noah Jafferis, Ph.D., Postdoctoral Fellow at the Wyss Institute; Raphael Zufferey, Postgraduate Researcher at Imperial College, London; Aaron Ong, Mechanical Engineer at the University of California, San Diego and former member of the Microrobotics Lab; Kevin Ma, Ph.D., Postdoctoral Fellow at the Wyss Institute; Nicholas Gravish, Ph.D., Assistant Professor at the University of California, San Diego and former member of the Microrobotics Lab; Pakpong Chirarattananon, Ph.D., Assistant Professor at the City University of Hong Kong and former member of the Microrobotics Lab; and Mirko Kovac, Ph.D., Senior Lecturer at Imperial College, London and former member of the Microrobotics Lab and Wyss Institute. It was supported by the National Science Foundation and the Wyss Institute for Biologically Inspired Engineering.

Overview of the International Conference on Robot Ethics and Safety Standards – with survey on autonomous cars

The International Conference on Robot Ethics and Safety Standards (ICRESS-2017) took place in Lisbon, Portugal, from 20th to 21st October 2017. Maria Isabel Aldinhas Ferreira and João Silva Sequeira coordinated the conference with the aim of creating a vibrant multidisciplinary discussion around the pressing safety, ethical, legal and societal issues raised by the rapid introduction of robotic technology into many environments.

There were several fascinating keynote presentations. Matthias Scheutz’s inaugural speech highlighted the need for robots to act in a way that would be perceived as guided by moral principles and judgement. It was refreshing to see that we could potentially have autonomous robots that arrive at appropriate decisions that would be seen as “right” or “wrong” by an external observer.

On the other hand, Rodolphe Gélin provided the perspective of robot manufacturers, and showed how difficult the issues of safety have become. The expectations the public has of robots seem to go beyond those of other conventional machines. The discussion was very diverse: some suggested that schemes similar to licensing would be required to qualify humans to operate robots (as they re-train or re-program them); others suggested schemes for insurance and liability.

Professional bodies, experts and standards were discussed in the other two keynote presentations: Raja Chatila spoke from the perspective of the IEEE Global AI Ethics Initiative, and Gurvinder Singh Virk from that of several ISO robot standardisation groups.

The conference also hosted a panel discussion, where interesting issues were debated, such as the challenges posed by the proliferation of drones among the general public. This topic has characteristics different from many other problems societies have faced with the introduction of new technologies. Drones can be 3D printed from many designs with potentially no liability for the designer; they can be operated with virtually no complex training; and they can be controlled from distances long enough that recovering the drone would not be enough to trace the operator or owner. Their cameras and data recording can be put to uses that some would consider privacy breaches, and they could compete for airspace with already operating commercial aviation. It seems unclear what regulations should apply and what bodies should intervene, and even then, how to enforce them. Would something similar happen when the public acquires pet robots or artificial companions?

The presentations of accepted papers raised many issues, including the difficulty of creating legal foundations for liability schemes and for the responsibilities attributed to machines and operators. One particular observation was that, for specific tasks, computers already perform significantly better than the average person (driving a car and negotiating a curve, for example). Another challenge is that humans will regularly be in close proximity to robots in manufacturing and office environments, with many new potential risks.

If the vibrant discussions converged on one conclusion, it is that the challenges are emerging much more rapidly than the answers.

We’ve also just launched a survey on the software behaviours an autonomous car should have when faced with difficult decisions. Just click here. Participants may win a prize; participation is completely voluntary and anonymous.

Robocars will make traffic worse before it gets better

Many websites paint a very positive picture of the robocar future. And it is positive, but far from perfect. One problem I worry about in the short term is the way robocars are going to make traffic worse before they get a chance to make it better.

The goal of all robocars is to make car travel more pleasant and convenient, and eventually cheaper. You can’t make something better and cheaper without increasing demand for it, and that means more traffic.

This is particularly true for the early-generation pre-robocar vehicles in the plans of many major automakers. One of the first products these companies have released is sometimes called the “traffic jam assist.” This is a self-driving system that only works at low speed in a traffic jam.

It turns out that’s easy to do; it is effectively a solved problem. Low speed is inherently easier, and the highway is a simple driving environment without pedestrians, cyclists, intersections or oncoming traffic. When you are boxed in with other cars in a jam, all you have to do is go with the flow: the other cars tell you where you need to go. Things can get more complex when you reach whatever blockage caused the jam, but handing off to a human at low speed is also fairly doable.

These products will be widely available soon, and they will make traffic jams much more pleasant. Which means there might be more of them.

I don’t have a 9 to 5 job, so I avoid travel in rush hour when I can. If somebody suggests we meet somewhere at 9am, I try to push it to 9:30 or 10. If I had a traffic jam assist car, I would be more willing to take the meeting at 9. When on the way, if I encountered a traffic jam, I would just think, “Ah, I can get some email done.”

After the traffic jam assist systems come highway systems that allow you to take your eyes off the road for an extended time, and they arrive pretty soon too. These will encourage slightly longer commutes, which means more traffic, and also changes to real estate values. The corporate-run commuter buses from Google, Yahoo and many other tech companies in the SF Bay Area have already done that, convincing people to live in San Francisco and work an hour’s bus ride away in Silicon Valley. The buses don’t make traffic worse, but people doing the same in private cars will.

Is it all doom?

Fortunately, some factors will counter a general trend to worse traffic, particularly as full real robocars arrive, the ones that can come unmanned to pick you up and drop you off.

  • As robocars reduce accident levels, that will reduce one of the major causes of traffic congestion.
  • Robocars don’t need to slow down and stare at accidents or other unusual things on the road, which also causes congestion.
  • Robocars won’t overcompensate on “sags” (dips) in the road; this overcompensation on sags is the cause of almost half the traffic congestion on Japanese highways.
  • Robocars look like they’ll be mainly electric. That doesn’t do much about traffic, but it does help with emissions.
  • Short-haul “last mile” robocars can actually make the use of trains, buses and carpools vastly more convenient.
  • Having only a few cars which drive more regularly, even something as simple as a good quality adaptive cruise control, actually does a lot to reduce congestion.
  • The rise of single person half-width vehicles promises a capacity increase, since when two find one another on the road, they can share the lane.
  • While it won’t happen in the early days, eventually robocars will follow the car in front of them with a shorter gap if they have a faster reaction time. This increases highway capacity.
  • Early robocars won’t generate a lot of carpooling, but it will pick up fairly soon (see below).

What not to worry about

There are a few nightmare scenarios people have talked about that probably won’t happen. Today, a lot of urban driving involves hunting for parking. If we do things right, robocars won’t ever hunt for parking: they (and you) will be able to make an online query for available space at the best price and go directly to it. But they’ll do that after they drop you off, and they don’t need to park as close to your destination as you do. Incorporating city-owned spaces into this market will require a technology upgrade, which may take some time, but private spaces can get in the game quickly.

What also won’t happen is people telling their car to drive around rather than park, to save money. Operating a car today costs about $20/hour, which is vastly more than any hourly priced parking, so nobody is going to do that to save money unless there is literally no parking for many miles. (Yes, there are parking lots that cost more than $20, but that’s because they sell you many hours or a whole day and don’t want a lot of in and out traffic. Robocars will be the most polite parking customers around, hiding valet-style at the back of the lot and leaving when you tell them.)

Another common worry is that people will send their cars on long unmanned errands: mom takes the car downtown, sends it all the way back for dad’s later commute, then sends it back again to pick up the kids at school. While that’s not impossible, it’s actually not the cheap or efficient thing to do. Thanks to robotaxis, we’re going to start thinking of cars as devices that wear out by the mile, not by the year, and all their costs will be per mile except parking and $2 of financing per day. All this unmanned operation would almost double the per-mile cost of the car, and using a robotic taxi service (a robocar Uber) would be a much better deal.

There will be empty car moves, of course. But it should not amount to more than 15% of total miles. In New York, taxis are vacant of a passenger for 38% of miles, but that’s because they cruise around all day looking for fares. When you only move when summoned, the rate is much better.

And then it gets better

After this “winter” of increased traffic congestion, the outlook gets better. Aside from the factors listed above, in the long term we get the potential for several big things to increase road capacity.

The earliest is dynamic carpooling, as seen in services like UberPool and Lyft Line. After all, if you look at a rush-hour highway, most of the seats going by are empty. Tools that fill those seats could nearly triple the capacity of the roads using just the cars already moving today.

The next is advanced robocar transit. The ability to make an ad-hoc, on-demand transit system that combines vans and buses with last mile single person vehicles in theory allows almost arbitrary capacity on the roads. At peak hours, heavy use of vans and buses to carry people on the common segments of their routes could result in a 10-fold (or even more) increase in capacity, which is more than enough to handle our needs for decades to come.

Next after that is dynamic adaptation of roads. In a system where cities can change the direction of roads on demand, you can get more than a doubling of capacity when you combine it with repurposing of street parking. On key routes, street parking can be reserved only for robocars prior to rush hour, and then those cars can be told they must leave when rush hour begins. (Chances are they want to leave to serve passengers anyway.) Now your road has seriously increased capacity, and if it’s also converted to one-way in the peak direction, you could almost quadruple it.

The final step does not directly involve robocars, since it requires every car to carry a smartphone and participate: smart, internet-based road metering. With complete metering, you never get more cars trying to use a road segment than it has the capacity to handle, so you very rarely get traffic congestion. You also don’t get induced demand greater than the capacity, solving the bane of transportation planners.

Humanoids 2017 photo contest – vote here

For the first time, Humanoids 2017 is hosting a photo contest. We received 39 photos from robotics laboratories and institutes all over the world, showing humanoids in serious or funny contexts.

A jury composed of Erico Guizzo (IEEE Spectrum), Sabine Hauert (Robohub) and Giorgio Metta (Humanoids 2017 Awards Chair) will select the winning photos from among those that receive the most likes on social media.

The idea of using these media is to increase public awareness of, and interest in, humanoids and robotics, since the photos can easily be shared and reach people outside the research and robotics communities.

You can see and vote for all the photos on Facebook or below by liking the photos (make sure you look at all of them!). More information about the competition can be found here.

How to start with self-driving cars using ROS

Self-driving cars are inevitable.

In recent years, self-driving cars have become a priority for automotive companies. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo are all investing in autonomous driving research, and many new companies have appeared in the autonomous car industry: Drive.ai, Cruise, nuTonomy and Waymo, to name a few (read this post for a list of 260 companies involved in the self-driving industry).

The rapid development of this field has created large demand for autonomous car engineers, and among the required skills, knowing how to program with ROS is becoming an important one. You only have to visit the robotics-worldwide list to see the large number of job offers for work and research on autonomous cars that demand knowledge of ROS.

Why ROS is interesting for autonomous cars

Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the tools required to easily access sensor data, process that data, and generate an appropriate response for the motors and other actuators of the robot. The whole ROS system has been designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).
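To make that sense-process-act loop concrete, here is a minimal rospy node sketch. The topic names (/scan, /cmd_vel) follow the usual conventions for a laser-equipped wheeled robot; they are assumptions, not part of any particular vehicle's interface:

    #!/usr/bin/env python
    # Minimal sketch of the ROS sense-process-act loop: read laser scans,
    # publish velocity commands. Topic names are conventional assumptions.
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    def on_scan(scan):
        cmd = Twist()
        # Creep forward unless the nearest obstacle is closer than 1 m.
        cmd.linear.x = 0.5 if min(scan.ranges) > 1.0 else 0.0
        cmd_pub.publish(cmd)

    rospy.init_node('simple_controller')
    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/scan', LaserScan, on_scan)
    rospy.spin()

Because nodes talk to each other over the network, this controller could run on a different computer from the laser driver: exactly the distributed design described above.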

Due to these characteristics, ROS is a perfect tool for self-driving cars. After all, an autonomous vehicle can be considered just another type of robot, so the same types of programs can be used to control it. ROS is interesting because:

1. There is a lot of code for autonomous cars already created. Autonomous cars require algorithms that can build a map, localize the vehicle using lidar or GPS, plan paths over maps, avoid obstacles, and process point cloud or camera data to extract information. Many algorithms designed for the navigation of wheeled robots are almost directly applicable to autonomous cars, and since those algorithms are already available in ROS, self-driving cars can use them off the shelf.
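For instance, with the standard navigation stack running, sending the vehicle to a pose on the map takes only a few lines of actionlib code. This uses the stock move_base interface; the goal coordinates below are made up:

    #!/usr/bin/env python
    # Send one goal to the off-the-shelf ROS navigation stack (move_base).
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 10.0    # 10 m along the map's x axis
    goal.target_pose.pose.orientation.w = 1.0  # facing along +x
    client.send_goal(goal)
    client.wait_for_result()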

2. Visualization tools are already available. ROS provides a suite of graphical tools for easily recording and visualizing the data captured by the sensors, and for representing the status of the vehicle in a comprehensible manner. It also provides a simple way to create any additional visualizations required for particular needs. This is tremendously useful when developing the control software and trying to debug the code.
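Recording is not limited to the rosbag command line tool: the rosbag Python API lets a program write (and later replay) any message stream for inspection in tools such as rqt_bag or rviz. A small sketch, with a made-up topic name and stand-in data:

    #!/usr/bin/env python
    # Record messages programmatically with the rosbag API, for later
    # replay or plotting. Topic name and values are illustrative only.
    import rosbag
    import rospy
    from std_msgs.msg import Float32

    with rosbag.Bag('session.bag', 'w') as bag:
        for i in range(100):
            msg = Float32(data=0.1 * i)  # stand-in sensor reading
            bag.write('/debug/speed', msg, rospy.Time(i + 1))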

3. It is relatively simple to start an autonomous car project with ROS on board. You can start right now with a simple wheeled robot equipped with a pair of wheels, a camera, a laser scanner and the ROS navigation stack, and be set up in a few hours. That can serve as a basis for understanding how the whole thing works. Then you can move to more professional setups, for example buying a car that is already prepared for autonomous driving experiments, with full ROS support (like the Dataspeed Inc. Lincoln MKZ DBW kit).

Self-driving car companies have identified those advantages and have started to use ROS in their developments. Examples of companies using ROS include BMW (watch their presentation at ROSCON 2015), Bosch and nuTonomy.

Weak points of using ROS

ROS is not all nice and good. At present, ROS presents two important drawbacks for autonomous vehicles:

1. Single point of failure. All ROS applications rely on a software component called the roscore. That component, provided by ROS itself, is in charge of coordinating all the different parts of a ROS application. If the component fails, the whole ROS system goes down. It does not matter how well your ROS application has been built: if the roscore dies, your application dies.

2. ROS is not secure. The current version of ROS implements no security mechanism to prevent third parties from getting into the ROS network and reading the communication between nodes. As a result, anybody with access to the car's network can tap into the ROS messages and hijack the car's behaviour.
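To make the risk concrete, the sketch below subscribes to every advertised topic using rospy's generic AnyMsg type, with no credentials required; any machine that can reach the ROS master could run it:

    #!/usr/bin/env python
    # Demonstration of the missing security layer: subscribe to every
    # topic registered on the ROS master, with no authentication at all.
    import rospy

    def spy(msg, topic):
        # AnyMsg exposes the raw serialized bytes of each message.
        rospy.loginfo('intercepted %d bytes on %s', len(msg._buff), topic)

    rospy.init_node('eavesdropper')
    for topic, _msg_type in rospy.get_published_topics():
        rospy.Subscriber(topic, rospy.AnyMsg, spy, callback_args=topic)
    rospy.spin()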

Both drawbacks are expected to be solved in the next version of ROS, ROS 2. Open Robotics, the creators of ROS, have recently released a second beta of ROS 2, which can be tested here. A release version is expected by the end of 2017.

In any case, we believe that the ROS-based path to self-driving vehicles is the way to go. That is why we propose a low-budget learning path for becoming a self-driving car engineer, based on the ROS framework.

Our low-cost path to becoming a self-driving car engineer

Step 1
The first thing you need is to learn ROS. ROS is quite a complex framework and requires dedication and effort to learn. Watch the following video for a list of the five best methods to learn ROS. Learning basic ROS will help you understand how to create programs with the framework, and how to reuse programs made by others.

Step 2
Next, you need to get familiar with the basic concepts of robot navigation with ROS. Learning how the ROS navigation stack works will give you the basic concepts of navigation, such as mapping, path planning and sensor fusion. There is no better way to learn this than by taking the ROS Navigation in 5 days course developed by Robot Ignite Academy (disclaimer: this is provided by my company, The Construct).

Step 3
The third step is to learn the basic applications of ROS to autonomous cars: how to use the sensors available in any standard autonomous car, how to navigate using GPS, how to generate an obstacle-detection algorithm from sensor data, and how to interface ROS with the CAN bus protocol used in all the cars in the industry.
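As a flavour of the GPS part, the sketch below watches a NavSatFix stream and reports the distance to a waypoint. The /fix topic name follows common ROS GPS drivers; the waypoint coordinates are invented:

    #!/usr/bin/env python
    # Hypothetical sketch: report the distance from the current GPS fix
    # to a waypoint, using an equirectangular approximation (fine at
    # short range). Waypoint and topic name are assumptions.
    import math
    import rospy
    from sensor_msgs.msg import NavSatFix

    GOAL_LAT, GOAL_LON = 38.7223, -9.1393   # made-up target waypoint
    EARTH_R = 6371000.0                     # mean Earth radius in metres

    def on_fix(fix):
        k = math.pi / 180.0
        dx = (fix.longitude - GOAL_LON) * k * EARTH_R * math.cos(fix.latitude * k)
        dy = (fix.latitude - GOAL_LAT) * k * EARTH_R
        rospy.loginfo('%.1f m to waypoint', math.hypot(dx, dy))

    rospy.init_node('gps_monitor')
    rospy.Subscriber('/fix', NavSatFix, on_fix)
    rospy.spin()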

The following video tutorial is ideal for starting to learn ROS applied to autonomous vehicles from zero. The course teaches how to program a car with ROS for autonomous navigation, using an autonomous car simulation. The video is available for free, but to get the most out of it we recommend doing the exercises at the same time by enrolling in the Robot Ignite Academy.

Step 4
After the basic ROS for autonomous cars course, you should learn more advanced subjects such as obstacle and traffic signal identification, road following, and the coordination of vehicles at crossroads. For that purpose, our recommendation is the Duckietown project at MIT. The project provides complete instructions for physically building a small-scale town, with lanes, traffic lights and traffic signs, in which to practice algorithms in the real world (even if at a small scale). It also provides instructions for building the autonomous cars that populate the town. The cars are based on a differential drive and a single camera for sensing, which is how they achieve a very low cost (around $100 per car).

Image by Duckietown project

Thanks to its low monetary requirements and the good experience it offers for testing on real hardware, the Duckietown project is ideal for starting to practice autonomous car concepts such as vision-based line following, detecting other cars, and traffic signal-based behaviour. And if your budget is below even that cost, you can use a Gazebo simulation of Duckietown and still practice most of the content.
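To give an idea of what vision-based line following involves, here is a toy OpenCV sketch that thresholds a yellow lane marking and turns its offset into a steering value. The HSV bounds and the image file are assumptions you would tune for your own camera and lighting:

    #!/usr/bin/env python
    # Toy line follower in the spirit of Duckietown: find the yellow lane
    # marking and steer toward its centroid. Not tuned for any real robot.
    import cv2

    def steering_from_image(bgr):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Rough yellow range; adjust for your camera and lighting.
        mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))
        m = cv2.moments(mask)
        if m['m00'] == 0:
            return 0.0                    # no line in view: keep straight
        cx = m['m10'] / m['m00']          # x coordinate of the line centroid
        half_width = bgr.shape[1] / 2.0
        return (cx - half_width) / half_width  # -1 (hard left) to 1 (hard right)

    frame = cv2.imread('road.jpg')        # a single camera frame (example file)
    print('steer command: %.2f' % steering_from_image(frame))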

Step 5
Then, if you really want to go pro, you need to practice with real-life data. For that purpose, we propose you install and learn from the Autoware project. This project provides real data obtained from real cars on real streets, by means of ROS bags. ROS bags are logs of data captured from sensors, which ROS programs can consume as if they were connected to the real car. Using those bags, you can test algorithms as if you had an autonomous car to practice with (the only limitation being that the data is always the same, restricted to the situation that held when it was recorded).

Image by the Autoware project

The Autoware project is an amazing, huge project that, apart from the ROS bags, provides multiple state-of-the-art algorithms for localization, mapping, and obstacle detection and identification using deep learning. It is a little complex and large, but definitely worth studying for a deeper understanding of ROS with autonomous vehicles. I recommend watching the Autoware ROSCON 2017 presentation for an overview of the system (available from October 2017).
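Reading such a bag from Python is straightforward. In this sketch the bag file name is made up, and /points_raw is the topic Autoware conventionally uses for raw lidar point clouds (check the topics in your actual bag with rosbag info):

    #!/usr/bin/env python
    # Replay recorded sensor data offline, as if connected to the real car.
    import rosbag

    with rosbag.Bag('autoware_drive.bag') as bag:   # made-up file name
        for topic, msg, t in bag.read_messages(topics=['/points_raw']):
            # Feed each lidar cloud into your detection/localization code.
            print('%s: %d points at t=%s' % (topic, msg.width * msg.height, t))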

Step 6
The final step is to start implementing your own ROS algorithms for autonomous cars and testing them in different, realistic situations. The previous steps gave you real-life data, but the bags are limited to the situations in which they were recorded; now it is time to test your algorithms elsewhere. You can mix existing algorithms from all the steps above, but at some point you will find that those implementations lack things your goals require. You will have to start developing your own algorithms, and you will need lots of tests. For this purpose, one of the best options is to use a Gazebo simulation of an autonomous car as a testbed for your ROS algorithms. Recently, Open Robotics released a simulation of cars for the Gazebo 8 simulator.

Image by Open Robotics

That ROS-based simulation contains a Prius car model equipped with a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras and 2 planar lidars, which you can use to practice and create your own self-driving car algorithms. Using the simulation, you can put the car in as many different situations as you want, check whether your algorithm works in each of them, and repeat as many times as needed until it does.
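As a final taste, here is a hedged sketch of driving that simulated Prius. The topic (/prius) and message type (prius_msgs/Control, with throttle, brake, steer and shift_gears fields) are taken from the demo's sources at the time of writing and may change between releases:

    #!/usr/bin/env python
    # Hedged sketch: publish a constant throttle/steer command to the
    # simulated Prius. Topic and message names follow the demo sources
    # at the time of writing and are not guaranteed to be stable.
    import rospy
    from prius_msgs.msg import Control

    rospy.init_node('prius_teleop')
    pub = rospy.Publisher('/prius', Control, queue_size=1)
    rate = rospy.Rate(10)                  # resend the command at 10 Hz
    while not rospy.is_shutdown():
        cmd = Control()
        cmd.throttle = 0.3                 # 30% throttle
        cmd.steer = 0.0                    # wheels straight
        cmd.shift_gears = Control.FORWARD  # gear constant from the message
        pub.publish(cmd)
        rate.sleep()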

Conclusion

Autonomous driving is an exciting subject, with demand for experienced engineers increasing year after year. ROS is one of the best options for jumping quickly into the subject, so learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented a full path for learning ROS for autonomous vehicles while keeping the budget low. Now it is your turn to make the effort and learn. Money is not an excuse anymore. Go for it!

Join the Robohub community!

As you know, Robohub is a non-profit dedicated to connecting the robotics community to the public. Over nearly a decade we’ve produced more than 200 podcasts and helped thousands of roboticists communicate about their work through videos and blog posts.

Our website Robohub.org provides free, high-quality information, and is seen as a top blog in robotics with nearly 1.5M pageviews every year and 20k followers on social media (Facebook, Twitter).

If you have a story you would like to share (news, tutorials, papers, conference summaries), please send it to editors@robohub.org and we’ll do our best to help you reach a wide audience.

In addition, we’re currently growing our community of volunteers. If you’re interested in blogging, video/podcasting, moderating discussions, curating news, covering conferences, or helping with sustainability of our non-profit, we would love to hear from you!

Just fill in this very short form.

By joining the community, you’ll be part of a grassroots international organisation. You’ll learn about robotics from the top people in the field, travel to conferences, and improve your communication skills. More importantly, you’ll be helping us make sure robotics is portrayed to the public in a high-quality manner.

And thanks to all those who have already joined the community, supported us, or sent us their news!
