
How will robots and AI change our way of life in 2030?

Sydney Padua’s Ada Lovelace is a continual inspiration.

At #WebSummit 2017, I was part of a panel on what the future will bring in 2030 with John Vickers from Blue Abyss, Jacques Van den Broek from Randstad and Stewart Rogers from Venture Beat. John talked about how technology will allow humans to explore amazing new places. Jacques demonstrated how humans were more complex than our most sophisticated AI and thus would be an integral part of any advances. And I focused on how the current technological changes would look amplified over a 10–12 year period.

After all, 2030 isn’t that far off: the tech has already been invented, it just isn’t widespread yet, and we’re only guessing what changes the network effects will bring. As William Gibson said, “The future is here, it’s just not evenly distributed yet.”

What worries me is that right now we’re worried about robots taking jobs. And yet the jobs at most risk are the ones in which humans are treated most like machines. So I say, bring on the robots! But what also worries me is that the current trend towards a gig economy and microtransactions, powered by AI, ubiquitous connectivity and soon blockchain, will mean that we turn individuals back into machines: just parts of a giant economic network, working on fragments of gigs rather than on projects or jobs. I think that this inherent ‘replaceability’ is ultimately inhumane.

When people say they want jobs, they really mean they want a living wage and a rewarding occupation. So let’s give the robots the gigs.

Here’s the talk: “Life in 2030”
It’s morning, the house gently blends real light tones and a selection of bird song to wake me up. Then my retro ‘Teasmade’ serves tea and the wall changes from sunrise to news channels and my calendar for today. I ask the house to see if my daughter’s awake and moving. And to remind her that the clothes only clean themselves if they’re in the cupboard, not on the floor.

Affordable ‘Pick up’ bots are still no good at picking up clothing although they’re good at toys. In the kitchen I spend a while recalibrating the house farm. I’m enough of a geek to put the time into growing legumes and broccoli. It’s pretty automatic to grow leafy greens and berries, but larger fruits and veg are tricky. And only total hippies spend the time on home grown vat meat or meat substitutes.

I’m proud of how energy neutral our lifestyle is, although humans always seem to need more electricity than we can produce. We still have our own car, which shuttles my daughter to school in remote-operated, semi-autonomous mode, where control is distributed between the car, the road network and a dedicated 5-star operator. Statistically it’s the safest form of transport, and she has the comfort of traveling in her own family vehicle.

Whereas I travel in efficiency mode — getting whatever vehicle is nearby heading to my destination. I usually pick the quiet setting. I don’t mind sharing my ride with other people or drivers but I like to work or think as I travel.

I work in a creative collective — we provide services and we built the collective around shared interests like historical punk rock and farming. Branding our business or building our network isn’t as important as it used to be because our business algorithms adjust our marketing strategies and bid on potential jobs faster than we could.

The collective allows us to have better health and social plans than the usual gig economy. Some services, like healthcare or manufacturing still have to have a lot of infrastructure, but most information services can cowork or remote work and our biggest business expense is data subscriptions.

This is the utopian future. For the poor, it doesn’t look as good. Rewind…
It’s morning. I’m on Basic Income, so to get my morning data & calendar I have to listen to 5 ads and submit 5 feedbacks. Everyone in our family has to do some, but I do extra so that I get parental supervision privileges and can veto some of the kids’ surveys.

We can’t afford to modify the house to generate electricity, so we can’t afford decent house farms. I try to grow things the old way, in dirt, but we don’t have automation and if I’m busy we lose produce through lack of water or bugs or something. Everyone can afford Soylent though. And if I’ve got some cash we can splurge on junk food, like burgers or pizza.

My youngest still goes to a community school meetup but the older kids homeschool themselves on the public school system. It’s supposed to be a personalized AI for them, but we still have to select which traditional values package we subscribe to.

I’m already running late for work. I see that I have a real assortment of jobs in my queue. At least I’ll be getting out of the house driving people around for a while, but I’ve got to finish more product feedbacks while I drive and be on call for remote customer support. Plus I need to do all the paperwork for my DNA to be used on another trial or maybe a commercial product. Still, that’s how you get health care — you contribute your cells to the health system.

We also go bug catching, where you scrape little pieces of lichen, or dog poo, or insects into the samplers: anything that you think might be new to the databases. One of my friends hit the jackpot last year when her sample was licensed as a super new psychoactive and she got residuals.

I can’t afford to go online shopping so I’ll have to go to a mall this weekend. Physical shopping is so exhausting. There are holo ads and robots everywhere spamming you for feedback and getting in your face. You might have some privacy at home but in public, everyone can eye track you, emote you and push ads. It’s on every screen and following you with friendly robots.

It’s tiring having to participate all the time. Plus you have to take selfies and foodies and feedback and survey and share and emote. It used to be ok doing it with a group of friends but now that I have kids…

Robots and AI make many things better, although we don’t always notice it. But they also make it easier to optimize us and turn us into data, not people.

Robust distributed decision-making in robot swarms

Credit: Jerry Wright

Reaching an optimal shared decision in a distributed way is a key aspect of many multi-agent and swarm robotic applications. As humans, we often have to come to some conclusions about the current state of the world so that we can make informed decisions and then act in a way that will achieve some desired state of the world. Of course, expecting every person to have perfect, up-to-date knowledge about the current state of the world is unrealistic, and so we often rely on the beliefs and experiences of others to inform our own beliefs.

We see this too in nature, where honey bees must choose between a large number of potential nesting sites in order to select the best one. When a current hive grows too large, the majority of bees must choose a new site to relocate to via a process called “swarming” – a problem that can be generalised to choosing the best of a given number of choices. To do this, bees rely on a combination of their own experiences and the experiences of others in the hive in order to reach an agreement about which is the best site. We can learn from solutions found in nature to develop our own models and apply these to swarms of robots. By having pairs of robots interact and reach agreements at an individual level, we can distribute the decision-making process across the entire swarm.

Decentralised algorithms such as these are often considered to be more robust than their centralised counterparts because there is no single point of failure, but this is rarely put to the test. Robustness is crucial in large robot swarms, given that individual robots are often built with cheap, unreliable hardware to keep costs down. Robustness is also important in scenarios which might be critical to the protection or preservation of life, such as in search and rescue operations. In this context we aim to introduce an alternative model for distributed decision-making in large robot swarms and examine its robustness to the presence of malfunctioning robots, as well as compare it to an existing model: the weighted voter model (Valentini et al., 2014).

Kilobots are small, low cost robots used to study swarm robotics. They each contain 2 motors for movement and an RGB LED and IR transmitter for communication.

In this work we consider a simplified version of the weighted voter model where robots (specifically Kilobots) move around randomly in a 1.2 m² arena and at any point in time are in one of two primary states: either signalling or updating. Those in the signalling state are signalling their current belief that either “choice A is the best” or “choice B is the best”, and they continue to do so for a length of time proportional to the quality of the choice, i.e. in our experiments choice A has a quality of 9 and choice B has a quality of 7, and so those believing that choice A is the best choice will signal for a longer duration than those believing that choice B is the best. This naturally creates a bias in the swarm where those signalling for choice A will do so for longer than those signalling for choice B, and this will inevitably affect the updating robots. Those in the updating state will select a signalling robot at random from their local neighbours, provided that they are within their communication radius (a 10 cm limit for the Kilobots), and adopt that robot’s belief.
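As a rough illustration of this dynamic, here is a minimal agent-based sketch in Python. It is not the authors' implementation: the class structure, time step and the exponentially distributed durations (with mean proportional to quality) are assumptions, and robot motion is abstracted away by letting the caller supply the list of signalling neighbours within range.

```python
import random

QUALITY = {"A": 9, "B": 7}   # signalling duration is proportional to option quality

class Robot:
    def __init__(self, belief):
        self.belief = belief          # "A" or "B"
        self.signalling = True
        self.timer = random.expovariate(1.0 / QUALITY[belief])

    def step(self, signalling_neighbours, dt=0.1):
        """Advance one time step of the weighted voter dynamic."""
        self.timer -= dt
        if self.timer > 0:
            return
        if self.signalling:
            # Switch to updating: adopt the belief of a random signalling
            # neighbour within communication range (supplied by the caller).
            if signalling_neighbours:
                self.belief = random.choice(signalling_neighbours).belief
            self.signalling = False
            self.timer = random.expovariate(1.0)   # brief updating period
        else:
            # Return to signalling for a duration proportional to option quality.
            self.signalling = True
            self.timer = random.expovariate(1.0 / QUALITY[self.belief])
```

Because robots believing in A signal for longer on average, updating robots are more likely to sample an A-signaller, which is what biases the swarm towards the better option.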

We compare this model to our “three-valued model”. Instead of immediately adopting the signalling robot’s belief, robots instead follow this rule: let a belief in choice A correspond to a truth state of 1 and choice B to a truth state of 0; we then introduce a third truth state of 1/2 representing “undecided” or “unknown” as an intermediate state. If the two robots conflict in their beliefs, such that one believes choice A (1) to be the best and the other choice B (0), then the updating robot adopts a new belief state of 1/2. If one robot has a stronger belief, either in choice A (1) or choice B (0), and the other is undecided (1/2), then the stronger belief is preserved. This approach eventually leads to the swarm successfully reaching consensus about which is the best choice. Furthermore, the swarm chooses the better of the two choices, which is A in this case.
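The pairwise update itself fits in a few lines. This is a hedged sketch of the rule as described above, with 1, 0 and 0.5 standing for the three truth states; the function name is ours, not the paper's.

```python
UNDECIDED = 0.5

def combine(updating_belief, signalled_belief):
    """Three-valued update: 1 = choice A, 0 = choice B, 0.5 = undecided."""
    if updating_belief == signalled_belief:
        return updating_belief                 # agreement: nothing changes
    if UNDECIDED in (updating_belief, signalled_belief):
        # One robot is undecided, so the definite belief (1 or 0) is preserved.
        return signalled_belief if updating_belief == UNDECIDED else updating_belief
    return UNDECIDED                           # direct 1-vs-0 conflict: become undecided

# e.g. combine(1, 0) == 0.5, combine(0.5, 1) == 1, combine(0, 0.5) == 0
```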

We then go on to adapt our model so that a percentage of robots is malfunctioning, meaning they adopt a random belief state (either 1 or 0) instead of updating their belief based on other robots, before continuing to signal for that random choice. We run experiments both in simulation and on a physical swarm of Kilobots.
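A sketch of how that malfunctioning fraction might be injected into such a simulation, reusing the combine rule from the sketch above. The 10% rate matches the experiments reported below; the data structure and function names are illustrative only.

```python
import random

def make_swarm(n=400, malfunction_rate=0.10):
    """Build a swarm in which a fixed fraction of robots is malfunctioning."""
    swarm = [{"belief": random.choice([0, 1]),
              "faulty": i < int(n * malfunction_rate)}   # first 10% never listen
             for i in range(n)]
    random.shuffle(swarm)
    return swarm

def update(robot, signalled_belief):
    """One updating step; faulty robots ignore their neighbour entirely."""
    if robot["faulty"]:
        robot["belief"] = random.choice([0, 1])   # random definite belief, then signal it
    else:
        robot["belief"] = combine(robot["belief"], signalled_belief)
```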

Results

In the figure, we show results as a trajectory of the Kilobots signalling for either choice A or choice B.

Experiments involve a population of 400 kilobots where, on average, 10% of the swarm is malfunctioning (signalling for a random choice). We can see that the three-valued model eventually reaches 100% of the (functional) kilobots in the swarm signalling for choice A in just over 4 minutes. This model outperforms the weighted voter model which, while quicker to come to a decision, achieves below 90% on average. The inclusion of our “undecided” state slows convergence, but in doing so provides a means for robots to avoid adopting the belief of malfunctioning robots when they are in disagreement. For robotic systems where malfunction is a possibility, it therefore seems preferable to choose the three-valued model.

In the video, we show a time-lapse of experiments performed on 400 Kilobots where blue lights represent those signalling for choice A, and red those for choice B. Those in the intermediate state (1/2) may be coloured either red or blue. The green Kilobots are performing as if malfunctioning, such that they adopt a random belief and then signal for that belief.

In the future, we would like to consider ways to close the gap between the three-valued model and the weighted voter model in terms of decision-making speed while maintaining improved performance in the presence of malfunctioning robots. We also intend to consider different distributed algorithms for decision-making which take account of the numerous beliefs being signalled within a robot’s radius of communication. So far, both models in this work take account of only a single robot’s belief while updating, but there exist other models, such as majority rule models and models for opinion-pooling, which take account of the beliefs of many robots. Finally, we intend to investigate models that scale well with the number of choices that the swarm must choose between. Currently, most models only consider a small number of choices, but the models discussed here require discrete signalling periods which would need to increase as the number of choices increases.


This article was originally posted on EngMaths.org.

For more information, read Crosscombe, M., Lawry, J., Hauert, S., & Homer, M. (2017). Robust Distributed Decision-Making in Robot Swarms: Exploiting a Third Truth State. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)

1000 local events expected during the European Robotics Week 2017

The importance of robotics for Europe’s regions will be the focus of a week-long celebration of robotics taking place around Europe on 17–27 November 2017. The European Robotics Week 2017 (ERW2017) is expected to include more than 1000 local events for the public — open days by factories and research laboratories, school visits by robots, talks by experts and robot competitions are just some of the events.

Robotics is increasingly important in education. “Since 2011, we have been asking schools throughout all regions of Europe to demonstrate robotics education at all levels,” says Reinhard Lafrenz, the Secretary General of euRobotics, the association for robotics researchers and industry which organises ERW2017. “I am delighted that many skilled teachers and enthusiastic local organisers have taken up this challenge and we have seen huge success in participation, with over 1000 events expected to be organised in all regions of Europe this year.”

All over Europe, ERW2017 will show the public how robots can support our daily lives, for example, by helping during surgery and, in the future, by providing support and care for people with disabilities, or how robots can monitor the environment. Robotics is also an essential part of EU-funded digital innovation hubs and could, in the future, contribute to the creation of new jobs.

Some of the highlights of the ERW events announced so far are:

  • in Italy, the School of Robotics will webstream an event at the KUKA robot company;
  • in Bosnia-Herzegovina, there will be dozens of SPARKreactors League robotics competitions;
  • in Latvia and Iceland, there will be ERW events for the first time;
  • in Spain, over 200 events are being organised in schools, more than half of them in Catalonia;
  • in Germany, nearly 40 events will include First Lego League competitions for young people, and an education day at the Fraunhofer IPA research organisation and a humanoid robot workshop at Hamburg University of Technology.

The ERW2017 Central Event organised in Brussels will see the “Robots Discovery” exhibition hosted by the European Committee of the Regions (20-23 November), where robotics experts from 30 European and regionally funded projects will outline how their work can impact our society. The exhibiting projects will show robots in healthcare helping during surgery or providing support for elder care, helping students develop digital skills, monitoring the environment and applying agricultural chemicals with precision and less waste, or helping save lives after disasters.

Other events organised in Belgium include the Eurospace Center which will run robotics classes for children (24 November), and the demonstration in Brussels of the self-driving bus of the Finnish Metropolia University of Applied Sciences (22-23 November). ERW2017 will overlap with the last week of the month-long InQbet hackathon on innovation in robotics and artificial intelligence.

euRobotics has recorded 400 000 visitors across Europe to events at the six previous ERWs.

Find your local ERW activities here and follow #ERW2017 on twitter.

Funding trends: self-driving dreams coming true


Participants and startups in the emerging self-driving vehicles industry (components, systems, trucks, cars and buses) have been at it for almost 60 years. The pace accelerated in 2004, 2005 and 2007 when DARPA sponsored long-distance competitions for driverless cars, and then again in 2009 when Uber began its ride-hailing system.

As the prospect grew that self-driving ride-hailing fleets, vehicles, systems and associated AI would soon be a reality, startups, fundings, mergers and acquisitions followed, reaching a peak in 2017. Thus far in 2017, more than 55 companies and startups offering everything from solid-state distancing sensors to ride-share fleets and mapping systems – plus five strategic acquisitions – have raised over $28.2 billion!

2017 Investments Trends: Self-driving

Listed below are month-by-month recaps of self-driving-related fundings and acquisitions as reported by The Robot Report. The two massive fundings by the SoftBank Vision Fund in May, the Intel acquisition of Mobileye in March and Ford’s acquisition of Argo in February are extraordinary. Nevertheless, pulling out those billion-dollar transactions still shows that $2.4 billion found its way to more than 55 companies. [The trend continues in November with Optimus Ride and Ceres Imaging both raising Series A money.]

Click on the month for funding details, links and profiles for each of the companies.

  • October – $957.24 million:
    • Mapbox-$164M, Element AI-$105M, Horizon Robotics-$100M, Innoviz Technologies-$73M, Momenta AI-$46M, Built Robotics-$15M, Blickfeld-$4.25M, nuTonomy was acquired by Delphi Automotive-$450M, and Strobe was acquired by General Motors-unknown amount.
  • September – $275 million:
    • LeddarTech-$101M, Innoviz Technologies-$65M, JingChi-$52M, Five AI-$35M, Drive AI-$15M, Ushr Inc-$10M and Metawave-$7M.
  • August – $70 million:
    • Oryx Vision-$50M and TuSimple-$20M.
  • July – $413 million:
    • Nauto-$159M, Brain Corp-$114M, Momenta AI-$46M, Autotalk-$40M, Slamtec-$22M, Embark-$15M, Xometry-$15M and Metamoto-$2M.
  • June – $112.5 million:
    • Drive AI-$50M, Swift Navigation-$34M, AEye-$16M, Carmera-$6.4M, Cognata-$5M and Optimus Ride-$1.1M.
  • May – $9.676 billion:
    • Didi Chuxing-$5.5 billion, Nvidia-$4 billion, ClearMotion-$100M, Echodyne-$29M, DeepMap-$15M, Hesai Photonics Technology-$16M, TriLumina-$9M, AIRY 3D-$3.5M and Vivacity Labs-$3.3M.
  • April – $306.6 million:
    • Mobvoi-$180M, Peloton Technology-$60M, Luminar Technology-$36M, Renovo Auto-$10M, Aurora Innovation-$6.1M, VIST Group-$6M, DeepScale-$3M, Arbe Robotics-$2.5M, BestMile-$2M, Compound Eye-$1M.
  • March – $15.343 billion:
    • Wayray-$18M, EasyMile-$15M, SB Drive-$4.6M, Starsky Robotics-$3.75M and CrowdAI-$2M. Intel acquired Mobileye for $15.3 billion.
  • February – $1.024 billion:
    • ZongMu Technology-$14.5M and TetraVue-$10M. Ford Motor Co acquired Argo AI-$1 billion.
  • January – $??? million: 
    • Autonomos was acquired by TomTom-unknown amount.

The SoftBank Vision Fund Effect

Plentiful money and sky-high valuations are causing more companies to delay IPOs. The SoftBank Vision Fund is a key enabler of this recent phenomenon. Founded in 2017 with a goal of $100 billion (it closed with $93 billion), with principal investors including SoftBank, Saudi Arabia’s sovereign wealth fund, Abu Dhabi’s national wealth fund, Apple, Foxconn, Qualcomm and Sharp, the Fund has been disbursing at a rapid pace. According to Recode, the Fund, through August, had invested over $30 billion in Uber, ARM, Nvidia, WeWork, OneWeb, Flipkart, OSIsoft, Roivant, SoFi, Fanatics, Improbable, OYO, Slack, Plenty, Nauto and Brain Corp. Many on that list are involved in the self-driving industry.

The NY Times, in an article describing Masayoshi Son’s grand plan for the Fund, wrote that all these companies “have something in common: They are involved in collecting enormous amounts of data, which are crucial to creating the brains for the machines that, in the future, will do more of our jobs and creating tools that allow people to better coexist.”

Further, Son said he believed robots would inexorably change the workforce and machines would become more intelligent than people, an event referred to as the “Singularity”. Mr. Son [said he] is on a mission to own pieces of all the companies that may underpin the global shifts brought on by artificial intelligence to transportation, food, work, medicine and finance. His vision is not just about predictions like the Singularity. He understands that we’ll need a massive amount of data to get us to a future that’s more dependent on machines and robotics.

Bottom Line

Companies involved in the emerging self-driving industry accounted for most of the dollars invested thus far in 2017. SoftBank’s fund and Masayoshi Son’s grand plan, combined with auto companies grabbing talent through strategic acquisitions, partnerships and investments, are leading the way. Robotics-related agricultural and healthcare-related investments were a distant second and third. Fourth went to underwater drones, systems and components.

3 Crucial Characteristics of an Autonomous Robot

For a robot to truly be considered autonomous, it must possess three very important characteristics: Perception, Decision and Actuation.

 

  • Perception: For an autonomous robot, perception means sensors. Laser scanners, stereo vision cameras (eyes), bump sensors (skin and hair), force-torque sensors (muscle strain), and even spectrometers (smell) are used as input devices for a robot. Similar to how a human uses the five senses to perceive the world, a robot uses sensors to perceive the environment around it.

 

  • Decision: Autonomous robots have a decision-making structure similar to that of humans. The “brain” of a robot is usually a computer, and it makes decisions based on what its mission is and what information it receives along the way. Autonomous robots also have a capability that is similar to the neurological system in humans. This is called an embedded system; it operates faster and with higher authority than the computer that is executing a mission plan and parsing data. This is how the robot can decide to stop if it notices an obstacle in its way, if it detects a problem with itself, or if its emergency-stop button is pressed.
  • Actuation: People have actuators called muscles. They take all kinds of shapes and perform all kinds of functions. Autonomous robots can have all kinds of actuators too, and a motor of some kind is usually at the heart of the actuator. Whether it’s a wheel, linear actuator, or hydraulic ram, there’s always a motor converting energy into movement.

In summation, a truly autonomous robot is one that can perceive its environment, make decisions based on what it perceives and/or has been programmed to recognize and then actuate a movement or manipulation within that environment.
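As a minimal illustration of that perceive-decide-actuate cycle, here is a hypothetical control-loop sketch in Python; the sensor, mission and motor objects are assumed interfaces for the sake of the example, not any particular robot's API.

```python
import time

def perceive(sensors):
    """Read every sensor and return a snapshot of the robot's view of the world."""
    return {name: sensor.read() for name, sensor in sensors.items()}

def decide(state, mission):
    """Choose an action from the current perception and the mission plan.
    Safety checks run first, mirroring the embedded layer that can override
    the mission computer (obstacle detected, internal fault, e-stop pressed)."""
    if state.get("bump") or state.get("fault") or state.get("estop"):
        return "stop"
    return mission.next_action(state)

def actuate(action, motors):
    """Turn the chosen action into motor commands."""
    if action == "stop":
        motors.halt()
    else:
        motors.execute(action)

def control_loop(sensors, mission, motors, period=0.05):
    """Run the perceive-decide-actuate cycle at a fixed rate."""
    while True:
        actuate(decide(perceive(sensors), mission), motors)
        time.sleep(period)
```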

The best example of an autonomous robot is the Roomba. The Roomba is easily the most prolific, truly autonomous robot on the market today. While it costs only a few hundred dollars, not thousands like many manufacturing robots, the Roomba can make decisions and take action based on what it perceives in its environment. It can be placed in a room, left alone, and it will do its job without any help or supervision from a person. This is true autonomy.


The post above has been submitted to us by https://stanleyinnovation.com

 

The post 3 Crucial Characteristics of an Autonomous Robot appeared first on Roboticmagazine.

Three concerns about granting citizenship to robot Sophia

Citizen Sophia. Flickr/AI for GOOD Global Summit, CC BY

I was surprised to hear that a robot named Sophia was granted citizenship by the Kingdom of Saudi Arabia.

The announcement last week followed the Kingdom’s commitment of US$500 billion to build a new city powered by robotics and renewables.

One of the most honourable concepts for a human being, to be a citizen and all that brings with it, has been given to a machine. As a professor who works daily on making AI and autonomous systems more trustworthy, I don’t believe human society is ready yet for citizen robots.

To grant a robot citizenship is a declaration of trust in a technology that I believe is not yet trustworthy. It brings social and ethical concerns that we as humans are not yet ready to manage.

Who is Sophia?

Sophia is a robot developed by the Hong Kong-based company Hanson Robotics. Sophia has a female face that can display emotions. Sophia speaks English. Sophia makes jokes. You could have a reasonably intelligent conversation with Sophia.

Sophia’s creator is Dr David Hanson, a 2007 PhD graduate from the University of Texas.

Sophia is reminiscent of “Johnny 5”, the first robot to become a US citizen in the 1986 movie Short Circuit. But Johnny 5 was a mere idea, something dreamt up by comic science fiction writers S. S. Wilson and Brent Maddock.

Did the writers imagine that in around 30 years their fiction would become a reality?

Risk to citizenship

Citizenship – in my opinion, the most honourable status a country grants to its people – is facing an existential risk.

As a researcher who advocates for designing autonomous systems that are trustworthy, I know the technology is not ready yet.

We have many challenges that we need to overcome before we can truly trust these systems. For example, we don’t yet have reliable mechanisms to assure us that these intelligent systems will always behave ethically and in accordance with our moral values, or to protect us against them taking a wrong action with catastrophic consequences.

Here are three reasons I think it is a premature decision to grant Sophia citizenship.

1. Defining identity

Citizenship is granted to a unique identity.

Each of us, humans I mean, possesses a unique signature that distinguishes us from any other human. When we get through customs without talking to a human, our identity is automatically established using an image of our face, iris and fingerprint. My PhD student establishes human identity by analysing humans’ brain waves.

What gives Sophia her identity? Her MAC address? A barcode, a unique skin mark, an audio mark in her voice, an electromagnetic signature similar to human brain waves?

These and other technological identity management protocols are all possible, but they do not establish Sophia’s identity – they can only establish hardware identity. What then is Sophia’s identity?

To me, identity is a multidimensional construct. It sits at the intersection of who we are biologically and cognitively, and of every experience, culture and environment we have encountered. It’s not clear where Sophia fits in this description.

2. Legal rights

For the purposes of this article, let’s assume that Sophia the citizen robot is able to vote. But who is making the decision on voting day – Sophia or the manufacturer?

Presumably also Sophia the citizen is “liable” to pay income taxes because Sophia has a legal identity independent of its creator, the company.

Sophia must also have the right for equal protection similar to other citizens by law.

Consider this hypothetical scenario: a policeman sees Sophia and a woman each being attacked by a person. That policeman can only protect one of them: who should it be? Is it right if the policeman chooses Sophia because Sophia walks on wheels and has no skills for self-defence?

Today, the artificial intelligence (AI) community is still debating what principles should govern the design and use of AI, let alone what the laws should be.

The most recent list proposes 23 principles known as the Asilomar AI Principles. Examples of these include: Failure Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).

3. Social rights

Let’s talk about relationships and reproduction.

As a citizen, will Sophia, the humanoid emotional robot, be allowed to “marry” or “breed” if Sophia chooses to? Students from North Dakota State University have taken steps to create a robot that self-replicates using 3D printing technologies.

If more robots join Sophia as citizens of the world, perhaps they too could claim their rights to self-replicate into other robots. These robots would also become citizens. With no resource constraints on how many children each of these robots could have, they could easily exceed the human population of a nation.

As voting citizens, these robots could create societal change. Laws might change, and suddenly humans could find themselves in a place they hadn’t imagined.

The Conversation

This article was originally published on The Conversation. Read the original article.

October 2017 fundings, acquisitions and IPOs

Twenty-eight different startups were funded in October, cumulatively raising $862 million, up from $507 million in September. Three of the top four fundings went to startups in the self-driving sector. Another five smaller fundings went to self-driving applications or components, as did two of the six acquisitions.

Six acquisitions were reported during the month including Delphi Automotive’s buying nuTonomy for $450 million and Boeing’s acquisition of 550-employee Aurora Flight Sciences.

On the IPO front, Altair Engineering raised $156 million and Restoration Robotics raised $25 million when both went live on the NASDAQ stock exchange this month.

Fundings

  • Mapbox, a Washington, DC and San Francisco provider of nav systems for car companies and others involved in autonomous vehicles, raised $164 million in a Series C round led by the SoftBank Vision Fund, with participation from existing investors including Foundry Group, DFJ Growth, DBL Partners, and Thrive Capital. “Location data is central and mission critical to the development of the world’s most exciting technologies,” said Rajeev Misra, who helps oversee SoftBank’s Vision Fund.
  • Element AI, a Canadian startup providing learning platform solutions for self-driving and advanced manufacturing, raised CAD $135 million (around US$105 million) in a Series A round (in June) led by Data Collective, a SV-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, Intel Capital, and Real Ventures.
  • Ninebot, the Chinese consumer products company that bought out Segway and raised $80 million in 2015, raised another $100 million in a Series C round  from the SDIC Fund Management Co. and the China Mobile Fund.
  • Horizon Robotics, another Chinese startup, raised $100 million in a Series A round led by Intel Capital with participation by Wu Capital, Morningside Venture Capital, Linear Venture, Hillhouse Capital and Harvest Investments. Horizon is developing self-driving vehicle autopilot and self-navigating consumer and neural network chips. Wendell Brooks, Intel SVP and President of Intel Capital which invested in Horizon said, “By 2020, every autonomous vehicle on the road will create 4 TB of data per day. A million self-driving cars will create the same amount of data every day as 3 billion people. As Intel transitions to a data company, Intel Capital is actively investing in startups across the technology spectrum that can help expand the data ecosystem and pathfind important new technologies.”
  • Innoviz Technologies, an Israel-based developer of LiDAR sensing technology for autonomous vehicles, raised $73 million in Series B funding. Investors include Samsung Catalyst and SoftBank Ventures Korea.
  • Zume Pizza, the Silicon Valley robotic pizza making startup, raised $48 million in a Series B funding. Investors in the round were not detailed. Zume is already delivering pizzas in Silicon Valley. It uses an assembly line of robots to flatten dough into circles, spread sauce and cheese, and slide the pies into and out of an 800 degree oven. Pizzas finish cooking in ovens inside delivery trucks.
  • Momenta AI, a Beijing autonomous driving tech startup using machine vision (rather than LiDAR), raised $46 million in a Series B round led by NIO Capital, Sequoia Capital China, Hillhouse Capital and Cathay Innovation Fund.
  • Wonder Workshop, previously named Play-i, a Silicon Valley and Chinese educational robot startup, raised $41 million in a Series C round from a series of investors including Tencent, TAL Education Group, MindWorks Ventures, Madrona Venture Group, Softbank Korea, VTRON Group, TCL Capital, Sinovation Ventures, Bright Success, WI Harper, and CRV. Wonder Workshop’s Dot and Dash robots are in use by thousands of student groups and schools around the world. “We founded Wonder Workshop to provide all children — girls and boys of all ages — with the skills needed to succeed in the future economy. This round of financing will allow us to continue on our mission to inspire the inventors of tomorrow,” said Vikas Gupta, CEO.
  • FogHorn Systems, a Silicon Valley smart manufacturing software startup, raised $30 million in a Series B round led by Intel Capital and Saudi Aramco Energy Ventures with new investor Honeywell Ventures and all previous investors participating, including Series A investors March Capital Partners, GE, Dell Technologies Capital, Robert Bosch Venture Capital, Yokogawa Electric Corporation, Darling Ventures and seed investor The Hive.
  • Nanotronic Imaging, an Ohio testing solutions provider, raised $30 million in a Series D funding led by Investment Corp of Dubai and Peter Thiel’s Founders Fund.
  • Wandercraft, a French rehabilitation exoskeleton startup, raised $17.8 million in a Series B round from XAnge, Innovation Capital, Idinvest Partners, Cemag Invest and BPIFrance.
  • Ever AI, a San Francisco startup developing facial recognition, announced that it had raised $16 million in a Series B funding led by Icon Ventures with participation from Felicis Ventures and Khosla Ventures. On the same day, SoftBank announced their intention to use Ever AI’s facial recognition platform as a new feature for their Pepper robot.
  • Built Robotics, a San Francisco startup developing a self-driving kit for construction equipment – a self-driving excavator – raised $15 million in a Series A round led by NEA (New Enterprise Associates) with participation by Founders Fund, Lemnos and angel investors including Eric Stromberg, Maria Thomas, Carl Bass, Edward Lando and Justin Kan.
  • Veo Robotics, a Cambridge, MA-based vision systems startup, raised $12 million in a Series A funding. Lux Capital and GV led the round, and were joined by unnamed investors including Next47.
  • Riverfield Surgical Robot Lab, a Japanese startup, raised $10 million in a Series B round led by Toray Engineering and included SBI Investment, Jafco and Beyond Next Ventures.
  • Beijing Beehive Agriculture Technology Co. raised $9.4 million in an A funding round led by Tendence Capital and other unnamed sources.  The funding marks the company’s second financing round after it raised around $5 million from e-commerce giant JD.com Inc. and others in its pre-A funding.
  • Titan Medical, a Canadian robotic single-port surgery device developer, raised $9.1 million: $2.6 million by floating 13.4 million common shares in a private placement to more than a dozen robotic surgeons in the US and Canada and an additional $6.5 million from the early exercise of purchase warrants for 42.6 million common shares.
  • Robart, an Austria-based developer of AI and navigation intelligence for autonomous consumer robots, raised $7.2 million in a Series B funding. CM-CIC Innovation led the round, and was joined by Innovacom, Robert Bosch Venture Capital and SEB Alliance.
  • Nileworks,  a Japanese drone crop spraying startup, raised $7.1 million from a group of Japanese investors including public-private partnership the Innovation Network Corporation of Japan, agricultural chemical maker Kumiai Chemical Industry Co., Sumitomo Corporation and its subsidiary  Sumitomo Chemical Co., the Japanese National Federation of Agricultural Co-operative Associations, and The Norinchukin Bank. When the product goes on sale in 2019 the company will target rice farmers in Japan.
  • Impossible Objects, an Illinois provider of 3D printing tech, raised $6.4 million in a Series A funding led by OCA Ventures and joined by IDEA Fund Partners, Mason Avenue Investments, Huizenga Capital Management and Inflection Equity Partners.
  • AeroFarms, the indoor vertical farming startup which raised $34 million reported earlier this year, rounded out their $40 million Series D funding with $6 million from Ikea Group and chef David Chang of the Momofuku Group. AeroFarms just built its 9th indoor farm in Newark, NJ.
  • Blickfeld, a Munich-based LiDAR maker for autonomous driving, raised $4.25 million in seed funding. Investors include Unternehmertum Venture Capital Partners, High-Tech Gruenderfonds, Fluxunit – OSRAM Ventures and Tengelmann Ventures.
  • Realtime Robotics, a Boston motion planning and control startup, raised $2 million in seed funding from SPARX Group, Scrum Ventures, and Toyota AI Ventures.
  • Vitae Industries, a Rhode Island pharma dispensing robot maker, raised $1.8 million in seed funding from Lerer Hippeau Ventures and Slater Technology Fund. Other investors in the round included Techstars, BoxGroup, Compound and Founder Collective.
  • Acutronic Robotics, a Swiss startup that last year acquired Spanish component maker Erle Robotics, raised an undisclosed amount from Sony in a Series A funding round. Sony will also adopt Acutronic’s Hardware Robot Operating System (H-ROS), for use in its own robotics division. Sony’s strategic use of the H-ROS platform in its own operations, and DARPA’s prior investment, suggest there’s a lot of interest in H-ROS for unifying legacy robotic systems from old-line robot providers.
  • Bharati Robotic Systems, a Pune, India-based industrial robotic cleaning startup, raised an undisclosed amount of funding from its existing investors – Society For Innovation and Entrepreneurship (SINE, IITB Incubator), and other angel investors.
  • IUVO, an Italian exoskeleton and wearable prosthetics spin-off from the Scuola Superiore Sant’Anna, has received a joint investment from robot manufacturer Comau and Össur, a global provider of non-invasive orthopedics. No financial amounts were provided however Comau and Össur will now hold a majority share of IUVO. “This joint venture represents a key step toward the creation of wearable robotic exoskeletons that can enhance human mobility and quality of life,” emphasized Mauro Fenzi, CEO of Comau. “By uniting the know-how and enabling technologies of the various partners, we are in a unique position to extend the use of robotics beyond manufacturing and toward a truly progressive global reality. I believe the differentiating factor of a project like IUVO is the combination of Comau’s automation skills and Össur’s extensive experience in bionics and bracing to enable the production of products, such as the exoskeletons, and to be able to demonstrate the benefits of robotics”.
  • Ultimaker, a manufacturer of professional desktop 3D printers and employer of over 300, raised an undisclosed amount from NPM Capital, a Benelux investment company.

Acquisitions

  • Delphi Automotive, a UK Tier 1 automotive supplier, acquired nuTonomy, a Boston self-driving ride sharing startup, for $450 million. nuTonomy, a spin-off from MIT and Singapore and with funding from Ford, has grown to 100 employees including 70 engineers and scientists. The acquisition will double Delphi’s autonomous driving applications team.
  • HTI Cybernetics, a Michigan industrial robotics integrator and contract manufacturer, has been acquired by Chongqing Nanshang Investment Group for around $50 million. HTI provides robotic welding systems to the auto industry and also has a contract welding services facility in Mexico.
  • Ridecell, a San Francisco mobility platform provider of car sharing, ride sharing and autonomous vehicles software, has acquired Auro Robotics, a Silicon Valley self-driving vehicle startup with shuttles operating on the Santa Clara University campus, for an undisclosed amount but which TheInformation estimates to be around $20 million.
  • Applied Automation, a UK components manufacturer of automation and control equipment, is changing and upgrading their status to include becoming an integrator of industrial and collaborative robots and, through the acquisition of PTG Precision Engineers, has gained talented engineering manpower to augment their sales/integration efforts. PTG is located across the street from Applied. No financial details about the acquisition were provided by either party.
  • General Motors acquired Pasadena-based Strobe, a vision systems startup developing an optical micro-oscillator for LiDAR timing, navigation and sensing applications, for an undisclosed amount. Strobe will join the Cruise Automation self-driving group.
  • Boeing is acquiring Aurora Flight Sciences, a 550 employee Virginia-based UAS provider, for an undisclosed amount. “Since its inception, Aurora has been focused on the development of innovative aircraft that leverage autonomy to make aircraft smarter,” said John Langford, Aurora Flight Sciences founder and chief executive officer. “As an integral part of Boeing, our pioneered technologies of long-endurance aircraft, robotic co-pilots, and autonomous electric VTOLs will be transitioned into world-class products for the global infrastructure.”

IPOs

  • Restoration Robotics, a San Jose, Calif.-based company focused on robotics that assist doctors in hair transplant procedures, raised $25 million in an upsized offering of 3.6 million shares priced at $7. In 2016, the company posted revenue of $15.6 million and a loss of $21.8 million. HAIR is now listed on the NASDAQ stock exchange.
  • Altair Engineering, a Troy, Mich.-based engineering software maker, raised $156 million in an IPO of 12 million shares at $13. The stock (ALTR) is now trading on Nasdaq. Altair develops simulation and design software for industrial applications, automobiles, consumer goods and all types of robotics.
  • Nilfisk Holdings, a Danish manufacturer of industrial cleaning machines including a new line of autonomous cleaners, was spun off from NKT A/S, a Danish conglomerate, and went public on the NASDAQ Copenhagen exchange as NLFSK. Financial details were not disclosed.

Brain surgery: The robot efficacy test?

An analysis by Stanford researchers shows that the use of robot-assisted surgery to remove kidneys wasn’t always more cost-effective than using traditional laparoscopic methods.
Master Video/Shutterstock

The internet hummed last week with reports that “Humans Still Make Better Surgeons Than Robots.” Stanford University Medical Center set off the tweetstorm with its seemingly scathing report on robotic surgery. When reading the research on 24,000 patients with kidney cancer, I concluded that the problem lay with humans overcharging patients rather than with any flaw in the technology. In fact, the study praised robotic surgery for complicated procedures and suggested the fault lay with hospitals unnecessarily pushing robotic surgery for simple operations over conventional methods, which led to “increases in operating times and cost.”

Dr. Benjamin Chung, the author of the report, stated that the expenses were due to either “the time needed for robotic operating room setup” or the surgeon’s “learning curve” with the new technology. Chung defended the use of robotic surgery by claiming that “surgical robots are helpful because they offer more dexterity than traditional laparoscopic instrumentation and use a three-dimensional, high-resolution camera to visualize and magnify the operating field. Some procedures, such as the removal of the prostate or the removal of just a portion of the kidney, require a high degree of delicate maneuvering and extensive internal suturing that render the robot’s assistance invaluable.”

Chung’s concern was due to the dramatic increase in hospitals selling robotic-assisted surgeries to patients rather than more traditional methods for kidney removals. “Although the laparoscopic procedure has been standard care for a radical nephrectomy for many years, we saw an increase in the use of robotic-assisted approaches, and by 2015 these had surpassed the number of conventional laparoscopic procedures,” explains Chung. “We found that, although there was no statistical difference in outcome or length of hospital stay, the robotic-assisted surgeries cost more and had a higher probability of prolonged operative time.”

The dexterity and precision of robotic instruments has been proven in live operating theaters for years, as well as in a multitude of concept videos on the internet of fruit being autonomously stitched up. Dr. Joan Savall, also of Stanford, developed a robotic system that is even capable of performing (unmanned) brain surgery on a live fly. For years, medical students have been ripping the heads off drosophila with tweezers in the hope of learning more about the insect’s anatomy. Instead, Savall’s machine gently follows the fly using computer vision to precisely target its thorax; literally a moving bullseye the size of a period. The robot is so careful that the insect is unfazed and flies off after the procedure. Clearly, the robot is quicker and more exacting than even the most careful surgeon. According to the journal Nature Methods, the system can operate on 100 flies an hour.

Last week Dr. Dennis Fowler, of Columbia University and CEO of Platform Imaging, said that he imagines a future in which the surgeon will program the robot to finish the procedure and stitch up the patient. He said senior surgeons already pass such mundane tasks to their medical students, ‘so why not a robot?’ Platform Imaging is an innovative startup that aims to reduce the amount of personnel or equipment a hospital needs when performing laparoscopic surgeries. Long term, it plans to add snake robots to its flexible camera to empower surgeons with the greatest amount of maneuverability. In addition to the obvious health benefits to the patient, robotic surgeries like Dr. Fowler’s will reduce the number of workplace injuries to laparoscopic surgeons. According to a University of Maryland study, 87% of surgeons who perform laparoscopic procedures complain of eye strain, hand, neck, back and leg pain, headaches, finger calluses, disc problems, shoulder muscle spasm and carpal tunnel syndrome. Many times these injuries are so debilitating that they lead to early retirement. The author of the report, Dr. Adrian Park, explains: “In laparoscopic surgery, we are very limited in our degrees of movement, but in open surgery we have a big incision, we put our hands in, we’re directly connected with the target anatomy. With laparoscopic surgery, we operate by looking at a video screen, often keeping our neck and posture in an awkward position for hours. Also, we’re standing for extended periods of time with our shoulders up and our arms out, holding and maneuvering long instruments through tiny, fixed ports.” In Dr. Fowler’s view, robotic surgery is a game changer, expanding the longevity of a physician’s career.

At the Children’s National Health System in Washington, D.C., the Smart Tissue Autonomous Robot (STAR) provided a sneak peek at the future of surgery. Using advanced 3D imaging systems and precise force-sensing instruments, the STAR was able to autonomously stitch up soft tissue samples from a living pig with sub-millimeter accuracy, far greater than that of even the most precise human surgeons. According to the study published in the journal Science Translational Medicine, there are 45 million soft tissue surgeries performed each year in the United States.

Dr. Peter Kim, STAR’s creator, says “Imagine that you need a surgery, or your loved one needs a surgery. Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?” Dr. Kim espouses, “Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit.”

“Now driverless cars are coming into our lives,” explains Dr. Kim. “It started with self-parking, then a technology that tells you not to go into the wrong lane. Soon you have a car that can drive by itself.” Similarly, Dr. Kim and Dr. Fowler envision a time in the near future when surgical robots could go from assisting humans to being overseen by humans. Eventually, Dr. Kim says, they may one day take over. After all, Dr. Kim has “programmed the best surgeon’s techniques, based on consensus and physics, into the machine.”

The idea of full autonomy in the operating room and on the road raises a litany of ethical concerns, such as the acceptable failure rate of machines. The value proposition for self-driving cars is very clear – road safety. In 2015, there were approximately 35,000 road fatalities; self-driving cars will reduce that figure dramatically. However, what is unclear is what will be the new acceptable rate of fatalities with machines. Professor Amnon Shashua, of Hebrew University and founder of Mobileye, has struggled with this dilemma for years. “If you drop 35,000 fatalities down to 10,000 – even though from a rational point of view it sounds like a good thing, society will not live with that many people killed by a computer,” explains Dr. Shashua. While everyone would agree that zero failure is the most desired outcome in reality Shashua says, “this will never happen.” He elaborates, “What you need to show is that the probability of an accident drops by two to three orders of magnitude. If you drop [35,000 fatalities] down to 200, and those 200 are because of computer errors, then society will accept these robotic cars.”

Dr. Iyad Rahwan of MIT is much more to the point: “If we cannot engender trust in the new system, we risk the entire autonomous vehicle enterprise.” According to his research, “Most people want to live in a world where cars will minimize casualties. But everybody wants their own car to protect them at all costs.” Dr. Rahwan is referring to the old Trolley Problem – does the machine save its driver or the pedestrian when forced to choose? Dr. Rahwan declares, “This is a big social dilemma. Who will buy a car that is programmed to kill them in some instances? Who will insure such a car?” Last May at the Paris Motor Show, Christoph von Hugo of Daimler Benz emphatically answered: “If you know you can save at least one person, at least save that one. Save the one in the car.”

The ethics of unmanned systems and more will be discussed at the next RobotLab forum on “The Future of Autonomous Cars” with Steve Girsky formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC, RSVP

The senate’s automated driving bill could squash state authority

My previous post on the House and Senate automated driving bills (HB 3388 and SB 1885) concluded by noting that, in addition to the federal government, states and the municipalities within them also play an important role in regulating road safety. These numerous functions involve, among others, designing and maintaining roads, setting and enforcing traffic laws, licensing and punishing drivers, registering and inspecting vehicles, requiring and regulating automotive insurance, and enabling victims to recover from the drivers or manufacturers responsible for their injuries.

Unfortunately, the Senate bill could preempt many of these functions. The House bill contains modest preemption language and a savings clause that admirably tries to clarify the line between federal and state roles. The Senate bill, in contrast, currently contains a breathtakingly broad preemption provision that was proposed in committee markup by, curiously, a Democratic senator.

(I say “currently” for two reasons. First, a single text of the bill is not available online; only the original text plus the marked-up texts for the Senate Commerce Committee’s amendments to that original have been posted. Second, whereas HB 3388 has passed the full House, SB 1885 is still making its way through the Senate.)

Under one of these amendments to the Senate bill, “[n]o State or political subdivision of a State may adopt, maintain, or enforce any law, rule, or standard regulating the design, construction, or performance of a highly automated vehicle or automated driving system with respect to any of the safety evaluation report subject areas.” These areas are system safety, data recording, cybersecurity, human-machine interface, crashworthiness, capabilities, post-crash behavior, accounting for applicable laws, and automation function.

A savings provision like the one in the House bill was in the original Senate bill but apparently dropped in committee.

A plain reading of this language suggests that all kinds of state and local laws would be void in the context of automated driving. Restrictions on what kind of data can be collected by motor vehicles? Fine for conventional driving, but preempted for automated driving. Penalties for speeding? Fine for conventional driving, but preempted for automated driving. Deregistration of an unsafe vehicle? Same.

The Senate language could have an even more subtly dramatic effect on state personal injury law. Under existing federal law, FMVSS compliance “does not exempt a person from liability at common law.” (The U.S. Supreme Court has fabulously muddied what this provision actually means by, in two cases, reaching essentially opposite conclusions about whether a jury could find a manufacturer liable under state law for injuries caused by a vehicle design that was consistent with applicable FMVSS.)

The Senate bill preserves this statutory language (whatever it means) and even adds a second sentence providing that “nothing” in the automated driving preemption section “shall exempt a person from liability at common law or under a State statute authorizing a civil remedy for damages or other monetary relief.”

Although this would seem to reinforce the power of a jury to determine what is reasonable in a civil suit, the Senate bill makes this second sentence “subject to” the breathtakingly broad preemption language described above. On its plain meaning, this language accordingly restricts rather than respects state tort and product liability law.

This is confusing (whether intentionally or unintentionally), so consider a stylized illustration:

1) You may not use the television.

2) Subject to (1), you may watch The Simpsons.

This language probably bars you from watching The Simpsons (at least on the television). If the intent were instead to permit you to do so, the language would be:

1) You may not use the television.

2) Notwithstanding (1), you may watch The Simpsons.

The amendment as proposed could have said “notwithstanding” instead of “subject to.” It did not.

I do not know the intent of the senators who voted for this automated driving bill and for this amendment to it. They may have intended a result other than the one suggested by their language. Indeed, they may have even addressed these issues without recording the result in the documents subsequently released. If so, they should make these changes, or they should make their changes public.

And if not, everyone from Congress to City Hall should consider what this massive preemption would mean.

Happy Halloween!

Happy Halloween everyone! Here’s a selection of this year’s robot videos and tweets to get you in the mood.


Automated Ball Return System For Driving Ranges

Automated Managed Services roll out their upgraded automated ball return system, which handles the washing of golf balls and their transportation back to the dispenser.

Established in late 2013, Automated Managed Services (AMS) have been offering driving range robots as an outfield maintenance solution to golf facilities. Their increasing success continues to reshape the idea of what golf maintenance should look like, as they roll out their newly redesigned ball return system across new and existing AMS locations.

The automated ball return system washes the golf balls and transports them back to the dispenser. It works in conjunction with the robot ballpicker that collects the balls out on the outfield. Once the robot is full, it returns to its base and drops the balls into the return system. The process is fully automatic: from the time the balls are collected to the time they are delivered back to the dispenser, no human interaction is involved.

The design consists of a stainless steel ball drop zone shaped like half a diamond, installed into the ground, into which the robot drops the balls. The half-diamond shape funnels the balls towards the centre, where a slider at the base of the drop zone moves back and forth. With each motion the balls drop into a u-bend shaped cage, which lets any debris such as small stones fall away and leaves the balls to roll into a connected green transportation pipe, where compressed air pushes them back to the ball dispenser. Water is introduced during this transit, so the balls arrive cleaned. The return system is operated from a control panel, usually located alongside the ball dispenser unit together with the air compressor for the transportation pipe.
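For readers who want that cycle in one place, here is a hypothetical sketch of the sequence in code; the debris filter and washing step stand in for the slider, u-bend cage and water-fed pipe described above, and none of it is AMS's actual controller logic.

def return_cycle(items):
    """Take everything the ballpicker dropped into the drop zone and
    deliver only clean golf balls back at the dispenser."""
    # Slider plus u-bend cage: debris such as small stones falls away here.
    balls = [item for item in items if item == "ball"]
    # Compressed air pushes the balls along the pipe; water washes them in transit.
    washed = ["clean " + ball for ball in balls]
    # The washed balls roll back into the dispenser.
    return washed

if __name__ == "__main__":
    load = ["ball", "stone", "ball", "ball"]   # one full ballpicker load
    print(return_cycle(load))                  # ['clean ball', 'clean ball', 'clean ball']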

The design and development of the new system was undertaken by the owner of AMS Philip Sear and his technical director Sam Daybell. Philip had this to say about the ball return system:

“Research and development are a key component of our technology infrastructure, so we always strive to improve our products and services. With this in mind, the new design is definitely more efficient in processing the balls and returning them to the dispenser. An example of this can be seen in the modification to how we use water in the system: we decided to only introduce water into the transportation pipe, after previously also having it in the ball drop zone itself. This ensures water is used more resourcefully while the balls are still cleaned effectively. Overall we are very pleased with the new design as it continues our sustainability in offering a solution that streamlines resources and is cost-effective for our clients.”

The new return system is currently being installed at FourAshes Golf Centre in Solihull, which has been using robot technology at its facility for the past four years. It is also part of a new installation underway at Grimsby Golf Club and has already been installed at High Legh Golf Club in Knutsford.

About AMS Robot Technology
Automated Managed Services provides golf ball and grass management for driving range facilities, designed to help streamline resources, reduce costs and improve the overall health of golf driving range outfields.

If you would like more information about AMS’s Outfield Robots, please contact:

Natalie St Hill
Tel: 01462 676 222
natalie@automatedmanagedservices.com
www.automatedmanagedservices.com


Can artificial intelligence learn to scare us?

Just in time for Halloween, a research team from the MIT Media Lab’s Scalable Cooperation group has introduced Shelley: the world’s first artificial intelligence-human horror story collaboration.

Shelley, named for English writer Mary Shelley — best known as the author of “Frankenstein: or, the Modern Prometheus” — is a deep-learning powered artificial intelligence (AI) system that was trained on over 140,000 horror stories on Reddit’s infamous r/nosleep subreddit. She lives on Twitter, where every hour, @shelley_ai tweets out the beginning of a new horror story and the hashtag #yourturn to invite a human collaborator. Anyone is welcome to reply to the tweet with the next part of the story, then Shelley will reply again with the next part, and so on. The results are weird, fun, and unpredictable horror stories that represent both creativity and collaboration — traits that explore the limits of artificial intelligence and machine learning.

“Shelley is a combination of a multi-layer recurrent neural network and an online learning algorithm that learns from crowd’s feedback over time,” explains Pinar Yanardag, the project’s lead researcher. “The more collaboration Shelley gets from people, the more and scarier stories she will write.”

Shelley starts stories based on the AI’s own learning dataset, but she responds directly to additions to the story from human contributors — which, in turn, adds to her knowledge base. Each completed story is then collected on the Shelley project website.
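To get a feel for how that turn-taking works, here is a minimal sketch of the loop; the generator function is a crude stand-in for the team's recurrent network, and all names, prompts and sample text are illustrative assumptions rather than the project's actual code.

import random

OPENINGS = [
    "I woke up in a hospital bed that was not mine. #yourturn",
    "There was a calm smile on the mouth growing out of the floor. #yourturn",
]

def generate_continuation(story_so_far):
    """Stand-in for the RNN: a real system would sample text conditioned on
    the story so far and on feedback gathered from earlier collaborations."""
    return random.choice([
        "The lights went out, and the hallway was suddenly much longer.",
        "Something behind the wall began whispering my name.",
    ])

def collaborate(turns=3):
    story = random.choice(OPENINGS)              # the hourly opening tweet
    print("Shelley:", story)
    for _ in range(turns):
        human = input("Your turn: ")             # a human reply to the thread
        story += " " + human
        ai_part = generate_continuation(story)   # the AI replies with the next part
        story += " " + ai_part
        print("Shelley:", ai_part)
    return story                                 # completed stories go to the project website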

“Shelley’s creative mind has no boundaries,” the research team says. “She writes stories about a pregnant man who woke up in a hospital, a mouth on the floor with a calm smile, an entire haunted town, a faceless man on the mirror… anything is possible!”

One final note on Shelley: The AI was trained on a subreddit filled with adult content, and the researchers have limited control over her — so parents beware.

Robohub Podcast #246: Smart Swarms, with Vijay Kumar



In this episode, Jack Rasiel interviews Vijay Kumar, Professor and Dean of Engineering at the University of Pennsylvania.  Kumar discusses the guiding ideas behind his research on micro unmanned aerial vehicles, gives his thoughts on the future of robotics in the lab and field, and speaks about setting realistic expectations for robotics technology.

 

Vijay Kumar

Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania.

Dr. Kumar received his Bachelor of Technology degree from the Indian Institute of Technology, Kanpur and his Ph.D. from The Ohio State University in 1987. He has been on the Faculty in the Department of Mechanical Engineering and Applied Mechanics with a secondary appointment in the Department of Computer and Information Science at the University of Pennsylvania since 1987. In his time at the university, Dr. Kumar has held numerous positions including director of the GRASP Laboratory, Chairman of the Department of Mechanical Engineering and Applied Mechanics, and Deputy Dean for Education in the School of Engineering and Applied Science. From 2012 to 2013, he served as the assistant director of robotics and cyber physical systems at the White House Office of Science and Technology Policy.

 


Congress’ automated driving bills are both more and less than they seem

Bills being considered by Congress deserve our attention—but not our full attention. To wit: When it comes to safety-related regulation of automated driving, existing law is at least as important as the bills currently in Congress (HB 3388 and SB 1885). Understanding why involves examining all the ways that the developer of an automated driving system might deploy its system in accordance with federal law as well as all the ways that governments might regulate that system. And this examination reveals some critical surprises.

As automated driving systems get closer to public deployment, their developers are closely evaluating how the full set of Federal Motor Vehicle Safety Standards (FMVSS) will apply to these systems and to the vehicles on which they are installed. Rather than specifying a comprehensive regulatory framework, these standards impose requirements on only some automotive features and functions. Furthermore, manufacturers of vehicles and of components thereof self-certify that their products comply with these standards. In other words, unlike its European counterparts (and a small number of federal agencies overseeing products deemed more dangerous than motor vehicles), the National Highway Traffic Safety Administration (NHTSA) does not prospectively approve most of the products it regulates.

There are at least seven (!) ways that the developer of an automated driving system could conceivably navigate this regulatory regime.

First, the developer might design its automated driving system to comply with a restrictive interpretation of the FMVSS. The attendant vehicle would likely have conventional braking and steering mechanisms as well as other accoutrements for an ordinary human driver. (These conventional mechanisms could be usable, as on a vehicle with only part-time automation, or they might be provided solely for compliance.) NHTSA implied this approach in its 2016 correspondence with Google, while another part of the US Department of Transportation even highlighted those specific FMVSS provisions that a developer would need to design around. Once the developer self-certifies that its system in fact complies with the FMVSS, it can market it.

Second, the developer might ask NHTSA to clarify the agency’s understanding of these provisions with a view toward obtaining a more accommodating interpretation. Previously—and, more to the point, under the previous administration—NHTSA was somewhat restrictive in its interpretation, but a new chief counsel might reach a different conclusion about whether and how the existing standards apply to automated driving. In that case, the developer could again simply self-certify that its system indeed complies with the FMVSS.

Third, the developer might petition NHTSA to amend the FMVSS to more clearly address (or expressly abstain from addressing) automated driving systems. This rulemaking process would be lengthy (measured in years rather than months), but a favorable result would give the developer even more confidence in self-certifying its system.

Fourth, the developer could lobby Congress to shorten this process—or preordain the result—by expressly accommodating automated driving systems in a statute rather than in an agency rule. This is not, by the way, what the bills currently in Congress would do.

Fifth, the developer could request that NHTSA exempt some of its vehicles from portions of the FMVSS. This exemption process, which is prospective approval by another name, requires the applicant to demonstrate that the safety level of its feature or vehicle “at least equals the safety level of the standard.” Under existing law, the developer could exempt no more than 2,500 new vehicles per year. Notably, however, this could include heavy trucks as well as passenger cars.

Sixth, the developer could initially deploy its vehicles “solely for purposes of testing or evaluation” without self-certifying that those vehicles comply with the FMVSS. Although this exception is available only to established automotive manufacturers, a new or recent entrant could partner with or outright buy one of the companies in that category. Many kinds of large-scale pilot and demonstration projects could be plausibly described as “testing or evaluation,” particularly by companies that are comfortable losing money (or comfortable describing their services as “beta”) for years on end.

Seventh, the developer could ignore the FMVSS altogether. Under federal law, “a person may not manufacture for sale, sell, offer for sale, introduce or deliver for introduction in interstate commerce, or import into the United States, any [noncomplying] motor vehicle or motor vehicle equipment.” But under the plain meaning of this provision (and a related definition of “interstate commerce”), a developer could operate a fleet of vehicles equipped with its own automated driving system within a state without certifying that those vehicles comply with the FMVSS.

This is the background law against which Congress might legislate—and against which its bills should be evaluated.

Both bills would dramatically expand the number of exemptions that NHTSA could grant to each manufacturer, eventually reaching 100,000 per year in the House version. Some critics of the bills have suggested that this would give free rein to manufacturers to deploy tens of thousands of automated vehicles without any prior approval.

But considering this provision in context provides two key insights. First, automated driving developers may already be able to lawfully deploy tens of thousands of their vehicles without any prior approval—by designing them to comply with the FMVSS, by claiming testing or evaluation, or by deploying an in-state service. Second, the exemption process gives NHTSA far more power than it otherwise has: The applicant must convince the agency to affirmatively permit it to market its system.

Both bills would also require the manufacturer of an automated driving system to submit a “safety evaluation report” to NHTSA that “describes how the manufacturer is addressing the safety of such vehicle or system.” This requirement would formalize the safety assessment letters that NHTSA encouraged in its 2016 and 2017 automated vehicle policies. These three frameworks all evoke my earlier proposal for what I call the “public safety case,” wherein an automated driving developer tells the rest of us what they are doing, why they think it is reasonably safe, and why we should believe them.

Unsurprisingly, I think this is a fine idea. It encourages innovation in safety assurance and regulation, informs regulators, and—if disclosure is meaningful—helps educate the public at large. Congress could strengthen these provisions as currently drafted, and it could give NHTSA the resources needed to effectively engage with these reports. Regardless, in evaluating the bills, it is important to understand that these provisions increase rather than decrease what an automated driving system developer must do under federal law. They are an addition rather than an alternative to each of the seven pathways described above.

Both bills would also exclude heavy trucks and buses from their definitions of automated vehicle. This exclusion, added at the behest of labor groups concerned about the eventual implications of commercial truck automation, means that NHTSA cannot exempt tens of thousands of heavy vehicles per manufacturer from a safety standard. But each truck manufacturer can still seek to exempt up to 2,500 vehicles per year—if such an exemption is even required. And, depending on how language relating to the safety evaluation reports is interpreted, this exemption might even relieve automated truck manufacturers of the obligation to submit these reports.

Finally, these bills largely preserve NHTSA’s existing regulatory authority—and that authority involves much more than making rules and granting exemptions to those rules. Crucially, the agency can conduct investigations and pursue recalls—even if a vehicle fully complies with the applicable FMVSS. This is because ensuring motor vehicle safety requires more than satisfying specific safety standards. And this broader definition of safety—“the performance of a motor vehicle or motor vehicle equipment in a way that protects the public against unreasonable risk of accidents occurring because of the design, construction, or performance of a motor vehicle, and against unreasonable risk of death or injury in an accident, and includes nonoperational safety of a motor vehicle”—gives NHTSA great power.

States and the municipalities within them also play an important role in regulating road safety—and my next post considers the effect of the Senate bill in particular on this state and local authority.

New RoboBee flies, dives, swims, and explodes out of the water

New, hybrid RoboBee can fly, dive into water, swim, propel itself back out of water, and safely land. The RoboBee is retrofitted with four buoyant outriggers and a central gas collection chamber. Once the RoboBee swims to the surface, an electrolytic plate in the chamber converts water into oxyhydrogen, a combustible gas fuel. Credit: Wyss Institute at Harvard University

By Leah Burrows

We’ve seen RoboBees that can fly, stick to walls, and dive into water. Now, get ready for a hybrid RoboBee that can fly, dive into water, swim, propel itself back out of water, and safely land.

New floating devices allow this multipurpose air-water microrobot to stabilize on the water’s surface before an internal combustion system ignites to propel it back into the air.

This latest-generation RoboBee, which is 1,000 times lighter than any previous aerial-to-aquatic robot, could be used for numerous applications, from search-and-rescue operations to environmental monitoring and biological studies.

The research is described in Science Robotics. It was led by a team of scientists from the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). 

“This is the first microrobot capable of repeatedly moving in and through complex environments,” says Yufeng Chen, Ph.D., currently a Postdoctoral Fellow at the Wyss Institute who was a graduate student in the Microrobotics Lab at SEAS when the research was conducted and is the first author of the paper. “We designed new mechanisms that allow the vehicle to directly transition from water to air, something that is beyond what nature can achieve in the insect world.”

Designing a millimeter-sized robot that moves in and out of water has numerous challenges. First, water is 1,000 times denser than air, so the robot’s wing flapping speed will vary widely between the two mediums. If the flapping frequency is too low, the RoboBee can’t fly. If it’s too high, the wing will snap off in the water.

By combining theoretical modeling and experimental data, the researchers found the Goldilocks combination of wing size and flapping rate, scaling the design to allow the bee to operate repeatedly in both air and water. Using this multimodal locomotive strategy, the robot flaps its wings at 220 to 300 hertz in air and 9 to 13 hertz in water.
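As a rough sanity check on those numbers (a back-of-the-envelope scaling, not a calculation from the paper), quasi-steady flapping lift scales roughly with fluid density times the square of wingtip speed, so keeping lift roughly constant across media suggests the flapping rate should drop by about the square root of the density ratio.

# Rough scaling sketch; assumes lift ~ rho * (f * A)^2 for fixed wing geometry,
# so f_water / f_air ~ sqrt(rho_air / rho_water).
rho_air = 1.2        # kg/m^3
rho_water = 1000.0   # kg/m^3, roughly 1,000x denser than air

f_air = 260.0        # Hz, midpoint of the reported 220-300 Hz range
f_water_est = f_air * (rho_air / rho_water) ** 0.5

print("estimated water flapping rate: %.0f Hz" % f_water_est)

The crude estimate comes out at roughly 9 hertz, near the low end of the reported 9 to 13 hertz range.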

Another major challenge the team had to address: at the millimeter scale, the water’s surface might as well be a brick wall. Surface tension is more than 10 times the weight of the RoboBee and three times its maximum lift. Previous research demonstrated how impact and sharp edges can break the surface tension of water to facilitate the RoboBee’s entry, but the question remained: How does it get back out again?

To solve that problem, the researchers retrofitted the RoboBee with four buoyant outriggers — essentially robotic floaties — and a central gas collection chamber. Once the RoboBee swims to the surface, an electrolytic plate in the chamber converts water into oxyhydrogen, a combustible gas fuel.

“Because the RoboBee has a limited payload capacity, it cannot carry its own fuel, so we had to come up with a creative solution to exploit resources from the environment,” says Elizabeth Farrell Helbling, graduate student in the Microrobotics Lab and co-author of the paper. “Surface tension is something that we have to overcome to get out of the water, but is also a tool that we can utilize during the gas collection process.”

The gas increases the robot’s buoyancy, pushing the wings out of the water, and the floaties stabilize the RoboBee on the water’s surface. From there, a tiny, novel sparker inside the chamber ignites the gas, propelling the RoboBee out of the water. The robot is designed to passively stabilize in air, so that it always lands on its feet.
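To get a feel for the buoyancy side of this (again a back-of-the-envelope illustration, not a figure from the paper), Archimedes' principle says the robot, its floaties and the collected gas together must displace at least the robot's own mass of water for it to sit at the surface.

# Archimedes check for a 175 mg robot; the mass is from the article, the rest is illustration.
g = 9.81            # m/s^2
rho_water = 1000.0  # kg/m^3
mass = 175e-6       # kg, reported total weight of the hybrid RoboBee

weight = mass * g                  # force to support, ~1.7 mN
volume_needed = mass / rho_water   # displaced volume needed to float
print("weight: %.2f mN" % (weight * 1e3))
print("displacement needed: %.0f mm^3" % (volume_needed * 1e9))   # ~175 mm^3

How that displacement is split between the outriggers and the collected gas is not something the article specifies.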

“By modifying the vehicle design, we are now able to lift more than three times the payload of the previous RoboBee,” says Chen. “This additional payload capacity allowed us to carry the additional devices including the gas chamber, the electrolytic plates, sparker, and buoyant outriggers, bringing the total weight of the hybrid robot to 175 milligrams, about 90 mg heavier than previous designs. We hope that our work investigating tradeoffs like weight and surface tension can inspire future multi-functional microrobots – ones that can move on complex terrains and perform a variety of tasks.”

Because of the lack of onboard sensors and limitations in the current motion-tracking system, the RoboBee cannot yet fly immediately upon propulsion out of water but the team hopes to change that in future research.

“The RoboBee represents a platform where forces are different than what we – at human scale – are used to experiencing,” says Wyss Core Faculty Member Robert Wood, Ph.D., who is also the Charles River Professor of Engineering and Applied Sciences at Harvard and senior author of the paper. “While flying the robot feels as if it is treading water; while swimming it feels like it is surrounded by molasses. The force from surface tension feels like an impenetrable wall. These small robots give us the opportunity to explore these non-intuitive phenomena in a very rich way.”

The paper was co-authored by Hongqiang Wang, Ph.D., Postdoctoral Fellow at the Wyss Institute and SEAS; Noah Jafferis, Ph.D., Postdoctoral Fellow at the Wyss Institute; Raphael Zufferey, Postgraduate Researcher at Imperial College, London; Aaron Ong, Mechanical Engineer at the University of California, San Diego and former member of the Microrobotics Lab; Kevin Ma, Ph.D., Postdoctoral Fellow at the Wyss Institute; Nicholas Gravish, Ph.D., Assistant Professor at the University of California, San Diego and former member of the Microrobotics Lab; Pakpong Chirarattananon, Ph.D., Assistant Professor at the City University of Hong Kong and former member of the Microrobotics Lab; and Mirko Kovac, Ph.D., Senior Lecturer at Imperial College, London and former member of the Microrobotics Lab and Wyss Institute. It was supported by the National Science Foundation and the Wyss Institute for Biologically Inspired Engineering.
