Archive 10.11.2022

Robots are taking over jobs, but not at the rate you might think

It's easy to believe that robots are stealing jobs from human workers and drastically disrupting the labor market; after all, you've likely heard that chatbots make more efficient customer service representatives and that computer programs are tracking and moving packages without the use of human hands.

The original “I, Robot” had a Frankenstein complex

Eando Binder’s Adam Link sci-fi series predates Isaac Asimov’s more famous robots, raising issues of trust, control, and intellectual property.

Read more about these challenges in my Science Robotics article here.

And yes, there’s a John Wick they-killed-my-dog scene in there too.

Snippet for the article with some expansion:

In 1939, Eando Binder began a short story cycle about a robot named Adam Link. The first story in Binder’s series was titled “I, Robot.” That clever phrase would be recycled by Isaac Asimov’s publisher (against Asimov’s wishes) for Asimov’s famous short story cycle about the Three Laws of Robotics, which started in 1940. But the Binder series had another influence on Asimov: the stories explicitly related Adam’s poor treatment to how humans reacted to the Creature in Frankenstein. After the police kill his dog (did I mention John Wick?) and put him in jail, Adam conveniently finds a copy of Mary Shelley’s Frankenstein, and the penny drops on why everyone is so mean to him… In response, Asimov coined the term “the Frankenstein Complex” in his stories[1], with his characters stating that the Three Laws of Robotics gave humans the confidence in robots to overcome this neurosis.

Note that the Frankenstein Complex is different from the Uncanny Valley. In the Uncanny Valley, the robot is creepy because it almost, but not quite, looks and moves like a human or animal; in the Frankenstein Complex, people believe that intelligent robots, regardless of what they look like, will rise up against their creators.

Whether humans really have a Frankenstein Complex is a source of endless debate. In a seminal paper, Frederic Kaplan presented the baseline assessment, still in use today, of cultural differences and the role of popular media in trust of robots[2]. Humanoid robotics researchers have even developed a formal measure of a user’s perception of the Frankenstein Complex[3], so that group of HRI researchers, at least, believes the Frankenstein Complex is a real phenomenon. But Binder’s Adam Link story cycle is also worth reexamining because it foresaw two additional challenges for robots and society that Asimov and other early writers did not: what is the appropriate form of control, and can a robot own intellectual property?

You can get the Adam Link stories from the web as individual stories published in the online back issues of Amazing Stories, but it is probably easier to get the story collection here. Binder did a fix-up novel, organizing the stories into a chronology and adding segues between them.

If you’d like to learn more about

References

[1] “Frankenstein Monster,” The Encyclopedia of Science Fiction, https://sf-encyclopedia.com/entry/frankenstein_monster, accessed July 28, 2022.

[2] F. Kaplan, “Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots,” International Journal of Humanoid Robotics, 1–16 (2004).

[3] D. S. Syrdal, T. Nomura, and K. Dautenhahn, “The Frankenstein Syndrome Questionnaire – Results from a Quantitative Cross-Cultural Survey,” in Social Robotics (ICSR 2013), Lecture Notes in Computer Science, vol. 8239, Springer, Cham, 2013, https://doi.org/10.1007/978-3-319-02675-6_27.

New VR system lets you share sights on the move without causing VR sickness

Researchers from Tokyo Metropolitan University have engineered a virtual reality (VR) remote collaboration system that lets users on Segways share not only what they see but also the feeling of acceleration as they move. Riders equipped with cameras and accelerometers can feed back their sensations to a remote user wearing a VR headset on a modified wheelchair. User surveys showed a significant reduction in VR sickness, promising a better experience for remote collaboration activities.
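
For a sense of how the acceleration-sharing half of such a system might be wired up, here is a minimal sketch in Python: the rider’s accelerometer samples are streamed over UDP so the remote station can reproduce a scaled motion cue alongside the video. The address, rate, and read_accel() helper are all hypothetical assumptions; this is not the Tokyo Metropolitan University implementation.

import json
import socket
import time

VIEWER_ADDR = ("192.168.0.10", 9000)  # hypothetical address of the remote viewer station

def stream_motion_cues(read_accel, rate_hz=50):
    """Send timestamped acceleration samples so the remote station can
    reproduce a scaled-down motion cue alongside the video feed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / rate_hz
    while True:
        ax, ay, az = read_accel()  # m/s^2 from the rider's IMU (assumed helper)
        packet = {"t": time.time(), "acc": [ax, ay, az]}
        sock.sendto(json.dumps(packet).encode(), VIEWER_ADDR)
        time.sleep(period)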

General purpose robots should not be weaponized: An open letter to the robotics industry and our communities

Over the course of the past year, Open Robotics has taken time from our day-to-day efforts to work with our colleagues in the field to consider how the technology we develop could negatively impact society as a whole. In particular, we were concerned with the weaponization of mobile robots. After a lot of thoughtful discussion, deliberation, and debate with our colleagues at organizations like Boston Dynamics, Clearpath Robotics, Agility Robotics, AnyBotics, and Unitree, we have co-authored and signed an open letter to the robotics community entitled “General Purpose Robots Should Not Be Weaponized.” You can read the letter in its entirety here. Additional media coverage of the letter can be found in Axios and The Robot Report.

The letter codifies internal policies we’ve had at Open Robotics since our inception, and we think it captures the sentiments of much of the ROS community. For our part, we have pledged that we will not weaponize mobile robots, and we do not support others doing so either. We believe that the weaponization of robots raises serious ethical issues and harms public trust in technologies that can have tremendous benefits to society. This is but a first step, and we look forward to working with policy makers, the robotics community, and the general public to continue to promote the ethical use of robots and prohibit their misuse. This is one of many discussions that must happen between robotics professionals, the general public, and lawmakers about advanced technologies, and quite frankly, we think it is long overdue.

Due to the permissive nature of the licenses we use for ROS, Gazebo, and our other projects, it is difficult, if not impossible, for us to limit the use of the technology we develop to build weaponized systems. However, we do not condone such efforts, and we will have no part in directly assisting those who do with our technical expertise or labor. This has been our policy from the start, and will continue to be our policy. We encourage the ROS community to take a similar stand and to work with their local lawmakers to prevent the weaponization of robotic systems. Moreover, we hope the entire ROS community will take time to reflect deeply on the ethical implications of their work, and help others better understand both the positive and negative outcomes that are possible in robotics.

How shoring up drones with artificial intelligence helps surf lifesavers spot sharks at the beach

A close encounter between a white shark and a surfer. Author provided.

By Cormac Purcell (Adjunct Senior Lecturer, UNSW Sydney) and Paul Butcher (Adjunct Professor, Southern Cross University)

Australian surf lifesavers are increasingly using drones to spot sharks at the beach before they get too close to swimmers. But just how reliable are they?

Discerning whether that dark splodge in the water is a shark or just, say, seaweed isn’t always straightforward and, in reasonable conditions, drone pilots generally make the right call only 60% of the time. While this has implications for public safety, it can also lead to unnecessary beach closures and public alarm.

Engineers are trying to boost the accuracy of these shark-spotting drones with artificial intelligence (AI). While AI systems show great promise in the lab, they are notoriously difficult to get right in the real world, and so remain out of reach for surf lifesavers. And importantly, overconfidence in such software can have serious consequences.

With these challenges in mind, our team set out to build the most robust shark detector possible and test it in real-world conditions. By using masses of data, we created a highly reliable mobile app for surf lifesavers that could not only improve beach safety, but help monitor the health of Australian coastlines.

A white shark being tracked by a drone. Author provided.

Detecting dangerous sharks with drones

The New South Wales government is investing more than A$85 million in shark mitigation measures over the next four years. Of all the approaches on offer, a 2020 survey showed drone-based shark surveillance is the public’s preferred method to protect beach-goers.

The state government has been trialling drones as shark-spotting tools since 2016, and with Surf Life Saving NSW since 2018. Trained surf lifesaving pilots fly the drone over the ocean at a height of 60 metres, watching the live video feed on portable screens for the shape of sharks swimming under the surface.

Identifying sharks by carefully analysing the video footage in good conditions seems easy. But water clarity, sea glitter (sea-surface reflection), animal depth, pilot experience and fatigue all reduce the reliability of real-time detection to a predicted average of 60%. This reliability falls further when conditions are turbid.

Pilots also need to confidently identify the species of shark and tell the difference between dangerous and non-dangerous animals, such as rays, which are often misidentified.

Identifying shark species from the air.

AI-driven computer vision has been touted as an ideal tool to virtually “tag” sharks and other animals in the video footage streamed from the drones, and to help identify whether a species nearing the beach is cause for concern.

AI to the rescue?

Early results from previous AI-enhanced shark-spotting systems have suggested the problem has been solved, as these systems report detection accuracies of over 90%.

But scaling these systems to make a real-world difference across NSW beaches has been challenging.

AI systems are trained to locate and identify species using large collections of example images and perform remarkably well when processing familiar scenes in the real world.

However, problems quickly arise when they encounter conditions not well represented in the training data. As any regular ocean swimmer can tell you, every beach is different – the lighting, weather and water conditions can change dramatically across days and seasons.

Animals can also frequently change their position in the water column, which means their visible characteristics (such as their outline) changes, too.

All this variation makes it crucial that training data cover the full gamut of conditions, or that AI systems be flexible enough to track the changes over time. Such challenges have been recognised for years, giving rise to the new discipline of “machine learning operations”.

Essentially, machine learning operations explicitly recognises that AI-driven software requires regular updates to maintain its effectiveness.
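
As a rough illustration of that idea, the sketch below watches one simple health metric, the mean detection confidence on live footage, and flags the model for retraining once it drifts below the level measured at deployment. The class and thresholds are illustrative assumptions, not from any production system.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline_conf, window=500, tolerance=0.10):
        self.baseline = baseline_conf       # mean confidence measured at deployment
        self.recent = deque(maxlen=window)  # rolling window of live confidences
        self.tolerance = tolerance

    def update(self, confidence):
        self.recent.append(confidence)

    def needs_retraining(self):
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough live data yet
        live_mean = sum(self.recent) / len(self.recent)
        return live_mean < self.baseline - self.tolerance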

Examples of the drone footage used in our huge dataset.

Building a better shark spotter

We aimed to overcome these challenges with a new shark detector mobile app. We gathered a huge dataset of drone footage, and shark experts then spent weeks inspecting the videos, carefully tracking and labelling sharks and other marine fauna in the hours of footage.

Using this new dataset, we trained a machine learning model to recognise ten types of marine life, including different species of dangerous sharks such as great white and whaler sharks.

We then embedded this model into a new mobile app that can highlight sharks in live drone footage and predict the species. We worked closely with the NSW government and Surf Life Saving NSW to trial this app on five beaches during summer 2020.

A drone in Surf Life Saving NSW livery preparing to go on patrol. Author provided.
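
As a rough sketch of what such an app does under the hood, the Python/OpenCV snippet below draws labelled boxes on each frame of a live video feed. The detect(frame) callable stands in for the trained model, and its output format is an assumption; this is not the authors’ actual app.

import cv2

def annotate_stream(source, detect, conf_threshold=0.5):
    """Run a detector frame-by-frame on a video file or stream URL."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # detect() is assumed to yield (label, confidence, x, y, w, h) tuples
        for label, conf, x, y, w, h in detect(frame):
            if conf < conf_threshold:
                continue  # suppress low-confidence boxes
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        cv2.imshow("shark-spotter", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()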

Our AI shark detector did quite well. It identified dangerous sharks on a frame-by-frame basis 80% of the time, in realistic conditions.

We deliberately went out of our way to make our tests difficult by challenging the AI to run on unseen data taken at different times of year, or from different-looking beaches. These critical tests on “external data” are often omitted in AI research.
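
For concreteness, the 80% frame-by-frame figure above can be read as a frame-level recall: of the frames containing a dangerous shark, the fraction in which the detector fired. A minimal sketch, with an assumed input format rather than the authors’ pipeline:

def frame_level_recall(frames):
    """frames: iterable of (shark_present: bool, detector_fired: bool)."""
    positives = hits = 0
    for shark_present, detector_fired in frames:
        if shark_present:
            positives += 1
            hits += detector_fired  # True counts as 1
    return hits / positives if positives else 0.0

# e.g. frame_level_recall([(True, True), (True, False), (False, False)]) -> 0.5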

A more detailed analysis turned up common-sense limitations: white, whaler and bull sharks are difficult to tell apart because they look similar, while small animals (such as turtles and rays) are harder to detect in general.

Spurious detections (like mistaking seaweed for a shark) are a real concern for beach managers, but we found the AI could easily be “tuned” to eliminate these by showing it empty ocean scenes of each beach.

Example of where the AI gets it wrong: seaweed identified as sharks. Author provided.
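
One simple way to realise that kind of tuning is hard-negative mining: append background-only frames of each beach, with no annotation boxes, to the training set before fine-tuning, so the detector learns that local seaweed is not a shark. In the sketch below the paths and the fine_tune() entry point are hypothetical placeholders.

from pathlib import Path

def add_empty_ocean_negatives(dataset, beach_dir):
    """Append unlabeled ocean frames as negative examples (no objects)."""
    for img in Path(beach_dir).glob("*.jpg"):
        dataset.append({"image": str(img), "boxes": []})  # empty list = pure background
    return dataset

# dataset = add_empty_ocean_negatives(dataset, "frames/empty_ocean/")
# fine_tune(model, dataset)  # hypothetical fine-tuning entry point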

The future of AI for shark spotting

In the short term, AI is now mature enough to be deployed in drone-based shark-spotting operations across Australian beaches. But, unlike regular software, it will need to be monitored and updated frequently to maintain its high reliability of detecting dangerous sharks.

An added bonus is that such a machine learning system for spotting sharks would also continually collect valuable ecological data on the health of our coastline and marine fauna.

In the longer term, getting the AI to look at how sharks swim, and using new AI technology that learns on the fly, will make AI shark detection even more reliable and easier to deploy.

The NSW government has new drone trials for the coming summer, testing the usefulness of efficient long-range flights that can cover more beaches.

AI can play a key role in making these flights more effective, enabling greater reliability in drone surveillance, and may eventually lead to fully-automated shark-spotting operations and trusted automatic alerts.

The authors acknowledge the substantial contributions from Dr Andrew Colefax and Dr Andrew Walsh at Sci-eye.

This article appeared in The Conversation.

17 Robots, Teams From Around the World Vie for $8M in Prizes, November 4-5 in Long Beach, CA

The ANA Avatar XPRIZE challenges teams to develop physical, human-operated robotic avatar systems that can execute tasks and replicate a human’s senses, actions and presence in a remote location in real time, leading to a more connected world.