How to Choose the Right Sensor in Ambient Conditions
Using vibrations to control a swarm of tiny robots
Robots are taking over jobs, but not at the rate you might think
BOWE GROUP leads an $8.2M investment round in robot software innovator MOV.AI
The original “I, Robot” had a Frankenstein complex
Eando Binder’s Adam Link sci-fi series predates Isaac Asimov’s more famous robots, raising issues of trust, control, and intellectual property.
Read more about these challenges in my Science Robotics article here.
And yes, there’s a John Wick they-killed-my-dog scene in there too.
A snippet from the article, with some expansion:
In 1939, Eando Binder began a short story cycle about a robot named Adam Link. The first story in Binder’s series was titled “I, Robot.” That clever phrase would be recycled by Isaac Asimov’s publisher (against Asimov’s wishes) for Asimov’s famous short story cycle, begun in 1940, about the Three Laws of Robotics. But the Binder series had another influence on Asimov: the stories explicitly related Adam’s poor treatment to how humans reacted to the Creature in Frankenstein. (After the police kill his dog (did I mention John Wick?) and put him in jail, Adam conveniently finds a copy of Mary Shelley’s Frankenstein, and the penny drops on why everyone is so mean to him…) In response, Asimov coined the term “the Frankenstein Complex” in his stories [1], with his characters stating that the Three Laws of Robotics gave humans the confidence in robots to overcome this neurosis.
Note that the Frankenstein Complex is different from the Uncanny Valley. In the Uncanny Valley, the robot is creepy because it almost, but not quite, looks and moves like a human or animal; in the Frankenstein Complex, people believe that intelligent robots, regardless of what they look like, will rise up against their creators.
Whether humans really have a Frankenstein Complex is a source of endless debate. In a seminal paper, Frederic Kaplan presented the baseline assessment, still widely used, of cultural differences and the role of popular media in trust of robots [2]. Humanoid robotics researchers have even developed a formal measure of a user’s perception of the Frankenstein Complex [3], so that group of HRI researchers believes the Frankenstein Complex is a real phenomenon. But Binder’s Adam Link story cycle is also worth reexamining because it foresaw two additional challenges for robots and society that Asimov and other early writers did not: what is the appropriate form of control, and can a robot own intellectual property?
You can get the Adam Link stories from the web as individual stories published in the online back issues of Amazing Stories, but it is probably easier to get the story collection here. Binder did a fix-up novel in which he organized the stories into a chronology and added segues between stories.
If you’d like to learn more about
- robot control and what is appropriate for what types of tasks, see Introduction to AI Robotics, second edition
- Asimov’s Three Laws and whether they really would prevent the Frankenstein Complex, see Learn AI and Human-Robot Interaction from Asimov’s I, Robot Stories, and links about Asimov
References
[1] “Frankenstein Monster,” The Encyclopedia of Science Fiction, https://sf-encyclopedia.com/entry/frankenstein_monster, accessed July 28, 2022.
[2] F. Kaplan, “Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots,” International Journal of Humanoid Robotics, pp. 1–16, 2004.
[3] D. S. Syrdal, T. Nomura, and K. Dautenhahn, “The Frankenstein Syndrome Questionnaire – Results from a Quantitative Cross-Cultural Survey,” in Social Robotics (ICSR 2013), Lecture Notes in Computer Science, vol. 8239, Springer, Cham, 2013. https://doi.org/10.1007/978-3-319-02675-6_27
Vacuuming-up rare metals from the deep sea floor
Automatic Forest Fire Detection System With AI Enables Early and Efficient Fire Fighting
The pursuit of AI education—past, present, and future
New VR system lets you share sights on the move without causing VR sickness
General purpose robots should not be weaponized: An open letter to the robotics industry and our communities
Over the course of the past year, Open Robotics has taken time from our day-to-day efforts to work with our colleagues in the field to consider how the technology we develop could negatively impact society as a whole. In particular, we were concerned with the weaponization of mobile robots. After a lot of thoughtful discussion, deliberation, and debate with our colleagues at organizations like Boston Dynamics, Clearpath Robotics, Agility Robotics, ANYbotics, and Unitree, we have co-authored and signed an open letter to the robotics community entitled “General Purpose Robots Should Not Be Weaponized.” You can read the letter, in its entirety, here. Additional media coverage of the letter can be found in Axios and The Robot Report.
The letter codifies internal policies we’ve had at Open Robotics since our inception, and we think it captures the sentiments of much of the ROS community. For our part, we have pledged that we will not weaponize mobile robots, and we do not support others doing so either. We believe that the weaponization of robots raises serious ethical issues and harms public trust in technologies that can have tremendous benefits to society. This is but a first step, and we look forward to working with policymakers, the robotics community, and the general public to continue to promote the ethical use of robots and prohibit their misuse. This is one of many discussions that must happen between robotics professionals, the general public, and lawmakers about advanced technologies, and quite frankly, we think it is long overdue.
Due to the permissive nature of the licenses we use for ROS, Gazebo, and our other projects, it is difficult, if not impossible, for us to prevent the technology we develop from being used to build weaponized systems. However, we do not condone such efforts, and we will have no part in directly assisting those who pursue them with our technical expertise or labor. This has been our policy from the start, and it will continue to be our policy. We encourage the ROS community to take a similar stand and to work with their local lawmakers to prevent the weaponization of robotic systems. Moreover, we hope the entire ROS community will take time to reflect deeply on the ethical implications of their work, and help others better understand both the positive and negative outcomes that are possible in robotics.
How shoring up drones with artificial intelligence helps surf lifesavers spot sharks at the beach
By Cormac Purcell (Adjunct Senior Lecturer, UNSW Sydney) and Paul Butcher (Adjunct Professor, Southern Cross University)
Australian surf lifesavers are increasingly using drones to spot sharks at the beach before they get too close to swimmers. But just how reliable are they?
Discerning whether that dark splodge in the water is a shark or just, say, seaweed isn’t always straightforward and, in reasonable conditions, drone pilots generally make the right call only 60% of the time. While this has implications for public safety, it can also lead to unnecessary beach closures and public alarm.
Engineers are trying to boost the accuracy of these shark-spotting drones with artificial intelligence (AI). While AI systems show great promise in the lab, they are notoriously difficult to get right in the real world, so they remain out of reach for surf lifesavers. And importantly, overconfidence in such software can have serious consequences.
With these challenges in mind, our team set out to build the most robust shark detector possible and test it in real-world conditions. By using masses of data, we created a highly reliable mobile app for surf lifesavers that could not only improve beach safety, but help monitor the health of Australian coastlines.
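The article doesn’t describe how the app works internally, but the overconfidence concern above points to one common safeguard: don’t act on every raw detector score. Below is a minimal, hypothetical sketch of a two-threshold triage policy; the class, function, and threshold values are all illustrative assumptions, not details from the authors’ system.

```python
# Hypothetical sketch: not the authors' actual app, just an illustration of
# guarding against detector overconfidence with a two-threshold triage policy.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "shark", "seaweed", "dolphin"
    confidence: float  # detector score in [0, 1]

# Assumed thresholds; real values would be tuned on field data.
ALERT_THRESHOLD = 0.90   # confident enough to alert lifesavers directly
REVIEW_THRESHOLD = 0.50  # uncertain band: defer to the human pilot

def triage(detection: Detection) -> str:
    """Route a single detection instead of trusting the model blindly."""
    if detection.label != "shark":
        return "ignore"
    if detection.confidence >= ALERT_THRESHOLD:
        return "alert"
    if detection.confidence >= REVIEW_THRESHOLD:
        return "human_review"  # pilot confirms before any beach closure
    return "ignore"

if __name__ == "__main__":
    for d in [Detection("shark", 0.95),
              Detection("shark", 0.62),
              Detection("seaweed", 0.97)]:
        print(d, "->", triage(d))
```

The point of the uncertain middle band is to keep the pilot in the loop for exactly the ambiguous splodges the article describes, so a model error becomes a second look rather than an unnecessary beach closure.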