
What’s all the fuss about AI, robotics and China?

In today’s constantly changing digital landscape, AI’s presence is growing in almost every industry. Retail giants like Amazon and Alibaba are using machine-learning algorithms to add value to the customer experience. Machine learning is also prevalent in the new service robotics world as robots transition from blind, dumb and caged to mobile and perceptive.

Competition is particularly intense between the US and China, even though other countries and global corporations have large AI programs as well. The competition is real, fierce and dramatic. Talent is hard to find and costly; it’s a complex field that few fully understand, so the talent pool is limited. Grabs of key players and companies headline the news every few days. “Apple hires away Google’s chief of search and AI.” “Amazon acquires AI cybersecurity startup.” “IBM invests millions into MIT AI research lab.” “Oracle acquires Zenedge.” “Ford acquires auto tech startup Argo AI.” “Baidu hires three world-renowned artificial intelligence scientists.”

The media, partly because of the complexity of the subject and partly from lack of knowledge, frighten people with scare headlines about misuse and autonomous weaponry, and exaggerate the competition into a hotly contested war for mastery of the field. It’s not really a “war,” but it is dramatic, and it’s playing out right now on many levels: immigration law, intellectual property transgressions, trade-war fears, labor cost and availability challenges, and unfair competitive practices, as well as technological breakthroughs and falling costs that enable experimentation and testing.

Two recent trends have sparked widespread use of machine learning: the availability of massive amounts of training data, and powerful and efficient parallel computing. GPUs are parallel processors used to train deep neural networks, and they do so in less time and with far less datacenter infrastructure than non-parallel supercomputers.
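To make that concrete, here is a minimal sketch (using PyTorch, which the article does not mention; any deep-learning framework would do) of how the same training loop runs on a GPU when one is available, simply by moving the model and data onto that device:

```python
# Minimal sketch: the same training step on CPU or GPU, chosen at runtime.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small fully connected network standing in for a "deep" model.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# A fake batch of training data; a real workload would stream millions of examples.
x = torch.randn(256, 512, device=device)
y = torch.randint(0, 10, (256,), device=device)

for step in range(100):          # the parallel matrix maths here is what GPUs accelerate
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"trained on {device}, final loss {loss.item():.3f}")
```

On a GPU the matrix multiplications in the forward and backward passes run across thousands of cores at once, which is where the training-time advantage over non-parallel hardware comes from.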

Service and mobile robots often need to carry all their computing power onboard, unlike stationary robots whose control systems sit in separate nearby boxes. Sometimes onboard computing involves multiple processors; other times it requires supercomputer-class parallel processing, such as that offered by Nvidia’s Jetson chip, Isaac lab and toolset.

Nvidia

The recent Nvidia GPU Developers Conference, held in San Jose last month, highlighted Nvidia’s goal of capturing the robotics AI market. The company has set up an SDK and lab to help robotics companies collect and learn from the data their robots process as they go about their mobility and vision tasks.

Nvidia’s Jetson GPU, SDK, toolset and simulation platform are designed to help roboticists build and test robotics applications while simultaneously managing all the various onboard processes such as perception, navigation and manipulation. As a demonstration of the breadth of capabilities in their toolset, Nvidia had a delivery robot carting objects around the show.

Nvidia is offering libraries, an SDK, APIs, an open-source deep learning accelerator, and other tools to encourage robot makers to incorporate Nvidia chips into their products. Nvidia sees this as a future source of revenue; right now it is mostly research and experimentation.

Examples of deep learning in robotics

In a recent CBInsights graphic categorizing the 2018 AI 100, 12 companies were highlighted in the robotics and auto technology sectors. Note from the Venn diagram that not all AI companies are involved with robotics (in fact, most aren’t; there were 2,000+ startups in the pool of companies from which the 100 were chosen), and the reverse holds as well: most robotics companies are not AI companies.

Here are four use cases of robot companies using AI chips in their products:

  1. Cobalt Robotics – Says CEO and Co-founder Travis Deyle, “Cobalt uses a high-end NVidia GPU (a 1080 variant) directly on the robot. We do a lot of processing locally (e.g. anomaly detection, person detection, etc) using a host of libraries: CUDA, TensorFlow, and various computer vision libraries. The algorithms running on the robot are just the tip of the iceberg. The on-robot detectors and classifiers are tuned to be very sensitive; upon detection, data is transmitted to the internet and runs through an extensive cloud-based machine learning pipeline and ultimately flags a remote human specialist for additional input and high-level decision making. The cloud-based pipeline also makes use of deep-learning processing power, which is likely powered by NVidia as well.” (A minimal sketch of this edge-plus-cloud pattern follows this list.)
  2. Bossa Nova Robotics – Walmart is partnering with San Francisco-based robotics company Bossa Nova on robots that roam the grocery and health-products aisles of Walmart stores, auditing shelves and then sending data back to employees to ensure that missing items are restocked, as well as locating incorrect prices and wrong or missing labels. Bossa Nova’s Walmart robots house three Nvidia GPUs: one for navigation and mapping; another for perception and image stitching (the robot views 6′ of shelving while moving at 2 mph); and a third for computing and analyzing what the robot sees and turning that information into actionable restocking reports.
  3. Fetch Robotics – Fetch Robotics’ automated material transports and its new data-survey line of AMRs all collect data continuously and consistently, in addition to handling navigation, collision avoidance and mapping. When the robots recharge themselves, the stored data is uploaded to the cloud for post-processing and analytics.
  4. TuSimple (CN) – Beijing-based TuSimple’s truck-driving technology is focused on the middle mile, i.e., transporting container boxes from one hub to another. Along the way, TuSimple trucks can detect and track objects at distances greater than 300 meters through advanced sensor fusion that combines data from multiple cameras with decimeter-level localization technology. Simultaneously, the truck’s decision-making system dynamically adapts to road conditions, changing lanes and adjusting driving speed as needed. TuSimple uses NVIDIA GPUs, NVIDIA DRIVE PX 2, Jetson TX2, CUDA, TensorRT and cuDNN in its autonomous driving solution.
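The Cobalt description above illustrates a common edge-plus-cloud architecture. Below is a minimal, entirely hypothetical sketch of that pattern; the function names, thresholds and data types are invented for illustration and are not Cobalt’s actual code:

```python
# Hypothetical sketch of a sensitive on-robot detector escalating to a cloud
# pipeline and, ultimately, a human specialist.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "open_door"
    confidence: float   # 0.0 - 1.0 score from the on-robot model

ONBOARD_THRESHOLD = 0.3   # deliberately low: the edge model is tuned to be sensitive
CLOUD_THRESHOLD = 0.8     # the heavier cloud model makes the stricter call

def onboard_pass(detections):
    """Runs on the robot's GPU: cheap, sensitive filtering of raw detections."""
    return [d for d in detections if d.confidence >= ONBOARD_THRESHOLD]

def cloud_pass(candidate):
    """Stand-in for the cloud pipeline: a bigger model re-scores the event,
    and anything still ambiguous is flagged for a remote human specialist."""
    rescored = min(1.0, candidate.confidence + 0.2)   # placeholder for the heavy model
    if rescored >= CLOUD_THRESHOLD:
        return "auto-confirmed"
    return "escalate to human specialist"

if __name__ == "__main__":
    raw = [Detection("person", 0.35), Detection("open_door", 0.9), Detection("shadow", 0.1)]
    for d in onboard_pass(raw):            # only these detections leave the robot
        print(d.label, "->", cloud_pass(d))
```

The design point is the asymmetry: the on-robot stage errs on the side of flagging too much, because the cloud models and the human reviewer can cheaply discard false alarms, while a detection missed on the robot can never be recovered.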

The China factor

Twelve years ago, as a national long-term strategic goal, China crafted five-year plans with specific goals to encourage the use of robots in manufacturing to enhance quality and reduce the need for unskilled labor, and to establish the manufacture of robots in-country to reduce reliance on foreign suppliers. After three successive, well-funded and fully incentivized five-year robotics plans, the transformation is easy to see: robot and component manufacturers have grown from fewer than 10 to more than 700, while companies using robots in their manufacturing and material handling processes have grown similarly.

[NOTE: During the same period, America implemented various manufacturing initiatives involving robotics; however, none were comparably funded or, more importantly, continuously funded over time.]

Recently, China turned its focus to artificial intelligence. Specifically, it has set out a three-pronged plan: catch up by 2020; achieve mid-term parity in autonomous vehicles, image recognition and, perhaps, simultaneous translation by 2025; and lead the world in AI and machine learning by 2030.

Western companies doing business in China have been plagued by intellectual property thievery, copying and reverse engineering, and heavy-handed partnerships and joint ventures where IP must be given to the Chinese venture. Steve Dickinson, a lawyer with Harris | Bricken, a Seattle law firm whose slogan is “Tough Markets; Bold Lawyers,”  wrote:

“With respect to appropriating the technology and then selling it back into the developed market from which it came: that is of course the Chinese strategy. It is the strategy of businesses in every developing country. The U.S. followed this approach during the entire 19th and early 20th centuries. Japan and Korea and Taiwan did it with great success in the post WWII era. That is how technical progress is made.”

“It is clear that appropriating foreign AI technology is the goal of every Chinese company operating in this sector [robotics, e-commerce, logistics and manufacturing]. For that reason, all foreign entities that work with Chinese companies in any way must be aware of the significant risk and must take the steps required to protect themselves.”

What is really clear is that where data in large quantity is available, as in China, and where speed is normal and privacy is nil, as in China, AI techniques such as machine and deep learning can thrive and achieve remarkable results at breakneck speed. That’s what is happening right now in China.

Bottom line:

Growth in the service robotics sector is still a promise more than a reality and there is a pressing need to deliver on those promises. We have seen tremendous progress on processors, sensors, cameras and communications but so far the integration is lacking. One roboticist characterized the integration of all that data as a need for a “reality sensor”, i.e., a higher-level indicator of what is being seen or processed. If the sensors pick up a series of pixels that are interpreted to be a person, and the processing determines its motion to be intersecting with your robot, it would be helpful to know whether it’s a pedestrian, a policeman, a fireman, a sanitation worker, a construction worker, a surveyor, etc. That information would help refine the prediction and your actions. It would add reality to image processing and visual perception.

Even as the ratio of development in hardware to software shifts more toward software, there are still many challenges to overcome. Henrik Christensen, the director of the Institute for Contextual Robotics at the University of California San Diego, cited a few of those challenges:

  • Better end-effectors / hands. We still only have very limited capability hands and they are WAY too expensive
  • The user interfaces for most robots are still very limited, e.g., different robots have different chargers
  • The cost of integrating systems is very high. We need much better plug-n-play systems
  • We see lots of use of AI / deep learning but in most cases without performance guarantees; not a viable long-term solution until things improve

One often forgets the science involved in robotics and embedded AI, and the many challenges that remain before we have a functional, fully capable, fully interactive service robot.

Artificial intelligence in action

Aude Oliva (right), a principal research scientist at the Computer Science and Artificial Intelligence Laboratory and Dan Gutfreund (left), a principal investigator at the MIT–IBM Watson AI Laboratory and a staff member at IBM Research, are the principal investigators for the Moments in Time Dataset, one of the projects related to AI algorithms funded by the MIT–IBM Watson AI Laboratory.
Photo: John Mottern/Feature Photo Service for IBM

By Meg Murphy
A person watching videos that show things opening — a door, a book, curtains, a blooming flower, a yawning dog — easily understands the same type of action is depicted in each clip.

“Computer models fail miserably to identify these things. How do humans do it so effortlessly?” asks Dan Gutfreund, a principal investigator at the MIT-IBM Watson AI Laboratory and a staff member at IBM Research. “We process information as it happens in space and time. How can we teach computer models to do that?”

Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. Launched last fall, the lab brings MIT and IBM researchers together to work on AI algorithms, the application of AI to industries, the physics of AI, and ways to use AI to advance shared prosperity.

The Moments in Time dataset is one of the projects related to AI algorithms that is funded by the lab. It pairs Gutfreund with Aude Oliva, a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, as the project’s principal investigators. Moments in Time is built on a collection of 1 million annotated videos of dynamic events unfolding within three seconds. Gutfreund and Oliva, who is also the MIT executive director at the MIT-IBM Watson AI Lab, are using these clips to address one of the next big steps for AI: teaching machines to recognize actions.

Learning from dynamic scenes

The goal is to provide deep-learning algorithms with large coverage of an ecosystem of visual and auditory moments that may enable models to learn information that isn’t necessarily taught in a supervised manner and to generalize to novel situations and tasks, say the researchers.

“As we grow up, we look around, we see people and objects moving, we hear sounds that people and objects make. We have a lot of visual and auditory experiences. An AI system needs to learn the same way and be fed with videos and dynamic information,” Oliva says.

For every action category in the dataset, such as cooking, running, or opening, there are more than 2,000 videos. The short clips enable computer models to better learn the diversity of meaning around specific actions and events.

“This dataset can serve as a new challenge to develop AI models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis,” Oliva adds, describing the factors involved. Events can include people, objects, animals, and nature. They may be symmetrical in time — for example, opening means closing in reverse order. And they can be transient or sustained.
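For a sense of what training on such a dataset involves, here is a rough sketch of the task it poses: classify a three-second clip into one of the 339 action classes mentioned later in the article. The frame sampling, batch size and tiny 3D-convolutional model below are assumptions for illustration only, not the researchers’ actual training code:

```python
# Toy action classifier over short clips, shaped (channels, time, height, width).
import torch
import torch.nn as nn

NUM_CLASSES = 339             # action labels cited for Moments in Time
FRAMES, H, W = 16, 112, 112   # assumed sampling of a 3-second clip

class TinyActionNet(nn.Module):
    """A minimal 3D-convolutional classifier for short video clips."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # pool over time and space
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, clips):
        return self.classifier(self.features(clips).flatten(1))

model = TinyActionNet(NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a batch of decoded clips and their labels.
clips = torch.randn(8, 3, FRAMES, H, W)
labels = torch.randint(0, NUM_CLASSES, (8,))

loss = loss_fn(model(clips), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.2f}")
```

Real models trained on the dataset are far larger, but the structure of the problem is the same: short, labelled clips in, a distribution over hundreds of action classes out.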

Oliva and Gutfreund, along with additional researchers from MIT and IBM, met weekly for more than a year to tackle technical issues, such as how to choose the action categories for annotations, where to find the videos, and how to put together a wide array so the AI system learns without bias. The team also developed machine-learning models, which were then used to scale the data collection. “We aligned very well because we have the same enthusiasm and the same goal,” says Oliva.

Augmenting human intelligence

One key goal at the lab is the development of AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust and continuous learning. “We are seeking new algorithms that not only leverage big data when available, but also learn from limited data to augment human intelligence,” says Sophie V. Vandebroek, chief operating officer of IBM Research, about the collaboration.

In addition to pairing the unique technical and scientific strengths of each organization, IBM is also bringing MIT researchers an influx of resources, signaled by its $240 million investment in AI efforts over the next 10 years, dedicated to the MIT-IBM Watson AI Lab. And the alignment of MIT-IBM interest in AI is proving beneficial, according to Oliva.

“IBM came to MIT with an interest in developing new ideas for an artificial intelligence system based on vision. I proposed a project where we build data sets to feed the model about the world. It had not been done before at this level. It was a novel undertaking. Now we have reached the milestone of 1 million videos for visual AI training, and people can go to our website, download the dataset and our deep-learning computer models, which have been taught to recognize actions.”

Qualitative results so far have shown models can recognize moments well when the action is well-framed and close up, but they misfire when the category is fine-grained or there is background clutter, among other things. Oliva says that MIT and IBM researchers have submitted an article describing the performance of neural network models trained on the dataset, which itself was deepened by shared viewpoints. “IBM researchers gave us ideas to add action categories to have more richness in areas like health care and sports. They broadened our view. They gave us ideas about how AI can make an impact from the perspective of business and the needs of the world,” she says.

This first version of the Moments in Time dataset is one of the largest human-annotated video datasets capturing visual and audible short events, all of which are tagged with an action or activity label among 339 different classes that include a wide range of common verbs. The researchers intend to produce more datasets with a variety of levels of abstraction to serve as stepping stones toward the development of learning algorithms that can build analogies between things, imagine and synthesize novel events, and interpret scenarios.

In other words, they are just getting started, says Gutfreund. “We expect the Moments in Time dataset to enable models to richly understand actions and dynamics in videos.”

Drones will soon decide who to kill

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI). This is a big step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

BitFlow Predicts Vision-Guided Robotics to Become Major Disruptive Force in Global Manufacturing

As the plant floor has become more digitally connected, the relationship between robots and machine vision has merged into a single, seamless platform, setting the stage for a new generation of more responsive vision-driven robotic systems.

Robots that can learn like humans

Researchers say that artificial intelligence (AI) is now superior to human intelligence in supervised learning that uses vast amounts of labeled data to perform specific tasks. However, it is considered difficult to realize human-like intelligence using supervised learning alone, because supervised labels cannot be obtained for all the sensory information robots require.

Samantha’s suffering—why sex machines should have rights too

Late in 2017 at a tech fair in Austria, a sex robot was "molested" repeatedly and left in a "filthy" state. The robot, named Samantha, received a barrage of male attention, which resulted in her sustaining two broken fingers. This incident confirms worries that the prospect of fully functioning sex robots raises both tantalising possibilities for human desire (by mirroring human/sex-worker relationships) and serious ethical questions.

Automated prep of MS-sensitive fluorescently labeled N-Glycans with a pipetting robot

A new original research report available ahead-of-print at SLAS Technology demonstrates the semi-automation of a GlycoWorks RapiFluor-MS (RFMS) Kit using a pipetting robot to improve life sciences research productivity. This robotic platform uses standard manual pipettors and an optically guided arm to facilitate the automation of manual procedures, reducing the time researchers spend at the lab bench, and mimicking, as closely as possible, the results obtained when using the manual GlycoWorks RFMS protocol.

Comparing the Uber and Tesla fatalities with a table

The Uber car and Tesla’s Autopilot, both in the news for fatalities, are really two very different things. The table below outlines the differences. Also, see further down for some new details on why the Tesla crashed, and more.

| Uber ATG test vehicle | Tesla Autopilot |
| --- | --- |
| A prototype full robocar capable of unmanned operation on city streets | A driver-assist system for highways and expressways |
| Designed for taxi service | Designed for privately owned and driven cars |
| A full suite of high-end robocar sensors, including LIDAR | Production automotive sensors: cameras and radar |
| 1 pedestrian fatality; other accidents unknown | Fatalities in Florida, China and California; other serious crashes without injury |
| Approximately 3 million miles of testing | As of late 2016: 300 million miles driven on Autopilot, 1.3 billion miles of data gathering |
| A prototype in testing, which needs a human safety driver monitoring it | A production product overseen by the customer |
| Designed to handle everything it might encounter on the road | Designed to handle only certain situations; users are expressly warned it does not handle major things like cross traffic, stop signs and traffic lights |
| Still in an early state, needing intervention every 13 miles on city streets | In production and rarely needing intervention on highways; on city streets it would need intervention very frequently |
| Needs a state licence for testing, with rules requiring safety drivers | No government regulation needed, similar to the adaptive cruise control it is based on |
| Only Uber employees can get behind the wheel | Anybody can be behind the wheel |
| Vehicle failed in a manner outside its design constraints; it should have readily detected and stopped for the pedestrian | Vehicles had incidents in ways expected under their design constraints |
| Vehicle was trusted too much by the safety driver, who took their eyes off the road for 5 seconds | Vehicles trusted too much by drivers, who took their eyes off the road for 6 seconds or longer |
| Safety drivers get 3 weeks of training and are fired if caught using a phone | No training or punishment for customers, though the manual and screen describe proper operating procedures |
| Safety driver recorded with a camera; no software warnings of inattention | Tesla drivers get visible, then audible, alerts if they take their hands off the wheel for too long |
| Criticism that the solo safety-driver job is too hard and that inattention will happen | Criticism that drivers overtrust the system, regularly not looking at the road |
| Killed a bystander, though the car had right of way | Killed customers who were ignoring monitoring requirements |
| NTSB investigating | NTSB investigating |

Each company faces a different challenge in fixing its problems. Uber needs to improve the quality of its self-driving software so that a basic failure like the one seen here becomes extremely unlikely. Perhaps even more importantly, it needs to revamp its safety-driver system so that safety-driver alertness is monitored and assured, including going back to two safety drivers in all situations. Further, it should consider some “safety driver assist” technology, such as using the system in the Volvo (or some other aftermarket system) to alert the safety drivers if it looks like something is going wrong. That’s not trivial; if the system beeps too much it gets ignored, but it can be done.

Tesla faces a more interesting challenge. Its claim is that, in spite of the accidents, Autopilot is still a net win: because people who drive properly with Autopilot have half the accidents of people who drive without it, the total number of accidents is lower, even counting the crashes, including these fatalities, that come to those who disregard the warnings about how to use it properly.

That people disregard those warnings is obvious and hard to stop. Tesla argues, however, that turning off Autopilot because of them would make Tesla driving, and the world, less safe. Options exist to make people drive diligently with Autopilot, but Tesla must not make Autopilot so much less pleasant that people decide not to use it even when they would use it properly; that would actually make driving less safe if enough people did it.

Why the Tesla crashed

A theory, now given credence by some sample videos, suggests the Tesla was confused by the white lines that divide the road at an off-ramp, the expanding triangle known as the “gore.” As the triangle expands, a simple system might take its edges to be the borders of a lane. Poor lane marking along the gore might even make the vehicle think the new “lane” is a continuation of the lane the car is in, making the car try to drive that lane, right into the barrier.

This video, made by Tesla owners near Indiana, shows a Tesla doing this when the right line of the gore is very washed out compared to the left. At 85/101 (the site of the recent Tesla crash) the lines are mostly stronger, but there is a 30-40 foot gap in the right line which perhaps could trick a car into entering and following the gore. The gore at 85/101 is also lacking the chevron “do not drive here” stripes often found at these gores. Autopilot is not good at stationary objects like the crumple barrier, but the barrier’s warning stripes are something that should be in its classification database.
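To see why the gore theory is plausible, here is a toy illustration (not Tesla’s actual Autopilot logic, and the geometry is made up): a naive lane keeper that steers toward the midpoint of whatever two line markings it is tracking will happily drive down the centre of the gore if it has latched onto the gore’s diverging painted edges, because that widening space looks exactly like a lane:

```python
# Toy sketch of the "gore confusion" theory: a naive lane keeper steers toward
# the midpoint of the two line markings it is currently tracking.

def steering_target(left_x, right_x):
    """Midpoint of the tracked lines, in metres from the car's centreline."""
    return (left_x + right_x) / 2.0

# Case 1: a real lane, markings ~3.6 m apart and parallel -> target stays at 0.
print(steering_target(-1.8, 1.8))          # 0.0, hold the lane

# Case 2: the tracker has latched onto the gore's two painted edges, which
# diverge from a point. The space between them widens like a lane, so the
# midpoint still reads "keep going straight" -- straight toward the barrier
# at the wide end of the gore.
for metres_in in (0, 20, 40, 60):
    left = -0.1 - 0.04 * metres_in         # made-up divergence rate
    right = 0.1 + 0.04 * metres_in
    print(metres_in, steering_target(left, right))
```

Nothing in this toy knows that the “lane” it is following ends at a crash attenuator, which is exactly the failure mode the theory describes.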

Once again, the Tesla is just a smart cruise control. It is going to make mistakes like this, which is why they tell you you have to keep watching. Perhaps crashes like this will make people do that.

The NTSB is angry that Tesla released any information. I was not aware they frowned on this. This may explain Uber’s silence during the NTSB investigation there.

3 initial thoughts on Ready Player One

The long-anticipated, Steven Spielberg-helmed Ready Player One has just been released in UK cinemas this week, and as a film of obvious interest to DreamingRobots and Cyberselves everywhere, we went along to see what the Maestro of the Blockbuster has done with Ernest Cline’s 2011 novel (which the author himself helped to adapt to the screen).

We went in with a lot of questions, not least of which included:

  • How would Cline & Spielberg update the material? (In terms of VR technology, 2011 is so 2011.)
  • How would the film engage with the modern politics of the Internet and gaming?
  • How would Spielberg use the most up-to-date cinematic techniques and effects to enhance the film? (would this be another game changer?)
  • What would the film have to say about our future? The future of gaming? Of our interconnectedness? Of social media? What would the film have to say about the future of humanity itself?

A one-time viewing and a next-day review are, of course, too early to answer such big questions with any certainty. Fortunately, however you feel about the film itself, it will reward multiple viewings on DVD, as even the most unsatisfied viewer won’t be able to resist pausing the action frame-by-frame to catch all the references and fleeting glimpses of their favourite video game characters of the past.

But for now, here are 3 initial responses for discussion/debate:

1. Ready Player One is a morality tale about corporate power and the Internet

Cline’s original novel was very much a paean to plucky independent gamers resisting the ruthless greed and world-conquering ambition of the Corporate Villain (while simultaneously, strangely, lionising the richest and most world-conquering of them all, James Halliday, the Gates-Jobs figure transformed here into the benevolent deus ex machina who built his trillions on creating the OASIS). The film remains true to Cline’s vision, and perhaps even heightens this black-and-white, goodie-versus-baddie (IOI) framing, with a brilliantly cast Ben (‘Commander Krennic’) Mendelsohn and a tragically under-used Hannah John-Kamen heading an army of faceless corporate infantry.

But while this wouldn’t have been at the forefront of Cline’s thinking in 2011, it is impossible to watch this film now, today, and not think of the erosion of net neutrality set in motion by the FCC’s December 2017 decision and, more recently, the exposure of Facebook’s data scandal involving Cambridge Analytica, which has finally woken more people up to the reality of mass surveillance, to what personal data the corporate giants hold, and to how it might be misused.

There is little chance that Spielberg and Cline had either of these potential dangers in mind when the film went into production, and such issues shouldn’t be vastly oversimplified in real journalism. But storytelling is always a good way to help people understand complex issues and motivate them to action, and if RPO‘s simple story of goodies and baddies can become a cultural rallying point against the dangerous mix of unchecked capitalism and our social interconnectedness, then that is a Good Thing.

2. Spielberg’s film goes some way toward correcting the problems of the original novel (though it could have gone further).

Through no real fault of the author, opinions on Cline’s once much-lauded book were revised post-#gamergate, and what was once seen as an innocent tale of a young (definitely boy) geek’s nostalgic journey from social outsider to saviour of the world (cf. also Cline’s Armada) came to be seen by some instead as a symptom of everything behind the vile misogyny of white male gamers, lashing out at anyone who didn’t see how they were the best and most privileged of all people on this earth.

Let’s be clear: the gender politics in the film are far from ideal. How is it, for example, as another reviewer notes, that two of the main female protagonists are so ignorant of basic Halliday lore? And there is still a bit too much of White Boy Champion of the World in even this version of Cline’s tale. Having said that, other critics, too, have noticed a much-improved gender consciousness in the film.

But what is clear from Spielberg’s offering is that women are as much a part of gaming culture as men, and have every right to occupy the same space, and anyone who thinks otherwise can be gone. Without wanting to give anything away, it is enough to note that Art3mis is a legend in the OASIS, a skilled gamer whom Parzival worships, and that one of the OASIS’s best designers/builders (or programmers) is also a woman. Outside the VR world, the real women behind the avatars are among the best-drawn characters (albeit in a film not overburdened with character depth, but then this is a Spielberg popcorn speciality, not one of his Oscar worthies). Both Olivia Cooke and Lena Waithe are given space to live and to be (the former, in particular, being a much more interesting protagonist than poor Wade Watts, who really is little more than his avatar), and as previously mentioned, John-Kamen is a much more frightening villain than Mendelsohn’s corporate puppet.

This film shouldn’t be heralded as a feminist triumph or a shining example of post-Weinstein Hollywood, but it is a step in the right direction, and it might mean a few more people can forgive Cline for the white-boy wank-fest that they perceive (not without some good reason) the original novel to be.

3. Despite some nods to progressive politics, the film holds deeply conservative views on human nature.

A big attraction of the novel and the excitement of the film, for DreamingRobots and Cyberselves, was the way the novel created worlds in a new reality, and explored the ideas of what humans could become in such spaces no longer bound by the physical limitations of our birth. It’s what we’re looking at with our experiments in VR and teleoperative technologies, and we ask the questions: what happens to human beings when we can be transformed by such technologies? What might our posthuman future look like?

The film does not ask these questions. In this respect, again, the film does not deviate from the original novel. The novel, for all its creativity in imagining such virtual realities before they were fully realised in real-world technology, was still very much about recognisably human worlds. The film actually regresses to a vision of human experience where the worlds of flesh-reality and virtual-reality are more clearly demarcated. In the book, there was at least a certain bleeding between these two worlds, as events in the virtual world could have consequences in the real world and vice versa. In the film, however, only real-world events have impact on the virtual world. Events in the virtual world do not impact upon the real, and the two storylines, the two battles between goodies and baddies in the virtual and real worlds, are clearly separate. (This is highlighted by the fact that there are distinct villains for each location: John-Kamen’s F’Nale Zandor never enters the virtual world, while T.J. Miller’s I-R0k exists only in the virtual. Mendelsohn’s Sorrento is the only villain who crosses that boundary.)


Spielberg’s vision of 2045 is clearly dystopian: you can see it in the ‘Stacks’, where so many impoverished are forced to live, the utter dominance of mega-corporations, and the inability (or unwillingness) of the state to provide for or protect its citizens. But while so many of the citizens of 2045 take refuge in the paradise that is the OASIS, Spielberg makes it clear that this world is merely a symptom of the dystopian world of the flesh. The opium of these alienated masses, in fact, amplifies the miserable situation of these people. We’re supposed to pity the people we see, caged in their headsets, who can’t play tennis on a real tennis court, or dance in a real nightclub, or find love wherever real people find love.

This is clear at the film’s conclusion, but as we don’t want to give away spoilers, we’ll leave it for you to see for yourselves. But what is evident throughout is that the virtual world should only be a place where gamers go to play – it is not a place where humans can live. And it is only in the world of flesh that humans can really, successfully exist. Again, this is evident in Cline’s novel: ‘That was when I realized, as terrifying and painful as reality can be, it’s also the only place where you can find true happiness. Because reality is real.’

As one reviewer has so succinctly put it:

But here’s the thing. Ready Player One is a tragedy. What seems like a fun adventure movie is actually a horror movie with a lot to say about the way we live now, the way we might live in the future, and the pitfalls and perils of loving video games too much. This is Spielberg reflecting on the culture he helped create, and telling the audience he made mistakes.

The only objection I have to the above quotation is the idea that the film has a lot to say about the way we might live in the future. Because our future will most certainly be posthuman, and this film cannot shake its humanist origins, and its deeply conservative understandings of how we might use technology. In this film, that posthuman being, and the technology that enables it, is as much of a threat to human life as a Great White shark or rampaging dinosaurs.

The film, therefore, cannot at all accommodate what will be the most imperative issues for human beings in the very near future. Such a binary understanding comes straight from the classic humanist guidebook: fantasy is fine, technology can be fun, but what’s real is what’s real, and what is human is human. That meddling in human’s true nature can never bring us happiness, and it is only by eschewing anything external to our true nature can we be truly happy, or truly human, are the usual humanist scaremongering about technology that we’ve seen time and again, since Mary Shelley’s classic Frankenstein did so much to create our present fantasies.

Never mind that such a worldview ignores the fact that there has never been such a creature as a human being unimpacted by technology. Never mind, too, that Spielberg’s entire cinematic oeuvre is fantastically, stubbornly, deeply and, sometimes, beautifully humanist (even when, or perhaps especially when, he’s telling stories about big fish or aliens). It is nevertheless a disappointment that such an opportunity, such a potentially transformative film about the future and how we can be re-shaped by technology, plays it safe and retreats to a nostalgia for a kind of human being that is increasingly becoming obsolete. It would have been nice if Ready Player One had been a great film about posthumanism, addressing the vital issues about technology that we are increasingly facing. But alas. Perhaps we should dive back into Spielberg’s catalogue and watch A.I.

Having said that, Ready Player One is a fun film and we will be taking our children to see it; ironically, perhaps, for the message that games are fun but sometimes, yes, you do need to turn them off. (It is definitely worth its 12 certificate, though, so parents of younger children be warned. And of course we’ll buy it on DVD, to catch another glimpse of our favourite gaming characters.)

(Which films do you think better address our posthuman future? Suggestions below, please!)
