
Fish fins are teaching us the secret to flexible robots and new shape-changing materials

By Francois Barthelat

Flying fish use their fins both to swim and glide through the air. Smithsonian Institution/Flickr

The big idea

Segmented hinges in the long, thin bones of fish fins are critical to the incredible mechanical properties of fins, and this design could inspire improved underwater propulsion systems, new robotic materials and even new aircraft designs.

A pink and pale colored fish tail with thin lines radiating out from the base.
The thin lines in the tail of this red snapper are rays that allow the fish to control the shape and stiffness of its fins.
Francois Barthelat, CC BY-ND

Fish fins are not simple membranes that fish flap right and left for propulsion. They probably represent one of the most elegant ways to interact with water. Fins are flexible enough to morph into a wide variety of shapes, yet they are stiff enough to push water without collapsing.

The secret is in the structure: Most fish have rays – long, bony spikes that stiffen the thin membranes of collagen that make up their fins. Each of these rays is made of two stiff rows of small bone segments surrounding a softer inner layer. Biologists have long known that fish can change the shape of their fins using muscles and tendons that push or pull on the base of each ray, but very little research has been done looking specifically at the mechanical benefits of the segmented structure.


A pufferfish uses its small but efficient fins to swim against, and maneuver in, a strong current.

To study the mechanical properties of segmented rays, my colleagues and I used theoretical models and 3D-printed fins to compare segmented rays with rays made of a non-segmented flexible material.

We showed that the numerous small, bony segments act as hinge points, making it easy to flex the two bony rows in the ray side to side. This flexibility allows the muscles and tendons at the base of rays to morph a fin using minimal amounts of force. Meanwhile, the hinge design makes it hard to deform the ray along its length. This prevents fins from collapsing when they are subjected to the pressure of water during swimming. In our 3D-printed rays, the segmented designs were four times easier to morph than continuous designs while maintaining the same stiffness.
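
As a rough illustration of this trade-off, the toy calculation below (a simple beam model with assumed, illustrative parameters – not our published analysis) compares the tip force needed to produce the same side-to-side deflection in a continuous elastic beam and in a chain of rigid segments joined by soft hinges.

```python
import numpy as np

# Toy model with illustrative (assumed) parameters: how much tip force is
# needed to deflect a fin-ray-like cantilever sideways by a fixed amount?
E, I = 2.0e9, 1.0e-14      # assumed elastic modulus (Pa) and second moment of area (m^4)
L, delta = 0.05, 0.005     # ray length (m) and desired tip deflection (m)

# (a) Continuous beam: classic cantilever stiffness, F = 3*E*I*delta / L^3.
F_continuous = 3 * E * I * delta / L**3

# (b) Chain of n rigid links with a soft torsional spring k at each hinge.
# A tip force F loads the hinge at distance x from the base with moment
# F*(L - x), rotating it by F*(L - x)/k; summing each hinge's contribution
# to the tip deflection gives F = k*delta / sum((L - x)^2).
n, k = 20, 1.0e-4          # assumed segment count and hinge stiffness (N*m/rad)
x = np.arange(n) * (L / n)               # hinge positions along the ray
F_segmented = k * delta / np.sum((L - x) ** 2)

print(f"continuous beam: {F_continuous:.2e} N")
print(f"segmented chain: {F_segmented:.2e} N")   # far less force to morph
```

The exact ratio depends entirely on the assumed hinge stiffness; the point is simply that hinges concentrate compliance in the side-to-side bending direction, while the two bony rows, loaded along their length, keep the ray from collapsing.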

Photos of a straight ray and a bent ray showing how pulling on one half and pushing on the other half of a ray will make it bend.
The segmented nature of fish fin rays allows them to be easily morphed by pulling at the bottom of the ray.
Francois Barthelat, CC BY-ND

Why it matters

Morphing materials – materials whose shape can be changed – come in two varieties. Some are very flexible – like hydrogels – but these materials collapse easily when you subject them to external forces. Morphing materials can also be very stiff – like some aerospace composites – but it takes a lot of force to make small changes in their shape.

Image showing how 3D printed continuous and segmented fin rays bend.
It requires much more force to control the shape of a continuous 3D-printed ray (top two images) than to morph a segmented ray (bottom two images).
Francois Barthelat, CC BY-ND

The segmented structure design of fish fins overcomes this functional trade-off by being highly flexible as well as strong. Materials based on this design could be used in underwater propulsion and improve the agility and speed of fish-inspired submarines. They could also be incredibly valuable in soft robotics and allow tools to change into a wide variety of shapes while still being able to grasp objects with a lot of force. Segmented ray designs could even benefit the aerospace field. Morphing wings that could radically change their geometry, yet carry large aerodynamic forces, could revolutionize the way aircraft take off, maneuver and land.

What still isn’t known

While this research goes a long way in explaining how fish fins work, the mechanics at play when fish fins are bent far from their normal positions are still a bit of a mystery. Collagen tends to get stiffer the more deformed it gets, and my colleagues and I suspect that this stiffening response – together with how collagen fibers are oriented within fish fins – improves the mechanical performance of the fins when they are highly deformed.

What’s next

I am fascinated by the biomechanics of natural fish fins, but my ultimate goal is to develop new materials and devices that are inspired by their mechanical properties. My colleagues and I are currently developing proof-of-concept materials that we hope will convince a broader range of engineers in academia and the private sector that fish fin-inspired designs can provide improved performance for a variety of applications.


Francois Barthelat does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

The social animals that are inspiring new behaviours for robot swarms

By Edmund Hunt, University of Bristol

From flocks of birds to fish schools in the sea, or towering termite mounds, many social groups in nature exist together to survive and thrive. This cooperative behaviour can be used by engineers as “bio-inspiration” to solve practical human problems, and by computer scientists studying swarm intelligence.

“Swarm robotics” took off in the early 2000s, an early example being the “s-bot” (short for swarm-bot). This is a fully autonomous robot that can perform basic tasks including navigation and the grasping of objects, and which can self-assemble into chains to cross gaps or pull heavy loads. More recently, “TERMES” robots have been developed as a concept in construction, and the “CoCoRo” project has developed an underwater robot swarm that functions like a school of fish that exchanges information to monitor the environment. So far, we’ve only just begun to explore the vast possibilities that animal collectives and their behaviour can offer as inspiration to robot swarm design.

Swarm behaviour in birds – or robots designed to mimic them?
EyeSeeMicrostock/Shutterstock

Robots that can cooperate in large numbers could achieve things that would be difficult or even impossible for a single entity. Following an earthquake, for example, a swarm of search and rescue robots could quickly explore multiple collapsed buildings looking for signs of life. Threatened by a large wildfire, a swarm of drones could help emergency services track and predict the fire’s spread. Or a swarm of floating robots (“Row-bots”) could nibble away at oceanic garbage patches, powered by plastic-eating bacteria.

A future where floating robots powered by plastic-eating bacteria could tackle ocean waste.
Shutterstock

Bio-inspiration in swarm robotics usually starts with social insects – ants, bees and termites – because colony members are highly related, which favours impressive cooperation. Three further characteristics appeal to researchers: robustness, because individuals can be lost without affecting performance; flexibility, because social insect workers are able to respond to changing work needs; and scalability, because a colony’s decentralised organisation is sustainable with 100 workers or 100,000. These characteristics could be especially useful for doing jobs such as environmental monitoring, which requires coverage of huge, varied and sometimes hazardous areas.

Social learning

Beyond social insects, other species and behavioural phenomena in the animal kingdom offer inspiration to engineers. A growing area of biological research is in animal cultures, where animals engage in social learning to pick up behaviours that they are unlikely to innovate alone. For example, whales and dolphins can have distinctive foraging methods that are passed down through the generations. This includes forms of tool use – dolphins have been observed breaking off marine sponges to protect their beaks as they go rooting around for fish, like a person might put a glove over a hand.

Bottlenose dolphin playing with a sponge. Some have learned to use them to help them catch fish.
Yann Hubert/Shutterstock

Forms of social learning and artificial robotic cultures, perhaps using forms of artificial intelligence, could be very powerful in adapting robots to their environment over time. For example, assistive robots for home care could adapt to human behavioural differences in different communities and countries over time.

Robot (or animal) cultures, however, depend on learning abilities that are costly to develop, requiring a larger brain – or, in the case of robots, a more advanced computer. But the value of the “swarm” approach is to deploy robots that are simple, cheap and disposable. Swarm robotics exploits the reality of emergence (“more is different”) to create social complexity from individual simplicity. A more fundamental form of “learning” about the environment is seen in nature – in sensitive developmental processes – which do not require a big brain.

‘Phenotypic plasticity’

Some animals can change behavioural type, or even develop different forms, shapes or internal functions, within the same species, despite having the same initial “programming”. This is known as “phenotypic plasticity” – where the genes of an organism produce different observable results depending on environmental conditions. Such flexibility can be seen in the social insects, but sometimes even more dramatically in other animals.

Most spiders are decidedly solitary, but in about 20 of 45,000 spider species, individuals live in a shared nest and capture food on a shared web. These social spiders benefit from having a mixture of “personality” types in their group, for example bold and shy.

Social spiders (Stegodyphus) spin collective webs in Addo Elephant Park, South Africa.
PicturesofThings/Shutterstock

My research identified a flexibility in behaviour where shy spiders would step into a role vacated by absent bold nestmates. This is necessary because the spider colony needs a balance of bold individuals to encourage collective predation, and shyer ones to focus on nest maintenance and parental care. Robots could be programmed with adjustable risk-taking behaviour, sensitive to group composition, with bolder robots entering into hazardous environments while shyer ones know to hold back. This could be very helpful in mapping a disaster area such as Fukushima, including its most dangerous parts, while avoiding too many robots in the swarm being damaged at once.
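
A hypothetical sketch of how such adjustable risk-taking might be programmed (the target mix, trait values and role names below are invented for illustration):

```python
import random

# Hypothetical sketch: each robot carries a "boldness" trait, and when bold
# nestmates are lost, the shyest remaining robots with the highest boldness
# step into the vacated exploratory role.
TARGET_BOLD_FRACTION = 0.3   # assumed desired mix of bold to shy robots

def reassign_roles(robots):
    """robots: list of dicts with 'boldness' (0..1) and 'role' keys."""
    robots.sort(key=lambda r: r["boldness"], reverse=True)
    n_bold = max(1, round(TARGET_BOLD_FRACTION * len(robots)))
    for i, robot in enumerate(robots):
        robot["role"] = "explore" if i < n_bold else "maintain"
    return robots

swarm = [{"boldness": random.random(), "role": "maintain"} for _ in range(10)]
reassign_roles(swarm)                                   # boldest 30% explore
swarm = [r for r in swarm if r["role"] != "explore"]    # bold robots are lost
reassign_roles(swarm)                                   # shyer robots step up
print([round(r["boldness"], 2) for r in swarm if r["role"] == "explore"])
```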

The ability to adapt

Cane toads were introduced in Australia in the 1930s for pest control, and have since become an invasive species themselves. In new areas cane toads are seen to be somewhat social. One reason for their growth in numbers is that they are able to adapt to a wide temperature range, a form of physiological plasticity. Swarms of robots with the capability to switch power consumption mode, depending on environmental conditions such as ambient temperature, could be considerably more durable if we want them to function autonomously for the long term. For example, if we want to send robots off to map Mars then they will need to cope with temperatures that can swing from -150°C at the poles to 20°C at the equator.

Cane toads can adapt to temperature changes.
Radek Ziemniewicz/Shutterstock

In addition to behavioural and physiological plasticity, some organisms show morphological (shape) plasticity. For example, some bacteria change their shape in response to stress, becoming elongated and so more resilient to being “eaten” by other organisms. If swarms of robots can combine together in a modular fashion and (re)assemble into more suitable structures this could be very helpful in unpredictable environments. For example, groups of robots could aggregate together for safety when the weather takes a challenging turn.

Whether it’s the “cultures” developed by animal groups that are reliant on learning abilities, or the more fundamental ability to change “personality”, internal function or shape, swarm robotics still has plenty of mileage left when it comes to drawing inspiration from nature. We might even wish to mix and match behaviours from different species, to create robot “hybrids” of our own. Humanity faces challenges ranging from climate change affecting ocean currents, to a growing need for food production, to space exploration – and swarm robotics can play a decisive part given the right bio-inspiration.

Edmund Hunt, EPSRC Doctoral Prize Fellow, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Robots guarded Buddha’s relics in a legend of ancient India

Two small figures guard the table holding the Buddha’s relics. Are they spearmen, or robots? British Museum, CC BY-NC-SA

By Adrienne Mayor

As early as Homer, more than 2,500 years ago, Greek mythology explored the idea of automatons and self-moving devices. By the third century B.C., engineers in Hellenistic Alexandria, in Egypt, were building real mechanical robots and machines. And such science fictions and historical technologies were not unique to Greco-Roman culture.

In my recent book “Gods and Robots,” I explain that many ancient societies imagined and constructed automatons. Chinese chronicles tell of emperors fooled by realistic androids and describe artificial servants crafted in the second century by the female inventor Huang Yueying. Techno-marvels, such as flying war chariots and animated beings, also appear in Hindu epics. One of the most intriguing stories from India tells how robots once guarded Buddha’s relics. As fanciful as it might sound to modern ears, this tale has a strong basis in links between ancient Greece and ancient India.

The story is set in the time of kings Ajatasatru and Asoka. Ajatasatru, who reigned from 492 to 460 B.C., was recognized for commissioning new military inventions, such as powerful catapults and a mechanized war chariot with whirling blades. When Buddha died, Ajatasatru was entrusted with defending his precious remains. The king hid them in an underground chamber near his capital, Pataliputta (now Patna) in northeastern India.

A sculpture depicting the distribution of the Buddha’s relics.
Los Angeles County Museum of Art/Wikimedia Commons

Traditionally, statues of giant warriors stood on guard near treasures. But in the legend, Ajatasatru’s guards were extraordinary: They were robots. In India, automatons or mechanical beings that could move on their own were called “bhuta vahana yanta,” or “spirit movement machines” in Pali and Sanskrit. According to the story, it was foretold that Ajatasatru’s robots would remain on duty until a future king would distribute Buddha’s relics throughout the realm.

Ancient robots and automatons

A statue of Visvakarman, the engineer of the universe.
Suraj Belbase/Wikimedia Commons, CC BY-SA

Hindu and Buddhist texts describe the automaton warriors whirling like the wind, slashing intruders with swords, recalling Ajatasatru’s war chariots with spinning blades. In some versions the robots are driven by a water wheel or made by Visvakarman, the Hindu engineer god. But the most striking version came by a tangled route to the “Lokapannatti” of Burma – Pali translations of older, lost Sanskrit texts, only known from Chinese translations, each drawing on earlier oral traditions.

In this tale, many “yantakara,” robot makers, lived in the Western land of the “Yavanas,” Greek-speakers, in “Roma-visaya,” the Indian name for the Greco-Roman culture of the Mediterranean world. The Yavanas’ secret technology of robots was closely guarded. The robots of Roma-visaya carried out trade and farming and captured and executed criminals.

Robot makers were forbidden to leave or reveal their secrets – if they did, robotic assassins pursued and killed them. Rumors of the fabulous robots reached India, inspiring a young artisan of Pataliputta, Ajatasatru’s capital, who wished to learn how to make automatons.

In the legend, the young man of Pataliputta finds himself reincarnated in the heart of Roma-visaya. He marries the daughter of the master robot maker and learns his craft. One day he steals plans for making robots, and hatches a plot to get them back to India.

Certain of being slain by killer robots before he could make the trip himself, he slits open his thigh, inserts the drawings under his skin and sews himself back up. Then he tells his son to make sure his body makes it back to Pataliputta, and starts the journey. He’s caught and killed, but his son recovers his body and brings it to Pataliputta.

Once back in India, the son retrieves the plans from his father’s body, and follows their instructions to build the automated soldiers for King Ajatasatru to protect Buddha’s relics in the underground chamber. Well hidden and expertly guarded, the relics – and robots – fell into obscurity.

The sprawling Maurya Empire in about 250 B.C.
Avantiputra7/Wikimedia Commons, CC BY-SA

Two centuries after Ajatasatru, Asoka ruled the powerful Mauryan Empire in Pataliputta, 273-232 B.C. Asoka constructed many stupas to enshrine Buddha’s relics across his vast kingdom. According to the legend, Asoka had heard the legend of the hidden relics and searched until he discovered the underground chamber guarded by the fierce android warriors. Violent battles raged between Asoka and the robots.

In one version, the god Visvakarman helped Asoka to defeat them by shooting arrows into the bolts that held the spinning constructions together; in another tale, the old engineer’s son explained how to disable and control the robots. At any rate, Asoka ended up commanding the army of automatons himself.

Exchange between East and West

Is this legend simply fantasy? Or could the tale have coalesced around early cultural exchanges between East and West? The story clearly connects the mechanical beings defending Buddha’s relics to automatons of Roma-visaya, the Greek-influenced West. How ancient is the tale? Most scholars assume it arose in medieval Islamic and European times.

But I think the story could be much older. The historical setting points to technological exchange between Mauryan and Hellenistic cultures. Contact between India and Greece began in the fifth century B.C., a time when Ajatasatru’s engineers created novel war machines. Greco-Buddhist cultural exchange intensified after Alexander the Great’s campaigns in northern India.

Inscriptions in Greek and Aramaic on a monument originally erected by King Asoka at Kandahar, in what is today Afghanistan.
World Imaging/Wikimedia Commons

In 300 B.C., two Greek ambassadors, Megasthenes and Deimachus, resided in Pataliputta, which boasted Greek-influenced art and architecture and was the home of the legendary artisan who obtained plans for robots in Roma-visaya. Grand pillars erected by Asoka are inscribed in ancient Greek and name Hellenistic kings, demonstrating Asoka’s relationship with the West. Historians know that Asoka corresponded with Hellenistic rulers, including Ptolemy II Philadelphus in Alexandria, whose spectacular procession in 279 B.C. famously displayed complex animated statues and automated devices.

Historians report that Asoka sent envoys to Alexandria, and Ptolemy II sent ambassadors to Asoka in Pataliputta. It was customary for diplomats to present splendid gifts to show off cultural achievements. Did they bring plans or miniature models of automatons and other mechanical devices?

I cannot hope to pinpoint the original date of the legend, but it is plausible that the idea of robots guarding Buddha’s relics melds both real and imagined engineering feats from the time of Ajatasatru and Asoka. This striking legend is proof that the concepts of building automatons were widespread in antiquity and reveals the universal and timeless link between imagination and science.

Adrienne Mayor is the author of:

Gods and Robots: Myths, Machines, and Ancient Dreams of Technology

Princeton University Press provides funding as a member of The Conversation US.

Adrienne Mayor, Research Scholar, Classics and History and Philosophy of Science, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Technology and robots will shake labour policies in Asia and the world

Developing countries must begin seriously considering how technological changes will impact labour trends. KC Jan/Shutterstock

By Asit K. Biswas, University of Glasgow and Kris Hartley, The Education University of Hong Kong

In the 21st century, governments cannot ignore how changes in technology will affect employment and political stability.

The automation of work – principally through robotics, artificial intelligence (AI) and the Internet of things (IoT), collectively known as the Fourth Industrial Revolution – will provide an unprecedented boost to productivity and profit. It will also threaten the stability of low- and mid-skilled jobs in many developing and middle-income countries.

From labour to automation

Developing countries must begin seriously considering how technological changes will impact labour trends. Technology now looms as large a disruptive force as the whims of global capital, if not larger.

China has for decades increased its global contribution to manufacturing value-added goods, now enjoying a competitive position in Apple products, household appliances, and technology. In the process, the country has made historic progress lifting its citizens out of poverty.

China has accomplished this by raising worker productivity through technology and up-skilling (improving or acquiring new skills), and higher wages have predictably followed.

However, this trend is also compelling manufacturers to relocate some low-skill production to Southeast Asia. US-China trade disputes could exacerbate this trend.

Relocation of manufacturing activity has been an economic boon for workers in countries like Vietnam and Indonesia. However, the race among global manufacturers to procure the cheapest labour brings no assurances of long-term growth and prosperity to any one country.

Governments in developing countries must parlay the proceeds of ephemeral labour cost advantages into infrastructure investment, industrial upgrading and worker upskilling. China has done this to better effect than many.

The growth in sophistication and commercial feasibility of robotics, IoT, and other automation technologies will impact jobs at nearly every skill level. More broadly, the fallout from technological advancement may replicate the disruptive geographic shifts in production once resulting from labour cost arbitrage.

Political blowback

After many decades of globalisation, a borderless economy has emerged in which capital and production move freely to locations with the greatest investment returns and lowest cost structures. This has prompted a pattern of global economic restructuring, generating unprecedented growth opportunities for developing countries.

Workers have been rewarded for their personal efforts in education and skill development, while millions have been lifted from poverty.

Given advancements in technology and the associated impact on livelihoods, it is time to consider how the next chapter of global development will play out politically. Automation will be a highly disruptive force by most economic, social, and political measures. Few countries – developed or otherwise – will escape this challenge.

Some Western countries, including the United States, are already experiencing a populist political wave fuelled in part by the economic grievances of workers displaced from once stable, middle-class manufacturing jobs. Similar push-back may erupt in countries already embroiled in nationalist politics, including India.

Growing populations and the automation of work will soon mix to create unemployment crises, with serious implications for domestic political stability.

As education systems flood the employment market with scores of ambitious graduates, one of the greatest challenges governments face is how to generate well-paying jobs.

Further, vulnerable workers will include not only new entrants but also experienced workers, some of whom are continuously and aggressively up-skilling in anticipation of more lucrative employment.

In India, over 1 million people enter the working-age population every month. More than 8 million new jobs are needed each year to maintain current employment levels.

India’s young population is becoming increasingly pessimistic about their employment prospects. Although official statistics are unreliable, as a large percentage of work occurs in the informal sector in positions such as domestic workers, coolies, street vendors, and transient positions lacking contracts, indications are that India may be facing the prospect of jobless growth.

Insufficient skill levels in much of the workforce are impeding India’s effort to accelerate growth in high-productivity jobs. Thus, the country’s large-scale manufacturers, both domestically and internationally owned, are turning to robots to ensure consistent, reliable, and efficient production.

Urbanisation also adds to India’s employment challenge. The promise of higher-paying jobs has lured many rural workers into urban areas, but these workers are often illiterate and lack sufficient skills. This was not always a concern, as these workers could find menial factory jobs. Robots are now doing much of the low-skilled work that migrant workers were once hired to do.

Towards a future of stable livelihoods

The lingering socio-economic imperative for many governments is to replace eliminated jobs. According to the World Economic Forum, “inequality represents the greatest societal concern associated with the Fourth Industrial Revolution.”

However, the WEF and others have given little useful guidance on how to address this challenge. How should the economy absorb multitudes of variously skilled workers displaced by technology?

People aspire to economic and social mobility more than ever before, particularly as they observe wealth rising ostentatiously all around them – on the streets, in the news, and among seemingly lucky friends and acquaintances. Sadly, the aspirations of most will go unfulfilled.

One way forward is said to be through up-skilling by retraining workers to operate and maintain technology systems. However, this seems to be a paradox, as workers would be training robots to eventually take jobs held by humans. If a major driver of automation is reduction or elimination of labour costs, one cannot expect all displaced workers to enjoy stable and continuing employment opportunities.

Despite political promises about employment growth from high-tech industries and the technological transformation of primary sectors, the tension between the drive for technology-based efficiency and the loss of jobs is undeniable and may have no clear resolution.

Societies have reacted to global economic restructuring in discouraging ways, indulging in nationalism, racism, militarism, and arbitrary economic protectionism. Populist opportunists and foul-tempered troglodytes have ridden reactionary rhetoric into positions of political power, raging against what former White House chief strategist Steve Bannon calls the “liberal postwar international order.” At the same time, left-leaning solutions such as universal basic income face significant fiscal and political headwinds.

The 21st century will see increased disruptions to once-stable work life, due to technological progress and the continuing liberalisation of global capital and production. Early indications about how countries will respond – haphazardly and with no clear long-term strategy – are not encouraging.

Asit K. Biswas, Visiting professor, University of Glasgow and Kris Hartley, Assistant professor, The Education University of Hong Kong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How robots are helping doctors save lives in the Canadian North

Remote presence technology enables a medic to perform an ultrasound at the scene of an accident.
(University of Saskatchewan), Author provided

Ivar Mendez, University of Saskatchewan

It is the middle of the winter and a six-month-old child is brought with acute respiratory distress to a nursing station in a remote community in the Canadian North.

The nurse realizes that the child is seriously ill and contacts a pediatric intensivist located in a tertiary care centre 900 kilometres away. The intensivist uses her tablet to activate a remote presence robot installed in the nursing station and asks the robot to go to the assessment room.

The robot autonomously navigates the nursing station corridors and arrives at the assessment room two minutes later. With the help of the robot’s powerful cameras, the doctor “sees” the child and talks to the nurse and the parents to obtain the medical history. She uses the robot’s stethoscope to listen to the child’s chest, measures the child’s oxygen blood saturation with a pulse oximeter and performs an electrocardiogram.

With the robot’s telestrator (an electronic device which enables the user to write and draw freehand over a video image) she helps the nurse to start an intravenous line and commences therapy to treat the child’s life-threatening condition.

This is not science fiction. This remote presence technology is currently in use in Saskatchewan, Canada — to provide care to acutely ill children living in remote Northern communities.

Treating acutely ill children

Advances in telecommunication, robotics, medical sensor technology and artificial intelligence (AI) have opened the door for solutions to the challenge of delivering remote, real-time health care to underserviced rural and remote populations.

A team uses a remote presence robot to see a patient in the emergency room.
(University of Saskatchewan), Author provided

In Saskatchewan, we have established a remote medicine program that focuses on the care of the most vulnerable populations — such as acutely ill children, pregnant women and the elderly.

We have demonstrated that with this technology about 70 per cent of acutely ill children can be successfully treated in their own communities. In similar communities without this technology, all acutely ill children need to be transported to a tertiary care centre.

We have also shown that this technology prevents delays in diagnosis and treatment and results in substantial savings to the health-care system.

Prenatal ultrasounds for Indigenous women

Remote communities often lack access to diagnostic ultrasonography services. This gap disproportionally affects Indigenous pregnant women in the Canadian North and results in increases in maternal and newborn morbidity and mortality.

We are pioneering the use of an innovative tele-robotic ultrasound system that allows an expert sonographer to perform a diagnostic ultrasound study, in real time, in a distant location.

Research shows that robotic ultrasonography is comparable to standard sonography and is accepted by most patients.

The first tele-robotic ultrasonography systems have been deployed to two northern Saskatchewan communities and are currently performing prenatal ultrasounds.

Emergency room trauma assessment

Portable remote presence devices that use available cellular networks could also be used in emergency situations, such as trauma assessment at the scene of an accident or transport of a victim to hospital.

For example, emergency physicians or trauma surgeons could perform real-time ultrasonography of the abdomen, thorax and heart in critically injured patients, identify life-threatening injuries and start life-saving treatment.

Wearable remote presence devices such as Google Glass are the next step in remote presence health care for underserviced populations.

For example, a local nurse and a specialist in a tertiary care centre thousands of kilometres away could together assess an acutely ill patient in an emergency room in a remote community through the nurse’s eyes.

A nurse examines a patient with Google Glass.
(University of Saskatchewan), Author provided

Although remote presence technology may be applied initially to emergency situations in remote locations, its major impact may be in the delivery of primary health care. We can imagine the use of mobile remote presence devices by health professionals in a wide range of scenarios — from home-care visits to follow-up mental health sessions — in which access to medical expertise in real time would be just a computer click away.

A paradigm shift in health-care delivery

The current model of centralized health care, where the patient has to go to a hospital or a clinic to receive urgent or elective medical care, is inefficient and costly. Patients have to wait many hours in emergency rooms. Hospitals run at overcapacity. Delays in diagnosis and treatment cause poor outcomes or even death.

Underserviced rural and remote communities and the most vulnerable populations such as children and the elderly are the most affected by this centralized model.

Remote presence technologies have the potential to shift this — so that we can deliver medical care to a patient anywhere. In this decentralized model, patients requiring urgent or elective medical care will be seen, diagnosed and treated in their own communities or homes and patients requiring hospitalization will be triaged without delay.

This technology could have important applications in low-resource settings. Cellular network signals around the globe and rapidly increasing bandwidth will provide the telecommunication platform for a wide range of mobile applications.

Low-cost, dedicated remote-presence devices will increase access to medical expertise for anybody living in a geographical area with a cellphone signal. This access will be especially beneficial to people in developing countries where medical expertise is insufficient or not available.

The future of medical care is not in building more or bigger hospitals but in harnessing the power of technology to monitor and reach patients wherever they are — to preserve life, ensure wellness and speed up diagnosis and treatment.

Ivar Mendez, Fred H. Wigmore Professor and Unified Head of the Department of Surgery, University of Saskatchewan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Montréal Declaration: Why we must develop AI responsibly

Yoshua Bengio, Université de Montréal

I have been doing research on intelligence for 30 years. Like most of my colleagues, I did not get involved in the field with the aim of producing technological objects, but because I have an interest in the abstract nature of the notion of intelligence. I wanted to understand intelligence. That’s what science is: understanding.

However, when a group of researchers ends up understanding something new, that knowledge can be exploited for beneficial or harmful purposes.

That’s where we are — at a turning point where the science of artificial intelligence is emerging from university laboratories. For the past five or six years, large companies such as Facebook and Google have become so interested in the field that they are putting hundreds of millions of dollars on the table to buy AI firms and then develop this expertise internally.

The progression in AI has since been exponential. Businesses are very interested in using this knowledge to develop new markets and products and to improve their efficiency.

So, as AI spreads in society, there is an impact. It’s up to us to choose how things play out. The future is in our hands.

Killer robots, job losses

From the get-go, the issue that has concerned me is that of lethal autonomous weapons, also known as killer robots.

While there is a moral question because machines have no understanding of the human, psychological and moral context, there is also a security question because these weapons could destabilize the world order.

Another issue that quickly surfaced is that of job losses caused by automation. We asked the question: Why? Who are we trying to bring relief to and from what? The trucker isn’t happy on the road? He should be replaced by… nobody?

We scientists seemingly can’t do much. Market forces determine which jobs will be eliminated or those where the workload will be lessened, according to the economic efficiency of the automated replacements. But we are also citizens who can participate in a unique way in the social and political debate on these issues precisely because of our expertise.

Computer scientists are concerned with the issue of jobs. That is not because they will suffer personally. In fact, the opposite is true. But they feel they have a responsibility and they don’t want their work to potentially put millions of people on the street.

Revising the social safety net

Strong support therefore exists among computer scientists — especially those in AI — for a revision of the social safety net to allow for a sort of guaranteed wage, or what I would call a form of guaranteed human dignity.

The objective of technological innovation is to reduce human misery, not increase it.

It is also not meant to increase discrimination and injustice. And yet, AI can contribute to both.

Discrimination is not so much due, as we sometimes hear, to the fact that AI was conceived by men because of the alarming lack of women in the technology sector. It is mostly due to AI learning from data that reflects people’s behaviour. And that behaviour is, unfortunately, biased.

In other words, a system that relies on data that comes from people’s behaviour will have the same biases and discrimination as the people in question. It will not be “politically correct.” It will not act according to the moral notions of society, but rather according to common denominators.

Society is discriminatory and these systems, if we’re not careful, could perpetuate or increase that discrimination.

There could also be what is called a feedback loop. For example, police forces use this kind of system to identify neighbourhoods or areas that are more at-risk. They will send in more officers… who will report more crimes. So the statistics will strengthen the biases of the system.
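
A tiny simulation makes the loop concrete. The allocation rule below is an assumption chosen only for illustration, not a model of any real police force: two districts have identical true crime rates, reported crime scales with police presence, and managers keep shifting officers toward the district with more reports.

```python
# Illustrative feedback-loop simulation (assumed dynamics, not real data).
true_crime = [100.0, 100.0]       # both districts are equally risky
officers = [55.0, 45.0]           # a slightly uneven starting deployment

for step in range(5):
    # Reported crime scales with how many officers are there to observe it.
    reports = [true_crime[i] * officers[i] / 100.0 for i in range(2)]
    # Management shifts 10% of the quieter district's officers to the "hot spot".
    hi, lo = (0, 1) if reports[0] >= reports[1] else (1, 0)
    shift = 0.10 * officers[lo]
    officers[hi] += shift
    officers[lo] -= shift
    print(f"step {step}: officers = {officers[0]:.1f} vs {officers[1]:.1f}")
# The gap widens every step even though both districts are equally risky.
```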

The good news is that research is currently being done to develop algorithms that will minimize discrimination. Governments, however, will have to bring in rules to force businesses to use these techniques.

Saving lives

There is also good news on the horizon. The medical field will be one of those most affected by AI — and it’s not just a matter of saving money.

Doctors are human and therefore make mistakes. As we develop systems trained on more data, fewer mistakes will occur. Such systems are more precise than the best doctors. Doctors are already using these tools so they don’t miss important elements, such as cancerous cells that are difficult to detect in a medical image.

There is also the development of new medications. AI can do a better job of analyzing the vast amount of data (more than what a human would have time to digest) that has been accumulated on drugs and other molecules. We’re not there yet, but the potential is there, as is more efficient analysis of a patient’s medical file.

We are headed toward tools that will allow doctors to make links that otherwise would have been very difficult to make and will enable physicians to suggest treatments that could save lives.

The chances of the medical system being completely transformed within 10 years are very high and, obviously, the importance of this progress for everyone is enormous.

I am not concerned about job losses in the medical sector. We will always need the competence and judgment of health professionals. However, we need to strengthen social norms (laws and regulations) to allow for the protection of privacy (patients’ data should not be used against them) as well as to aggregate that data to enable AI to be used to heal more people and in better ways.

The solutions are political

Because of all these issues and others to come, the Montréal Declaration for Responsible Development of Artificial Intelligence is important. It was signed Dec. 4 at the Society for Arts and Technology in the presence of about 500 people.

It was forged on the basis of vast consensus. We consulted people on the internet and in bookstores and gathered opinion in all kinds of disciplines. Philosophers, sociologists, jurists and AI researchers took part in the process of creation, so all forms of expertise were included.

There were several versions of this declaration. The first draft was at a forum on the socially responsible development of AI organized by the Université de Montréal on Nov. 2, 2017.

That was the birthplace of the declaration.

Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner. Current laws are not always well adapted to these new situations.

And that’s where we get to politics.

The abuse of technology

Matters related to ethics or abuse of technology ultimately become political and therefore belong in the sphere of collective decisions.

How is society to be organized? That is political.

What is to be done with knowledge? That is political.

I sense a strong willingness on the part of provincial governments as well as the federal government to commit to socially responsible development.

Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.

Montréal has been at the forefront of this sense of awareness for the past two years. I also sense the same will in Europe, including France and Germany.

Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.

And in this debate, I have come to realize that society has given me a voice — that governments and the media were interested in what I had to say on these topics because of my role as a pioneer in the scientific development of AI.

So, for me, it is now more than a responsibility. It is my duty. I have no choice.

Yoshua Bengio, Professeur titulaire, Département d’informatique et de recherche opérationnelle, Université de Montréal

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Worried about AI taking over the world? You may be making some rather unscientific assumptions

Eleni Vasilaki, Professor of Computational Neuroscience, University of Sheffield


Phonlamai Photo/Shutterstock

Should we be afraid of artificial intelligence? For me, this is a simple question with an even simpler, two-letter answer: no. But not everyone agrees – many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end for humanity.

Clearly, your view on whether AI will take over the world will depend on whether you think it can develop intelligent behaviour surpassing that of humans – something referred to as “super intelligence”. So let’s take a look at how likely this is, and why there is much concern about the future of AI.

Humans tend to be afraid of what they don’t understand. Fear is often blamed for racism, homophobia and other sources of discrimination. So it’s no wonder it also applies to new technologies – they are often surrounded with a certain mystery. Some technological achievements seem almost unrealistic, clearly surpassing expectations and in some cases human performance.

No ghost in the machine

But let us demystify the most popular AI techniques, known collectively as “machine learning”. These allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky but the truth is it is all down to some rather mundane statistics.

The machine, which is a program, or rather an algorithm, is designed with the ability to discover relationships within the data it is provided. There are many different methods that allow us to achieve this. For example, we can present the machine with images of handwritten letters (a-z), one by one, and ask it to tell us which letter we are showing. We have already provided the possible answers – it can only be one of (a-z). At first the machine names a letter at random, and we correct it by providing the right answer. We have also programmed the machine to reconfigure itself so that, shown the same letter again, it is more likely to give the correct answer. As a consequence, the machine improves its performance over time and “learns” to recognise the alphabet.

In essence, we have programmed the machine to exploit common relationships in the data in order to achieve the specific task. For instance, all versions of “a” look structurally similar, but different to “b”, and the algorithm can exploit this. Interestingly, after the training phase, the machine can apply the obtained knowledge on new letter samples, for example written by a person whose handwriting the machine has never seen before.

We do give AI answers.
Chim/Shutterstock
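
For the curious, here is a minimal sketch of that training-and-correction loop, using the freely available scikit-learn library (our choice for illustration) and its handwritten digits (0-9) as a stand-in for letters (a-z):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Handwritten digits (0-9) stand in for the letters (a-z) described above.
X, y = load_digits(return_X_y=True)          # 8x8 images flattened to 64 numbers
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                  # learn from examples plus correct answers

# The machine now recognises digits from writers it has never seen before.
print(f"accuracy on unseen handwriting: {model.score(X_test, y_test):.2f}")
```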

Humans, however, are already good at reading. Perhaps a more interesting example is Google Deepmind’s artificial Go player, which has surpassed every human player at the game. It clearly learns in a way different to humans – playing more games against itself than any human could play in a lifetime. It has been specifically instructed to win, told that the actions it takes determine whether it wins or not, and told the rules of the game. By playing again and again it can discover the best action in each situation – inventing moves that no human has played before.
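
The self-play idea can be shown on a far smaller game. The sketch below is a simple tabular learner for the toy game Nim – not DeepMind’s method, which combines deep neural networks with tree search – and it is told only the rules and whether it won:

```python
import random
from collections import defaultdict

Q = defaultdict(float)           # value of taking `take` stones when `stones` remain
ALPHA, EPS, N_GAMES = 0.5, 0.2, 20000

def moves(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def choose(stones):
    if random.random() < EPS:                    # sometimes explore at random
        return random.choice(moves(stones))
    return max(moves(stones), key=lambda t: Q[(stones, t)])

for _ in range(N_GAMES):
    stones, history = 21, []
    while stones:                                # two "players" share one brain
        take = choose(stones)
        history.append((stones, take))
        stones -= take
    outcome = 1.0                                # whoever took the last stone wins
    for state_action in reversed(history):       # alternate the reward back up the game
        Q[state_action] += ALPHA * (outcome - Q[state_action])
        outcome = -outcome

# The learner typically rediscovers the classic strategy: from 21 stones,
# take 1 and leave your opponent a multiple of four.
print(max(moves(21), key=lambda t: Q[(21, t)]))
```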

Toddlers versus robots

Now does that make the AI Go player smarter than a human? Certainly not. AI is very specialised to particular types of tasks and it doesn’t display the versatility that humans do. Humans develop an understanding of the world over years that no AI has achieved or seems likely to achieve anytime soon.

AI is dubbed “intelligent” ultimately because it can learn. But even when it comes to learning, it is no match for humans. In fact, toddlers can learn by watching somebody solve a problem just once. An AI, on the other hand, needs tonnes of data and loads of tries to succeed at very specific problems, and it struggles to generalise its knowledge to tasks very different from those it was trained on. So while humans develop breathtaking intelligence rapidly in the first few years of life, the key concepts behind machine learning are not so different from what they were one or two decades ago.

Toddler brains are amazing.
Mcimage/Shutterstock

The success of modern AI is less due to a breakthrough in new techniques and more due to the vast amount of data and computational power available. Importantly, though, even an infinite amount of data won’t give AI human-like intelligence – we need to make significant progress on developing artificial “general intelligence” techniques first. Some approaches to doing this involve building a computer model of the human brain – which we’re not even close to achieving.

Ultimately, just because an AI can learn, it doesn’t really follow that it will suddenly learn all aspects of human intelligence and outsmart us. There is no simple definition of what human intelligence even is and we certainly have little idea how exactly intelligence emerges in the brain. But even if we could work it out and then create an AI that could learn to become more intelligent, that doesn’t necessarily mean that it would be more successful.

Personally, I am more concerned by how humans use AI. Machine learning algorithms are often thought of as black boxes, and little effort is made in pinpointing the specifics of the solution an algorithm has found. This is an important and frequently neglected aspect: we are often obsessed with performance and less with understanding. Understanding the solutions these systems have discovered matters, because only then can we evaluate whether they are correct or desirable.

If, for instance, we train our system in a wrong way, we can also end up with a machine that has learned relationships that do not hold in general. Say for instance that we want to design a machine to evaluate the ability of potential students in engineering. Probably a terrible idea, but let us follow it through for the sake of the argument. Traditionally, this is a male dominated discipline, which means that training samples are likely to be from previous male students. If we don’t make sure, for instance, that the training data are balanced, the machine might end up with the conclusion that engineering students are male, and incorrectly apply it to future decisions.
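
The trap is easy to reproduce in a few lines of code with synthetic data (everything below is invented for illustration): suitability depends only on a test score, yet a model trained on historically biased decisions learns to lean on the gender column too.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, illustrative data: suitability depends only on a test score,
# but the historical admission decisions we train on also favoured men.
rng = np.random.default_rng(0)
n = 2000
male = (rng.random(n) < 0.9).astype(float)            # past cohorts: 90% male
score = rng.normal(size=n)
admitted = (score + 0.5 * male > 0.5).astype(int)     # biased past decisions

X = np.column_stack([score, male])
model = LogisticRegression().fit(X, admitted)
print("weight on score: %.2f, weight on gender: %.2f" % tuple(model.coef_[0]))
# The nonzero gender weight shows the old bias is now baked into every
# prediction the model will make about future applicants.
```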

Machine learning and artificial intelligence are tools. They can be used in a right or a wrong way, like everything else. It is the way that they are used that should concern us, not the methods themselves. Human greed and human unintelligence scare me far more than artificial intelligence.

How robot math and smartphones led researchers to a drug discovery breakthrough


By Ian Haydon, University of Washington

Robotic movement can be awkward.

For us humans, a healthy brain handles all the minute details of bodily motion without demanding conscious attention. Not so for brainless robots – in fact, calculating robotic movement is its own scientific subfield.

My colleagues here at the University of Washington’s Institute for Protein Design have figured out how to apply an algorithm originally designed to help robots move to an entirely different problem: drug discovery. The algorithm has helped unlock a class of molecules known as peptide macrocycles, which have appealing pharmaceutical properties.

One small step, one giant leap

Roboticists who program movement conceive of it in what they call “degrees of freedom.” Take a metal arm, for instance. The elbow, wrist and knuckles are movable and thus contain degrees of freedom. The forearm, upper arm and individual sections of each finger do not. If you want to program an android to reach out and grasp an object or take a calculated step, you need to know what its degrees of freedom are and how to manipulate them.

The more degrees of freedom a limb has, the more complex its potential motions. The math required to direct even simple robotic limbs is surprisingly abstruse; Ferdinand Freudenstein, a father of the field, once called the calculations underlying the movement of a limb with seven joints “the Mount Everest of kinematics.”
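
The “easy” direction, forward kinematics, shows why: given the joint angles of a planar arm, the fingertip position follows from a few lines of trigonometry (a minimal sketch, not Freudenstein’s formulation). The Everest is the inverse problem – choosing seven angles that place the fingertip at a desired point.

```python
import math

def fingertip(angles, lengths):
    """Forward kinematics of a planar arm: each joint is one degree of freedom."""
    heading, x, y = 0.0, 0.0, 0.0
    for theta, length in zip(angles, lengths):
        heading += theta                       # rotations compose along the chain
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# A seven-jointed limb, like the one in Freudenstein's "Mount Everest" problem.
print(fingertip(angles=[0.3] * 7, lengths=[1.0] * 7))
```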

Freudenstein developed his kinematics equations at the dawn of the computer era in the 1950s. Since then, roboticists have increasingly relied on algorithms to solve these complex kinematic puzzles. One algorithm in particular – known as “generalized kinematic closure” – bested the seven joint problem, allowing roboticists to program fine control into mechanical hands.

Molecular biologists took notice.

Many molecules inside living cells can be conceived of as chains with pivot points, or degrees of freedom, akin to tiny robotic arms. These molecules flex and twist according to the laws of chemistry. Peptides and their elongated cousins, proteins, often must adopt precise three-dimensional shapes in order to function. Accurately predicting the complex shapes of peptides and proteins allows scientists like me to understand how they work.

Mastering macrocycles

While most peptides form straight chains, a subset, known as macrocycles, form rings. This shape offers distinct pharmacological advantages. Ringed structures are less flexible than floppy chains, making macrocycles extremely stable. And because they lack free ends, some can resist rapid degradation in the body – an otherwise common fate for ingested peptides.

Macrocycles have a circular ‘main chain’ (shown as thick lines) and many ‘side chains’ (shown as thin lines). The macrocycle on the left — cyclosporin — evolved in a fungus. The one on the right was designed on a computer. Credit: Ian Haydon/Institute for Protein Design

Natural macrocycles such as cyclosporin are among the most potent therapeutics identified to date. They combine the stability benefits of small-molecule drugs, like aspirin, and the specificity of large antibody therapeutics, like Herceptin. Experts in the pharmaceutical industry regard this category of medicinal compounds as “attractive, albeit underappreciated.”

“There is a huge diversity of macrocycles in nature – in bacteria, plants, some mammals,” said Gaurav Bhardwaj, a lead author of the new report in Science, “and nature has evolved them for their own particular functions.” Indeed, many natural macrocycles are toxins. Cyclosporin, for instance, displays anti-fungal activity yet also acts as a powerful immunosuppressant in the clinic, making it useful as a treatment for rheumatoid arthritis or to prevent rejection of transplanted organs.

A popular strategy for producing new macrocycle drugs involves grafting medicinally useful features onto otherwise safe and stable natural macrocycle backbones. “When it works, it works really well, but there’s a limited number of well-characterized structures that we can confidently use,” said Bhardwaj. In other words, drug designers have only had access to a handful of starting points when making new macrocycle medications.

To create additional reliable starting points, his team used generalized kinematic closure – the robot joint algorithm – to explore the possible conformations, or shapes, that macrocycles can adopt.
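
A toy two-dimensional experiment shows why closure is the bottleneck (the chain below is a cartoon, not real molecular geometry): sample random turn angles for a 12-link chain and count how often the last link happens to land back at the start. Naive rejection sampling like this throws away nearly every try – the waste that kinematic closure avoids by solving for the ring-closing joints analytically.

```python
import numpy as np

# Cartoon 2D chain: 12 unit-length links with random torsion-like turn angles.
# A "ring" forms only if the chain's end lands back near its starting point.
rng = np.random.default_rng(0)
n_links, n_trials, closed = 12, 100_000, 0

for _ in range(n_trials):
    turns = rng.uniform(-np.pi, np.pi, n_links)
    headings = np.cumsum(turns)                   # direction of each "bond"
    end = np.array([np.cos(headings).sum(), np.sin(headings).sum()])
    if np.linalg.norm(end) < 0.3:                 # chain returns near its start
        closed += 1

print(f"closed rings found: {closed} of {n_trials}")  # typically well under 1%
```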

Adaptable algorithms

As with keys, the exact shape of a macrocycle matters. Build one with the right conformation and you may unlock a new cure.

Modeling realistic conformations is “one of the hardest parts” of macrocycle design, according to Vikram Mulligan, another lead author of the report. But thanks to the efficiency of the robotics-inspired algorithm, the team was able to achieve “near-exhaustive sampling” of plausible conformations at “relatively low computational cost.”

Supercomputer not necessary – smartphones performed the design calculations. Credit: Los Alamos National Laboratory

The calculations were so efficient, in fact, that most of the work did not require a supercomputer, as is usually the case in the field of molecular engineering. Instead, thousands of smartphones belonging to volunteers were networked together to form a distributed computing grid, and the scientific calculations were doled out in manageable chunks.

With the initial smartphone number crunching complete, the team pored over the results – a collection of hundreds of never-before-seen macrocycles. When a dozen such compounds were chemically synthesized in the lab, nine were shown to actually adopt the predicted conformation. In other words, the smartphones were accurately rendering molecules that scientists can now optimize for their potential as targeted drugs.

The team estimates the number of macrocycles that can confidently be used as starting points for drug design has jumped from fewer than 10 to over 200, thanks to this work. Many of the newly designed macrocycles contain chemical features that have never been seen in biology.

To date, macrocyclic peptide drugs have shown promise in battling cancer, cardiovascular disease, inflammation and infection. Thanks to the mathematics of robotics, a few smartphones and some cross-disciplinary thinking, patients may soon see even more benefits from this promising class of molecules.

Ian Haydon, Doctoral Student in Biochemistry, University of Washington

This article was originally published on The Conversation. Read the original article.


Drones, volcanoes and the ‘computerisation’ of the Earth

The Mount Agung volcano spews smoke, as seen from Karangasem, Bali. EPA-EFE/MADE NAGI

By Adam Fish

The eruption of the Agung volcano in Bali, Indonesia has been devastating, particularly for the 55,000 local people who have had to leave their homes and move into shelters. It has also played havoc with the flights in and out of the island, leaving people stranded while the experts try to work out what the volcano will do next.

But this has been a fascinating time for scholars like me who investigate the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is just the latest example of the “computerisation of nature” and raises questions about how reality is increasingly being constructed by software.

Amazon is developing drone delivery in the UK, drone blood delivery is happening in Rwanda, and in Indonesia people are using drones to monitor orangutan populations, map the growth and expansion of palm oil plantations and gather information that might help us predict when volcanoes such as Agung might again erupt with devastating impact.

In Bali, I have the pleasure of working with a remarkable group of drone professionals, inventors and hackers who work for Aeroterrascan, a drone company from Bandung, on the Indonesian island of Java. As part of their corporate social responsibility, they have donated their time and technologies to the Balinese emergency and crisis response teams. It’s been fascinating to participate in a project that flies remote sensing systems high in the air in order to better understand dangerous forces deep in the Earth.

I’ve been involved in two different drone volcano missions. A third mission will begin in a few days. In the first, we used drones to create an extremely accurate 3D map of the volcano – accurate to within 20cm. With this information, we could see whether the volcano was actually growing in size – key evidence that it is about to erupt.
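In practice, detecting growth means differencing two such 3D maps. The Python sketch below shows the idea on synthetic data – the grid, noise levels and inflating patch are all invented, and the only number taken from the mission is the roughly 20cm accuracy floor below which changes should not be trusted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two digital elevation models (metres) of the summit from successive
# survey flights, gridded identically. Synthetic data for this sketch,
# with a few centimetres of per-cell survey noise assumed.
terrain = rng.normal(3000.0, 5.0, (500, 500))
dem_before = terrain + rng.normal(0.0, 0.03, terrain.shape)
dem_after = terrain + rng.normal(0.0, 0.03, terrain.shape)
dem_after[200:220, 200:220] += 0.5     # pretend one patch inflated by 50cm

uplift = dem_after - dem_before
inflating = uplift > 0.2               # trust only changes above the
                                       # survey's ~20cm accuracy floor
print(f"{inflating.sum()} cells show significant uplift, "
      f"max {uplift.max():.2f} m")
```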

The second mission involved flying a sensor that detects carbon dioxide and sulphur dioxide through the volcano’s plume. An increase in these gases can tell us if an eruption looms. The sensor found a high concentration of carbon dioxide, which prompted the government to raise the threat warning to the highest level.

In the forthcoming third mission, we will use drones to see if anyone is still in the exclusion zone so they can be found and rescued.

What is interesting to me as an anthropologist is how scientists and engineers use technologies to better understand distant processes in the atmosphere and below the Earth. It has been a difficult task, flying a drone 3,000 metres to the summit of an erupting volcano. Several different groups have tried and a few expensive drones have been lost – sacrifices to what the Balinese Hindus consider a sacred mountain.

More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying about in the air, under the seas, or on volcanic craters – basically everywhere. These tools may help us to evacuate people before a crisis, but they also entail transforming organic signals into computer code. We’ve long interpreted nature through technologies that augment our senses, particularly sight. Microscopes, telescopes and binoculars have been great assets for chemistry, astronomy and biology.

The internet of nature

But the sensorification of the elements is something different. This has been called the computationalisation of Earth. We’ve heard a lot about the internet of things, but this is the internet of nature. This is the surveillance state turned onto biology. The present proliferation of drones is the latest step in wiring everything on the planet – in this case, the air itself – to better understand the guts of a volcano.

These flying sensors, it is hoped, will give volcanologists what anthropologist Stefan Helmreich called abduction – a predictive and prophetic “argument from the future”.

But the drones, sensors and software we use provide a particular and partial worldview. Looking back at today from the future, what will be the impact of increasing datafication of nature: better crop yield, emergency preparation, endangered species monitoring? Or will this quantification of the elements result in a reduction of nature to computer logic?

There is something not fully comprehended – or, more ominously, not comprehensible – about how flying robots and self-driving cars equipped with remote sensing systems filter the world through big-data-crunching algorithms capable of generating and responding to their own artificial intelligence.

These non-human others react to the world not as ecological, social, or geological processes but as functions and feature sets in databases. I am concerned by what this software view of nature will exclude, and as they remake the world in their database image, what the implications of those exclusions might be for planetary sustainability and human autonomy.

In this future world, there may be less of a difference between engineering towards nature and the engineering of nature.

Adam Fish, Senior Lecturer in Sociology and Media Studies, Lancaster University

This article was originally published on The Conversation. Read the original article.

What the robots of Star Wars tell us about automation, and the future of human work


BB-8 is an “astromech droid” who first appeared in The Force Awakens.
Lucasfilm/IMDB

By Paul Salmon, University of the Sunshine Coast

Millions of fans all over the world eagerly anticipated this week’s release of Star Wars: The Last Jedi, the eighth in the series. At last we will get some answers to questions that have been vexing us since 2015’s The Force Awakens.

Throughout the franchise, the core characters have been accompanied by a number of much-loved robots, including C-3PO, R2-D2 and, more recently, BB-8 and K-2SO.

Interestingly, they can also tell us useful things about automation, such as whether it poses dangers to us and whether robots will ever replace human workers entirely. In these films, we see the good, bad and ugly of robots – and can thus glean clues about what our technological future might look like.

The fear of replacement

One major fear is that robots and automation will replace us, despite work design principles that tell us technology should be used as a tool to assist, rather than replace, humans. In the world of Star Wars, robots (or droids as they are known) mostly assist organic lifeforms, rather than completely replace them.

R2-D2 and C-3PO in A New Hope.
Lucasfilm/IMDB

So for instance, C-3PO is a protocol droid who was designed to assist in translation, customs and etiquette. R2-D2 and the franchise’s new darling, BB-8, are both “astromech droids” designed to assist in starship maintenance.

In the most recent movie, Rogue One, an offshoot of the main franchise, we were introduced to K-2SO, a wisecracking advanced autonomous military robot who was caught and reprogrammed to switch allegiance to the rebels. K-2SO mainly acts as a co-pilot, for example when flying a U-Wing with the pilot Cassian Andor to the planet of Eadu.

In most cases then, the Star Wars droids provide assistance – co-piloting ships, helping to fix things, and even serving drinks. In the world of these films, organic lifeforms are still relied upon for most skilled work.

When organic lifeforms are completely replaced, it is generally when the work is highly dangerous. For instance, during the duel between Anakin and Obi-Wan on the planet Mustafar in Revenge of the Sith, DLC-13 mining droids can be seen going about their work in the planet’s hostile lava rivers.

Further, droid armies act as the frontline in various battles throughout the films. Perhaps, in the future, we will be OK with losing our jobs if the work in question poses a significant risk to our health.

K-2SO in Rogue One.
Lucasfilm/IMDB

However, there are some exceptions to this trend in the Star Wars universe. In the realm of healthcare, for instance, droids have fully replaced organic lifeforms. In The Empire Strikes Back, a medical droid treats Luke Skywalker after his encounter with a Wampa, a yeti-like snow beast on the planet Hoth. The droid also replaces his hand following his battle with Darth Vader on the planet Bespin.

Likewise, in Revenge of the Sith, a midwife droid is seen delivering the siblings Luke and Leia on Polis Massa.

Perhaps this is one area in which Star Wars has it wrong: here on Earth, full automation is a long way off in healthcare. Assistance from robots in healthcare is the more realistic prospect and is, in fact, already here. Indeed, robots have been assisting surgeons in operating theatres for some time now.

Automated vehicles

Driverless vehicles are currently flavour of the month – but will we actually use them? In Star Wars, despite the capacity for spacecraft and star ships to be fully automated, organic lifeforms still take the controls. The spaceship Millennium Falcon, for example, is mostly flown by the smuggler Han Solo and his companion Chewbacca.

Most of the Star Wars starship fleet (A-Wings, X-Wings, Y-Wings, Tie Fighters, Star Destroyers, Starfighters and more) ostensibly possesses the capacity for fully automated flight. However, these ships are mostly flown by organic lifeforms. In The Phantom Menace, the locals on Tatooine have even taken to building and manually racing their own “pod racers”.

It seems likely that here on Earth, humans too will continue to prefer to drive, fly, sail and ride. Despite the ability to fully automate, most people will still want to be able to take full control.

Flawless, error-proof robots?

Utopian visions often depict a future where sophisticated robots will perform highly skilled tasks, all but eradicating the costly errors that humans make. This is unlikely to be true.

A final message from the Star Wars universe is that the droids and advanced technologies are often far from perfect. In our own future, costly human errors may simply be replaced by robot designer errors.

R5-D4, the malfunctioning droid of A New Hope.
Lucasfilm/IMDB

The B1 Battle Droids seen in the first and second Star Wars films lack intelligence and frequently malfunction. C-3PO is notoriously error-prone and his probability-based estimates are often wide of the mark.

In A New Hope, R5-D4 (another astromech droid) malfunctions and explodes just as the farmer Owen Lars is about to buy it. Other droids are slow and clunky, such as the GNK Power droid and HURID-327, the groundskeeper at the castle of Maz Kanata in The Force Awakens.

The much feared scenario, whereby robots become so intelligent that they eventually take over, is hard to imagine with this lot.

Perhaps the message from the Star Wars films is that we need to lower our expectations of robot capabilities, in the short term at least. Cars will still crash, mistakes will still be made, regardless of whether humans or robots are doing the work.

Paul Salmon, Professor of Human Factors, University of the Sunshine Coast

This article was originally published on The Conversation. Read the original article.

We built a robot care assistant for elderly people – here’s how it works

Credit: Trinity College Dublin

By Conor McGinn, Trinity College Dublin

Not all robots will take over human jobs. My colleagues and I have just unveiled a prototype care robot that we hope could take on some of the more mundane work of looking after elderly and disabled people and those with conditions such as dementia. This would leave human carers free to focus on the more personal parts of the job. The robot could also do things humans don’t have time to do now, like keeping a constant check on whether someone is safe and well, while allowing them to keep their privacy.

Our robot, named Stevie, is designed to look a bit (but not too much) like a human, with arms and a head but also wheels. This is because we need it to exist alongside people and perform tasks that may otherwise be done by a human. Giving the robot these features helps people realise that they can speak to it and perhaps ask it to do things for them.

Stevie can perform some of its jobs autonomously, for example reminding users to take medication. Other tasks are designed to involve human interaction. For example, if a room sensor detects a user may have fallen over, a human operator can take control of the robot, use it to investigate the event and contact the emergency services if necessary.
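This split between autonomy and human hand-over is easy to express in software. The sketch below is a hypothetical outline in Python, not Stevie’s actual control code; every function and event name is invented for illustration.

```python
def speak(message):
    print(f"[robot voice] {message}")

def call_emergency_services(details):
    print(f"[dialling emergency services] {details}")

class HumanOperator:
    """Stand-in for the remote carer who can take over the robot."""
    def take_control(self, reason, on_confirmed_emergency):
        print(f"[operator alerted] {reason} - steering robot to investigate")
        all_clear = False              # outcome of the tele-operated check
        if not all_clear:
            on_confirmed_emergency(reason)

def handle_event(event, operator):
    """Route events as described above: routine reminders run
    autonomously, while a suspected fall escalates to a human."""
    if event["type"] == "medication_due":
        speak(f"Time to take your {event['medicine']}.")
    elif event["type"] == "possible_fall":
        operator.take_control(f"possible fall in {event['room']}",
                              call_emergency_services)

operator = HumanOperator()
handle_event({"type": "medication_due", "medicine": "evening tablets"}, operator)
handle_event({"type": "possible_fall", "room": "bathroom"}, operator)
```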

Credit: Trinity College Dublin

Stevie can also help users stay socially connected. For example, the screens in the head can facilitate a Skype call, eliminating the challenges many users face using telephones. Stevie can also regulate room temperatures and light levels, tasks that help to keep the occupant comfortable and reduce possible fall hazards.

None of this will mean we won’t need human carers anymore. Stevie won’t be able to wash or dress people, for example. Instead, we’re trying to develop technology that helps and complements human care. We want to combine human empathy, compassion and decision-making with the efficiency, reliability and continuous operation of robotics.

One day, we might be able to develop care robots that can help with more physical tasks, such as helping users out of bed. But these jobs carry much greater risks to user safety and we’ll need to do a lot more work to make this happen.

Stevie would provide benefits to carers as well as elderly or disabled users. The job of a professional care assistant is incredibly demanding, often involving long, unsocial hours in workplaces that are frequently understaffed. As a result, the industry suffers from extremely low job satisfaction. In the US, more than 35% of care assistants leave their jobs every year. By taking on some of the more routine, mundane work, robots could free carers to spend more time engaging with residents.

Of course, not everyone who is getting older or has a disability may need a robot. And there is already a range of affordable smart technology that can help people by controlling appliances with voice commands or notifying caregivers in the event of a fall or accident.

Credit: Trinity College Dublin

Smarter than smart

But for many people, this type of technology is still extremely limited. For example, how can someone with hearing problems use a conventional smart hub such as the Amazon Echo, a device that communicates exclusively through audio signals? What happens if someone falls and they are unable to press an emergency call button on a wearable device?

Stevie overcomes these problems because it can communicate in multiple ways. It can talk, make gestures, show facial expressions and display text on its screen. In this way, it follows the principles of universal design, because it is designed to adapt to the needs of the greatest possible number of users, not just the able majority. A minimal sketch of that idea follows.
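Here is one way that multi-channel principle might look in code – a hypothetical Python sketch with invented profile fields, rather than anything from Stevie itself.

```python
def deliver(message, user_profile):
    """Render one message through every channel this user can receive,
    instead of assuming a single able-bodied default."""
    channels = [("screen_text", message)]          # always available
    if not user_profile.get("hearing_impaired"):
        channels.append(("speech", message))
    if user_profile.get("low_vision"):
        channels.append(("gesture", "beckon"))     # physical cue
    for channel, payload in channels:
        print(f"{channel}: {payload}")

deliver("Your visitor has arrived.", {"hearing_impaired": True})
deliver("Your visitor has arrived.", {"low_vision": True})
```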

We hope to have a version of Stevie ready to sell within two years. We still need to refine the design, decide on and develop new features and make sure it complies with major regulations. All this needs to be guided by extensive user testing, so we are planning a range of pilots in Ireland, the UK and the US starting in summer 2018. This will help us achieve a major milestone on the road to developing robots that really do make our lives easier.

This article was originally published on The Conversation. Read the original article.

Robots won’t steal our jobs if we put workers at center of AI revolution


Future robots will work side by side with humans, just as they do today.
Credit: AP Photo/John Minchillo

By Thomas Kochan, MIT Sloan School of Management and Lee Dyer, Cornell University

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Lessons from history

There is no question coming technologies like AI will eliminate some jobs, as did those of the past.

The invention of the steam engine was supposed to reduce the number of manufacturing workers. Instead, their ranks soared.
Lewis Hine

More than half of the American workforce was involved in farming in the 1890s, back when it was a physically demanding, labor-intensive industry. Today, thanks to mechanization and the use of sophisticated data analytics to handle the operation of crops and cattle, fewer than 2 percent are in agriculture, yet their output is significantly higher.

But new technologies will also create new jobs. After steam engines replaced water wheels as the source of power in manufacturing in the 1800s, the sector expanded sevenfold, from 1.2 million jobs in 1830 to 8.3 million by 1910. Similarly, many feared that the ATM’s emergence in the early 1970s would replace bank tellers. Yet even though the machines are now ubiquitous, there are actually more tellers today doing a wider variety of customer service tasks.

So trying to predict whether a new wave of technologies will create more jobs than it will destroy is not worth the effort, and even the experts are split 50-50.

It’s particularly pointless given that perhaps fewer than 5 percent of current occupations are likely to disappear entirely in the next decade, according to a detailed study by McKinsey.

Instead, let’s focus on the changes they’ll make to how people work.

It’s about tasks, not jobs

To understand why, it’s helpful to think of a job as made up of a collection of tasks that can be carried out in different ways when supported by new technologies.

And in turn, the tasks performed by different workers – colleagues, managers and many others – can also be rearranged in ways that make the best use of technologies to get the work accomplished. Job design specialists call these “work systems.”

One of the McKinsey study’s key findings was that about a third of the tasks performed in 60 percent of today’s jobs are likely to be eliminated or altered significantly by coming technologies. In other words, the vast majority of our jobs will still be there, but what we do on a daily basis will change drastically.

To date, robotics and other digital technologies have had their biggest effects on mostly routine tasks like spell-checking and those that are dangerous, dirty or hard, such as lifting heavy tires onto a wheel on an assembly line. Advances in AI and machine learning will significantly expand the array of tasks and occupations affected.

Creating an integrated strategy

We have been exploring these issues for years as part of our ongoing discussions on how to remake labor for the 21st century. In our recently published book, “Shaping the Future of Work: A Handbook for Change and a New Social Contract,” we describe why society needs an integrated strategy to gain control over how future technologies will affect work.

And that strategy starts with helping define the problems humans want new technologies to solve. We shouldn’t be leaving this solely to their inventors.

Fortunately, some engineers and AI experts are recognizing that the end users of a new technology must have a central role in guiding its design to specify which problems they’re trying to solve.

The second step is ensuring that these technologies are designed alongside the work systems with which they will be paired. A so-called simultaneous design process produces better results for both the companies and their workers compared with a sequential strategy – typical today – which involves designing a technology and only later considering the impact on a workforce.

An excellent illustration of simultaneous design is how Toyota handled the introduction of robotics onto its assembly lines in the 1980s. Unlike rivals such as General Motors that followed a sequential strategy, the Japanese automaker redesigned its work systems at the same time, which allowed it to get the most out of the new technologies and its employees. Importantly, Toyota solicited ideas for improving operations directly from workers.

In doing so, Toyota achieved higher productivity and quality in its plants than competitors like GM that invested heavily in stand-alone automation before they began to alter work systems.

Similarly, businesses that tweaked their work systems in concert with investing in IT in the 1990s outperformed those that didn’t. And health care companies like Kaiser Permanente and others learned the same lesson as they introduced electronic medical records over the past decade.

Each example demonstrates that the introduction of a new technology does more than just eliminate jobs. If managed well, it can change how work is done in ways that both increase productivity and improve the level of service by augmenting the tasks humans do.

Worker wisdom

But the process doesn’t end there. Companies need to invest in continuous training so their workers are ready to help influence, use and adapt to technological changes. That’s the third step in getting the most out of new technologies.

And it needs to begin before they are introduced. The important part of this is that workers need to learn what some are calling “hybrid” skills: a combination of technical knowledge of the new technology with aptitudes for communications and problem-solving.

Companies whose workers have these skills will have the best chance of getting the biggest return on their technology investments. It is not surprising that these hybrid skills are now in high and growing demand and command good salaries.

None of this is to deny that some jobs will be eliminated and some workers will be displaced. So the final element of an integrated strategy must be to help those displaced find new jobs and compensate those unable to do so for the losses endured. Ford and the United Auto Workers, for example, offered generous early retirement benefits and cash severance payments in addition to retraining assistance when the company downsized from 2007 to 2010.

Examples like this will need to become the norm in the years ahead. Failure to treat displaced workers equitably will only widen the gaps between winners and losers in the future economy that are now already all too apparent.

In sum, companies that engage their workforce when they design and implement new technologies will be best-positioned to manage the coming AI revolution. By respecting the fact that today’s workers, like those before them, understand their jobs better than anyone and the many tasks they entail, they will be better able to “give wisdom to the machines.”

Thomas Kochan, Professor of Management, MIT Sloan School of Management and Lee Dyer, Professor Emeritus of Human Resource Studies and Research Fellow, Center for Advanced Human Resource Studies (CAHRS), Cornell University

This article was originally published on The Conversation. Read the original article.

Does the next industrial revolution spell the end of manufacturing jobs?

By Jeff Morgan, Trinity College Dublin

Robots have been taking our jobs since the 1960s. So why are politicians and business leaders only now becoming so worried about robots causing mass unemployment?

It comes down to the question of what a robot really is. While science fiction has often portrayed robots as androids carrying out tasks in much the same way as humans, the reality is that robots take much more specialised forms. Traditional 20th century robots were automated machines and robotic arms building cars in factories. Commercial 21st century robots are supermarket self-checkouts, automated guided warehouse vehicles, and even burger-flipping machines in fast-food restaurants.

Ultimately, humans haven’t become completely redundant because, while these robots may be very efficient, they’re also kind of dumb. They do not think, they just act, in very accurate but very limited ways. Humans are still needed to work around robots, doing the jobs the machines can’t and fixing them when they get stuck. But this is all set to change thanks to a new wave of smarter, better value machines that can adapt to multiple tasks. This change will be so significant that it will create a new industrial revolution.

The fourth industrial revolution.
Christoph Roser, CC BY-SA

Industry 4.0

This era of “Industry 4.0” is being driven by the same technological advances that enable the capabilities of the smartphones in our pockets. It is a mix of low-cost and high-power computers, high-speed communication and artificial intelligence. This will produce smarter robots with better sensing and communication abilities that can adapt to different tasks, and even coordinate their work to meet demand without the input of humans.

In the manufacturing industry, where robots have arguably made the most headway of any sector, this will mean a dramatic shift from centralised to decentralised collaborative production. Traditional robots focused on single, fixed, high-speed operations and required a highly skilled human workforce to operate and maintain them. Industry 4.0 machines are flexible, collaborative and can operate more independently, which ultimately removes the need for a highly skilled workforce.

 

For large-scale manufacturers, Industry 4.0 means their robots will be able to sense their environment and communicate in an industrial network that can be run and monitored remotely. Each machine will produce large amounts of data that can be collectively studied using what is known as “big data” analysis. This will help identify ways to improve operating performance and production quality across the whole plant, for example by better predicting when maintenance is needed and automatically scheduling it.
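As a flavour of what such plant-wide analysis involves, here is a deliberately simple Python sketch: flag a machine for maintenance when its recent sensor readings drift well beyond its own historical baseline. Real systems use far richer models; the data, window and threshold here are invented.

```python
import numpy as np

def maintenance_due(vibration, window=100, sigma=3.0):
    """Flag a machine when the mean of its latest readings drifts more
    than `sigma` standard deviations above its historical baseline."""
    baseline, recent = vibration[:-window], vibration[-window:]
    return recent.mean() > baseline.mean() + sigma * baseline.std()

rng = np.random.default_rng(1)
healthy = rng.normal(1.0, 0.1, 5000)                         # vibration history
worn = np.concatenate([healthy, rng.normal(1.6, 0.1, 100)])  # new drift
print(maintenance_due(healthy), maintenance_due(worn))       # False True
```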

For small-to-medium manufacturing businesses, Industry 4.0 will make it cheaper and easier to use robots. It will create machines that can be reconfigured to perform multiple jobs and adjusted to work on a more diverse product range and different production volumes. This sector is already beginning to benefit from reconfigurable robots designed to collaborate with human workers and analyse their own work to look for improvements, such as BAXTER, SR-TEX and CareSelect.

Helping hands.
Rethink Robotics

While these machines are getting smarter, they are still not as smart as us. Today’s industrial artificial intelligence operates at a narrow level: it gives the appearance of human intelligence, but it is designed by humans and limited to specific tasks.

What’s coming next is known as “deep learning”. Similar to big data analysis, it involves processing large quantities of data in real time to make decisions about the best action to take. The difference is that the machine learns from the data so it can improve its decision making. A perfect example of deep learning was demonstrated by Google’s AlphaGo software, which taught itself to beat the world’s greatest Go players.
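The learn-from-mistakes loop at the heart of this can be shown with a single trainable unit – a minimal Python sketch, nothing like AlphaGo’s scale, with synthetic data standing in for sensor readings. Deep learning stacks millions of such units in many layers, but the loop is the same: predict, compare with reality, adjust.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # e.g. three sensor readings
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # hidden rule

w = np.zeros(3)                            # the unit's adjustable weights
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))         # predict
    w -= 0.1 * X.T @ (p - y) / len(y)      # adjust from the mistakes

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"learned decision rule accuracy: {accuracy:.0%}")
```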

The turning point in applying artificial intelligence to manufacturing could come with the application of special microchips called graphics processing units (GPUs). These enable deep learning to be applied to extremely large data sets at extremely fast speeds. But there is still some way to go, and big industrial companies are recruiting vast numbers of scientists to further develop the technology.

Impact on industry

As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.

Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.

Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, which it would be much harder for any robot to do. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.

But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.

Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.

This article was originally published on The Conversation. Read the original article.

Asimov’s Laws won’t stop robots harming humans so we’ve developed a better solution

By Christoph Salge, Marie Curie Global Fellow, University of Hertfordshire

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Science fiction already envisioned this problem and has suggested various potential solutions. The most famous was author Isaac Asimov’s Three Laws of Robotics, which are designed to prevent robots harming humans. But since 2005, my colleagues and I at the University of Hertfordshire have been working on an idea that could be an alternative.

Instead of laws to restrict robot behaviour, we think robots should be empowered to maximise the possible ways they can act so they can pick the best solution for any given scenario. As we describe in a new paper in Frontiers, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible.

The Three Laws

Asimov’s Three Laws are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While these laws sound plausible, numerous arguments have demonstrated why they are inadequate. Asimov’s own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. Most attempts to draft new guidelines follow a similar principle to create safe, compliant and robust robots.

One problem with any explicitly formulated robot guidelines is the need to translate them into a format that robots can work with. Understanding the full range of human language and the experience it represents is a very hard job for a robot. Broad behavioural goals, such as preventing harm to humans or protecting a robot’s existence, can mean different things in different contexts. Sticking to the rules might end up leaving a robot helpless to act as its creators might hope.

Our alternative concept, empowerment, stands for the opposite of helplessness. Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world.

When we tried simulating how robots would use the empowerment principle in various scenarios, we found they would often act in surprisingly “natural” ways. It typically only requires them to model how the real world works but doesn’t need any specialised artificial intelligence programming designed to deal with the particular scenario.

But to keep people safe, the robots need to try to maintain or improve human empowerment as well as their own. This essentially means being protective and supportive. Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged.
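In the deterministic case, this “keep your options open” drive has a simple quantitative reading: an agent’s n-step empowerment is (the logarithm of) the number of distinct states it can still reach. The Python sketch below is our own toy illustration of that formalisation on a grid world, not the authors’ code; the world, moves and horizon are invented.

```python
from itertools import product

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0),
         "right": (1, 0), "stay": (0, 0)}

def step(state, move, walls, size=5):
    x, y = state[0] + MOVES[move][0], state[1] + MOVES[move][1]
    if (x, y) in walls or not (0 <= x < size and 0 <= y < size):
        return state                    # blocked: nothing changes
    return (x, y)

def empowerment(state, walls, horizon=3):
    """Count the distinct states reachable within `horizon` moves -
    the deterministic version of 'how much can I still affect?'"""
    outcomes = set()
    for seq in product(MOVES, repeat=horizon):
        s = state
        for move in seq:
            s = step(s, move, walls)
        outcomes.add(s)
    return len(outcomes)

open_floor = empowerment((2, 2), walls=set())
corner = empowerment((0, 0), walls=set())
boxed_in = empowerment((2, 2), walls={(1, 2), (3, 2), (2, 1), (2, 3)})
print(open_floor, corner, boxed_in)     # options shrink as freedom does
```

A robot maximising this quantity for the humans around it would, for instance, avoid blocking a doorway, because doing so shrinks the set of states those humans can reach.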

Robots could adapt to new situations

Using this general principle rather than predefined rules of behaviour would allow the robot to take account of the context and evaluate scenarios no one has previously envisaged. For example, instead of always following the rule “don’t push humans”, a robot would generally avoid pushing them but still be able to push them out of the way of a falling object. The human might still be harmed but less so than if the robot didn’t push them.

In the film I, Robot, based on several Asimov stories, robots create an oppressive state that is supposed to minimise the overall harm to humans by keeping them confined and “protected”. But our principle would avoid such a scenario because it would mean a loss of human empowerment.

While empowerment provides a new way of thinking about safe robot behaviour, we still have much work to do on scaling up its efficiency so it can easily be deployed on any robot and translate to good and safe behaviour in all respects. This poses a very difficult challenge. But we firmly believe empowerment can lead us towards a practical solution to the ongoing and highly debated problem of how to rein in robots’ behaviour, and how to keep robots – in the most naive sense – “ethical”.

This article was originally published on The Conversation. Read the original article.

Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

Recent announcements by Elon Musk and Facebook about brain-computer interface (BCI) technology are just the latest headlines in an ongoing science-fiction-becomes-reality story.

BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm just by thinking about it. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (CSNE), we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now.

Picking up on P300 signals

All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.

Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “event-related potentials.” In the lab, they help us identify a reaction to a stimulus.

Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus. Tamara Bonaci, CC BY-ND

In particular, we capitalize on one of these specific signals, called the P300. It’s a positive peak of electrical activity that occurs toward the back of the head about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.

For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and focusing attention.
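Because single EEG trials are noisy, the classic way to make a P300 visible is to average many aligned trials so the background activity cancels out. The Python sketch below shows this on synthetic data – the sampling rate, amplitudes and noise levels are invented, and a real pipeline would add filtering and artifact rejection.

```python
import numpy as np

FS = 250                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(2)

def make_epoch(oddball):
    """One second of synthetic EEG following a stimulus, in microvolts."""
    t = np.arange(FS) / FS
    eeg = rng.normal(0, 5.0, FS)          # background brain activity
    if oddball:                           # inject a P300-like bump at 300 ms
        eeg += 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return eeg

# Averaging across trials cancels the noise; the event-related
# potential is what survives.
avg = np.mean([make_epoch(oddball=True) for _ in range(100)], axis=0)
peak_ms = 1000 * np.argmax(avg) / FS
print(f"average waveform peaks ~{peak_ms:.0f} ms after the stimulus")
```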

Reading your mind using P300s

P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, one character at a time.

It also can be used to determine what you know, in what’s called a “guilty knowledge test.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then repeatedly shown images of both related and unrelated items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with the image of the item they took.

Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you know that you’re being probed and what the stimuli are.
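In code, that calibration step might look like the hypothetical Python sketch below: average a subject’s labelled oddball trials into a personal template, then score new trials by how strongly they correlate with it. Everything here – the synthetic signals, the correlation test, the threshold – is an assumption for illustration, not the lab’s actual classifier.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(250) / 250.0
bump = 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # P300-like shape

def epoch(oddball):
    return rng.normal(0, 5.0, t.size) + (bump if oddball else 0)

# "Training": average labelled oddball trials into a personal template.
template = np.mean([epoch(True) for _ in range(50)], axis=0)

def looks_relevant(trial, threshold=0.2):
    """High correlation with the template suggests the stimulus mattered."""
    return np.corrcoef(trial, template)[0, 1] > threshold

shown_their_item = np.mean([looks_relevant(epoch(True)) for _ in range(200)])
shown_other_item = np.mean([looks_relevant(epoch(False)) for _ in range(200)])
print(f"flagged: {shown_their_item:.0%} vs {shown_other_item:.0%}")
```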

Techniques like these are still considered unreliable and unproven, and thus U.S. courts have resisted admitting P300 data as evidence.

For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealthy. Mark Stone, University of Washington, CC BY-ND

Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has collected data suggesting this is possible. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.

But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of continuing experiments at the University of Washington and elsewhere.

It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.

What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming.

With great power comes great responsibility

The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security be a human right? How do we adequately protect and store all the neural data being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being made with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering biomedical research or health care. Should neural data be treated differently?

Neuroethicists from the UW Philosophy department discuss issues related to neural implants.
Mark Stone, University of Washington, CC BY-ND

These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – as we have done at the CSNE – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their ethical concerns about brain research.

There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as genetics and neuromarketing. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.

Working on ethics while tech’s in its infancy

As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.

First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.

Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing continue to improve, it will become easier to use devices like these continuously – and easier to extract personal information from an unknowing individual as well. The safest advice would be to not use these devices at all.

The goal should be that the ethical standards and the technology will mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.
