Are ethics keeping pace with technology?

Returning from vacation, I found my inbox overflowing with emails announcing robot “firsts.” At the same time, my relaxed post-vacation disposition was quickly rocked by the news of the day and recent discussions regarding the extent of AI bias within New York’s financial system. These seemingly unrelated items are very much connected: together they represent the paradox of today’s accelerating pace of invention.
Last Friday, the University of Maryland Medical Center (UMMC) became the first hospital system to safely transport, via drone, a live organ to a waiting transplant patient with kidney failure. The demonstration illustrates the huge opportunity for Unmanned Aerial Vehicles (UAVs) to significantly reduce the time and cost of organ transplants, and improve their outcomes, by removing human-piloted helicopters from the equation. As Dr. Joseph Scalea, UMMC project lead, explains, “There remains a woeful disparity between the number of recipients on the organ transplant waiting list and the total number of transplantable organs. This new technology has the potential to help widen the donor organ pool and access to transplantation.” Last year, America’s managing body of the organ transplant system stated it had a waiting list of approximately 114,000 people, with 1.5% of deceased donor organs expiring before reaching their intended recipients. This is largely due to unanticipated transportation delays of up to two hours in close to 4% of recorded shipments. Based upon this data, unmanned systems could potentially save more than one thousand lives. In the words of Dr. Scalea, “Delivering an organ from a donor to a patient is a sacred duty with many moving parts. It is critical that we find ways of doing this better.” Unmentioned in the UMMC announcement are the kinds of ethical safeguards required to support autonomous delivery, such as ensuring that the rush to extract organs in the field does not override the goal of first saving the donor’s life.
As May brings clear skies and the songs of birds, the prospect of non-life-saving drones crowding the airspace above is a haunting one. Last month, the proposition of last-mile delivery by UAVs came one step closer with Google’s subsidiary, Wing Aviation, becoming the first drone operator approved by the U.S. Federal Aviation Administration and the Department of Transportation. According to the company, consumer deliveries will commence within the next couple of months in rural Virginia. “It’s an exciting moment for us to have earned the FAA’s approval to actually run a business with our technology,” declared James Ryan Burgess, Wing Chief Executive Officer. The regulations still ban drones in urban areas and limit Wing’s autonomous missions to farmlands, but they enable the company to start charging customers for UAV deliveries.
While rural community administrators are excited “to be the birthplace of drone delivery in the United States,” what is unknown is how citizens will react to a technology prone to menacing noise and privacy complaints. Mark Blanks, director of the Virginia Tech Mid-Atlantic Aviation Partnership, optimistically stated, “Across the board everybody we’ve spoken to has been pretty excited.” Cautiously, he admits, “We’ll be working with the community a lot more as we prepare to roll this out.” Google’s terrestrial autonomous driving tests have received less than stellar reviews from locals in Chandler, Arizona, which reached a crescendo earlier this year when one resident pulled a gun on a car (one-third of all Virginians own firearms). Understanding the rights of citizens to police the skies above their properties is an important policy and ethical issue as unmanned operators move from testing systems to live deployments.
The rollout of advanced computing technologies is not limited to aviation; artificial intelligence (AI) is being rapidly deployed across every enterprise and organization in the United States. On Friday, McKinsey & Company released a report on the widening penetration of deep learning systems within corporate America. While it is still early in the development of such technologies, almost half of the respondents in the study stated that their departments have embedded such software within at least one business practice this past year. As stated: “Forty-seven percent of respondents say their companies have embedded at least one AI capability in their business processes—compared with 20 percent of respondents in a 2017 study.” This dramatic increase in adoption is driving tech spending, with 71% of respondents expecting large portions of digital budgets to go toward the implementation of AI. The study also tracked the perceived value of AI, with “41 percent reporting significant value and 37 percent reporting moderate value,” compared to 1% “claiming a negative impact.”
Before embarking on a journey south of the border, I participated in a discussion at one of New York’s largest financial institutions about AI bias. The output of this think tank became a suggested framework for administering AI throughout an organization to protect its employees from bias. We listed three principles: 1) the definition of bias (as it varies from institution to institution); 2) the policies for developing and installing technologies (from hiring to testing to reporting metrics); and 3) employing a Chief Ethics Officer who would report to the board, not the Chief Executive Officer (as the CEO is concerned with profit and could potentially override ethics for the bottom line). These conclusions were supported by a 2018 Deloitte survey that found that 32% of executives familiar with AI ranked ethical issues as one of the top three risks of deployments. At the same time, Forbes reported that the idea of engaging an ethics officer is a hard sell for most blue-chip companies. In response, Professor Timothy Casey of California Western School of Law recommends repercussions for malicious software similar to those in other licensed professions: “In medicine and law, you have an organization that can revoke your license if you violate the rules, so the impetus to behave ethically is very high. AI developers have nothing like that.” He suggests that building a value system through such endeavors would counter the prevailing atmosphere in which “being first in ethics rarely matters as much as being first in revenues.”
While the momentum of AI adoption accelerates like a runaway train, some forward-thinking organizations are starting to take ethics very seriously. This past January, for example, Salesforce became one of the first companies to hire a “chief ethical and humane use officer,” empowering Paula Goldman to “develop a strategic framework for the ethical and humane use of technology.” Writing this article, I am reminded of the words of Winston Churchill in the 1930s, cautioning his generation about balancing morality with the speed of scientific discovery, as the pace of innovation even then far exceeded humankind’s own development: “Certain it is that while men are gathering knowledge and power with ever-increasing and measureless speed, their virtues and their wisdom have not shown any notable improvement as the centuries have rolled. The brain of modern man does not differ in essentials from that of the human beings who fought and loved here millions of years ago. The nature of man has remained hitherto practically unchanged. Under sufficient stress—starvation, terror, warlike passion, or even cold intellectual frenzy—the modern man we know so well will do the most terrible deeds, and his modern woman will back him up.”
Join RobotLab on May 16th when we dig deeper into ethics and technology with Alexis Block, inventor of HuggieBot, and Andrew Flett, partner at Mobility Impact Partners, discussing “Society 2.0: Understanding The Human-Robot Connection In Improving The World” at SOSA’s Global Cyber Center in NYC – RSVP Today!
#286: Halodi Robotics’ EVEr3: A Full-size Humanoid Robot, with Bernt Børnich
In this episode, Audrow Nash interviews Bernt Børnich, CEO, CTO, and Co-founder of Halodi Robotics, about Eve (EVEr3), a general-purpose, full-size humanoid robot capable of a wide variety of tasks. Børnich discusses how Eve can be used in research, how Eve’s motors have been designed to be safe around humans (including why they use a low gear ratio), how they do direct force control and the benefits of this approach, and how they use machine learning to reduce cogging in their motors. Børnich also discusses the long-term goal of Halodi Robotics and how they plan to support researchers using Eve.
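Børnich’s mention of using machine learning to reduce cogging can be made concrete with a toy sketch. Everything below — the ripple model, the function names, the bin count — is an illustrative assumption, not Halodi’s actual method: the idea is simply to learn a per-angle feedforward table of the motor’s torque ripple and subtract it from the command.

```python
import math

N_BINS = 64  # angular resolution of the compensation table (assumed)

def cogging_torque(angle):
    """Toy stand-in for real cogging: periodic ripple (Nm) vs. rotor angle (rad)."""
    return 0.05 * math.sin(6 * angle) + 0.02 * math.sin(12 * angle)

def learn_compensation_table(n_bins=N_BINS):
    """Rotate slowly through each angle bin and record the residual torque;
    a real system would average many noisy measurements per bin."""
    return [cogging_torque(2 * math.pi * i / n_bins) for i in range(n_bins)]

def commanded_torque(desired, angle, table):
    """Feedforward compensation: subtract the learned ripple for this angle."""
    b = int((angle % (2 * math.pi)) / (2 * math.pi) * len(table)) % len(table)
    return desired - table[b]

table = learn_compensation_table()
angle = 2 * math.pi * 10 / N_BINS
# Net torque at the load = command + cogging; compensation cancels the ripple.
net = commanded_torque(1.0, angle, table) + cogging_torque(angle)
```

At a bin-centre angle the ripple cancels exactly; between bins, an interpolated table — or a small learned model in place of the lookup — would do better.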
Below are two videos of Eve. The first is a video of how Eve can be used as a platform to address several research questions. The second shows Eve moving a box and dancing.
Bernt Børnich

Bernt Børnich is the CEO and CTO of Halodi Robotics, and had the main responsibility for designing the motors, electronics and CAD models for Eve. He holds a bachelor’s degree in robotics and nanoelectronics from the University of Oslo.
Links
YOLO – You only look once (Single shot detectors)
The social animals that are inspiring new behaviours for robot swarms
By Edmund Hunt, University of Bristol
From flocks of birds to fish schools in the sea, or towering termite mounds, many social groups in nature exist together to survive and thrive. This cooperative behaviour can be used by engineers as “bio-inspiration” to solve practical human problems, and by computer scientists studying swarm intelligence.
“Swarm robotics” took off in the early 2000s, an early example being the “s-bot” (short for swarm-bot). This is a fully autonomous robot that can perform basic tasks including navigation and the grasping of objects, and which can self-assemble into chains to cross gaps or pull heavy loads. More recently, “TERMES” robots have been developed as a concept in construction, and the “CoCoRo” project has developed an underwater robot swarm that functions like a school of fish that exchanges information to monitor the environment. So far, we’ve only just begun to explore the vast possibilities that animal collectives and their behaviour can offer as inspiration to robot swarm design.

EyeSeeMicrostock/Shutterstock
Robots that can cooperate in large numbers could achieve things that would be difficult or even impossible for a single entity. Following an earthquake, for example, a swarm of search and rescue robots could quickly explore multiple collapsed buildings looking for signs of life. Threatened by a large wildfire, a swarm of drones could help emergency services track and predict the fire’s spread. Or a swarm of floating robots (“Row-bots”) could nibble away at oceanic garbage patches, powered by plastic-eating bacteria.

Shutterstock
Bio-inspiration in swarm robotics usually starts with social insects – ants, bees and termites – because colony members are highly related, which favours impressive cooperation. Three further characteristics appeal to researchers: robustness, because individuals can be lost without affecting performance; flexibility, because social insect workers are able to respond to changing work needs; and scalability, because a colony’s decentralised organisation is sustainable with 100 workers or 100,000. These characteristics could be especially useful for doing jobs such as environmental monitoring, which requires coverage of huge, varied and sometimes hazardous areas.
Social learning
Beyond social insects, other species and behavioural phenomena in the animal kingdom offer inspiration to engineers. A growing area of biological research is in animal cultures, where animals engage in social learning to pick up behaviours that they are unlikely to innovate alone. For example, whales and dolphins can have distinctive foraging methods that are passed down through the generations. This includes forms of tool use – dolphins have been observed breaking off marine sponges to protect their beaks as they go rooting around for fish, like a person might put a glove over a hand.

Yann Hubert/Shutterstock
Forms of social learning and artificial robotic cultures, perhaps using forms of artificial intelligence, could be very powerful in adapting robots to their environment over time. For example, assistive robots for home care could adapt to behavioural differences among humans in different communities and countries.
Robot (or animal) cultures, however, depend on learning abilities that are costly to develop, requiring a larger brain – or, in the case of robots, a more advanced computer. But the value of the “swarm” approach is to deploy robots that are simple, cheap and disposable. Swarm robotics exploits the reality of emergence (“more is different”) to create social complexity from individual simplicity. A more fundamental form of “learning” about the environment is seen in nature – in sensitive developmental processes – which do not require a big brain.
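The “more is different” point can be illustrated with a minimal sketch (the rule, radius and rates below are invented for illustration): each simulated agent follows a single local rule — drift toward the centroid of nearby neighbours — yet the group as a whole contracts into clusters that no individual rule describes.

```python
import random

def step(positions, radius=2.0, rate=0.1):
    """One tick: each agent drifts toward the centroid of the neighbours
    it can sense within `radius`. No agent sees the whole swarm."""
    new = []
    for x in positions:
        nbrs = [y for y in positions if abs(y - x) <= radius]
        centroid = sum(nbrs) / len(nbrs)  # includes the agent itself
        new.append(x + rate * (centroid - x))
    return new

random.seed(0)
agents = [random.uniform(0.0, 10.0) for _ in range(30)]
spread_before = max(agents) - min(agents)
for _ in range(200):
    agents = step(agents)
spread_after = max(agents) - min(agents)  # the swarm has contracted
```

No agent computes the global spread, yet the spread shrinks — the collective behaviour emerges from purely local interactions.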
‘Phenotypic plasticity’
Some animals can change behavioural type, or even develop different forms, shapes or internal functions, within the same species, despite having the same initial “programming”. This is known as “phenotypic plasticity” – where the genes of an organism produce different observable results depending on environmental conditions. Such flexibility can be seen in the social insects, but sometimes even more dramatically in other animals.
Most spiders are decidedly solitary, but in about 20 of 45,000 spider species, individuals live in a shared nest and capture food on a shared web. These social spiders benefit from having a mixture of “personality” types in their group, for example bold and shy.

PicturesofThings/Shutterstock
My research identified a flexibility in behaviour where shy spiders would step into a role vacated by absent bold nestmates. This is necessary because the spider colony needs a balance of bold individuals to encourage collective predation, and shyer ones to focus on nest maintenance and parental care. Robots could be programmed with adjustable risk-taking behaviour, sensitive to group composition, with bolder robots entering into hazardous environments while shyer ones know to hold back. This could be very helpful in mapping a disaster area such as Fukushima, including its most dangerous parts, while avoiding too many robots in the swarm being damaged at once.
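A sketch of that role-switching idea (the 30% bold target and the function name are assumptions for illustration): each robot carries a personality flag, and when bold robots are lost, shy ones are promoted until the group regains its balance of risk-takers.

```python
def rebalance(personalities, bold_fraction=0.3):
    """Promote shy robots to 'bold' until the swarm again holds roughly
    `bold_fraction` risk-takers; returns the adjusted list."""
    target = round(bold_fraction * len(personalities))
    adjusted = list(personalities)
    n_bold = adjusted.count("bold")
    for i, p in enumerate(adjusted):
        if n_bold >= target:
            break
        if p == "shy":
            adjusted[i] = "bold"
            n_bold += 1
    return adjusted

swarm = ["bold"] * 3 + ["shy"] * 7
depleted = [p for p in swarm if p != "bold"]  # all bold robots lost in the field
restored = rebalance(depleted)                # shy robots step into the role
```

In a deployed swarm the trigger would be local — each robot estimating the bold fraction from the neighbours it can hear — rather than a global count.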
The ability to adapt
Cane toads were introduced in Australia in the 1930s as a pest control, and have since become an invasive species themselves. In new areas cane toads are seen to be somewhat social. One reason for their growth in numbers is that they are able to adapt to a wide temperature range, a form of physiological plasticity. Swarms of robots with the capability to switch power consumption mode, depending on environmental conditions such as ambient temperature, could be considerably more durable if we want them to function autonomously for the long term. For example, if we want to send robots off to map Mars then they will need to cope with temperatures that can swing from -150°C at the poles to 20°C at the equator.
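A minimal sketch of that kind of physiological plasticity (the thresholds and mode names are invented for illustration): the same “programming” yields different behaviour depending on the environment, simply by mapping ambient temperature to a power mode.

```python
def select_power_mode(ambient_c):
    """Map ambient temperature (degrees C) to an operating mode."""
    if ambient_c < -40.0:
        return "survival"   # heaters on, computation and motion minimal
    if ambient_c < 0.0:
        return "low_power"  # reduced duty cycle to conserve energy
    return "nominal"        # full sensing and locomotion
```

A Mars-bound swarm would exercise the whole range: -150°C at the poles selects "survival", while 20°C at the equator selects "nominal".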

Radek Ziemniewicz/Shutterstock
In addition to behavioural and physiological plasticity, some organisms show morphological (shape) plasticity. For example, some bacteria change their shape in response to stress, becoming elongated and so more resilient to being “eaten” by other organisms. If swarms of robots can combine together in a modular fashion and (re)assemble into more suitable structures this could be very helpful in unpredictable environments. For example, groups of robots could aggregate together for safety when the weather takes a challenging turn.
Whether it’s the “cultures” developed by animal groups that are reliant on learning abilities, or the more fundamental ability to change “personality”, internal function or shape, swarm robotics still has plenty of mileage left when it comes to drawing inspiration from nature. We might even wish to mix and match behaviours from different species, to create robot “hybrids” of our own. Humanity faces challenges ranging from climate change affecting ocean currents, to a growing need for food production, to space exploration – and swarm robotics can play a decisive part given the right bio-inspiration.
Edmund Hunt, EPSRC Doctoral Prize Fellow, University of Bristol
This article is republished from The Conversation under a Creative Commons license. Read the original article.