Archive 30.05.2021


Ethics is the new Quality

I took part in the first panel at the BSI conference The Digital World: Artificial Intelligence.  The subject of the panel was AI Governance and Ethics. My co-panelist was Emma Carmel, and we were expertly chaired by Katherine Holden.

Emma and I each gave short opening presentations prior to the Q&A. The title of my talk was Why is Ethical Governance in AI so hard? Something I’ve thought about a lot in recent months.

Here are the slides exploring that question.

 

And here are my words.

Early in 2018 I wrote a short blog post with the title Ethical Governance: what is it and who’s doing it? Good ethical governance is important because, for people to have confidence in their AI, they need to know that it has been developed responsibly. I concluded my piece by asking for examples of good ethical governance. I had several replies, but none nominated AI companies.

So, why is it that 3 years on we see some of the largest AI companies on the planet shooting themselves in the foot, ethically speaking? I’m not at all sure I can offer an answer but, in the next few minutes, I would like to explore the question: why is ethical governance in AI so hard?

But from a new perspective. 

Slide 2

In the early 1970s I spent a few months labouring in a machine shop. The shop was chaotic and disorganised. It stank of machine oil and cigarette smoke, and the air was heavy with the coolant spray used to keep the lathe bits cool. It was dirty and dangerous, with piles of metal swarf cluttering the walkways. There seemed to be a minor injury every day.

Skip forward 40 years and machine shops look very different. 

Slide 3

So what happened? Those of you old enough will recall that while British design was world class – think of the British Leyland Mini, or the Jaguar XJ6 – our manufacturing fell far short. “By the mid 1970s British cars were shunned in Europe because of bad workmanship, unreliability, poor delivery dates and difficulties with spares. Japanese car manufacturers had been selling cars here since the mid 60s but it was in the 1970s that they began to make real headway. Japanese cars lacked the style and heritage of the average British car. What they did have was superb build quality and reliability” [1].

What happened was Total Quality Management. The order and cleanliness of modern machine shops like this one is a strong reflection of TQM practices. 

Slide 4

In the late 1970s manufacturing companies in the UK learned – many the hard way – that ‘quality’ is not something that can be introduced by appointing a quality inspector. Quality is not something that can be hired in.

This word cloud reflects the influence from Japan. The words Japan, Japanese and Kaizen – which roughly translates as continuous improvement – appear here. In TQM everyone shares the responsibility for quality. People at all levels of an organization participate in kaizen, from the CEO to assembly line workers and janitorial staff. Importantly, suggestions from anyone, no matter who, are valued and taken equally seriously.

Slide 5

In 2018 my colleague Marina Jirotka and I published a paper on ethical governance in robotics and AI. In that paper we proposed 5 pillars of good ethical governance. The top four are:

  • have an ethical code of conduct, 
  • train everyone on ethics and responsible innovation,
  • practice responsible innovation, and
  • publish transparency reports.

The 5th pillar underpins these four and is perhaps the hardest: really believe in ethics.

Now a couple of months ago I looked again at these 5 pillars and realised that they parallel good practice in Total Quality Management: something I became very familiar with when I founded and ran a company in the mid 1980s [2].

Slide 6 

So, if we replace ethics with quality management, we see a set of key processes which exactly parallel our 5 pillars of good ethical governance, including the underpinning pillar: believe in total quality management.

I believe that good ethical governance needs the kind of corporate paradigm shift that was forced on UK manufacturing industry in the 1970s.

Slide 7

In a nutshell, I think ethics is the new quality.

Yes, setting up an ethics board or appointing an AI ethics officer can help, but on their own these are not enough. Like Quality, everyone needs to understand and contribute to ethics. Those contributions should be encouraged, valued and acted upon. Nobody should be fired for calling out unethical practices.

Until corporate AI understands this we will, I think, struggle to find companies that practice good ethical governance. 

Quality cannot be ‘inspected in’, and nor can ethics.

Thank you.


Notes.

[1] I’m quoting here from the excellent history of British Leyland by Ian Nicholls.

[2] My company did a huge amount of work for Motorola and – as a subcontractor – we became certified software suppliers within their six sigma quality management programme.

[3] It was competitive pressure that forced manufacturing companies in the 1970s to up their game by embracing TQM. Depressingly the biggest AI companies face no such competitive pressures, which is why regulation is both necessary and inevitable.

A helping hand for working robots

Until now, competing types of robotic hand designs offered a trade-off between strength and durability. One commonly used design, employing a rigid pin joint that mimics the mechanism in human finger joints, can lift heavy payloads, but is easily damaged in collisions, particularly if hit from the side. Meanwhile, fully compliant hands, typically made of molded silicone, are more flexible, harder to break, and better at grasping objects of various shapes, but they fall short on lifting power.

Researchers create robot that smiles back

While our facial expressions play a huge role in building trust, most robots still sport the blank and static visage of a professional poker player. With the increasing use of robots in locations where robots and humans need to work closely together, from nursing homes to warehouses and factories, the need for a more responsive, facially realistic robot is growing more urgent.

#333: Snake-like Robot as a Worker Companion, with Matt Bilsky

Matt Bilsky, founder and CEO of FLX Solutions, discusses the snake-like robot he invented called the FLX BOT. The FLX BOT consists of modular links, each with a joint that can extend and rotate to get into tight spaces. Each link carries sensors, including inertial measurement units and a camera. The robot is used to navigate and work in challenging environments, such as above ceilings and within walls. Matt discusses the key innovations of his product as well as the academic and entrepreneurial journey that led him to the FLX BOT.

 

Matt Bilsky

Matt Bilsky, PhD, PE is the inventor of the FLX BOT, a licensed Professional Engineer, a Mechanical Engineering professor at Lehigh University, and a former repair/maintenance contractor. In 2017, he was named Lehigh University Entrepreneurship Educator of the Year. Matt has a Mechanical Engineering PhD from Lehigh University focused on smart product design, Technical Entrepreneurship, and mechatronics. He holds two additional Lehigh degrees: a BS in Mechanical Engineering with an Electrical Engineering minor and a Master of Engineering degree, also in Mechanical Engineering. Since he was a child, Matt has been an innovator, designing and building numerous electronic gadgets in his basement shop. In 2003 he started his first company, Mattcomp Services LLC, offering computer repair, networking, home theater, and handyman services. He also created a web hosting company in 2005, Mattcomp Hosting, including all necessary back-end components on dedicated servers.


Slender robotic finger senses buried items

MIT researchers developed a “Digger Finger” robot that digs through granular material, like sand and gravel, and senses the shapes of buried objects. The technology could aid in disarming buried bombs or inspecting underground cables. Image courtesy of the researchers.

By Daniel Ackerman

Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand new quest. Previously, researchers have used technologies that sense the subterranean from above, such as Ground Penetrating Radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

“So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
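The article does not spell out the reconstruction step, but the core idea behind this style of sensor is photometric stereo: images of the membrane captured under a few known light directions let you recover a surface normal at every pixel by least squares. The sketch below is only an illustration of that idea on synthetic data, with hypothetical array shapes, and is not the GelSight pipeline itself.

```python
# Minimal photometric-stereo sketch (illustrative only, not the GelSight pipeline).
# Assumes three grayscale images of the same membrane patch, each lit from a
# known direction, stacked into an array of shape (3, H, W).
import numpy as np

def estimate_normals(images, light_dirs):
    """Recover unit surface normals from images taken under known lighting.

    images     : array (3, H, W) of pixel intensities
    light_dirs : array (3, 3), one unit lighting direction per row
    """
    # Lambertian model: intensity = light_dirs @ (albedo * normal), per pixel.
    L_inv = np.linalg.inv(light_dirs)                 # (3, 3)
    g = np.einsum("ij,jhw->ihw", L_inv, images)       # albedo-scaled normals
    norms = np.linalg.norm(g, axis=0, keepdims=True)
    return g / np.clip(norms, 1e-8, None)             # unit normals, (3, H, W)

# Toy example with stand-in data:
lights = np.eye(3)                        # three orthogonal light directions
frames = np.random.default_rng(0).random((3, 32, 32))
normals = estimate_normals(frames, lights)
print(normals.shape)                      # (3, 32, 32)
```

From the normal field, the depth map of the contact area can then be recovered by integration; the real sensor will differ in details such as calibration and the colour-coded illumination.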

A closeup photograph of the new robot and a diagram of its parts. Image courtesy of the researchers.

For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” says Ouyang. “That’s how we were able to get it into such a compact form.” The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place. That makes it difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

“We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” says Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

Top row: 3D printed objects used for the object identification experiment. Middle row: Example image data when the Digger Finger directly touches a 3D printed object. Bottom row: Example image data when the Digger Finger touches a 3D printed object that is buried in sand. Image courtesy of the researchers.

They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” says Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.


Making “cheddar” With Industrial Automation – Achieving 83 Per Cent Waste Reduction in Food Manufacturing

Ultrasonic cutting uses energy from the microscopic vibrations of a blade to pass easily through the material, offering a more effective solution for cutting cheese and other food products. When paired with automation, ultrasonic technology can deliver precise and accurate cutting.

Meet the #NCCRWomen in robotics

Film still showing Maria Vittoria Minniti working with a robot (film still by schwarzpictures.com)

Meet Maria Vittoria and Inés!

To celebrate Women’s Day 2021 and the 50th anniversary of women’s right to vote in Switzerland, the Swiss NCCRs (National Centres of Competence in Research) wanted to show you who our women researchers are and what a day in their job looks like. The videos are targeted at women and girls of school and undergraduate age to show what day-to-day life as a scientist is like and to make it more accessible. Each NCCR hosted a week where it published several videos covering multiple scientific disciplines, and here we bring you what was produced by NCCR Digital Fabrication.

The videos cover a wide range of subjects, including (but not limited to) maths, physics, microbiology, psychology and planetary science, but here we have two women who work with robots.

Maria Vittoria Minniti is a robotics engineer and PhD student who enhances mobile manipulation capabilities in under-actuated robots.

Inés Ariza is an architect who uses a robot to 3D print custom metal joints for complex structures.

 

Head over to YouTube or Instagram (English, German or French) to see the women featured in the #NCCRWomen campaign.

Creating expressive robot swarms

As robot swarms leave the lab and enter our daily lives, it is important that we find ways to communicate effectively with them, especially swarms that contain a large number of robots. In our lab, we are thinking of ways to make swarms that are easy and intuitive for people to interact with. By making robots expressive, we can understand their state and make decisions accordingly. To that end, we have created a system where humans can build a canvas with robots and create shapes with up to 300 real robots and up to 1000 simulated robots.

Painting with robots

In a system we created called Robotic Canvas, we project an image onto a robot swarm via an overhead projector, and the swarm replicates the image with its LEDs by sensing the colour of the projected light. Each robot therefore acts as a pixel on the canvas. Humans can then interact with the robot pixels by copying and pasting pixels (LED colours) onto different parts of the canvas, erasing them (turning off LEDs), changing their colour, or saving and retrieving paintings (by saving and retrieving the state of the LEDs). If a GIF or a video is projected onto the robots, they appear to be showing a video. The robots are also decentralised: there is no central controller telling them what to do, so there is no single point of failure. If one robot fails, the system carries on and humans can still interact and create paintings with the rest of the robots. Instead, each robot relies on talking to its neighbours and sensing its environment to work out how to act next. In other words, robot pixels use only local interactions (communications with neighbouring robots) and environmental interactions (sensing the colour of light and shadows) to tell which interaction is taking place. Here are some images showing the robots recreating images projected onto them.
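To make the painting behaviour concrete, here is a minimal sketch of the loop a single robot pixel could run; the message format and command names are hypothetical stand-ins, not the actual Kilobot firmware.

```python
# Illustrative sketch of one "robot pixel" (hypothetical API, not the real
# Kilobot code): sense the projected colour, show it on the LED, and honour
# simple copy/erase/save commands relayed by neighbouring robots.
from dataclasses import dataclass

@dataclass
class RobotPixel:
    led: tuple = (0, 0, 0)     # current LED colour (R, G, B)
    saved: tuple = (0, 0, 0)   # colour stored for save/retrieve

    def step(self, sensed_colour, neighbour_msgs):
        """One control cycle using only local and environmental information."""
        # Default behaviour: mirror the colour of light projected onto us.
        self.led = sensed_colour
        # Commands arrive only from neighbours; there is no central controller.
        for msg in neighbour_msgs:
            if msg["cmd"] == "erase":
                self.led = (0, 0, 0)
            elif msg["cmd"] == "paste":
                self.led = msg["colour"]
            elif msg["cmd"] == "save":
                self.saved = self.led
            elif msg["cmd"] == "retrieve":
                self.led = self.saved
        return self.led

# Example: the robot senses red light and is asked to save that colour.
pixel = RobotPixel()
print(pixel.step((255, 0, 0), [{"cmd": "save"}]))   # (255, 0, 0)
```

Because every robot runs the same small loop on purely local information, removing any one robot leaves the rest of the canvas working.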

Here is a performance we did with the robots. This is a story of day and night: the sun sets over the ocean, allowing night to arrive. Then stars begin to appear in the night sky, and clouds form as well. Finally, the sun rises again.

Creating shapes with robots

In Robotic Canvas, the robots were stationary, and they needed to fill out the whole shape to be able to represent the image projected onto them properly. However, we were able to reduce the number of robots used by enabling them to move and aggregate around edges of images projected onto them. This way, a smaller number of robots can be used to represent an image while still preserving clear image representation. We were also able to produce videos with the robots by projecting a video onto them. We used up to 300 real robots (as can be seen from the line, circle and arrow shapes below) and up to 1000 simulated robots (as can be seen in the letters F, T and the blinking eye video).

The way the robots aggregate around edges is again achieved only through local and environmental interactions. Robots share with their neighbours the light colour they sense. They then combine their neighbours’ opinions with their own to reach a final decision on what colour is being projected. Robots move randomly as long as their opinions match their neighbours’ opinions. If there is a high conflict of opinions, that means the robots are standing on an edge, and they stop moving. They then broadcast a message asking their neighbours to aggregate around them to represent the edge. We can increase the distance within which neighbours respond to the edge robots, giving us the ability to have thicker or thinner lines of robots on the edges.
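In rough pseudocode terms (the function and threshold below are hypothetical, not the published controller), the per-robot edge rule can be sketched like this:

```python
# Sketch of the edge-detection rule described above (illustrative only).
# A robot compares the colour it senses with the colours its neighbours
# report; strong disagreement suggests it is sitting on an image edge.
def decide_action(own_colour, neighbour_colours, conflict_threshold=0.5):
    """Return 'stop_and_broadcast' on a likely edge, else 'random_walk'.

    own_colour         : the light colour this robot senses (e.g. a label)
    neighbour_colours  : colours reported by robots within message range
    conflict_threshold : fraction of disagreeing neighbours treated as an edge
    """
    if not neighbour_colours:
        return "random_walk"                  # no local information yet
    disagreeing = sum(c != own_colour for c in neighbour_colours)
    conflict = disagreeing / len(neighbour_colours)
    if conflict >= conflict_threshold:
        # High conflict: neighbours see a different colour, so this robot
        # straddles an edge. Stop and ask nearby robots to gather here.
        return "stop_and_broadcast"
    return "random_walk"

# Example: half the neighbours see a different colour, so treat it as an edge.
print(decide_action("red", ["red", "blue", "blue", "red"]))  # stop_and_broadcast
```

Tuning the broadcast range (rather than anything in this sketch) is what gives thicker or thinner lines of robots along the edges.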

The robots do not need to have the image the user wishes for them to represent stored in their memory. That gives the system an important feature: adaptability. The user can change the image projected at any time and the robots will re-configure themselves to represent the new image.

The inspiration behind robot expressivity

Our research was inspired by the fact that robot swarms and human-swarm interaction are exciting, hot topics in today’s world of robotics. Searching for ways to interact with swarm robots that neither break their decentralised nature (no central controller) nor require humans to interact with each robot separately (there could be thousands of robots, so updating them individually would be infeasible) is an interesting and challenging problem to solve. Therefore, we created the Robotic Canvas to experiment with methods by which a user can relay messages to, and influence the behaviour of, hundreds of robots without needing to communicate with each separately. We researched how to do this using only environmental and/or local interactions. Furthermore, we decided to add mobility to the robots, which allows a smaller number of robots to be used for image representation.

Challenges along the way

While creating paintings and shapes looks interesting and fun, the road to creating this system was not without obstacles! Going from working in simulation to working with real-life robots proved challenging, due to noise from robot motion and errors in sensor readings. Using a circular arena helped prevent robots from getting stuck at the boundaries of the arena, and filtering noisy readings before broadcasting opinions to neighbours helped reduce errors in edge detection.

Limitations of our system

The size of the robots limits the resolution of the image, similar to how pixels work: the smaller the robots (and their LEDs), the clearer the image representation. There is therefore a limit on how clearly very complex images can be represented with the current robots (the Kilobots).

Beyond painting

The emergent shape-formation behaviour of robot swarms has many potential applications in the real world. Its expressive nature serves it well as an artistic and interactive visual display. Also, the robots could be used as functional materials which respond to light projections by depositing themselves onto image edges, which could have applications in architecture and electronics. The system could also have applications in ocean clean-ups, where robot swarms could detect and surround pollutants in oceans such as oil spills.
