
Q&A: Vivienne Sze on crossing the hardware-software divide for efficient artificial intelligence

Associate professor Vivienne Sze is bringing artificial intelligence applications to smartphones and tiny robots by co-designing energy-efficient hardware and software. Image credits: Lillie Paquette, MIT School of Engineering

Not so long ago, watching a movie on a smartphone seemed impossible. Vivienne Sze was a graduate student at MIT at the time, in the mid-2000s, and she was drawn to the challenge of compressing video to keep image quality high without draining the phone's battery. The solution she hit upon called for co-designing energy-efficient circuits with energy-efficient algorithms.

Sze would go on to be part of the team that won an Engineering Emmy Award for developing the video compression standards still in use today. Now an associate professor in MIT’s Department of Electrical Engineering and Computer Science, Sze has set her sights on a new milestone: bringing artificial intelligence applications to smartphones and tiny robots.

Her research focuses on designing more-efficient deep neural networks to process video, and more-efficient hardware to run those applications. She recently co-published a book on the topic, and will teach a professional education course on how to design efficient deep learning systems in June.

On April 29, Sze will join Assistant Professor Song Han for an MIT Quest AI Roundtable on the co-design of efficient hardware and software moderated by Aude Oliva, director of MIT Quest Corporate and the MIT director of the MIT-IBM Watson AI Lab. Here, Sze discusses her recent work.

Q: Why do we need low-power AI now?

A: AI applications are moving to smartphones, tiny robots, and internet-connected appliances and other devices with limited power and processing capabilities. The challenge is that AI has high computing requirements. Analyzing sensor and camera data from a self-driving car can consume about 2,500 watts, but the computing budget of a smartphone is just about a single watt. Closing this gap requires rethinking the entire stack, a trend that will define the next decade of AI.

Q: What’s the big deal about running AI on a smartphone?

A: It means that the data processing no longer has to take place in the “cloud,” on racks of warehouse servers. Untethering compute from the cloud allows us to broaden AI’s reach. It gives people in developing countries with limited communication infrastructure access to AI. It also speeds up response time by reducing the lag caused by communicating with distant servers. This is crucial for interactive applications like autonomous navigation and augmented reality, which need to respond instantaneously to changing conditions. Processing data on the device can also protect medical and other sensitive records: data can be processed right where they’re collected.

Q: What makes modern AI so inefficient?

A: The cornerstone of modern AI — deep neural networks — can require hundreds of millions to billions of calculations — orders of magnitude greater than compressing video on a smartphone. But it’s not just number crunching that makes deep networks energy-intensive — it’s the cost of shuffling data to and from memory to perform these computations. The farther the data have to travel, and the more data there are, the greater the bottleneck.
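To get a feel for the scale mentioned above, one can count the multiply-accumulate operations (MACs) in a single convolutional layer. This is a back-of-the-envelope sketch with hypothetical but typical layer sizes, not figures from any specific network discussed in the interview:

```python
# Rough operation count for one convolutional layer: every output pixel
# in every output channel needs K*K*C_in multiply-accumulates (MACs).
def conv_macs(h_out, w_out, c_in, c_out, k):
    return h_out * w_out * c_out * (k * k * c_in)

# One mid-network layer (hypothetical sizes): 56x56 output,
# 128 input channels, 128 output channels, 3x3 kernel.
macs = conv_macs(56, 56, 128, 128, 3)
print(f"{macs:,} MACs")  # ~462 million MACs for a single layer
```

Multiply that by the dozens of layers in a modern network and the "hundreds of millions to billions" figure follows, before even accounting for the memory traffic those calculations generate.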

Q: How are you redesigning AI hardware for greater energy efficiency?

A: We focus on reducing data movement and the amount of data needed for computation. In some deep networks, the same data are used multiple times for different computations. We design specialized hardware to reuse data locally rather than send them off-chip. Storing reused data on-chip makes the process extremely energy-efficient.  

We also optimize the order in which data are processed to maximize their reuse. That’s the key property of the Eyeriss chip, which was developed in collaboration with Joel Emer. In our follow-up work, Eyeriss v2, we made the chip flexible enough to reuse data across a wider range of deep networks. The Eyeriss chip also uses compression to reduce data movement, a common tactic among AI chips. The low-power Navion chip, developed in collaboration with Sertac Karaman for mapping and navigation applications in robotics, uses two to three orders of magnitude less energy than a CPU, in part through optimizations that reduce the amount of data processed and stored on-chip.
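The payoff of local reuse can be illustrated with a toy accounting of off-chip memory traffic for a matrix multiply, a core operation in deep networks. This is a simplified model for intuition only; it is not the actual dataflow used by Eyeriss:

```python
# Toy model of off-chip weight traffic for an (M x K) x (K x N) matrix
# multiply, where each weight W[k][n] is needed once per input row.
def weight_fetches_no_reuse(m, k, n):
    # Worst case: the weight is re-fetched from off-chip memory
    # every time it is used.
    return m * k * n

def weight_fetches_with_reuse(m, k, n):
    # With an on-chip buffer: each weight is fetched once and
    # reused across all M input rows.
    return k * n

m, k, n = 256, 512, 512
ratio = weight_fetches_no_reuse(m, k, n) // weight_fetches_with_reuse(m, k, n)
print(f"{ratio}x fewer off-chip weight fetches")  # 256x fewer
```

Since an off-chip DRAM access can cost orders of magnitude more energy than an on-chip access, cutting fetches by a factor of M translates directly into the kind of energy savings described above.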

Q: What changes have you made on the software side to boost efficiency?

A: The more that software aligns with hardware-related performance metrics like energy efficiency, the better we can do. Pruning, for example, is a popular way to remove weights from a deep network to reduce computation costs. But rather than remove weights based on their magnitude, our work on energy-aware pruning suggests you can remove the more energy-intensive weights to reduce overall energy consumption. Another method we’ve developed, NetAdapt, automates the process of adapting and optimizing a deep network for a smartphone or other hardware platforms. Our recent follow-up work, NetAdaptv2, accelerates the optimization process to further boost efficiency.
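For readers unfamiliar with pruning, here is a minimal sketch of the standard magnitude criterion that energy-aware pruning improves upon. Sze's method instead ranks weights by estimated energy cost; this sketch shows only the conventional baseline, with a made-up weight list:

```python
# Baseline magnitude pruning: zero out the given fraction of weights
# with the smallest absolute values. (Energy-aware pruning replaces
# this |w| ranking with an estimated per-weight energy cost.)
def magnitude_prune(weights, sparsity):
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2, -0.3, 0.08]
print(magnitude_prune(w, 0.5))
# -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, -0.3, 0.0]
```

Zeroed weights need neither storage nor multiplication, which is why pruning cuts both computation and, with the right ranking criterion, energy.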

Q: What low-power AI applications are you working on?

A: I’m exploring autonomous navigation for low-energy robots with Sertac Karaman. I’m also working with Thomas Heldt to develop a low-cost and potentially more effective way of diagnosing and monitoring people with neurodegenerative disorders like Alzheimer’s and Parkinson’s by tracking their eye movements. Eye-movement properties like reaction time could potentially serve as biomarkers for brain function. In the past, eye-movement tracking took place in clinics because of the expensive equipment required. We’ve shown that an ordinary smartphone camera can take measurements from a patient’s home, making data collection easier and less costly. This could help to monitor disease progression and track improvements in clinical drug trials.

Q: Where is low-power AI headed next?

A: Reducing AI’s energy requirements will extend its reach to a wider range of embedded devices, such as tiny robots, smart homes, and medical devices. A key challenge is that efficiency often requires a tradeoff in performance. For wide adoption, it will be important to dig deeper into these different applications to establish the right balance between efficiency and accuracy.

Moon exploration rovers from Lunar Zebro – What if you would use this?

In this series of articles, we take robot innovations from their test labs and bring them to a randomly selected workplace in the outside world. We discovered that Lunar Zebro is not only good for risky endeavours on extraterrestrial terrain, but that these sturdy, self-organising little rovers can also simplify the life of hardworking interior decorators.

‘Shoot for the moon’ is taken quite literally by this team of students and professors from TU Delft. “We want to make the exploration of the moon available for a wider audience”, says Pieter van Santvliet, partnerships coordinator at Lunar Zebro. “So we build the world’s smallest and lightest rover.”

And what is the key to making a robust space vehicle as simple and cheap as possible? Korneel Somers, the team’s content creator, says: “The key is to not focus on one thing, but to create an entire system of collaborations”.

The Lunar Zebro discovering moon-like landscapes in Hawaii
A render of how the swarm would look on the moon
Testing the Lunar Zebro on a rugged terrain

Genius steals, and this project is always looking for existing robotic innovations to combine and improve upon. Zebro’s distinctive C-shaped legs were sourced this way. These plastic half-circles rotate over the highest point of the C, enabling the robot to take little steps on uneven terrain. The Lunar Zebro Legs Team coordinates the six identical legs with a special algorithm, resulting in a walking motion. Other teams are Team Thermal, Team Body, Team Power and so on.

A collection of Lunar Zebros on the lawn

Lunar Zebro’s oddly shaped leg allows it to move around rough terrains

The power of the concept manifests itself through the collective. Individually, the rovers have a simple design and are highly customisable. But collectively, they can in theory accomplish complex tasks. Zebros can work in a swarm, each robot making autonomous spur-of-the-moment decisions while the collective pursues a common goal. And because they are cheap, losing one robot is not the end of the mission.

Sounds good. Current space missions are expensive and therefore highly focused. Affordable robots could open up moon exploration to a much wider range of research projects. Imagine letting rovers swarm the moon’s far side (the quietest radio location near Earth) to look for cosmic signals from intelligent life. The extremely harsh conditions on the moon also challenge Lunar Zebro students to become exceptional engineers.

But we are also curious about what the Zebros might do on planet Earth. Like all inventions, they will surely pop up in unexpected places. To catch a glimpse of a possible future, we asked a professional decorator: “What would happen if you would use this?”

Selma making a big piece of decor
Selma hanging wallpaper with care for detail

Meet Selma van Gent, an inspiring professional who works as a decorator and interior designer: she builds decors, is trained in decorative painting, and is an expert at hanging wallpaper. She first tells us she is “anti-computer and actually anti-anything-technological”. But then she sees the Zebros, starts exploring the possibilities of these little machines, and her ideas keep flowing.

Mapping the building site
“When I build something on-site, I have to take into account everything about the entire space”, she explains. “I have to measure distances and corners, estimate what sound does, how high things can be. Sometimes I get bogged down in details.” With a legion of Zebros at her command, Selma could just send them out to map the physical dimensions of the place, while she looks at the overall picture.

Intelligent robotic scaffolding

When she does paper hanging, Selma usually needs a structure to stand on. This scaffolding must be sturdy and safe, but also perfectly tailored so every wall is within reach. It often takes endless tinkering to get it just right. If Zebro rovers were strong enough, and able to crawl on top of each other and interlock, a swarm of Zebros could form a wonderful self-constructing scaffold, says Selma: “If they can analyse the required heights and combine these with my preferences, they could make the entire structure without me. That would be such a relief.”

Always bring me the right tools for the job

Our handywoman can do all kinds of jobs, and she is blessed with all kinds of tools. Sometimes this blessing becomes a curse. When Selma is painting on Tuesday, hanging wallpaper on Wednesday, and making props on Thursday, there are so many changes that she doesn’t always have all the right tools with her in the van. “I would tell the rovers what job I had today and they would know which tools I would need,” she says. “And they would stock the van for me and then always bring me the right tools for the job.” This would save a lot of headspace and allow Selma to focus on what she likes to do most.

These cheap robots are now made to explore the moon and withstand the harsh environment of space, but who knows what the future holds? Maybe they will eventually land on planet Earth: in the creative spaces of professional decoration.

The post Moon exploration rovers from Lunar Zebro – What if you would use this? appeared first on RoboValley.

Cognitive neuroscience could pave the way for emotionally intelligent robots

Human beings have the ability to recognize emotions in others. Although perfectly capable of communicating with humans through speech, robots and virtual agents are only good at processing logical instructions, which greatly restricts human-robot interaction (HRI). Consequently, a great deal of research in HRI is about emotion recognition from speech. But first, how do we describe emotions?

Origami-based tires can change shape while a vehicle is moving

A team of researchers affiliated with Seoul National University, Harvard University and Hankook Tire and Technology Co. Ltd. has developed a tire, based on an origami design, whose shape can change while the vehicle is moving. In their paper published in the journal Science Robotics, the group describes the new tire design and how well it worked when tested.

Vision test for autonomous cars

The five-meter-long Lexus RX-450h leads a rather contemplative life at Empa. It never takes long trips. Instead, the SUV dutifully makes its rounds on a special track just 180 meters long in a secluded backyard of the Empa campus. The scenery is not particularly spectacular: the Mobileye camera behind the windshield sees freshly painted lane markings on aging concrete; the Velodyne lidar scans the window front of the same lab building at every turn; and the Delphi radar behind the Lexus' radiator grille routinely measures the distance to five tin trash cans set up on either side of the course.

Researchers develop a robotic guide dog to assist blind individuals

Guide dogs, dogs that are trained to help humans move through their environments, have played a critical role in society for many decades. These highly trained animals, in fact, have proved to be valuable assistants for visually impaired individuals, allowing them to safely navigate indoor and outdoor environments.