News


Inside “The Laughing Room”

“The Laughing Room,” an interactive installation at Cambridge Public Library, was shown live on monitors at Hayden Library and streamed online.
Still photos courtesy of metaLAB (at) Harvard.

By Brigham Fay

“The Laughing Room,” an interactive art installation by author, illustrator, and MIT graduate student Jonathan “Jonny” Sun, looks like a typical living room: couches, armchairs, coffee table, soft lighting. This cozy scene, however, sits in a glass-enclosed space, flanked by bright lights and a microphone, with a bank of laptops and a video camera positioned across the room. People wander in, take a seat, begin chatting. After a pause in the conversation, a riot of canned laughter rings out, prompting genuine giggles from the group.

Presented at the Cambridge Public Library in Cambridge, Massachusetts, Nov. 16-18, “The Laughing Room” was an artificially intelligent room programmed to play an audio laugh track whenever participants said something that its algorithm deemed funny. Sun, who is currently on leave from his PhD program in the MIT Department of Urban Studies and Planning, is an affiliate of the Berkman Klein Center for Internet and Society at Harvard University and a creative researcher at the metaLAB at Harvard. He created the project to explore the increasingly social and cultural roles of technology in public and private spaces, users’ agency within and dependence on such technology, and the privacy issues raised by these systems. The installations were presented as part of ARTificial Intelligence, an ongoing program led by MIT associate professor of literature Stephanie Frampton that fosters public dialogue about the emerging ethical and social implications of artificial intelligence (AI) through art and design.

Setting the scene

“Cambridge is the birthplace of artificial intelligence, and this installation gives us an opportunity to think about the new roles that AI is playing in our lives every day,” said Frampton. “It was important to us to set the installations in the Cambridge Public Library and MIT Libraries, where they could spark an open conversation at the intersections of art and science.”

“I wanted the installation to resemble a sitcom set from the 1980s, a private, familial space,” said Sun. “I wanted to explore how AI is changing our conception of private space, with things like the Amazon Echo or Google Home, where you’re aware of this third party listening.”

“The Control Room,” a companion installation located in Hayden Library at MIT, displayed a live stream of the action in “The Laughing Room,” while another monitor showed the algorithm evaluating people’s speech in real time. Live streams were also shared online via YouTube and Periscope. “It’s an extension of the sitcom metaphor, the idea that people are watching,” said Sun. The artist was interested to see how people would act, knowing they had an audience. Would they perform for the algorithm? Sun likened it to Twitter users trying to craft the perfect tweet so it will go viral.

Programming funny

“Almost all machine learning starts from a dataset,” said Hannah Davis, an artist, musician, and programmer who collaborated with Sun to create the installation’s algorithm. She described the process at an “Artists Talk Back” event held Saturday, Nov. 17, at Hayden Library. The panel discussion included Davis; Sun; Frampton; collaborator Christopher Sun; research assistant Nikhil Dharmaraj; Reinhard Engels, manager of technology and innovation at Cambridge Public Library; Mark Szarko, librarian at MIT Libraries; and Sarah Newman, creative researcher at the metaLAB. The panel was moderated by metaLAB founder and director Jeffrey Schnapp.

Davis explained how, to train the algorithm, she scraped stand-up comedy routines from YouTube, selecting performances by women and people of color to avoid programming misogyny and racism into how the AI identified humor. “It determines what is the setup to the joke and what shouldn’t be laughed at, and what is the punchline and what should be laughed at,” said Davis. Depending on how likely something is to be a punchline, the laugh track plays at different intensities.
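As a rough sketch of the mechanism Davis describes (not the installation’s actual code; the classifier and thresholds below are hypothetical stand-ins), a punchline score could be mapped to a laugh-track intensity like this:

```python
# A minimal, hypothetical sketch of the mechanism Davis describes:
# score how punchline-like a sentence is, then map that score to a
# laugh-track intensity. The real installation uses a model trained on
# transcribed stand-up routines; this heuristic is only a stand-in.

def punchline_probability(sentence: str) -> float:
    """Stand-in for the trained classifier (0.0 = setup, 1.0 = punchline)."""
    score = 0.2 * sentence.count("!") + (0.3 if len(sentence.split()) < 8 else 0.0)
    return min(1.0, score)

def laugh_intensity(p: float) -> str:
    """Bucket the punchline probability into a laugh-track level."""
    if p < 0.3:
        return "silence"
    if p < 0.6:
        return "chuckle"
    if p < 0.85:
        return "laugh"
    return "roar"

if __name__ == "__main__":
    for line in ["So the other day I walked into the library to return a book.",
                 "And the book returned me!"]:
        p = punchline_probability(line)
        print(f"{p:.2f} -> {laugh_intensity(p)}")
```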

Fake laughs, real connections

Sun acknowledged that the reactions from “The Laughing Room” participants have been mixed: “Half of the people came out saying ‘that was really fun,’” he said. “The other half said ‘that was really creepy.’”

That was the impression shared by Colin Murphy, a student at Tufts University who heard about the project from following Sun on Twitter: “This idea that you are the spectacle of an art piece, that was really weird.”

“It didn’t seem like it was following any kind of structure,” added Henry Scott, who was visiting from Georgia. “I felt like it wasn’t laughing at jokes, but that it was laughing at us. The AI seems mean.”

While many found the experience of “The Laughing Room” uncanny, for others it was intimate, joyous, even magical.

“There’s a laughter that comes naturally after the laugh track that was interesting to me, how it can bring out the humanness,” said Newman at the panel discussion. “The work does that more than I expected it to.”

Frampton noted how the installation’s setup also prompted unexpected connections: “It enabled strangers to have conversations with each other that wouldn’t have happened without someone listening.”

Continuing his sitcom metaphor, Sun described these first installations as a “pilot,” and is looking forward to presenting future versions of “The Laughing Room.” He and his collaborators will keep tweaking the algorithm, using different data sources, and building on what they’ve learned through these installations. “The Laughing Room” will be on display in the MIT Wiesner Student Art Gallery in May 2019, and the team is planning further events at MIT, Harvard, and Cambridge Public Library throughout the coming year.

“This has been an extraordinary collaboration and shown us how much interest there is in this kind of programming and how much energy can come from using the libraries in new ways,” said Frampton.

“The Laughing Room” and “The Control Room” were funded by the metaLAB (at) Harvard, the MIT De Florez Fund for Humor, the Council of the Arts at MIT, and the MIT Center for Art, Science and Technology, and presented in partnership with the Cambridge Public Library and the MIT Libraries.

Waymo soft launches in Phoenix, but…

Waymo car drives in Tempe

Waymo announced today that it will begin commercial operations in the Phoenix area under the name “Waymo One.” Waymo had promised this would happen this year, and it is a huge milestone, but I can’t avoid a small bit of disappointment.

Regular readers will know I am a huge booster of Waymo, not simply because I worked on that team in its early years, but because it is clearly the best by every metric we know. However, this pilot rollout is also quite a step down from what was anticipated, though for sensible reasons.

  1. At first, it is only available to the early rider program members. In fact, it’s not clear that this is any different from what they had before, other than it is more polished and there is a commercial charging structure (not yet published).
  2. Vehicles will continue to operate with safety drivers.

Several companies, including Waymo itself, Uber, and Lyft, have already offered limited taxi services with safety drivers. This service is mainly different in its polish and level of development, or at least that’s all we have been told. They only say they “hope” to expand it to people outside the early rider program soon.

In other words, Waymo has missed the target it set of a real service in 2018. It was a big, hairy, audacious target, so there is no shame or surprise in missing it, and it may not be missed by much.

There is a good reason for missing the target. The Uber fatality, right in that very operation area, has everybody skittish. The public. Developers. Governments. It used up the tolerance the public would normally have for mistakes. Waymo can’t take the risk of a mistake, especially in Phoenix, especially now, and especially if it is seen to have come about because they tried to go too fast or took new risks like dropping safety drivers.

I suspect that at Waymo there were serious talks about not launching in Phoenix, in spite of the huge investment there. In the end, though, changing towns might help, but not enough. Everybody is slowed down by this. Even an injury-free accident that could have caused an injury will be problematic, and the truth is that, as the volume of service increases, such an accident is coming.

It was terribly jarring for me to watch Waymo’s introduction video. I set it to play at one minute, where they do the big reveal and declare they are “Introducing the self driving service.”

The problem? The car is driving down N. Mill Avenue in Tempe, the road on which Uber killed Elaine Herzberg, about 1,100 feet from the site of her death. Waymo assures me that this was entirely unintentional — and those who live outside the area or who did not study the accident may not recognize it — but it soured the whole launch for me.

Universal Robots Celebrates 10 Year Anniversary of Selling the World’s First Commercially Viable Collaborative Robot

Universal Robots marks a decade of pioneering collaborative robots, now the fastest-growing segment of industrial automation. The company is poised for continued growth as its cobots continue to lower the automation barrier within a wide range of industries.

Visual model-based reinforcement learning as a path towards generalist robots


By Chelsea Finn∗, Frederik Ebert∗, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine

With very little explicit supervision and feedback, humans are able to learn a wide range of motor skills by simply interacting with and observing the world through their senses. While there has been significant progress towards building machines that can learn complex skills and learn based on raw sensory information such as image pixels, acquiring large and diverse repertoires of general skills remains an open challenge. Our goal is to build a generalist: a robot that can perform many different tasks, like arranging objects, picking up toys, and folding towels, and can do so with many different objects in the real world without re-learning for each object or task.

While these basic motor skills are much simpler and less impressive than mastering Chess or even using a spatula, we think that being able to achieve such generality with a single model is a fundamental aspect of intelligence.

The key to acquiring generality is diversity. If you deploy a learning algorithm in a narrow, closed-world environment, the agent will recover skills that are successful only in a narrow range of settings. That’s why an algorithm trained to play Breakout will struggle when anything about the images or the game changes. Indeed, the success of image classifiers relies on large, diverse datasets like ImageNet. However, having a robot autonomously learn from large and diverse datasets is quite challenging. While collecting diverse sensory data is relatively straightforward, it is simply not practical for a person to annotate all of the robot’s experiences. It is more scalable to collect completely unlabeled experiences. Then, given only sensory data, akin to what humans have, what can you learn? With raw sensory data there is no notion of progress, reward, or success. Unlike games like Breakout, the real world doesn’t give us a score or extra lives.

We have developed an algorithm that can learn a general-purpose predictive model using unlabeled sensory experiences, and then use this single model to perform a wide range of tasks.



With a single model, our approach can perform a wide range of tasks, including lifting objects, folding shorts, placing an apple onto a plate, rearranging objects, and covering a fork with a towel.

In this post, we will describe how this works. We will discuss how we can learn based on only raw sensory interaction data (i.e. image pixels, without requiring object detectors or hand-engineered perception components). We will show how we can use what was learned to accomplish many different user-specified tasks. And, we will demonstrate how this approach can control a real robot from raw pixels, performing tasks and interacting with objects that the robot has never seen before.
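As a loose illustration of the general recipe the post describes, learned action-conditioned video prediction combined with sampling-based planning toward a goal image, here is a toy sketch. Everything in it, including the placeholder prediction model, the planner settings, and the cost function, is a hypothetical stand-in rather than the authors’ implementation.

```python
"""Illustrative sketch (not the authors' released code) of visual MPC:
plan actions with a learned action-conditioned video prediction model
by sampling candidate action sequences, predicting the resulting frames,
and picking the sequence whose final frame is closest to a user-specified
goal image. All names and numbers here are hypothetical."""
import numpy as np

def predict_frames(frame, actions):
    """Placeholder for the learned video prediction model.
    Given the current image and a sequence of actions, return predicted
    future frames. Here: a toy stand-in that shifts pixel intensities by
    the cumulative action, just so the script runs end to end."""
    preds = []
    img = frame.astype(np.float32)
    for a in actions:
        img = np.clip(img + a.mean(), 0, 255)
        preds.append(img.copy())
    return preds

def plan_action_sequence(frame, goal, horizon=5, samples=200, iters=3, elite_frac=0.1):
    """Cross-entropy-method planner over 2-D action sequences."""
    mu, sigma = np.zeros((horizon, 2)), np.ones((horizon, 2))
    for _ in range(iters):
        seqs = mu + sigma * np.random.randn(samples, horizon, 2)
        # Cost: pixel distance between the predicted final frame and the goal image.
        costs = np.array([np.mean((predict_frames(frame, s)[-1] - goal) ** 2)
                          for s in seqs])
        elite = seqs[np.argsort(costs)[: max(1, int(elite_frac * samples))]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu  # refined mean action sequence; execute the first action, then replan

if __name__ == "__main__":
    current = np.full((64, 64), 100.0)
    goal = np.full((64, 64), 140.0)   # the "goal image" a user specifies
    plan = plan_action_sequence(current, goal)
    print("first planned action:", plan[0])
```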


Reproducing paintings that make an impression

The RePaint system reproduces paintings by combining two approaches, color-contoning and half-toning, with a deep learning model that determines how to stack 10 different inks to recreate specific shades of color.
Image courtesy of the researchers

By Rachel Gordon

The empty frames hanging inside the Isabella Stewart Gardner Museum serve as a tangible reminder of the world’s biggest unsolved art heist. While the original masterpieces may never be recovered, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) might be able to help, with a new system aimed at designing reproductions of paintings.

RePaint uses a combination of 3-D printing and deep learning to authentically recreate favorite paintings — regardless of different lighting conditions or placement. RePaint could be used to remake artwork for a home, protect originals from wear and tear in museums, or even help companies create prints and postcards of historical pieces.

“If you just reproduce the color of a painting as it looks in the gallery, it might look different in your home,” says Changil Kim, one of the authors on a new paper about the system, which will be presented at ACM SIGGRAPH Asia in December. “Our system works under any lighting condition, which shows a far greater color reproduction capability than almost any other previous work.”

To test RePaint, the team reproduced a number of oil paintings created by an artist collaborator. The team found that RePaint was more than four times more accurate than state-of-the-art physical models at creating the exact color shades for different artworks.

At this time the reproductions are only about the size of a business card, due to the time-intensive nature of the printing process. In the future, the team expects that more advanced commercial 3-D printers could help make larger paintings more efficiently.

While 2-D printers are most commonly used for reproducing paintings, they have a fixed set of just four inks (cyan, magenta, yellow, and black). The researchers, however, found a better way to capture a fuller spectrum of Degas and Dali. They used a special technique they call “color-contoning,” which involves using a 3-D printer and 10 different transparent inks stacked in very thin layers, much like the wafers and chocolate in a Kit-Kat bar. They combined their method with a decades-old technique called half-toning, where an image is created by lots of little colored dots rather than continuous tones. Combining these, the team says, better captured the nuances of the colors.
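Half-toning itself is a decades-old idea, as the article notes. A minimal example of one classic variant, ordered dithering with a Bayer threshold matrix, shows how continuous tones become patterns of dots; RePaint’s actual half-toning procedure is more sophisticated than this sketch.

```python
"""Minimal sketch of half-toning in the classic sense described above:
approximating continuous tones with a pattern of dots. This uses simple
ordered dithering with a 4x4 Bayer matrix on a grayscale image."""
import numpy as np

# Normalized 4x4 Bayer threshold matrix.
BAYER_4 = (1.0 / 16.0) * np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
])

def halftone(gray):
    """Binarize a grayscale image in [0, 1] by comparing each pixel to the
    tiled Bayer thresholds: darker regions end up with denser dots."""
    h, w = gray.shape
    thresholds = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresholds).astype(np.uint8)  # 1 = leave blank, 0 = print a dot

if __name__ == "__main__":
    # A smooth horizontal gradient becomes a dot pattern of varying density.
    gradient = np.tile(np.linspace(0.0, 1.0, 32), (8, 1))
    print(halftone(gradient))
```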

With a larger color scope to work with, the question of what inks to use for which paintings still remained. Instead of using more laborious physical approaches, the team trained a deep-learning model to predict the optimal stack of different inks. Once the system had a handle on that, they fed in images of paintings and used the model to determine what colors should be used in what particular areas for specific paintings.
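The researchers’ actual pipeline trains a network on measured print data; as a rough, hypothetical illustration of the idea only (a toy forward color model and a simple random search standing in for the learned model), predicting an ink stack for a target color could look like this:

```python
"""Rough illustration (hypothetical, not the RePaint code) of predicting
which of 10 transparent inks to stack, and how thickly, to match a target
RGB color. A real system would train a neural network on measured
(ink stack -> printed color) pairs; here a toy linear color model and a
random local search stand in for that learned mapping."""
import numpy as np

rng = np.random.default_rng(0)
N_INKS = 10

# Toy forward model: each ink contributes a fixed RGB tint per unit layer.
INK_TINTS = rng.uniform(-1.0, 1.0, size=(N_INKS, 3))
BASE_WHITE = np.array([1.0, 1.0, 1.0])

def render_stack(thicknesses):
    """Predicted color of a stack of ink layers under the toy linear model."""
    return np.clip(BASE_WHITE + thicknesses @ INK_TINTS * 0.1, 0.0, 1.0)

def predict_stack(target_rgb, steps=2000, lr=0.05):
    """Search for non-negative ink thicknesses whose rendered color is
    closest to the target (a stand-in for the trained predictor)."""
    best = np.zeros(N_INKS)
    best_err = np.sum((render_stack(best) - target_rgb) ** 2)
    for _ in range(steps):
        cand = np.clip(best + lr * rng.normal(size=N_INKS), 0.0, None)
        err = np.sum((render_stack(cand) - target_rgb) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best

if __name__ == "__main__":
    target = np.array([0.2, 0.3, 0.7])   # a blue-ish target shade
    stack = predict_stack(target)
    print("ink thicknesses:", np.round(stack, 2))
    print("rendered color: ", np.round(render_stack(stack), 2))
```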

Despite the progress so far, the team says they have a few improvements to make before they can whip up a dazzling duplicate of “Starry Night.” For example, mechanical engineer Mike Foshey said they couldn’t completely reproduce certain colors like cobalt blue due to a limited ink library. In the future they plan to expand this library, as well as create a painting-specific algorithm for selecting inks, he says. They also hope to achieve better detail by accounting for aspects like surface texture and reflection, so that they can achieve specific effects such as glossy and matte finishes.

“The value of fine art has rapidly increased in recent years, so there’s an increased tendency for it to be locked up in warehouses away from the public eye,” says Foshey. “We’re building the technology to reverse this trend, and to create inexpensive and accurate reproductions that can be enjoyed by all.”

Kim and Foshey worked on the system alongside lead author Liang Shi; MIT professor Wojciech Matusik; former MIT postdoc Vahid Babaei, now Group Leader at Max Planck Institute of Informatics; Princeton University computer science professor Szymon Rusinkiewicz; and former MIT postdoc Pitchaya Sitthi-Amorn, who is now a lecturer at Chulalongkorn University in Bangkok, Thailand.

This work is supported in part by the National Science Foundation.

The end of parking as we know it

A day before snow hindered New York commuters, researchers at the University of Iowa and Princeton identified the growth of urbanization as a leading cause of catastrophic storm damage. Wednesday’s report stated that the $128 billion in damage left by Hurricane Harvey was 21 times greater because of the population density of Houston, one of America’s fastest-growing cities. This startling statistic is even more alarming in light of a recent UN study reporting that 70% of the world’s projected 9.7 billion people will live in urban centers by 2050. Superior urban management is one of the major promises of autonomous systems and smart cities.

Today, one of the biggest headaches for civil planners is the growth of traffic congestion and demand for parking, especially considering that cars are among the most inefficient and expensive assets Americans own. According to the Governors Highway Safety Association, the average private car is parked 95% of the time. Billions of dollars of real estate in America is dedicated to parking; in a city like Seattle, for example, 40% of the land is consumed by it. Furthermore, the analytics firm INRIX estimates that Americans spend more than $70 billion looking for parking, with the average driver wasting $345 a year in time, fuel, and emissions. “Parking pain costs much more” for New Yorkers, who “spend 107 hours a year looking for parking spots at a cost of $2,243 per driver,” states INRIX.

This month I spoke with Dr. Anuja Sonalker about her plan to save Americans billions of dollars on parking. Dr. Sonalker is the founder of STEER Technologies, a full-service auto-valet platform that is providing autonomous solutions to America’s parking pain. The STEER system uses a sensor array that easily connects to a number of popular automobile models and is seamlessly controlled by one’s smartphone. As Dr. Sonalker explains, “Simply put STEER allows a vehicle user to pull over at a curb (at certain destinations), and with a press of a button let the vehicle go find a parking spot and park for you. When it’s time to go, simply summon the vehicle and it comes back to get you.” An added advantage of STEER is its ability to conserve space, as cars can be parked very close together since computers don’t use doors.

Currently, STEER is piloting its technology near its Maryland headquarters. In describing her early success, Dr. Sonalker boasts, “We have successfully completed testing various scenarios under different weather, and lighting conditions at malls, train stations, airports, parks, construction sites, downtown areas. We have also announced launch dates in late 2019 with the Howard Hughes Corporation to power the Merriweather district – a 4.9 Million square foot new smart development in Columbia, MD, and the BWI airport.” STEER’s early showing is the result of a proprietary product built for all seasons and topographies. “Last March, we demonstrated live in Detroit under a very fast snowstorm system. Within less than an hour the ground was covered in 2+ inches of snow,” describes Dr. Sonalker. “No lane markings were visible any more, and parking lines certainly were not visible. The STEER car successfully completed its mission to ‘go park’, driving around the parking lot, recognizing other oncoming vehicles, pacing itself accordingly and locating and manoeuvring itself into a parking spot among other parked vehicles in that weather.”

In breaking down the STEER solution, Dr. Sonalker expounds, “The technology is built with a lean sensor suite, its cost equation is very favorable to both aftermarket and integrated solutions for consumer ownership.” She further clarifies, “From a technology standpoint both solutions are identical in the feature they provide. The difference lies in how the technology is integrated into the vehicle. For aftermarket, STEER’s technology runs on additional hardware that is retrofitted to the vehicle. In an integrated solution STEER’s technology would be housed on an appropriate ECU driven by vehicle economics and architecture, but with a tight coupling with STEER’s software. The coupling will be cross layered in order to maintain the security posture.” Unlike many self-driving applications that rely heavily on LIDAR (Light Detection And Ranging), STEER uses location mapping of predetermined parking structures along with computer vision. I pressed Dr. Sonalker about her unusual setup: “Yes, it’s true we don’t use LIDAR. You see, STEER started from the principle of security-led design which is where we start from a minimum design, minimum attack surface, maximum default security.”

I continued my interview with Dr. Sonalker to learn how she plans to roll out the platform: “In the long term, we expect to be a feature on new vehicles as they roll out of the assembly line. 2020-2021 seems to be happening based on our current OEM partner targets. Our big picture vision is that people no longer have to think about what to do with their cars when they get to their destination. The equivalent effect of ride sharing – your ride ends when you get off. There will be a network of service points that your vehicle will recognize and go park there until you summon it for use again.” STEER’s solution is part of a growing fleet of new smart city initiatives cropping up across the automation landscape.

At last year’s Consumer Electronics Show, German auto supplier Robert Bosch GmbH unveiled its crowd-sourced parking program called “community-based parking.” Using a network of cameras and sensors to identify available parking spots, Bosch’s cloud network automatically directs cars to the closest spot. This is part of Bosch’s larger urban initiative, as the company’s president Mike Mansuetti says: “You could say that our sensors are the eyes and ears of the connected city. In this case, its brain is our software. Of Bosch’s nearly 400,000 associates worldwide, more than 20,000 are software engineers, nearly 20% of whom are working exclusively on the IoT. We supply an open software platform called the Bosch IoT Suite, which offers all the functions necessary to connect devices, users, and companies.”

As the world grapples with a population explosion exacerbated by cars strangling city centers, civil engineers are challenging technologists to reimagine urban communities. Today, most cars sit dormant, and those in use run at one-fifth capacity, with typical trips less than a mile from one’s home (easily covered on foot or by bike). In the words of Dr. Sonalker, “All autonomous technologies will lead to societal change. AV Parking will result in more efficient utilization of existing spaces, fitting more in the same spaces, better use of underutilized remote lots, and frankly, even shunting parking off to further remote locations and using prime space for more enjoyable activities.”

Join the next RobotLab forum discussing “Cybersecurity & Machines” to learn how hackers are attacking the ecosystem of smart cities and autonomous vehicles with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City, RSVP Today!
