
Visual model-based reinforcement learning as a path towards generalist robots


By Chelsea Finn∗, Frederik Ebert∗, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine

With very little explicit supervision and feedback, humans are able to learn a wide range of motor skills by simply interacting with and observing the world through their senses. While there has been significant progress towards building machines that can learn complex skills and learn based on raw sensory information such as image pixels, acquiring large and diverse repertoires of general skills remains an open challenge. Our goal is to build a generalist: a robot that can perform many different tasks, like arranging objects, picking up toys, and folding towels, and can do so with many different objects in the real world without re-learning for each object or task.

While these basic motor skills are much simpler and less impressive than mastering Chess or even using a spatula, we think that being able to achieve such generality with a single model is a fundamental aspect of intelligence.

The key to acquiring generality is diversity. If you deploy a learning algorithm in a narrow, closed-world environment, the agent will recover skills that are successful only in a narrow range of settings. That’s why an algorithm trained to play Breakout will struggle when anything about the images or the game changes. Indeed, the success of image classifiers relies on large, diverse datasets like ImageNet. However, having a robot autonomously learn from large and diverse datasets is quite challenging. While collecting diverse sensory data is relatively straightforward, it is simply not practical for a person to annotate all of the robot’s experiences. It is more scalable to collect completely unlabeled experiences. Then, given only sensory data, akin to what humans have, what can you learn? With raw sensory data there is no notion of progress, reward, or success. Unlike games like Breakout, the real world doesn’t give us a score or extra lives.

We have developed an algorithm that can learn a general-purpose predictive model using unlabeled sensory experiences, and then use this single model to perform a wide range of tasks.



With a single model, our approach can perform a wide range of tasks, including lifting objects, folding shorts, placing an apple onto a plate, rearranging objects, and covering a fork with a towel.

In this post, we will describe how this works. We will discuss how we can learn based on only raw sensory interaction data (i.e. image pixels, without requiring object detectors or hand-engineered perception components). We will show how we can use what was learned to accomplish many different user-specified tasks. And, we will demonstrate how this approach can control a real robot from raw pixels, performing tasks and interacting with objects that the robot has never seen before.
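To make the planning step concrete, here is a minimal sketch, in Python, of the kind of sampling-based model-predictive control loop used with a learned video-prediction model: sample candidate action sequences, roll them out through the model, score the predicted frames against a goal image, and refine the search with the cross-entropy method. Everything here is a hypothetical stand-in: `predict_frames` represents the learned model, and pixel distance to a single goal image is a simplification of the task costs used in practice.

```python
import numpy as np

def plan_actions(predict_frames, current_frame, goal_frame,
                 horizon=5, action_dim=4, n_samples=200,
                 n_elite=20, n_iters=3):
    """Cross-entropy method over action sequences, scored with a
    learned action-conditioned video-prediction model."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(n_iters):
        # Sample candidate action sequences around the current mean.
        actions = mean + std * np.random.randn(n_samples, horizon, action_dim)
        # Predicted futures: array of shape (n_samples, horizon, H, W, C).
        frames = predict_frames(current_frame, actions)
        # Cost: pixel distance between the final predicted frame and the goal.
        costs = np.mean((frames[:, -1] - goal_frame) ** 2, axis=(1, 2, 3))
        # Refit the sampling distribution to the lowest-cost sequences.
        elite = actions[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]

# In a model-predictive control loop, the robot executes only this first
# action, observes a new frame, and replans, which keeps the controller
# robust to errors in the learned model.
```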


Reproducing paintings that make an impression

The RePaint system reproduces paintings by combining two approaches, color-contoning and half-toning, with a deep learning model that determines how to stack 10 different inks to recreate specific shades of color.
Image courtesy of the researchers

By Rachel Gordon

The empty frames hanging inside the Isabella Stewart Gardner Museum serve as a tangible reminder of the world’s biggest unsolved art heist. While the original masterpieces may never be recovered, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) might be able to help, with a new system aimed at designing reproductions of paintings.

RePaint uses a combination of 3-D printing and deep learning to authentically recreate favorite paintings — regardless of different lighting conditions or placement. RePaint could be used to remake artwork for a home, protect originals from wear and tear in museums, or even help companies create prints and postcards of historical pieces.

“If you just reproduce the color of a painting as it looks in the gallery, it might look different in your home,” says Changil Kim, one of the authors on a new paper about the system, which will be presented at ACM SIGGRAPH Asia in December. “Our system works under any lighting condition, which shows a far greater color reproduction capability than almost any other previous work.”

To test RePaint, the team reproduced a number of oil paintings created by an artist collaborator. The team found that RePaint was more than four times more accurate than state-of-the-art physical models at creating the exact color shades for different artworks.

At this time the reproductions are only about the size of a business card, due to the time-consuming nature of printing. In the future, the team expects that more advanced, commercial 3-D printers could help make larger paintings more efficiently.

While 2-D printers are most commonly used for reproducing paintings, they have a fixed set of just four inks (cyan, magenta, yellow, and black). The researchers, however, found a better way to capture a fuller spectrum of Degas and Dali. They used a special technique they call “color-contoning,” which involves using a 3-D printer and 10 different transparent inks stacked in very thin layers, much like the wafers and chocolate in a Kit-Kat bar. They combined their method with a decades-old technique called half-toning, where an image is created by lots of little colored dots rather than continuous tones. Combining these, the team says, better captured the nuances of the colors.
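The article does not name the specific half-toning algorithm RePaint builds on, but the textbook example of the idea is Floyd–Steinberg error diffusion, sketched below in Python for a single grayscale channel: each pixel is snapped to the nearest printable ink level, and the leftover quantization error is pushed onto unprocessed neighbors so that the local average of the dots matches the original tone.

```python
import numpy as np

def halftone(channel, levels=2):
    """Floyd-Steinberg error diffusion on one grayscale channel
    (values 0-255), using a small set of discrete ink levels."""
    img = channel.astype(float).copy()
    h, w = img.shape
    step = 255.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = round(old / step) * step  # nearest printable level
            img[y, x] = new
            err = old - new
            # Distribute the error to unprocessed neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```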

With a larger color palette to work with, the question of which inks to use for which paintings still remained. Instead of relying on more laborious physical approaches, the team trained a deep-learning model to predict the optimal stack of inks. Once the system had a handle on that, they fed in images of paintings and used the model to determine which inks should be applied in which areas of specific paintings.
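The article does not describe the model's architecture, so the following is only a hypothetical sketch of the idea: a small network that maps a target color to a per-layer choice among the available inks. The layer count, network sizes, and training scheme noted in the comments are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

N_INKS, N_LAYERS = 10, 8  # 10 inks per the article; layer count assumed

class InkStackNet(nn.Module):
    """Toy stand-in for an ink-selection model: map a target RGB
    color to a distribution over inks for each printed layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_LAYERS * N_INKS),
        )

    def forward(self, rgb):
        logits = self.net(rgb).view(-1, N_LAYERS, N_INKS)
        return logits.softmax(dim=-1)  # one ink choice per layer

# Illustrative usage: predict ink stacks for a batch of target colors.
model = InkStackNet()
ink_probs = model(torch.rand(4, 3))  # shape: (4, N_LAYERS, N_INKS)

# Training data could pair measured colors of printed test stacks with
# the stacks that produced them, minimizing per-layer cross-entropy.
```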

Despite the progress so far, the team says they have a few improvements to make before they can whip up a dazzling duplicate of “Starry Night.” For example, mechanical engineer Mike Foshey said they couldn’t completely reproduce certain colors like cobalt blue due to a limited ink library. In the future they plan to expand this library, as well as create a painting-specific algorithm for selecting inks, he says. They also hope to capture finer detail to account for aspects like surface texture and reflection, so that they can achieve specific effects such as glossy and matte finishes.

“The value of fine art has rapidly increased in recent years, so there’s an increased tendency for it to be locked up in warehouses away from the public eye,” says Foshey. “We’re building the technology to reverse this trend, and to create inexpensive and accurate reproductions that can be enjoyed by all.”

Kim and Foshey worked on the system alongside lead author Liang Shi; MIT professor Wojciech Matusik; former MIT postdoc Vahid Babaei, now Group Leader at Max Planck Institute of Informatics; Princeton University computer science professor Szymon Rusinkiewicz; and former MIT postdoc Pitchaya Sitthi-Amorn, who is now a lecturer at Chulalongkorn University in Bangkok, Thailand.

This work is supported in part by the National Science Foundation.

The end of parking as we know it

A day before snow hindered New York commuters, researchers at the University of Iowa and Princeton identified the growth of urbanization as a leading cause of catastrophic storm damage. Wednesday’s report stated that the $128 billion wake of Hurricane Harvey was 21 times greater due to the population density of Houston, one of America’s fastest-growing cities. This startling statistic is even more alarming in light of a recent UN study which projected that 70% of the world’s 9.7 billion people will live in urban centers by 2050. Superior urban management is one of the major promises of autonomous systems and smart cities.

Today, one of the biggest headaches for civil planners is the growth of traffic congestion and demand for parking, especially considering cars are among the most inefficient and expensive assets Americans own. According to the Governors Highway Safety Association, the average private car is parked 95% of the time. Billions of dollars of real estate in America is dedicated to parking; in a city like Seattle, 40% of the land is consumed by it. Furthermore, INRIX analytics estimates that Americans spend more than $70 billion looking for parking, and the average driver wastes $345 a year in time, fuel, and emissions. “Parking pain costs much more: New Yorkers spend 107 hours a year looking for parking spots at a cost of $2,243 per driver,” states INRIX.

This month I spoke with Dr. Anuja Sonalker about her plan to save Americans billions of dollars in parking costs. Dr. Sonalker is the founder of STEER Technologies, a full-service auto-valet platform that is providing autonomous solutions to America’s parking pain. The STEER value proposition uses a sensor array that easily connects to a number of popular automobile models, seamlessly controlled by one’s smartphone. As Dr. Sonalker explains, “Simply put, STEER allows a vehicle user to pull over at a curb (at certain destinations), and with a press of a button let the vehicle go find a parking spot and park for you. When it’s time to go, simply summon the vehicle and it comes back to get you.” An added advantage of STEER is its ability to conserve space, as cars can be parked very close together since computers don’t use doors.

Currently, STEER is piloting its technology near its Maryland headquarters. In describing her early success, Dr. Sonalker boasts, “We have successfully completed testing various scenarios under different weather and lighting conditions at malls, train stations, airports, parks, construction sites, and downtown areas. We have also announced launch dates in late 2019 with the Howard Hughes Corporation to power the Merriweather District – a 4.9-million-square-foot new smart development in Columbia, MD – and the BWI airport.” STEER’s early showing is the result of a proprietary product built for all seasons and topographies. “Last March, we demonstrated live in Detroit under a very fast snowstorm system. Within less than an hour the ground was covered in 2+ inches of snow,” describes Dr. Sonalker. “No lane markings were visible any more, and parking lines certainly were not visible. The STEER car successfully completed its mission to ‘go park,’ driving around the parking lot, recognizing other oncoming vehicles, pacing itself accordingly, and locating and manoeuvring itself into a parking spot among other parked vehicles in that weather.”

In breaking down the STEER solution, Dr. Sonalker expounds, “The technology is built with a lean sensor suite; its cost equation is very favorable to both aftermarket and integrated solutions for consumer ownership.” She further clarifies, “From a technology standpoint, both solutions are identical in the feature they provide. The difference lies in how the technology is integrated into the vehicle. For aftermarket, STEER’s technology runs on additional hardware that is retrofitted to the vehicle. In an integrated solution, STEER’s technology would be housed on an appropriate ECU driven by vehicle economics and architecture, but with a tight coupling with STEER’s software. The coupling will be cross-layered in order to maintain the security posture.” Unlike many self-driving applications that rely heavily on LIDAR (Light Detection and Ranging), STEER uses location mapping of predetermined parking structures along with computer vision. I pressed Dr. Sonalker about her unusual setup: “Yes, it’s true we don’t use LIDAR. You see, STEER started from the principle of security-led design, which is where we start from a minimum design, minimum attack surface, and maximum default security.”

I continued my interview of Dr. Sonalker to learn how she plans to roll out the platform: “In the long term, we expect to be a feature on new vehicles as they roll out of the assembly line. 2020-2021 seems to be happening based on our current OEM partner targets. Our big-picture vision is that people no longer have to think about what to do with their cars when they get to their destination. The equivalent effect of ride sharing – your ride ends when you get off. There will be a network of service points that your vehicle will recognize and go park there until you summon it for use again.” STEER’s solution is part of a growing fleet of new smart city initiatives cropping up across the automation landscape. At last year’s Consumer Electronics Show, German auto supplier Robert Bosch GmbH unveiled its new crowd-sourced parking program called “community-based parking.” Using a network of cameras and sensors to identify available parking spots, Bosch’s cloud network automatically directs cars to the closest spot. This is part of Bosch’s larger urban initiative, as the company’s president Mike Mansuetti says: “You could say that our sensors are the eyes and ears of the connected city. In this case, its brain is our software. Of Bosch’s nearly 400,000 associates worldwide, more than 20,000 are software engineers, nearly 20% of whom are working exclusively on the IoT. We supply an open software platform called the Bosch IoT Suite, which offers all the functions necessary to connect devices, users, and companies.”
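Bosch’s actual matching logic is proprietary, but the step the article describes, directing a car to the closest reported-free spot, reduces to a simple nearest-neighbor selection over crowd-sourced availability reports. The sketch below is a toy illustration: the data layout is invented, and a real service would rank spots by road-network travel time rather than straight-line distance.

```python
import math

def nearest_free_spot(car, spots):
    """Return the closest spot currently reported free, or None.
    Uses straight-line distance purely for illustration."""
    free = [s for s in spots if s["free"]]
    if not free:
        return None
    return min(free, key=lambda s: math.hypot(s["x"] - car["x"],
                                              s["y"] - car["y"]))

# Toy example: two reported spots, one free.
spots = [{"id": 1, "x": 0.2, "y": 0.5, "free": False},
         {"id": 2, "x": 1.4, "y": 0.1, "free": True}]
print(nearest_free_spot({"x": 0.0, "y": 0.0}, spots))  # -> spot 2
```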

As the world grapples with a population explosion exacerbated by cars strangling city centers, civil engineers are challenging technologists to reimagine urban communities. Today, most cars are dormant, and when used they run at one-fifth capacity, with typical trips less than a mile from one’s home (easily accessible on foot or bike). In the words of Dr. Sonalker, “All autonomous technologies will lead to societal change. AV parking will result in more efficient utilization of existing spaces, fitting more in the same spaces, better use of underutilized remote lots, and frankly, even shunting parking off to further remote locations and using prime space for more enjoyable activities.”

Join the next RobotLab forum discussing “Cybersecurity & Machines” to learn how hackers are attacking the ecosystem of smart cities and autonomous vehicles with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City. RSVP today!

5 ways to help robots work together with people

For most people today, robots and smart systems are servants that work in the background, vacuuming carpets or turning lights on and off. Or they're machines that have taken over repetitive human jobs from assembly-line workers and bank tellers. But the technologies are getting good enough that machines will be able to work alongside people as teammates – much as human-dog teams handle tasks like hunting and bomb detection.

#274: IROS 2018 Exhibition (Part 1 of 3), with Gabriel Lopes and Bernt Børnich


In this episode, Audrow Nash interviews Gabriel Lopes, Robot and Control Scientist at Robot Care Systems, and Bernt Børnich, CEO and Co-founder of Halodi Robotics.

Gabriel Lopes speaks about an assistive walker with additional functionality. Lopes discusses how the walker can be used, including driving up to a person, helping them stand up, guiding exercise, and enabling video calls. He also discusses building a software language for non-technical people who work with older adults, and speaks about the future of the company.

Bernt Børnich speaks about the humanoid platform at Halodi Robotics. Børnich’s long-term goal is to create an affordable home robot. He discusses the design and form factor of this robot, including a custom motor designed for high torque and how the robot can “lean in” from a joint at its ankle. Børnich also discusses the timeline, estimated cost, and working with investors toward the long-term goal of building an affordable humanoid robot.

 


Robot carers could help lonely seniors—they’re cheering humans up already

The film Robot and Frank imagined a near-future where robots could do almost everything humans could. The elderly title character was given a "robot butler" to help him continue living on his own. The robot was capable of everything from cooking and cleaning to socialising (and, it turned out, burglary).