
A ‘cookbook’ for vehicle manufacturers: Getting automated cars to talk to each other

Automated, networked truck convoys could save fuel and cut down on driving time. Image credit – MAN Truck & Bus

by Sandrine Ceurstemont
Semi-autonomous cars are expected to hit the roads in Europe next year with truck convoys following a few years later. But before different brands can share the roads, vehicle manufacturers need to agree on standards for automated functions.

Automation will increasingly allow vehicles to take over certain aspects of driving. However, automated functions are still being fine-tuned, for example, to ensure smooth transitions when switching between the human driver and driverless mode.

Standards also need to be set across different car manufacturers, which is one of the goals of a project called L3Pilot. Although each brand can maintain some unique features, automated functions that help with navigating traffic jams, parking, and motorway and urban driving must be programmed to behave in the same way.

‘It’s like if you rent a car today, your expectation is that it has a gear shift, it has pedals, it has a steering wheel and so on,’ said project coordinator Aria Etemad from Volkswagen Group Research in Wolfsburg, Germany. ‘The approaches and the interfaces to the human driver are the same.’

To get the same functions from different brands to operate in an identical way, the team is creating a code of practice. This will result in a checklist for developers to run through when creating a self-driving function. ‘It’s like a cookbook for how to design and develop automated driving functions,’ said Etemad.

So far, the project team, which includes representatives from 13 vehicle manufacturers, has been conducting initial tests to make sure each company’s technology works. Cars are equipped with several sensors as well as cameras and computers, which need to be properly calibrated to deal with real-world traffic.

The next step is to test the automated functions on public roads to ensure that the vehicles are ready. The tests will begin this month. Volunteer drivers will be chosen from diverse backgrounds, including different ages and genders. ‘We plan, all in all, to have 1,000 drivers using 100 vehicles in 10 different countries,’ Etemad said.

Transition

The technologies being trialled will cover a wide range of situations from overtaking on motorways to driving through urban intersections. Long journeys are also planned to see how people are able to transition back to driving after a long time in automated driving mode.

‘We really want to understand if the way we have designed our systems is the way drivers expect them to behave,’ said Etemad.

The team will also be investigating other aspects of automated driving, such as the effect on traffic flow and CO2 emissions. Self-driving features are likely to make driving more efficient, for example through connectivity between vehicles and infrastructure, although research so far has shown mixed results because of other contributing factors, such as more people choosing to drive.

The first automated functions, which should be available in the next year, are likely to be for motorway driving, according to Etemad. Parking functions are likely to come to market next followed by automated urban driving, which is much more complex due to additional elements such as pedestrians and cyclists moving around.

‘There will be a good contribution from the project with results about long-term automated driving and a general understanding of people and functions that will impact how these systems are developed,’ said Etemad.

Automated functions are of interest for trucks too, where networked vehicles driving in a convoy could help save fuel, cut down on driving time or help with traffic flow. Truck platooning involves several trucks linking up through a wireless connection when they are close by so that they can share information and use automated functions to drive together as one. But so far, the concept has mostly been demonstrated with trucks of the same brand and, in only a few cases, with two brands.

‘Each automotive manufacturer tries it out and develops the technique within their own factory,’ said Dr Marika Hoedemaeker, a senior project manager at the Netherlands Organisation for Applied Scientific Research (TNO) in Helmond.

Truck platooning

Hoedemaeker and her project partners are now trying to break new ground by developing truck platooning that works across different brands as part of a project called ENSEMBLE. ‘Now we’re going to show that we can do this together with all the European truck manufacturers,’ said Dr Hoedemaeker.

The first phase, which the team has just completed, involved coming up with specifications that need to be implemented by all manufacturers. For example, braking and speed keeping will be automated whereas steering won’t be.

They’ve also come up with guidelines for how a convoy will respond when it enters a zone with a new speed limit or passes a toll gate, for example. ‘It’s not only communicating with the platoon but also with the outside world,’ she said.
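
As a rough illustration of the kind of behaviour being specified, the sketch below shows a platoon relaying a roadside speed-limit message so that every truck in the convoy, whatever its brand, adjusts together. The message fields, class names and speed values are hypothetical assumptions for this sketch, not ENSEMBLE’s actual protocol.

```python
# Hypothetical sketch of a platoon reacting to infrastructure messages.
# Message format and behaviour are illustrative, not ENSEMBLE's specification.

from dataclasses import dataclass


@dataclass
class Truck:
    brand: str
    target_speed_kph: float


class Platoon:
    def __init__(self, trucks):
        self.trucks = trucks  # index 0 is the lead vehicle

    def on_infrastructure_message(self, message):
        """The lead receives a V2I message and propagates it down the convoy."""
        if message["type"] == "speed_limit":
            for truck in self.trucks:
                # Automated speed-keeping: every brand applies the same rule.
                truck.target_speed_kph = min(truck.target_speed_kph,
                                             message["limit_kph"])
        elif message["type"] == "toll_gate_ahead":
            for truck in self.trucks:
                # Slow the whole convoy for the toll plaza (illustrative value).
                truck.target_speed_kph = min(truck.target_speed_kph, 30.0)


platoon = Platoon([Truck("BrandA", 80.0), Truck("BrandB", 80.0)])
platoon.on_infrastructure_message({"type": "speed_limit", "limit_kph": 60.0})
print([t.target_speed_kph for t in platoon.trucks])  # [60.0, 60.0]
```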

The team is also keen to gauge the impact that a platoon will have on traffic. They would like to calculate its effect on traffic flow and whether it would in fact reduce congestion on the roads. Driving simulators will also be used to see how other drivers react when they encounter an automated truck convoy. ‘Will it change their traffic behaviour or will they never drive in the right lane anymore? There are lots of questions around this other traffic behaviour as well,’ said Dr Hoedemaeker.

Once specifications have been implemented in the trucks, they will start to test platoons on dedicated grounds then on public roads. Since trucks cross borders quite often, they will have to respect laws in different countries which vary across EU member states. The minimum following distance between vehicles, for example, differs from country to country.

In a final showcase event in May 2021, trucks from seven different brands, such as Daimler and Volvo, will drive together in one or more convoys across national borders, most likely to a key goods transport destination such as a large European port.

Following this deployment on European roads, Hoedemaeker expects the first generation of platooning trucks to start being manufactured and sold a year after the project ends in 2021.

Since platoons are being developed worldwide, the standards created during the project could also be adopted more widely.

‘I think there is potential that the rest of the world could say they (Europe) already thought about the standards so we can use these and not do the whole thing over again,’ she said.

The research in this article was funded by the EU.

Teaching machines to reason about what they see

Researchers trained a hybrid AI model to answer questions like “Does the red object left of the green cube have the same shape as the purple matte thing?” by feeding it examples of object colors and shapes followed by more complex scenarios involving multi-object comparisons. The model could transfer this knowledge to new scenarios as well as or better than state-of-the-art models using a fraction of the training data.
Image: Justin Johnson

A child who has never seen a pink elephant can still describe one — unlike a computer. “The computer learns from data,” says Jiajun Wu, a PhD student at MIT. “The ability to generalize and recognize something you’ve never seen before — a pink elephant — is very hard for machines.”

Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa’s latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits. It requires tons of data, has trouble explaining its decisions, and is terrible at applying past knowledge to new situations; it can’t comprehend an elephant that’s pink instead of gray.

To give computers the ability to reason more like us, artificial intelligence (AI) researchers are returning to abstract, or symbolic, programming. Popular in the 1950s and 1960s, symbolic AI wires in the rules and logic that allow machines to make comparisons and interpret how objects and entities relate. Symbolic AI uses less data, records the chain of steps it takes to reach a decision, and when combined with the brute processing power of statistical neural networks, it can even beat humans in a complicated image comprehension test. 

A new study by a team of researchers at MIT, the MIT-IBM Watson AI Lab, and DeepMind shows the promise of merging statistical and symbolic AI. Led by Wu and Joshua Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory, the team shows that its hybrid model can learn object-related concepts like color and shape, and leverage that knowledge to interpret complex object relationships in a scene. With minimal training data and no explicit programming, their model could transfer concepts to larger scenes and answer increasingly tricky questions as well as or better than its state-of-the-art peers. The team presents its results at the International Conference on Learning Representations in May.

“One way children learn concepts is by connecting words with images,” says the study’s lead author Jiayuan Mao, an undergraduate at Tsinghua University who worked on the project as a visiting fellow at MIT. “A machine that can learn the same way needs much less data, and is better able to transfer its knowledge to new scenarios.”

The study is a strong argument for moving back toward abstract-program approaches, says Jacob Andreas, a recent graduate of the University of California at Berkeley, who starts at MIT as an assistant professor this fall and was not involved in the work. “The trick, it turns out, is to add more symbolic structure, and to feed the neural networks a representation of the world that’s divided into objects and properties rather than feeding it raw images,” he says. “This work gives us insight into what machines need to understand before language learning is possible.”

The team trained their model on images paired with related questions and answers, part of the CLEVR image comprehension test developed at Stanford University. As the model learns, the questions grow progressively harder, from, “What’s the color of the object?” to “How many objects are both right of the green cylinder and have the same material as the small blue ball?” Once object-level concepts are mastered, the model advances to learning how to relate objects and their properties to each other.

Like other hybrid AI models, MIT’s works by splitting up the task. A perception module of neural networks crunches the pixels in each image and maps the objects. A language module, also made of neural nets, extracts a meaning from the words in each sentence and creates symbolic programs, or instructions, that tell the machine how to answer the question. A third reasoning module runs the symbolic programs on the scene and gives an answer, updating the model when it makes mistakes.
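
As a rough illustration of how these three modules might fit together, the sketch below uses a toy scene standing in for the perception module’s output, a hand-written program standing in for the language module’s output, and a small interpreter playing the role of the reasoning module. The object format and operation names are assumptions made for this sketch, not the representations actually used in the study.

```python
# Toy "reasoning module": run a symbolic program over an object-based scene.
# Scene format, program vocabulary and question are illustrative assumptions.

# What a perception module might emit: one record per detected object.
scene = [
    {"shape": "cylinder", "color": "green", "material": "metal",  "x": 1.0},
    {"shape": "sphere",   "color": "blue",  "material": "rubber", "x": 2.5},
    {"shape": "cube",     "color": "green", "material": "metal",  "x": 4.0},
]

# What a language module might emit for the question:
# "How many metal objects are right of the green cylinder?"
program = [
    ("filter", {"color": "green", "shape": "cylinder"}),
    ("relate", "right_of"),
    ("filter", {"material": "metal"}),
    ("count",  None),
]

def run_program(program, scene):
    """Execute the program step by step, passing the selection forward."""
    selected = list(scene)
    for op, arg in program:
        if op == "filter":      # keep objects matching every given attribute
            selected = [o for o in selected
                        if all(o[k] == v for k, v in arg.items())]
        elif op == "relate":    # objects in the scene right of the selection
            anchor_x = max(o["x"] for o in selected)
            selected = [o for o in scene if o["x"] > anchor_x]
        elif op == "count":     # terminal step: produce the answer
            return len(selected)
    return selected

print(run_program(program, scene))  # -> 1 (the green metal cube)
```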

Key to the team’s approach is a perception module that translates the image into an object-based representation, making the programs easier to execute. Also unique is what they call curriculum learning, or selectively training the model on concepts and scenes that grow progressively more difficult. It turns out that feeding the machine data in a logical way, rather than haphazardly, helps the model learn faster while improving accuracy.
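
A minimal sketch of that idea, under the simplifying assumption that difficulty can be scored by the number of program steps plus the number of objects in the scene (the study’s actual curriculum stages are more involved):

```python
# Illustrative curriculum learning: present training examples from simple to
# hard. The difficulty score below is an assumption for this sketch, not the
# study's actual staging criterion.

def difficulty(example):
    return len(example["program"]) + len(example["scene"])

def curriculum(examples, stages=3):
    """Split the dataset into progressively harder training stages."""
    ordered = sorted(examples, key=difficulty)
    size = max(1, -(-len(ordered) // stages))  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# A training loop would then visit the stages in order, so the model masters
# object-level concepts before tackling multi-object comparisons.
```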

Once the model has a solid foundation, it can interpret new scenes and concepts, and increasingly difficult questions, almost perfectly. Asked to answer an unfamiliar question like, “What’s the shape of the big yellow thing?” it outperformed its peers at Stanford and nearby MIT Lincoln Laboratory with a fraction of the data. 

While other models trained on the full CLEVR dataset of 70,000 images and 700,000 questions, the MIT-IBM model used 5,000 images and 100,000 questions. As the model built on previously learned concepts, it absorbed the programs underlying each question, speeding up the training process. 

Though statistical, deep learning models are now embedded in daily life, much of their decision process remains hidden from view. This lack of transparency makes it difficult to anticipate where the system is susceptible to manipulation, error, or bias. Adding a symbolic layer can open the black box, explaining the growing interest in hybrid AI systems.

“Splitting the task up and letting programs do some of the work is the key to building interpretability into deep learning models,” says Lincoln Laboratory researcher David Mascharka, whose hybrid model, Transparency by Design Network, is benchmarked in the MIT-IBM study.      

The MIT-IBM team is now working to improve the model’s performance on real-world photos and extending it to video understanding and robotic manipulation. Other authors of the study are Chuang Gan and Pushmeet Kohli, researchers at the MIT-IBM Watson AI Lab and DeepMind, respectively.

ProMat preview: It’s time to cut the cord

Last week’s breaking news story on The Robot Report was unfortunately the demise of Helen Greiner’s company, CyPhy Works (d/b/a Aria Insights). The high-flying startup had raised close to $40 million since its founding in 2008, making it the second business started by an iRobot alum to shutter within the past five months. While it is not immediately clear why the tethered-drone company went bust, its closure raises important questions about the long-term market opportunities for leashed robots.


The tether concept is not exclusive to Greiner’s company; a handful of drone companies vie for market share, including FotoKite, Elistair, and HoverFly. The primary driver towards chaining an Unmanned Aerial Vehicle (UAV) is bypassing the Federal Aviation Administration’s (FAA) ban on beyond-line-of-sight operations. The only legal way to truly fly autonomously without an FAA waiver is therefore to attach a cord to the machine. There are a host of other advantages, such as continuous power and data links. In the words of Elistair customer Alexandre Auger of Adéole, “We flew 2 hours 45 minutes before the concert and 1 hour after with Elistair’s system. This innovation allowed us to significantly increase our flight time! During our previous missions, we did not have this system and the pressure related to battery life was huge.”

Most of the millions of robots installed around the world are stationary and, thus, tethered. The question of binding an unmanned system to a power supply and data uplink is really only relevant for units that require mobility. In a paper written in 2014, Dr. Jamshed Iqbal stated, “Over the last few years, mobile robot systems have demonstrated the ability to operate in constrained and hazardous environment and perform many difficult tasks. Many of these tasks demand tethered robot systems. Tether provides the locomotion and navigation so that robot can move on steep slopes.” Most robotics companies employed leashes five years ago, even mobility leader Boston Dynamics. However, today Marc Raibert’s company has literally cut the cord on its fleet, proving once and for all that greater locomotion and agility await on the other side of the tether.

This past Thursday, Boston Dynamics unveiled its latest breakthrough for commercializing unhitched robots – freewheeling warehouse-bots. In a YouTube video that has already garnered close to a million views, a bipedal wheeled robot named Handle is shown seamlessly palletizing boxes and unloading cartons onto a working conveyor belt. Since SoftBank’s acquisition of Boston Dynamics in 2017, the mechatronic innovator has pivoted from a contractor of defense concepts to a purveyor of real-world robo-business solutions. Earlier this year, Raibert exclaimed that his latest creations are “motivated by thinking about what could go in an office — in a space more accessible for business applications — and then, the home eventually.” The online clip of Handle as the latest “mobile manipulation robot designed for logistics” is part of a wider marketing campaign leading up to ProMat 2019*, the largest trade show for supply chain automation, held in Chicago later this month.

According to the company’s updated website, Handle, the six-foot, two-hundred-pound mechanical beast, is “A robot that combines the rough-terrain capability of legs with the efficiency of wheels. It uses many of the same principles for dynamics, balance, and mobile manipulation found in the quadruped and biped robots we build, but with only 10 actuated joints, it is significantly less complex. Wheels are fast and efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle has the best of both worlds.” The video is already creating lots of buzz on social media, with Evan Ackerman of IEEE Spectrum tweeting, “Nice to see progress, although I’ve still got questions about cost effectiveness, reliability, and safety.”


To many in the retail market, palletizing is the holy grail of automating logistics. In a study released earlier this month by Future Market Insights (FMI), the market for such technologies could climb to over $1.5 billion worldwide by 2022. FMI estimated that the driving force behind this huge spike is that “Most of the production units are opting for palletizing robots in order to achieve higher production rates. The factors that are driving the palletizing robots market include improved functionality of such robots along with a simplified user interface.” It further provided a vision of the types of innovations that would be most successful in this arena: “Due to the changing requirements of the packaging industry, hybrid palletizing robots have been developed that possess the flexibility and advantages of a robotic palletizer and can handle complex work tasks with the simplicity of a conventional high speed palletizer. Such kind of palletizing robots can even handle delicate products and perform heavy duty functions as well, apart from being simple to use and cost effective in operations.” Almost prophetic in its description, FMI anticipated Handle’s free-wheeling demonstration weeks before the public release by Boston Dynamics.


The mantra for successful robot applications is “dull, dirty and dangerous.” While advances like Handle continue to push the limits of mobility for the “dull” tedious tasks of inventory management, “dirty and dangerous” use cases require continuous power for far longer than ninety minutes. For example, tethered machines have been deployed in the cleanup efforts at the Fukushima Daiichi Nuclear Power Plant since the tsunami in March 2011. The latest invention, released this month, is a Toshiba robot packed with cameras and sensors that includes “fingers” for directly interacting with the deposits in the environment, enabling deeper study of radioactive residue. In explaining the latest invention, Jun Suzuki of Toshiba said, “Until now we have only seen those deposits, and we need to know whether they will break off and can be picked up and taken out. Touching the deposits is important so we can make plans to sample the deposits, which is a next key step.”

The work of Suzuki and his team in creating leashed robots for disaster recovery has already spilled over into new strides in underwater and space exploration. Last week, the Japan Aerospace Exploration Agency announced a partnership with GITAI to build devices for the International Space Station. In the words of GITAI’s CEO, Sho Nakanose, “GITAI aims to replace astronauts with robots that can work for a long time while being remotely controlled from Earth while in low Earth orbit space stations to reduce the burden on astronauts, shorten the time it takes to perform work in space, and reduce costs.”

* Editor’s Note: I will be moderating a panel at ProMat 2019 on “Achieving Return On Investment: Demystifying The Hype And Achieving Implementation Success” on April 9th, and the next day hosting a fireside chat with Daniel Theobald of Vecna Robotics on “Investing in Automation” as part of the Automate program.
