The deployment of connected, automated, and autonomous vehicles presents us with transformational opportunities for road transport. These opportunities reach beyond single-vehicle automation: by enabling groups of vehicles to jointly agree on maneuvers and navigation strategies, real-time coordination promises to improve overall traffic throughput, road capacity, and passenger safety. However, coordinated driving for intelligent vehicles remains a challenging research problem, and testing new approaches is cumbersome. Developing full-scale facilities for safe, controlled vehicle testing is massively expensive and requires a vast amount of space. One approach to facilitating experimental research and education is to build low-cost testbeds that incorporate fleets of down-sized, car-like mobile platforms.
Following this idea, our lab (with key contributions by Nicholas Hyldmar and Yijun He) developed a multi-car testbed that allows for the operation of tens of vehicles within the space of a moderately large robotics laboratory. This testbed facilitates the development of coordinated driving strategies in dense traffic scenarios, and enables us to test the effects of vehicle-vehicle interactions (cooperative as well as non-cooperative). Our robotic car, the Cambridge Minicar, is based on a 1:24 model of an existing commercial car. The Minicar is an Ackermann-steering platform, and one of the very few openly available designs. It is built from off-the-shelf components (with the exception of one laser-cut piece), costs approximately US $76 in its basic configuration, and is especially attractive for robotics labs that already possess telemetry infrastructure. Its low cost enables the composition of large fleets, which can be used to test navigation strategies and driver models. Our Minicar design and code are available in an open-source repository (https://github.com/proroklab/minicar).
The movie above demonstrates the applicability of the testbed for large-fleet experimentation by implementing different driving schemes that lead to distinct traffic behaviors. Notably, in experiments on a fleet of 16 Minicars, we show the benefits of cooperative driving: when traffic disruptions occur, instead of queuing, a cooperative Minicar communicates its intention to lane-change; following vehicles in the new lane reduce their speeds to make space for this projected maneuver, hence maintaining traffic flow (and throughput), whilst ensuring safety.
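To make the cooperative gap-making behaviour concrete, here is a minimal Python sketch of the rule described above. Every name and constant in it (the message fields, SAFE_GAP, SLOWDOWN) is a hypothetical illustration for this post, not code from the Minicar repository:

```python
# Hypothetical sketch of cooperative gap-making; all names and constants
# are illustrative, not taken from the Minicar repository.

SAFE_GAP = 0.5   # assumed minimum inter-vehicle gap, in metres
SLOWDOWN = 0.7   # assumed speed factor applied while yielding space

def on_lane_change_intent(ego, msg):
    """React to a broadcast lane-change intention from another car.

    If the sender plans to merge into our lane ahead of us and the gap
    is too small, slow down so the manoeuvre can complete safely,
    rather than forcing the merging car to queue behind the disruption.
    """
    if msg.target_lane == ego.lane and msg.merge_point > ego.position:
        if msg.merge_point - ego.position < SAFE_GAP:
            ego.target_speed *= SLOWDOWN  # cooperative: open a gap
```

The key design point is that the yielding decision is triggered by a communicated intention rather than by observed motion, which is what lets the following cars react before the disruption propagates into a queue.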
As Hurricane Florence raged across the coastline of North Carolina, 600 miles north the 174th Attack Wing National Guard base in Syracuse, New York was on full alert. Governor Cuomo had just hung up with Defense Secretary Mattis, readying the airbase’s MQ-9 drone force to “provide post-storm situational awareness for the on-scene commanders and emergency personnel on the ground.” Suddenly, the entire country turned to the Empire State as the epicenter for unmanned search & rescue operations.
Located a few miles from the 174th is the Genius NY Accelerator, which boasts the largest competition for unmanned systems in the world. Previous winners of its million-dollar prize include AutoModality and FotoKite. One of Genius’ biggest financial backers is Empire State Development (ESD). Last month, I moderated a discussion in New York City between Sharon Rutter of the ESD, Peter Kunz of Boeing HorizonX and Victor Friedberg of FoodShot Global. These three investors spanned the gamut of early-stage funders of autonomous machines. I started our discussion by asking whether they think New York is poised to take a leading role in shaping the future of automation. While Kunz and Friedberg shared their own perspectives as corporate and social impact investors respectively, Rutter singled out one audience participant in particular as representing the future of New York’s innovation venture scene.
Andrew Hong of ff Venture Capital sat quietly in front of the presenters, yet his firm has been loudly reshaping the Big Apple’s approach to investing in mechatronics for almost a decade (with the ESD as a proud limited partner). Founded in 2008 by John Frankel, formerly of Goldman Sachs, ff has deployed capital in more than 100 companies with market values of over $6 billion. As the original backer of crowd-funding site Indiegogo, ff could be credited as a leading contributor to a new suite of technologies. As Frankel explains, “We like hardware if it is a vector to selling software, as recurring models based on services lead to better economics for us than one-off hardware sales.” In the spirit of fostering greater creativity for artificial intelligence software, ff collaborated with New York University in 2016 to start the NYU/ffVC AI NexusLab — the country’s first AI accelerator program between a university and a venture fund. NexusLab culminated in the Future Labs AI Summit in 2017. Frankel describes how this technology is influencing the future of autonomy, “As we saw that AI was coming into its own we looked at AI application plays and that took us deeper into cyber security, drones and robotics. In addition, both drones and robotics benefited as a byproduct of the massive investment into mobile phones and their embedded sensors and radios. Thus we invested in a number of companies in the space (Skycatch, PlusOne Robotics, Cambrian Intelligence and TopFlight Technologies) and continue to look for more.”
Recently, ff VC bolstered its efforts to support the growth of an array of cognitive computing systems by opening new state-of-the-art headquarters in the Empire State Building and expanding its venture partner program. In addition to providing seed capital to startups, ff VC has distinguished itself for more than a decade by augmenting technical founders with robust back-office services, especially accounting and financial management. Last year, ff also widened its industry venture partner program with the addition of Dr. Kathryn Hume to its network. Dr. Hume is probably best known for her work as the former president of Fast Forward Labs, a leading advisor to Fortune 500 companies on utilizing data science and artificial intelligence. I am pleased to announce that I have decided to join Dr. Hume and the ff team as a venture partner to widen the firm’s network in the robotics industry. I share Frankel’s vision that today we are witnessing “massive developments in AI and ML that have led to unprecedented demand for automation solutions across every industry.”
ff’s commitment is not an isolated example across the Big Apple, but part of a growing, invigorated community of venture capitalists, academics, inventors, and government sponsors. In a few weeks, the New York City Economic Development Corporation (NYCEDC) will officially announce the winner of a $30 million investment grant to boost the city’s cybersecurity ecosystem. CyberNYC will include a new startup accelerator, city-wide programming, educational curricula, up-skilling and job placement, and a funding network for home-grown ventures. As NYCEDC President and CEO James Patchett explains, “The de Blasio Administration is investing in cybersecurity to both fuel innovation, and to create new, accessible pathways to jobs in the industry. We’re looking for big-thinking proposals to help us become the global capital of cybersecurity and to create thousands of good jobs for New Yorkers.” The Mayor’s office projects that its initiative will create 100,000 new jobs over the next ten years, enabling NYC to fully capture the opportunities of an autonomous world.
The inspiration for CyberNYC can probably be found in the sands of the Israeli desert town of Beer Sheva. In the past decade, this Bedouin city in the Holy Land has been transformed from tents into a high-tech engine for cybersecurity, remote sensing and automation technologies. At the center of this oasis is Cyber Labs, a government-backed incubator created by Jerusalem Venture Partners (JVP). Next week, JVP will kick off its New York City “Hub” with a $1 million competition called “New York Play” to bridge the opportunities between Israeli and NYC entrepreneurship. In the words of JVP’s Chairman and Founder, Erel Margalit, “JVP’s expansion to New York and the launch of New York Play are all about what’s possible. As New York becomes America’s gateway for international collaboration and innovation, JVP, at the center of the ‘Startup Nation,’ will play a significant role boosting global partnerships to create solutions that better the world and drive international business opportunities.”
Looking past the skyscrapers, I reflect on Margalit’s image of New York as a “Gateway” to the future of autonomy. Today, New York City is turning into a powerful hub, connected throughout America’s academic corridor and beyond, with spokes shooting in from Boston, Pittsburgh, Philadelphia, Washington DC, Silicon Valley, Europe, Asia and Israel. The Excelsior State is pulsing with entrepreneurial energy, fostered by partnerships of government, venture capital, academia and industry. As ff VC’s newest venture partner, I am personally excited to help harness the power of acceleration for the benefit of my city and, quite possibly, the world.
Come learn how New York’s retail industry is utilizing robots to drive sales at the next RobotLab on “Retail Robotics,” with Pano Anthos of XRC Labs and Ken Pilot, formerly president of Gap, on October 17th. RSVP today.
Rethink Robotics shut down this week, closing the chapter on a remarkable journey that made collaborative robots a reality.
This comes as a surprise, and with sadness, to the robotics community. I fondly remember interviewing Rodney Brooks, co-founder of the company, back in 2012 for the Robots Podcast. I’d toured the laboratory and was impressed by how human-centered the robot design was. You could show the robot an object and it would memorise it, you could teach it new tasks just by moving its arms, and its facial expressions made the process intuitive. Students in laboratories around the world (including ours) spent countless hours learning and doing state-of-the-art research with their Baxter robots.
There is no word on what happens next, although I hope this is the start of a new crazy project… 10 years before its time.
When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.
MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Popular motion-planning algorithms create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: robots can’t leverage information about how they or other agents acted previously in similar environments.
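As a rough illustration of this tree-expansion idea, here is a minimal sketch of a plain RRT planner in Python. It is a generic textbook version, not the researchers’ code, and the sampling bounds, step size and iteration budget are arbitrary assumptions:

```python
import random, math

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    """Bare-bones RRT in 2-D: grow a tree of collision-free motions
    from `start` until a node lands within `goal_tol` of `goal`."""
    tree = {start: None}                      # node -> parent
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        theta = math.atan2(sample[1] - nearest[1], sample[0] - nearest[0])
        new = (nearest[0] + step * math.cos(theta),
               nearest[1] + step * math.sin(theta))
        if is_free(nearest, new):             # collision check for the edge
            tree[new] = nearest
            if math.dist(new, goal) < goal_tol:
                path = [new]                  # walk parents back to start
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None                               # no path found within budget
```

Here `is_free` stands in for whatever collision checker the robot uses. Note that every sample is drawn blindly from the whole space; this uniform sampling is exactly the part that never improves with experience, and the part the researchers’ learned model targets.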
“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”
The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.
In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.
“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”
Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.
Trading off exploration and exploitation
Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.
The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.
The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
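In pseudocode terms, that confidence-gated step might look like the following sketch. The interface is hypothetical: `policy_net`, its `predict` method, `sample_uniform` and the 0.8 threshold are illustrative assumptions, not the paper’s implementation:

```python
CONF_THRESHOLD = 0.8  # assumed cutoff; the paper's actual value may differ

def sample_next(tree, env, policy_net):
    """One sampling step of a learned-guidance planner (illustrative).

    `policy_net` is assumed to return a proposed next sample and a
    confidence score, given features of the current tree and environment.
    """
    proposal, confidence = policy_net.predict(tree, env)
    if confidence >= CONF_THRESHOLD:
        return proposal           # exploit: follow the learned prediction
    return env.sample_uniform()   # explore: fall back to RRT*-style sampling
```

The planner then extends the tree toward whichever sample this step returns, so exploration and exploitation blend within the ordinary RRT* loop rather than requiring a separate learned planner.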
For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and gives the robot a sense of its surroundings so it can quickly find the goal.
Results in the paper are based on the chance that a path is found after some time, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers’ model plotted far shorter and more consistent paths than a traditional planner, and did so more quickly.
Working with multiple agents
In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially when navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.
“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”
Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.
Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.
“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.
More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.
Humans aren't the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.
A motion control system is any system that entails the use of moving parts in a coordinated way. Most of the technology used in mechanical engineering is a result of the development and implementation of motion control systems.
Booth #S4314 - Introducing our AS40 conveyors with backlights. These backlights are specially designed to fit within the frame of our conveyors and, when coupled with a translucent belt, provide extra contrast for vision systems to aid in inspection applications.
Brandon Alexander would like to introduce you to Angus, the farmer of the future. He's heavyset, weighing in at nearly 1,000 pounds, not to mention a bit slow. But he's strong enough to hoist 800-pound pallets of maturing vegetables and can move them from place to place on his own.
Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a technology whereby two robots can work in unison to 3-D-print a concrete structure. This method of concurrent 3-D printing, known as swarm printing, paves the way for a team of mobile robots to print even bigger structures in the future. Developed by Assistant Professor Pham Quang Cuong and his team at NTU's Singapore Centre for 3-D Printing, this new multi-robot technology is reported in Automation in Construction. The NTU scientist was also behind the Ikea Bot project earlier this year, in which two robots assembled an Ikea chair in about nine minutes.
The human arm can perform a wide range of extremely delicate and coordinated movements, from turning a key in a lock to gently stroking a puppy's fur. The robotic "arms" on underwater research submarines, however, are hard, jerky, and lack the finesse to be able to reach and interact with creatures like jellyfish or octopuses without damaging them. Previously, the Wyss Institute for Biologically Inspired Engineering at Harvard University and collaborators developed a range of soft robotic grippers to more safely handle delicate sea life, but those gripping devices still relied on hard, robotic submarine arms that made it difficult to maneuver them into various positions in the water.
In this article, I discuss the primary reasons why autonomous vehicles are emerging, what factors go into developing self-driving cars, and how energy storage is a vital part of autonomous vehicle design.