
Healthcare’s regulatory AI conundrum

It was the last question of the night, and it hushed the entire room. An entrepreneur voiced his frustration with the FDA’s antiquated regulatory environment for AI-enabled devices to Dr. Joel Stein of Columbia University. Stein, a leader in rehabilitative robotic medicine, sympathized with the startup, knowing full well that tomorrow’s exoskeletons will rely heavily on machine intelligence. Nodding in agreement, Kate Merton of JLabs shared the sentiment. Her employer, Johnson & Johnson, has partnered with Google to revolutionize the operating room through embedded deep learning systems. In many ways this exchange encapsulated this past Tuesday’s RobotLab on “The Future Of Robotic Medicine”: the paradox of software-enabled therapeutics that offer a better quality of life against a backdrop of societal, technological, and regulatory challenges.

To better understand the frustration expressed at RobotLab, a review of the Food & Drug Administration’s (FDA) policies on medical devices and software is required. Most devices fall under criteria established in the 1970s: a “build and freeze” model in which a filed product does not change over time, which currently excludes therapies that rely on neural networks and deep learning algorithms that evolve with use. Charged with modernizing its regulatory environment, the Obama Administration established a Digital Health Program tasked with implementing new regulatory guidance for software and mobile technology. This initiative eventually led Congress to pass the 21st Century Cures Act (“Cures Act”) in December 2016. An important aspect of the Cures Act is its provisions for digital health products, medical software, and smart devices. The legislators singled out AI for its unparalleled ability to support human decision making, referred to as “Clinical Decision Support” (CDS), with examples like Google and IBM Watson. Last year, the administration updated the Cures Act framework with a Digital Health Innovation Action Plan. These steps have been shifting the FDA’s attitude toward mechatronics, updating its traditional approach to devices to include software and hardware that iterate through cognitive learning. The Action Plan states that “an efficient, risk-based approach to regulating digital health technology will foster innovation of digital health products.” In addition, the FDA has been offering tech partners the option of filing for a Digital Health Software Pre-Certification (“Pre-Cert”) to fast-track the evaluation and approval process; current Pre-Cert pilot participants include Apple, Fitbit, Samsung, and other leading technology companies.

Another way for AI and robotic devices to receive FDA approval is through its De Novo premarket review pathway. According to the FDA’s website, the De Novo program is designed for “medical devices that are low to moderate risk and have no legally marketed predicate device to base a determination of substantial equivalence.” Many computer vision systems fall into the De Novo category, providing “triage” software that efficiently identifies disease markers based on training data of radiology images. As an example, last month the FDA approved Viz.ai, a new type of “clinical decision support software designed to analyze computed tomography (CT) results that may notify providers of a potential stroke in their patients.”

Dr. Robert Ochs of the FDA’s Center for Devices and Radiological Health explains, “The software device could benefit patients by notifying a specialist earlier thereby decreasing the time to treatment. Faster treatment may lessen the extent or progression of a stroke.” The Viz.ai algorithm has the potential to change the lives of the nearly 800,000 annual stroke victims in the USA. The platform enables clinicians to quickly identify patients at risk of stroke by analyzing thousands of CT brain scans for blood vessel blockages and then automatically sending alerts via text message to neurovascular specialists. Viz.ai promises to streamline the diagnosis process by cutting the time it traditionally takes radiologists to review, identify, and escalate high-risk cases to specialists.
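In rough outline, a triage system of this kind scores each incoming scan with a detection model and escalates when suspicion crosses a threshold, without interrupting the radiologist’s normal read. The sketch below is purely illustrative: the function names, the 0.9 threshold, and the model and notification interfaces are invented assumptions, not Viz.ai’s actual software.

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    patient_id: str
    lvo_probability: float      # model's suspicion of a large vessel occlusion

def triage_scan(patient_id, ct_volume, model, notify, threshold=0.9):
    """Score one CT study and escalate suspected strokes in parallel with
    the standard radiology workflow. All interfaces here are hypothetical."""
    prob = model.predict_probability(ct_volume)   # assumed model interface
    result = TriageResult(patient_id, prob)
    if prob >= threshold:
        # The radiologist still reads the study as usual; the specialist
        # simply gets a text alert sooner when a blockage is suspected.
        notify(specialist="neurovascular", patient_id=patient_id, score=prob)
    return result
```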

Dr. Chris Mansi, Viz.ai CEO, says, “The Viz.ai LVO Stroke Platform is the first example of applied artificial intelligence software that seeks to augment the diagnostic and treatment pathway of critically unwell stroke patients. We are thrilled to bring artificial intelligence to healthcare in a way that works alongside physicians and helps get the right patient to the right doctor at the right time.” According to the FDA’s statement, Mansi’s company “submitted a study of only 300 CT scans that assessed the independent performance of the image analysis algorithm and notification functionality of the Viz.ai Contact application against the performance of two trained neuro-radiologists for the detection of large vessel blockages in the brain. Real-world evidence was used with a clinical study to demonstrate that the application could notify a neurovascular specialist sooner in cases where a blockage was suspected.”

Viz.ai joins a rapidly growing market for AI diagnosis software, projected to eclipse $6 billion by 2021 (Frost & Sullivan), an increase of more than forty percent since 2014. According to the study, AI has the ability to reduce healthcare costs by nearly half while improving outcomes for a third of all US healthcare patients. However, diagnosis software is only part of the AI value proposition; adding learning algorithms throughout the entire healthcare ecosystem could deliver new levels of quality of care.

At the same time, the demand for AI treatment is taking its toll on an underfunded FDA, which is struggling to keep up with new filings for computer-aided therapies ranging from diagnosis to robotic surgery to invasive therapeutics. In addition, many companies simply cannot afford the seven-figure investment required to file with the FDA, leading to missed opportunities to find cures for the most devastating diseases. The Atlantic reported last fall on a Canadian company, Cloud DX, that is still waiting for approval for its AI software that analyzes the audio waveforms of coughs to detect lung diseases (e.g., asthma, tuberculosis, and pneumonia). Cloud DX’s founder, Robert Kaul, shared with the magazine, “There’s a reason that tech companies like Google haven’t been going the FDA route [of clinical trials aimed at diagnostic certification]. It can be a bureaucratic nightmare, and they aren’t used to working at this level of scrutiny and slowness.” It took Cloud DX two years and close to a million dollars to achieve the basic ISO 13485 certification required just to begin filing with the agency. Kaul asked, “How many investors are going to give you that amount of money just so you can get to the starting line?”

Last month, Rani Therapeutics raised $53 million to begin clinical trials for its new robotic pill. Rani’s solution could usher in a new paradigm of needle-free therapy, in which drugs are mechanically delivered to the exact site of infection. Unfortunately, innovations like Rani’s are getting backlogged amid a shortage of knowledgeable examiners able to review the clinical data. Bakul Patel, the FDA’s new Associate Center Director for Digital Health, says one of his top priorities is hiring: “Yes, it’s hard to recruit people in AI right now. We have some understanding of these technologies. But we need more people. This is going to be a challenge.” Patel is cautiously optimistic: “We are evolving… The legacy model is the one we know works. But the model that works continuously—we don’t yet have something to validate that. So the question is [as much] scientific as regulatory: How do you reconcile real-time learning [with] people having the same level of trust and confidence they had yesterday?”

As I concluded my discussion with Stein, I asked whether he thought disabled people will eventually commute to work in robotic exoskeletons as easily as they do today in electric wheelchairs. He answered that it could happen within the next decade, if society changes its mindset on how we distribute and pay for such therapies. To quote the President, “Nobody knew health care could be so complicated.”

The paradox of robocar accidents

I have written a few times about the unusual nature of robocar accidents. Recently I was discussing this with a former student who is doing some research on the area. As a first step, she began looking at lists of all the reasons that humans cause accidents. (The majority of them, on police reports, are simply that one car was not in its proper right-of-way, which doesn’t reveal a lot.)

This led me, though, to the following declaration, which goes against most early intuitions.

Every human accident teaches us something about the way people have accidents. Every robocar accident teaches us about a way robocars will never have an accident again.

While this statement is not 100% true, it reveals the stark difference between the way people and robots drive. The whole field of actuarial science is devoted to understanding unusual events (with car accidents being the primary subject) and their patterns. When you notice a pattern, you can calculate the probability that other people will do the same thing, and insurers use that to price insurance and even set policy.

When a robocar team discovers that their car has made any mistake, and certainly any mistake that caused an incident, their immediate move is to find and fix the cause of the mistake and update the software. That particular mistake will generally never happen again. We have learned very little about the pattern of robocar accidents, though the teams have definitely learned something to fix in their software. Since for now, and probably forever, the circumstances of robocar accidents will be a matter of public record, all teams, and thus all cars, will also learn portions of the same thing. (More on that below.)

The rule won’t be entirely true. There are some patterns. There are patterns of software bugs too — every programmer knows the risk of off-by-one errors and memory allocation mistakes. We actually build our compilers, tools and even processors to help detect and prevent all the known common programming mistakes we can, and there’s an active field of research to develop AI able to do that. But that does not teach us a lot about what type of car accidents this might generate. We know robocars will suffer general software crashes and any good system has to be designed to robustly handle that with redundancies. There is a broad class of errors known as perception false negatives, where a system fails to see or understand something on the road, but usually learning about one such error does not teach us much about the others of its class. It is their consequences which will be similar, not their causes.

Perception errors are the easiest to analogize to human activity. Humans also fail to see things before an accident. In humans, however, this is due to “not paying attention” or “not looking in that direction,” something that robots won’t ever be guilty of. A robot’s perception failure is more like a human’s mini-stroke temporarily knocking out part of the visual cortex (i.e., not a real-world issue) or a flaw in the “design” of the human brain. In the robot, however, design flaws can usually be fixed.

There are some things that can’t be fixed, and thus teach us patterns. Some things are just plain hard, like seeing in fog or under snow, or seeing hidden vehicles/pedestrians. The fact that these things are hard can help you calculate probabilities of error.

This is much less true for the broad class of accidents where the vehicle perceives the world correctly but decides on the wrong thing to do. These are the mistakes that, once made, will probably never be made again.

There actually have been very few accidents involving robocars that were the fault of the robocar system. In fact, the record is surprisingly good. I am not including things like Tesla autopilot crashes — the Tesla autopilot is designed to be an incomplete system and they document explicitly what it won’t do.

Indeed the only pattern I see from the few reported incidents is the obvious one: they happened in unusual driving situations. Merging with a bus when one wide lane is often used by two cars at once. Dealing with a lane-splitting motorcycle after aborting an attempt at a lane change. (Note that police ruled against the motorcyclist in this case, but it is being disputed in court.) Whatever faults occurred here have been fixed by now.

Developers know that unusual situations are an area of risk, so they go out searching for them and use simulators and test tracks to work with them extensively. You may not learn patterns, but you can come up with probability estimates of what fraction of everyday driving involves extraordinary situations. This can give you insurance-style confidence: even if you aren’t sure you handle every unusual situation, the overall total risk is low.

One place this can be useful is in dealing with equipment failures. Today, a car driving only with vision and radar is not safe enough; a LIDAR is needed for the full safety level. If the LIDAR fails, however, the car is not blind, it is just a bit less safe. While you would not drive for miles with the LIDAR off, you might judge that the risk of driving to a safe spot to pull off on just camera and radar is acceptable, simply because the amount of driving in that mode will be very small. We do the same thing with physical hardware — driving with a blown out tire is riskier, but we can usually get off the road. We don’t insist every car have 5 wheels to handle that situation.
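A back-of-the-envelope version of that insurance-style reasoning multiplies an estimated rate of hard situations per mile, an estimated chance of mishandling one, and the miles of exposure in the degraded mode. Every number in the sketch below is invented purely to show the arithmetic.

```python
# Toy risk-bounding arithmetic; every figure is invented for illustration.
unusual_per_mile = 1 / 5_000       # assumed rate of extraordinary situations
p_mishandle = 0.01                 # assumed chance one is handled badly
degraded_miles = 2                 # e.g. limping to a safe stop without LIDAR

expected_incidents = unusual_per_mile * p_mishandle * degraded_miles
print(f"expected incidents for this exposure: {expected_incidents:.1e}")
# ~4e-06 here: small exposure keeps the total risk low even when handling of
# any single unusual situation is uncertain.
```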

Learning from humans

It is also worth noting that the human race is capable of learning from accidents. In a sense, every traffic safety rule, most road signs, and many elements of road engineering are the result of learning from accidents and improving safety. The fact that one human misjudges a left turn doesn’t stop other humans from doing the same, but if it causes a “no left turn” sign to go up, we do learn collectively. Ironically, the robot does not need the no-left-turn sign; it will never misjudge the speed of oncoming vehicles and make the turn at the wrong time.

Car design is also guided a great deal by lessons from past accidents: everything from features like crumple zones, which don’t affect human behaviour, to blind-spot warning systems, which do.

Pure neural network approaches

There are a few teams who hope to make a car drive with only a neural network; that is, the neural network takes in sensor data and outputs steering controls. Such a system is more akin to a human in that a flaw found in this approach might well appear again in other similarly designed vehicles. After an accident, such a car would have its network retrained so that it never made that precise mistake again (nor any of the other potential mistakes in its training library). This retraining might be very narrow, however.

This is one reason that only smaller teams are trying this approach. Larger teams like Waymo are making very extensive use of neural networks, but primarily for improving perception, not for making driving decisions. If a perception error is discovered, the network retraining to fix it will ideally be quite extensive, to avoid related errors. Neural network perception errors also tend to be intermittent; that is, the network fails to see something in one frame but sees it in a later frame. The QA effort is to make it see things sooner and more reliably.

Sharing crash reports

This raises the interesting question of sharing crash reports. Developers are now spending huge amounts of money racking up test miles on their cars. They want to encounter lots of strange situations and learn from them. Waymo’s 4 million miles of testing hardly came for free. This makes them and others highly reluctant to release to the public all that hard-won information.

The 2016 NHTSA car regulations, though highly flawed, included a requirement that vendors share all the sensor data on any significant incident. As you might expect, they resisted that, and the requirement was gone from the 2017 proposed regulations (along with almost all the others.)

Vendors are unlikely to want to share every incident, and they need incentives to do lots of test driving, but it might be reasonable to talk about sharing for any actual accident. It is not out of the question that this could be forced by law, though it is an open question on how far this should go. If an accident is the system’s fault, and there is a lawsuit, the details will certainly come out, but companies don’t want to air their dirty laundry. Most are so afraid of accidents that the additional burden of having their mistake broadcast to all won’t change the incentives.

Generally, companies all want as much test data as they can get their hands on. They might be convinced that while unpleasant, sharing all accident data helps the whole industry. Only a player who was so dominant that they had more data than everybody else combined (which is Waymo at this time) would not gain more from sharing than they lose.

Tokyo Tech’s six-legged robots get closer to nature

A study led by researchers at Tokyo Institute of Technology (Tokyo Tech) has uncovered new ways of driving multi-legged robots by means of a two-level controller. The proposed controller uses a network of so-called non-linear oscillators that enables the generation of diverse gaits and postures, which are specified by only a few high-level parameters. The study inspires new research into how multi-legged robots can be controlled, including in the future using brain-computer interfaces.
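The paper’s controller is not reproduced here, but the general idea of a two-level scheme, in which a few high-level parameters shape a network of coupled non-linear oscillators that generates per-leg rhythms, can be sketched roughly as follows. The stepping frequency, coupling gain, and tripod phase offsets are illustrative assumptions, not the values used in the Tokyo Tech study.

```python
import numpy as np

# Rough sketch of a central-pattern-generator style controller: six coupled
# phase oscillators, one per leg, integrated with Euler steps. Frequency,
# coupling gain and the tripod phase offsets are illustrative assumptions.

N_LEGS = 6
FREQ_HZ = 1.5            # high-level parameter: stepping frequency
COUPLING = 2.0           # high-level parameter: how strongly legs synchronize
DT = 0.01

# Desired phase offsets for a tripod gait: legs 0, 2, 4 vs legs 1, 3, 5.
target_offsets = np.array([0, np.pi, 0, np.pi, 0, np.pi])

phases = np.random.uniform(0, 2 * np.pi, N_LEGS)

def step(phases):
    """Advance all leg oscillators by one time step."""
    dphi = np.full(N_LEGS, 2 * np.pi * FREQ_HZ)
    for i in range(N_LEGS):
        for j in range(N_LEGS):
            # Pull each oscillator toward the gait's desired relative phase.
            desired = target_offsets[j] - target_offsets[i]
            dphi[i] += COUPLING * np.sin(phases[j] - phases[i] - desired)
    return (phases + DT * dphi) % (2 * np.pi)

for _ in range(1000):
    phases = step(phases)

# Map each leg's phase to a simple swing/stance joint command.
hip_angles = 0.3 * np.sin(phases)
print(np.round(hip_angles, 2))
```

Changing only the target offsets or the frequency switches the gait or its speed, which is the appeal of this kind of low-dimensional control.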

Personalizing wearable devices

A Bayesian optimization method that integrates the metabolic costs in wearers of this hip-assisting exosuit enabled the individualized fine-tuning of assistive forces. Credit: Ye Ding/Harvard University
By Leah Burrows

When it comes to soft, assistive devices — like the exosuit being designed by the Harvard Biodesign Lab — the wearer and the robot need to be in sync. But every human moves a bit differently and tailoring the robot’s parameters for an individual user is a time-consuming and inefficient process.

Now, researchers from the Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed an efficient machine learning algorithm that can quickly tailor personalized control strategies for soft, wearable exosuits.

The research is described in Science Robotics.

“This new method is an effective and fast way to optimize control parameter settings for assistive wearable devices,” said Ye Ding, a Postdoctoral Fellow at SEAS and co-first author of the research. “Using this method, we achieved a huge improvement in metabolic performance for the wearers of a hip extension assistive device.”

When humans walk, we constantly tweak how we move to save energy (also known as metabolic cost).

“Before, if you had three different users walking with assistive devices, you would need three different assistance strategies,” said Myunghee Kim, Ph.D., postdoctoral research fellow at SEAS and co-first author of the paper. “Finding the right control parameters for each wearer used to be a difficult, step-by-step process because not only do all humans walk a little differently but the experiments required to manually tune parameters are complicated and time consuming.”

The researchers, led by Conor Walsh, Ph.D., Core Faculty member at the Wyss Institute and the John L. Loeb Associate Professor of Engineering and Applied Sciences, and Scott Kuindersma, Ph.D., Assistant Professor of Engineering and Computer Science at SEAS, developed an algorithm that can cut through that variability and rapidly identify the control parameters that work best for minimizing the energy used in walking.

The researchers used so-called human-in-the-loop optimization, which uses real-time measurements of human physiological signals, such as breathing rate, to adjust the control parameters of the device. As the algorithm homed in on the best parameters, it directed the exosuit on when and where to deliver its assistive force to improve hip extension. The Bayesian optimization approach used by the team was first reported in a paper last year in PLOS ONE.
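The team’s exact optimizer and cost estimator are not reproduced here, but the human-in-the-loop cycle (propose assistance parameters, measure a physiological proxy for metabolic cost, update a surrogate model, repeat) can be sketched with a simple Gaussian-process optimizer. The parameter ranges and the measure_metabolic_cost stand-in below are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical stand-in for the real measurement: in the actual experiments
# the cost comes from the wearer's physiological signals while walking with
# the given assistance settings.
def measure_metabolic_cost(peak_timing, peak_magnitude):
    return ((peak_timing - 0.52) ** 2 + (peak_magnitude - 0.30) ** 2
            + 0.01 * np.random.randn())

bounds = np.array([[0.3, 0.7],    # assumed range: when the force peaks (gait %)
                   [0.1, 0.5]])   # assumed range: normalized peak force

rng = np.random.default_rng(0)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(4, 2))   # initial trials
y = np.array([measure_metabolic_cost(*x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(16):                         # small budget of walking trials
    gp.fit(X, y)
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 2))
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmin(mean - 1.0 * std)]   # lower-confidence-bound pick
    X = np.vstack([X, nxt])
    y = np.append(y, measure_metabolic_cost(*nxt))

best = X[np.argmin(y)]
print("best control parameters:", np.round(best, 3))
```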

The combination of the algorithm and suit reduced metabolic cost by 17.4 percent compared to walking without the device. This was a more than 60 percent improvement compared to the team’s previous work.

“Optimization and learning algorithms will have a big impact on future wearable robotic devices designed to assist a range of behaviors,” said Kuindersma. “These results show that optimizing even very simple controllers can provide a significant, individualized benefit to users while walking. Extending these ideas to consider more expressive control strategies and people with diverse needs and abilities will be an exciting next step.” 

“With wearable robots like soft exosuits, it is critical that the right assistance is delivered at the right time so that they can work synergistically with the wearer,” said Walsh. “With these online optimization algorithms, systems can learn how to achieve this automatically in about twenty minutes, thus maximizing benefit to the wearer.”

Next, the team aims to apply the optimization to a more complex device that assists multiple joints, such as hip and ankle, at the same time.

“In this paper, we demonstrated a high reduction in metabolic cost by just optimizing hip extension,” said Ding. “This goes to show what you can do with a great brain and great hardware.”

This research was supported by the Defense Advanced Research Projects Agency’s Warrior Web Program, Harvard’s Wyss Institute for Biologically Inspired Engineering, and the Harvard John A. Paulson School of Engineering and Applied Sciences.

Self-driving cars have power consumption problems

I recently chaired a UJA Tech Talk on “The Future Of Autonomous Cars” with former General Motors Vice-Chairman Steve Girsky. The auto executive enthusiastically shared his vision for the next 15-25 years of driving – a congestion-free world of automated wheeled capsules zipping commuters to and from work.

Girsky stated that connected cars with safety-assist (autonomy-lite) features are moving much faster toward mass adoption than fully autonomous vehicles (sans steering wheels and pedals). In his opinion, the largest roadblock to a consumer-ready robocar is the technical inefficiency of today’s prototypes, which burn huge amounts of energy to support enhanced computing and arrays of sensors. This puts the sticker price closer to that of a 1972 Ferrari than a 2018 Prius.

As mainstream adoption relies heavily on converting combustion engines to electric at accessible prices, Girsky’s sentiment was shared by many CES 2018 participants. NVIDIA, the leading chip maker for autonomous vehicles, unveiled its latest processor, Xavier, with auto industry partner Volkswagen in Las Vegas. Xavier promises to be 15 times more energy-efficient than previous chip generations, delivering 30 trillion operations per second while drawing only 30 watts of power.

After the Xavier CES demonstration, Volkswagen CEO Herbert Diess exclaimed, “Autonomous driving, zero-emission mobility, and digital networking are virtually impossible without advances in AI and deep learning. Working with NVIDIA, the leader in AI technology, enables us to take a big step into the future.”

NVIDIA is becoming the industry standard, as Volkswagen joins more than 320 companies and organizations working with the chip maker on autonomous vehicles. While NVIDIA is leading the pack, Intel and Qualcomm are not far behind with their own low-power solutions, and electric vehicle powerhouse Tesla is developing its own chip for the next generation of Autopilot. While these new chips represent a positive evolution in processors, there is still much work to be done, as current self-driving prototypes draw close to 2,500 watts.
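To put those figures in context, 30 trillion operations per second at 30 watts works out to about 1 TOPS per watt, while a 2,500-watt compute stack consumes a meaningful share of an electric car’s battery. The arithmetic sketch below uses assumed round numbers for pack size, range, and speed; only the 30 TOPS, 30 W, and 2,500 W figures come from the article.

```python
# Rough energy-budget arithmetic; pack size, range and speed are assumed
# round numbers for illustration, not data from the article.
xavier_tops_per_watt = 30e12 / 30 / 1e12   # Xavier: 30 TOPS at 30 W
prototype_compute_w = 2_500                # reported draw of prototype compute stacks

pack_kwh = 75                              # assumed EV battery capacity
range_miles = 250                          # assumed nominal range
avg_speed_mph = 30                         # assumed mixed urban/highway speed

hours_per_charge = range_miles / avg_speed_mph
compute_kwh = prototype_compute_w * hours_per_charge / 1_000

print(f"Xavier efficiency: ~{xavier_tops_per_watt:.0f} TOPS/W")
print(f"compute energy per charge: ~{compute_kwh:.1f} kWh "
      f"({100 * compute_kwh / pack_kwh:.0f}% of an assumed {pack_kwh} kWh pack)")
```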

Power Consumption a Tradeoff for Self-Driving Cars

The power-consumption problem was highlighted recently by a report from the University of Michigan Center for Sustainable Systems. Its lead author, Greg Keoleian, questions whether current autonomous car designs will slow the overall shift toward electric vehicles. Keoleian’s team simulated a number of self-driving Ford Fusion models with different-sized computer configurations and engine designs. In sharing his findings, Keoleian said, “We knew there was going to be a tradeoff in terms of the energy and greenhouse gas emissions associated with the equipment and the benefits gained from operational efficiency. I was surprised that it was so significant.”

Keoleian’s conclusions challenge the premise that self-driving cars will accelerate the adoption of renewable energy. For years, advocates of autonomous vehicles have claimed that smart driving will reduce greenhouse gas emissions through the platooning of vehicles on highways and at intersections, the decrease of aerodynamic drag on freeways, and the overall reduction in urban congestion.


However, the University of Michigan tests showed only a “six to nine percent net energy reduction” over the vehicle’s lifecycle when running in autonomous mode. That figure went down by five percent when a large Waymo rooftop sensor package was included, as it increased aerodynamic drag. The report also stated that the greatest net efficiencies were in cars with gas drivetrains, which benefit the most from smart driving. Waymo currently uses a hybrid Chrysler Pacifica to run its complex fusion of sensors and processing units.

Keoleian told IEEE Spectrum that his modeling actually “overstates real impacts from future autonomous vehicles.” While he anticipates reductions in computing loads and sensor drag, he is concerned that the impact of 5G communications has not been fully explored: the increased bandwidth will lead to greater data streams and boost power consumption for onboard systems and processors. In addition, he thinks that self-driving systems will lead to greater distances traveled as commuters move farther from city centers once commutes become easier. Keoleian explains, “There could be a rebound effect. They could induce travel, adding to congestion and fuel use.” He points to a U.S. National Renewable Energy Laboratory analysis that presents two starkly different possible outcomes of full autonomy:

  1. A reduction in greenhouse gas emissions of sixty percent with greater ride-sharing options
  2. An increase of two hundred percent with increased driving distances


According to Wilko Stark, Mercedes-Benz’s Vice President of Strategy, it only makes sense for autonomous vehicles to be electric as the increased power requirements will go to the computers instead of the motors. “To put such a system into a combustion-engined car doesn’t make any sense, because the fuel consumption will go up tremendously,” explains Stark.


Girsky shares Stark’s view, predicting that the first large-scale use cases for autonomy will be fleets of souped-up golf carts running low-speed, pre-planned shuttle routes. Also on view at CES were complimentary autonomous shared taxi rides around Las Vegas, courtesy of French startup Navya. Today, Navya boasts 60 operating shuttles in more than 10 cities, including around the University of Michigan.

Fully autonomous cars might not be far behind, as Waymo has seen a ninety percent drop in component costs by bringing its sensor development in-house. The autonomous powerhouse recently passed the four-million-mile mark on public roads and plans to ditch its safety drivers later this year in its Phoenix, Arizona test program. According to Dmitri Dolgov, Vice President of Engineering at Waymo, “Sensors on our new generation of vehicles can see farther, sharper, and more accurately than anything available on the market. Instead of taking components that might have been designed for another application, we engineered everything from the ground up, specifically for the task of Level 4 autonomy.”

With rising road fatalities and CO2 emissions, the world can’t wait much longer for affordable, energy-efficient autonomous transportation. Girsky and others remind us there is still a long road ahead; industry experts estimate that the current gas-burning Waymo Chrysler Pacifica cruising around Arizona costs more than one hundred times the sticker price of the minivan. I guess until then there is always Citibike.

Power transmission line inspection robots

In 2010 I wrote that there were three sponsored research projects working to solve the problem of safely inspecting and maintaining high-voltage transmission lines using robotics. Existing methods in 2010 ranged from humans crawling the lines, to helicopters flying close by and scanning, to cars and jeeps of people attempting to scan the lines with binoculars and the naked eye. (2010 article)

In 2014 I described the progress since 2010, including the Japanese startup HiBot and its inspection robot Expliner, which seemed promising. That project was derailed by the Fukushima disaster, which took away funding and attention from Tepco as the utility was forced to refocus all its resources on the disaster. HiBot later sold its IP to Hitachi High-Tech, which, thus far, hasn’t reported any progress or offered any products. (2014 article)

Also in 2014, Canada’s Hydro-Québec Research Institute was working on its transmission line robot, LineScout, and in America the Electric Power Research Institute (EPRI) was researching robots and drones for line inspection.

Now, in 2018, Canada’s MIR Innovations (the product arm of Hydro-Québec) is promoting its new LineRanger inspection robot and LineDrone flying corrosion sensor as finished products, while both Hitachi High-Tech and EPRI have been silent about their research progress thus far.

The slow progress of these three electrical power research projects toward a very real need shows how deep the pockets must be to solve real problems with robotic solutions, and how long that research process often takes. This is not atypical. I observed the same kind of delays in two recent visits to robot startups, where original concepts have morphed into totally different ones that now, after many development iterations, seem close to acceptably solving the original problems, yet with no scale-up production plans in sight, again after years of funding and research.


ML 2.0: Machine learning for many

“As the momentum builds, developers will be able to set up a ML [machine learning] apparatus just as they set up a database,” says Max Kanter, CEO at Feature Labs. “It will be that simple.”
Courtesy of the Laboratory for Information and Decision Systems

Today, when an enterprise wants to use machine learning to solve a problem, they have to call in the cavalry. Even a simple problem requires multiple data scientists, machine learning experts, and domain experts to come together to agree on priorities and exchange data and information.

This process is often inefficient, and it takes months to get results. It also only solves the immediate problem at hand. The next time something comes up, the enterprise has to do the same thing all over again.

One group of MIT researchers wondered, “What if we tried another strategy? What if we created automation tools that enable the subject matter experts to use ML, in order to solve these problems themselves?”

For the past five years, Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems, has been designing a rigorous paradigm for applied machine learning, together with Max Kanter and Ben Schreck, who began working with Veeramachaneni as MIT students and later co-founded the machine learning startup Feature Labs.

The team first divided the process into a discrete set of steps. One step, for instance, is searching for buried patterns with predictive power, known as “feature engineering.” Another is “model selection,” in which the best modeling technique is chosen from the many available options. They then automated these steps, releasing open-source tools to help domain experts complete them efficiently.

In their new paper, “Machine Learning 2.0: Engineering Data Driven AI Products,” the team brings together these automation tools, turning raw data into a trustworthy, deployable model over the course of seven steps. This chain of automation makes it possible for subject matter experts — even those without data science experience — to use machine learning to solve business problems.

“Through automation, ML 2.0 frees up subject matter experts to spend more time on the steps that truly require their domain expertise, like deciding which problems to solve in the first place and evaluating how predictions impact business outcomes,” says Schreck.

Last year, Accenture joined the MIT and Feature Labs team to undertake an ambitious project — build an AI project manager by developing and deploying a machine learning model that could predict critical problems ahead of time and augment seasoned human project managers in the software industry.

This was an opportunity to test ML 2.0’s automation tool, Featuretools, an open-source library funded by DARPA’s Data-Driven Discovery of Models (D3M) program, on a real-world problem.
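Featuretools automates the feature-engineering step through what the library calls deep feature synthesis. A minimal sketch on the library’s bundled demo data might look like the following; keyword names have shifted between releases (older versions use target_entity rather than target_dataframe_name), so treat the exact call as approximate.

```python
import featuretools as ft

# Minimal sketch of automated feature engineering ("deep feature synthesis")
# on Featuretools' bundled demo data. Keyword names differ between library
# versions, so this is indicative rather than exact.
es = ft.demo.load_mock_customer(return_entityset=True)

feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="customers",   # the table we want features for
    max_depth=2,                          # how far to stack feature primitives
)

print(feature_matrix.shape)   # one row per customer, one column per feature
print(feature_defs[:5])       # a few of the generated feature definitions
```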

Veeramachaneni and his colleagues closely collaborated with domain experts from Accenture along every step, from figuring out the best problem to solve, to running through a robust gauntlet of testing. The first model the team built was to predict the performance of software projects against a host of delivery metrics. When testing was completed, the model was found to correctly predict more than 80 percent of project performance outcomes.

Using Featuretools involved a series of human-machine interactions. In this case, Featuretools first recommended 40,000 features to the domain experts. Next, the humans used their expertise to narrow this list down to the 100 most promising features, which they then put to work training the machine-learning algorithm.
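In the Accenture project that narrowing was done by the domain experts themselves. For comparison, an automated stand-in using scikit-learn might look like the sketch below; the 100-feature cutoff, the train/test split, and the gradient-boosting model are assumptions for illustration, not the project’s actual configuration.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_project_model(feature_matrix, labels, n_features=100):
    """Select a shortlist of candidate features and train an outcome model.
    feature_matrix and labels are placeholders (e.g. features from deep
    feature synthesis and per-project delivery outcomes)."""
    X_train, X_test, y_train, y_test = train_test_split(
        feature_matrix, labels, test_size=0.2, random_state=0)

    # Automated stand-in for the experts' manual narrowing to ~100 features.
    selector = SelectKBest(mutual_info_classif, k=n_features)
    X_train_sel = selector.fit_transform(X_train, y_train)
    X_test_sel = selector.transform(X_test)

    model = GradientBoostingClassifier().fit(X_train_sel, y_train)
    print("held-out accuracy:",
          accuracy_score(y_test, model.predict(X_test_sel)))
    return selector, model
```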

Next, the domain experts used the software to simulate deployment of the model and test how well it would perform as new, real-time data came in. This method also extends the “train-test-validate” protocol typical of contemporary machine-learning research, making it more applicable to real-world use. The model was then deployed, making predictions for hundreds of projects on a weekly basis.

“We wanted to apply machine learning (ML) to critical problems that we face in the technology services business,” says Sanjeev Vohra, global technology officer, Accenture Technology. “More specifically, we wanted to see for ourselves if MIT’s ML 2.0 could help anticipate potential risks in software delivery. We are very happy with the outcomes, and will be sharing them broadly so others can also benefit.”

In a separate joint paper, “The AI Project Manager,” the teams walk through how they used the ML 2.0 paradigm to achieve fast and accurate predictions.

“For 20 years, the task of applying machine learning to problems has been approached as a research or feasibility project, or an opportunity to make a discovery,” says Veeramachaneni. “With these new automation tools it is now possible to create a machine learning model from raw data and put it to use — within weeks.”

The team intends to keep honing ML 2.0 in order to make it relevant to as many industry problems as possible. “This is the true idea behind democratizing machine learning. We want to make ML useful to a broad swath of people,” he adds.

In the next five years, we are likely to see an increase in the adoption of ML 2.0. “As the momentum builds, developers will be able to set up a ML apparatus just as they set up a database,” says Max Kanter, CEO at Feature Labs. “It will be that simple.”

Segway Robotics Launches Indiegogo Campaign for Loomo – The World’s First Mobile Robot Sidekick

Loomo can follow the rider autonomously as a robot sidekick, and also has a playful personality. Loomo also comes with a free Android-based SDK for professional developers and enthusiasts alike to build on top of Segway's mobility + AI platform.