Cyberbotics Ltd. is launching https://robotbenchmark.net to allow everyone to program simulated robots online for free.
Robotbenchmark offers a series of robot programming challenges that address various topics across a wide range of difficulty levels, from middle school to PhD. Users don't need to install any software on their computers; the cloud-based 3D robotics simulations run in a web page. They can learn programming by writing Python code to control robot behavior. The performance achieved by users is recorded and displayed online, so that they can challenge their friends and show off their robot programming skills on social networks. Everything is designed to be extremely easy to use, runs on any computer and any web browser, and is totally free of charge.
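To give a flavor of what such a controller looks like, here is a minimal sketch in the style of a Webots Python controller (Cyberbotics develops the Webots simulator behind the site); the device names, time step, and motor setup are assumptions, since the actual API exposed to users depends on each benchmark's robot:

```python
from controller import Robot  # Webots controller API

TIME_STEP = 64  # simulation step in milliseconds (assumed value)

robot = Robot()
# Device names are assumptions; they depend on the benchmark's robot model.
left_motor = robot.getDevice("left wheel motor")
right_motor = robot.getDevice("right wheel motor")

# Switch the motors to velocity control and drive straight ahead.
left_motor.setPosition(float("inf"))
right_motor.setPosition(float("inf"))
left_motor.setVelocity(2.0)   # rad/s
right_motor.setVelocity(2.0)

while robot.step(TIME_STEP) != -1:
    pass  # keep driving until the simulation stops
```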
This project is funded by Cyberbotics Ltd. and the Human Brain Project.
About Cyberbotics Ltd.: Cyberbotics is a Swiss company, a spin-off from the École Polytechnique Fédérale de Lausanne, specializing in the development of robotics simulation software. It has been developing and selling the Webots software for more than 19 years. Webots is a reference software package in robotics simulation, used in more than 1200 companies and universities across the world. Cyberbotics is also involved in industrial and research projects, such as the Human Brain Project.
About the Human Brain Project: The Human Brain Project is a large ten-year scientific research project that aims to build a collaborative ICT-based scientific research infrastructure to allow researchers across the globe to advance knowledge in the fields of neuroscience, computing, neurorobotics, and brain-related medicine. The Project, which started on 1 October 2013, is a European Commission Future and Emerging Technologies Flagship. Based in Geneva, Switzerland, it is coordinated by the École Polytechnique Fédérale de Lausanne and is largely funded by the European Union.
During a nighttime flight in the Persian Gulf, an Iranian surveillance drone followed a U.S. aircraft carrier and came within 300 feet of a U.S. fighter jet. It was the second time in a week that an Iranian drone interfered with U.S. Navy operations in the Gulf. In a statement, Iran’s Revolutionary Guard said that its drones were operated “accurately and professionally.” (Associated Press)
At the Atlantic, Naomi Nix looks at how the Kentucky Valley Educational Cooperative is investing in programs that teach students how to build and operate drones.
At War is Boring, Robert Beckhusen writes that the Israeli military is investigating allegations that an Israeli drone manufacturer carried out a drone strike against Armenian soldiers as part of a product demonstration.
Israeli firm Meteor Aerospace is developing a medium-altitude long-endurance surveillance and reconnaissance drone. (FlightGlobal)
Following the U.S. Army’s decision to discontinue use of its products, Chinese drone maker DJI is speeding the development of a security system that allows users to disconnect drones from DJI’s servers while in flight. (The New York Times)
The U.S. Naval Research Laboratory is developing a fuel cell-powered drone called Hybrid Tiger that could have an endurance of up to three days. (Jane’s)
China’s Beijing Sifang Automation is developing an autonomous unmanned boat called SeaFly, which it hopes will be ready for production by the end of the year. (Jane’s)
The U.S. Defense Advanced Research Projects Agency unveiled the Assured Autonomy program, which seeks to build better trustworthiness into a range of military unmanned systems. (Shephard Media)
The U.S. Forest Service used a drone to collect data over the Minerva Fire in the Plumas National Forest. (Unmanned Aerial Online)
A team of researchers from Oklahoma State University and the University of Nebraska is planning to use drones to study atmospheric conditions during the upcoming solar eclipse. (Popular Science)
In a test, U.S. drone maker General Atomics flew its new Grey Eagle Extended Range drone for 42 hours. (Jane’s)
The Michigan Department of Corrections announced that three people have been arrested after attempting to use a drone to smuggle drugs and a cellphone into a prison in the city of Ionia. (New York Post)
A medevac helicopter responding to a fatal car crash in Michigan had to abort its first landing attempt at the scene because a drone was spotted flying over the area. (MLive)
NASA plans to once again use its Global Hawk high-altitude drone to study severe storms over the Pacific this hurricane season. (International Business Times)
Police in Glynn County, Georgia used a drone to search for a suspect fleeing in a marshy area. (Florida Times-Union)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.
Doctors are often deluged by data they must keep track of: charts, test results, and other metrics. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.
In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.
One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.
“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”
Another team developed an approach called “EHR Model Transfer” that can facilitate the application of predictive models on an electronic health record (EHR) system, despite being trained on data from a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.
ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.
EHR Model Transfer was co-developed by lead authors Jen Gong and Tristan Naumann, both PhD students at CSAIL, as well as Szolovits and John Guttag, who is the Dugald C. Jackson Professor in Electrical Engineering. It was presented at the ACM’s Special Interest Group on Knowledge Discovery and Data Mining in Halifax, Canada.
Both models were trained using data from the critical care database MIMIC, which includes de-identified data from roughly 40,000 critical care patients and was developed by the MIT Lab for Computational Physiology.
ICU Intervene
Integrated ICU data is vital to automating the process of predicting patients’ health outcomes.
“Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments,” Suresh says. “In addition, the system is able to use a single model to predict many outcomes.”
ICU Intervene focuses on hourly prediction of five different interventions that cover a wide variety of critical care needs, such as breathing assistance, improving cardiovascular function, lowering blood pressure, and fluid therapy.
At each hour, the system extracts values from the data that represent vital signs, as well as clinical notes and other data points. All of the data are represented with values that indicate how far off a patient is from the average (to then evaluate further treatment).
Importantly, ICU Intervene can make predictions far into the future. For example, the model can predict whether a patient will need a ventilator six hours later rather than just 30 minutes or an hour later. The team also focused on providing reasoning for the model’s predictions, giving physicians more insight.
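ICU Intervene itself uses deep learning, but as a rough, hypothetical illustration of the data setup described above (hourly features expressed as deviations from the average, paired with intervention labels hours into the future), consider the sketch below. The function names, array shapes, and label pairing are illustrative, not the authors' code:

```python
import numpy as np

def featurize(vitals, pop_mean, pop_std):
    """Express raw hourly measurements as deviations from the average.

    vitals: (hours, features) array of one patient's hourly values.
    """
    return (vitals - pop_mean) / pop_std  # z-scores: how far off average

def make_examples(features, intervention, horizon=6):
    """Pair hour-t features with the intervention flag at hour t + horizon."""
    X = features[:-horizon]
    y = intervention[horizon:]  # e.g., 1 if ventilation is in use at that hour
    return X, y

# Toy data standing in for extracted ICU records.
rng = np.random.default_rng(0)
vitals = rng.normal(loc=80, scale=10, size=(48, 5))  # 48 hours, 5 vital signs
ventilated = rng.integers(0, 2, size=48)

X, y = make_examples(featurize(vitals, vitals.mean(axis=0), vitals.std(axis=0)),
                     ventilated)
# X and y would then feed a sequence model trained on many past ICU stays.
```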
“Deep neural-network-based predictive models in medicine are often criticized for their black-box nature,” says Nigam Shah, an associate professor of medicine at Stanford University who was not involved in the paper. “However, these authors predict the start and end of medical interventions with high accuracy, and are able to demonstrate interpretability for the predictions they make.”
The team found that the system outperformed previous work in predicting interventions, and was especially good at predicting the need for vasopressors, a medication that tightens blood vessels and raises blood pressure.
In the future, the researchers will be trying to improve ICU Intervene to be able to give more individualized care and provide more advanced reasoning for decisions, such as why one patient might be able to taper off steroids, or why another might need a procedure like an endoscopy.
EHR Model Transfer
Another important consideration for leveraging ICU data is how it’s stored and what happens when that storage method gets changed. Existing machine-learning models need data to be encoded in a consistent way, so the fact that hospitals often change their EHR systems can create major problems for data analysis and prediction.
That’s where EHR Model Transfer comes in. The approach works across different versions of EHR platforms, using natural language processing to identify clinical concepts that are encoded differently across systems and then mapping them to a common set of clinical concepts (such as “blood pressure” and “heart rate”).
For example, a patient in one EHR platform could be switching hospitals and would need their data transferred to a different type of platform. EHR Model Transfer aims to ensure that the model could still predict aspects of that patient’s ICU visit, such as their likelihood of a prolonged stay or even of dying in the unit.
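As a hypothetical sketch of the mapping step (the actual system uses natural language processing to discover these correspondences), EHR-specific item codes can be translated onto a shared vocabulary so that a model trained on one system can consume feature vectors from another. The code values and concept names below are invented for illustration:

```python
# Invented code-to-concept maps for two hypothetical EHR systems.
EHR_A_TO_COMMON = {"ITEMID_0001": "heart_rate", "ITEMID_0002": "blood_pressure"}
EHR_B_TO_COMMON = {"hr_obs": "heart_rate", "nbp_sys": "blood_pressure"}

COMMON_CONCEPTS = ["heart_rate", "blood_pressure"]  # fixed feature order

def to_common_features(record, code_map):
    """Re-key a raw EHR record onto the shared concept vocabulary."""
    mapped = {code_map[k]: v for k, v in record.items() if k in code_map}
    # Concepts missing from this record come back as None.
    return [mapped.get(concept) for concept in COMMON_CONCEPTS]

# The same downstream model sees identically ordered features from either system.
print(to_common_features({"ITEMID_0001": 88.0}, EHR_A_TO_COMMON))
print(to_common_features({"hr_obs": 92.0, "nbp_sys": 118.0}, EHR_B_TO_COMMON))
```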
“Machine-learning models in health care often suffer from low external validity, and poor portability across sites,” says Shah. “The authors devise a nifty strategy for using prior knowledge in medical ontologies to derive a shared representation across two sites that allows models trained at one site to perform well at another site. I am excited to see such creative use of codified medical knowledge in improving portability of predictive models.”
With EHR Model Transfer, the team tested their model’s ability to predict two outcomes: mortality and the need for a prolonged stay. They trained it on one EHR platform and then tested its predictions on a different platform. EHR Model Transfer was found to outperform baseline approaches and demonstrated better transfer of predictive models across EHR versions compared to using EHR-specific events alone.
In the future, the EHR Model Transfer team plans to evaluate the system on data and EHR systems from other hospitals and care settings.
Both papers were supported, in part, by the Intel Science and Technology Center for Big Data and the National Library of Medicine. The paper detailing EHR Model Transfer was additionally supported by the National Science Foundation and Quanta Computer, Inc.
In this episode, Jack Rasiel speaks with Kostas Bekris, who introduces us to tensegrity robotics: a striking robotic design which straddles the boundary between hard and soft robotics. A structure uses tensegrity if it is made of a number of isolated rigid elements which are held in compression by a network of elements that are in tension. Bekris, an Associate Professor of Computer Science, draws from a diverse set of problems to find innovative new ways to control tensegrity robots.
Kostas Bekris
Kostas Bekris is an Associate Professor of Computer Science at Rutgers, the State University of New Jersey. He works in the area of algorithmic robotics, especially on problems related to robot motion planning and coordination. He received his PhD from Rice University in 2008 under the guidance of Lydia Kavraki. He was an Assistant Professor at the University of Nevada, Reno until 2012. His research has been supported by NSF, NASA, the DoD, and DHS, including a NASA Early Career Faculty award.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
EuRobotics regularly publishes video interviews with projects, so that you can find out more about their activities. This week features ILIAD: Intra-Logistics with Integrated Automatic Deployment: Safe and Scalable Fleets in Shared Spaces.
Objectives
ILIAD is driven by the industry needs for highly flexible robot fleets operating in spaces shared with humans. The main objectives are care-free, fast, and scalable deployment; long-term operation while learning from observed activities; on-line, self-optimising fleet management; human-aware fleets that can learn human behaviour models; compliant unpacking and palletising of goods; and a systematic study of human safety in shared environments, setting the stage for future safety certification.
Expected Impact
ILIAD’s focus is on the rapidly expanding intralogistics domain, where there is a strong market pull for flexible automated solutions, especially ones that can blend with current operations. The innovations developed in ILIAD target key hindrances identified in the logistics domain, and are essential for independent and reliable operation of collaborative AGV fleets. The expected impact extends to most multiple-actor systems where robots and humans operate together.
by Anthony King
Stephen Hawking and Elon Musk fear that the robotic revolution may already be underway, but automation isn’t going to take over just yet – first machines will work alongside us.
Robots across the world help out in factories by taking on heavy lifting or repetitive jobs, but the walking, talking kind may soon collaborate with people, thanks to European robotics researchers building prototypes that anticipate human actions.
‘Ideally robots should be able to sense interactional forces, like carrying a table with someone,’ said Francesco Nori, who coordinates the EU-funded An.Dy project which aims to advance human-robot collaboration. ‘(Robots) need to know what the human is about to do and what they can do to help.’
In any coordinated activity, whether dancing or lifting a table together, timing is crucial and that means a robot needs to anticipate before a person acts.
‘Today, robots just react – half a second of anticipation might be enough,’ said Nori, who works at the Italian Institute of Technology, renowned for its humanoid robot iCub, which will be educated in human behaviour using data collected during the An.Dy project.
The data will flow from a special high-tech suit that lies at the heart of the project – the AndySuit. This tight suit is studded with sensors to track movement, acceleration of limbs and muscle power as a person performs actions alone or in combination with a humanoid robot.
This sends data to a robot similar to iCub so that it can recognise what the human is doing and predict the next action just ahead of time. The collaborative robot – also known as a cobot – would then be programmed to support the worker.
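As a hypothetical sketch of this anticipation loop, the snippet below classifies a short sliding window of suit sensor readings and looks the recognized action up in a table of likely next actions, so a robot could prepare roughly half a second early. The stand-in classifier and action table are placeholders, not the An.Dy project's software:

```python
import numpy as np

# Invented table of likely follow-on actions.
NEXT_ACTION = {"reach": "grasp", "grasp": "lift", "lift": "carry"}

def recognize(window):
    """Stand-in classifier: derive an action label from mean sensor activity."""
    actions = ["reach", "grasp", "lift"]
    return actions[int(window.mean() * 10) % len(actions)]

def anticipate(sensor_stream, window_size=25):  # e.g., 0.5 s at 50 Hz
    """Yield (current action, action to prepare for) at each time step."""
    for t in range(window_size, len(sensor_stream)):
        current = recognize(sensor_stream[t - window_size:t])
        yield current, NEXT_ACTION.get(current)

stream = np.abs(np.random.default_rng(1).normal(size=(200, 8)))  # fake suit data
for current, upcoming in list(anticipate(stream))[:3]:
    print(f"human is doing {current!r}; robot prepares for {upcoming!r}")
```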
‘The robot would recognise a good posture and a bad posture and would work so that it gives you an object in the right way to avoid injury,’ explained Nori, adding that the cobot would adapt its own actions to maximise the comfort of the human.
The robot’s capabilities will come from its library of pre-programmed models of human movement, but also from internal sensors and a mobile phone app. Special sensors that communicate with the iCub are also being developed for the AndySuit, but at the moment it is more suited to the robotics lab than to a factory floor.
To get the robot and AndySuit closer to commercialisation it will be tested in three different scenarios. First, in a workspace where a person works beside a cobot. Second, when a person wears an exoskeleton, which could be useful for workers who must lift heavy loads and can be assisted by a robust metal skeleton around them.
A third scenario will be where a humanoid robot offers assistance and could take turns performing tasks. In this situation, the robot would look like the archetypal sci-fi robot, like Sonny from the film I, Robot.
Silicon sidekick
A different project will see a human-like prototype robot reach out a helping hand to support technicians, under an EU-funded project called SecondHands led by Ocado Technology in the UK.
‘Ask it to pass the screwdriver, and it will respond asking whether you meant the one on the table or in the toolbox.’ Duncan Russell, Ocado Technology
Ocado runs giant automated warehouses that fulfil grocery orders. Its warehouse in Hatfield, north of London, is the size of several football fields and must be temporarily shut down for regular maintenance.
Duncan Russell, research coordinator at Ocado Technology, explained: ‘Parts need to be cleaned and parts need replacing. The robot system is being designed to help the technicians with those tasks.’
While the technician stands on a ladder, a robot below would watch what they are doing and provide the next tool or piece of equipment when asked.
‘The robot will understand instructions in regular language – it will be cleverer than you might expect,’ said Russell. ‘Ask it to pass the screwdriver, and it will respond asking whether you meant the one on the table or in the toolbox.’
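As a hypothetical sketch of that clarifying-question behaviour, the snippet below grounds a spoken request against the objects a robot can see and asks which one was meant when the request is ambiguous. The object records and phrasing are invented; SecondHands' actual language understanding stack is far richer:

```python
def resolve(request, visible_objects):
    """Match a requested tool to visible objects, asking when ambiguous."""
    matches = [o for o in visible_objects if o["type"] == request]
    if len(matches) == 1:
        return f"Handing over the {request} from the {matches[0]['location']}."
    if len(matches) > 1:
        places = " or the ".join(o["location"] for o in matches)
        return f"Did you mean the {request} on the {places}?"
    return f"I don't see a {request}."

scene = [{"type": "screwdriver", "location": "table"},
         {"type": "screwdriver", "location": "toolbox"}]
print(resolve("screwdriver", scene))
# -> "Did you mean the screwdriver on the table or the toolbox?"
```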
The robot will feature capabilities straight from the Inspector Gadget cartoon series. An extendable torso will allow it to move upwards, and telescopic limbs will give it a reach of more than three metres.
‘The arm span is 3.1 metres and the torso is around 1.8 metres, which gives it a dynamic reach. This will allow it to offer assistance to technicians up on a ladder,’ said Russell.
This futuristic scenario is being brought to reality by research partners around Europe. Robotics experts at Karlsruhe Institute of Technology in Germany have built a wheeled prototype robot. The plan is for a bipedal robot to be tested in the Ocado robots lab in Hatfield, and for it to be transferred to the warehouse floor for a stint with a real technician.
Karlsruhe is also involved in teaching the robot natural language and together with the Swiss Federal Institute of Technology in Lausanne it is developing a grasping hand, so the helper robot can wield tools with care. The visions system of this silicon sidekick is being developed by researchers at University College London, UK.
The handy robotic helper could also do cross-checking for the maintenance person, perhaps offering a reminder if a particular step is missed.
‘The technician will get more done and faster, so that the shutdown times for maintenance can be shortened,’ said Russell.
If you are involved in the UK Robotics and Autonomous Systems (RAS) sector, we’d love to hear from you. Please fill in this survey.
In January this year, the UK Government published a Green Paper on “Building our Industrial Strategy” (https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/611705/building-our-industrial-strategy-green-paper.pdf). It sets an ‘open door’ challenge to industry to come to the Government with proposals to transform and upgrade their sector through ‘Sector Deals’. Businesses, rather than the Government, are being encouraged to identify what companies need in order to enhance their competitiveness as a sector.
This is not about the Government providing additional funding; rather, it is an open call to business to organise behind strong leadership, like the automotive and aerospace sectors, to address shared challenges and opportunities.
Government is looking for businesses to collaborate with other stakeholders, such as universities and local leaders, to produce a clear proposal for boosting the productivity of their sector, setting out detailed plans to address challenges such as:
delivering upgrades in productivity, including in supply chains;
promoting competition and innovation;
facilitating long term investment and coordination between suppliers and primes;
accelerating growth across the value chain, including by identifying where the greatest value can be gained from technology development and investment;
developing and growing the strengths of particular clusters;
increasing exports, and looking at how we can use trade and investment deals to help the sector;
commercialising research across sectors; and
boosting skills and the number of high value, high productivity jobs.
To help provide evidence for the proposed Robotics Sector Deal, we would like to understand what activities are taking place in the UK that are in alignment with the existing RAS Strategy, and what new ones could be enabled by Government action. To this end we are reaching out to the UK RAS Community to collect this information. All you need to do is fill in this short survey.
When answering the questions, please endeavour to be specific and thorough. Your answers will not be publicly published, and will only be used to inform the proposed Sector Deal (and will therefore remain confidential between the RAS Special Interest Group Advisory Board and the Government).
Please feel free to give us more than one set of answers to this questionnaire. We will collate the answers and provide a high-level synthesis of them, rather than providing the details, so please don’t worry about overwhelming Government with detail!
If you are not familiar with the way we use the terms Asset, Skills, Coordination, Clusters and Challenges, then please have a quick look at the RAS UK Strategy here.
Thanks very much for your help. Your input is greatly valued and will contribute to something that will be of huge benefit to our sector, as well as the wider community.
Last year, Intel partnered with Lady Gaga on the Super Bowl Halftime Show to showcase its latest aerial technology, called “Shooting Star.” Intel reprised its Shooting Star performance for Singapore’s 52nd birthday this past week. Instead of fireworks, the tech-savvy country celebrated its National Day Parade with a swarm of 300 LED drones animating the night sky with shapes, logos, and even a map of the country.
Intel’s global drone chief, Anil Nanduri, explained: “There’s considerably more operational complexity in handling a 300-drone fleet, compared with 100 drones in a show. It’s like juggling balls in your hand. You may be able to juggle three, but if you juggle nine, you may have to throw them higher and faster to get more time.” Earlier this year, Intel first showcased its 300-drone show at the Coachella music festival, on the heels of claiming the Guinness World Record for a 500-drone performance.
Choreographed drones are winning the hearts of Cirque du Soleil theatergoers with a fleet of flying acrobatic vehicles dancing around its human performers. These drones are the brainchild of Professor Raffaello D’Andrea of ETH Zurich, Switzerland, and his new startup Verity Studios. D’Andrea is probably best known as one of the three founders of Kiva Systems; now he is taking the same machine intelligence that sorts and delivers goods within Amazon’s warehouses and using it to safely wow audiences worldwide. The flying lampshades (shown in the video below) are actually autonomous drones that magically synchronize with the dancers, without safety nets or human operators.
Verity’s customer, Cirque du Soleil’s Chief Creative Officer Jean-Francois Bouchard, said D’Andrea’s “flying machines are unquestionably one of the most important statements of the PARAMOUR show.” The key to the flying machines’ success over 7,000 autonomous flights on Broadway is the proprietary technology that enables multiple self-piloted drones to be synchronized in flight. Verity’s drone is part of a larger performance system called “Stage Flyers.”
The Stage Flyer platform has proven itself in the field, flying next to thousands of people each evening thanks to built-in redundancy to any single failure. According to Verity’s website, the system is “capable of continuing operation in spite of a failed battery, a failed motor, a failed connector, a failed propeller, a failed sensor, or a failure of any other component. This is achieved through the duplication of critical components and the use of proprietary algorithms, which enable safe emergency responses to component failures.” This means that the drones can operate safely around audiences and performers alike, carrying payloads of cameras, mirrors, and special lighting effects. The complete system includes a fleet of self-piloted drones that share a single positioning system and control unit. The company boasts that its system takes only a few hours to install, calibrate, and learn how to operate.
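As a hypothetical sketch of this kind of single-failure redundancy, the snippet below duplicates each critical component, checks component health every control cycle, and falls back to an emergency response only when both copies of something are lost. The component names and control loop are invented for illustration, not Verity's design:

```python
class RedundantPair:
    """A critical component with a duplicated backup unit."""
    def __init__(self, name):
        self.name = name
        self.units = [f"{name}-primary", f"{name}-backup"]

    def healthy_unit(self, failed):
        for unit in self.units:
            if unit not in failed:
                return unit
        return None  # both copies lost

def control_cycle(components, failed):
    """Continue flying if every component still has one healthy unit."""
    for comp in components:
        if comp.healthy_unit(failed) is None:
            return "EMERGENCY_LAND"  # no safe configuration remains
    return "CONTINUE"

drone = [RedundantPair("battery"), RedundantPair("motor"), RedundantPair("sensor")]
print(control_cycle(drone, failed={"motor-primary"}))                       # CONTINUE
print(control_cycle(drone, failed={"battery-primary", "battery-backup"}))  # EMERGENCY_LAND
```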
Drone swarms are not just for entertainment: today a number of upstarts and established players are applying the same mechanics in e-commerce fulfillment centers. Last June, Amazon was issued a patent for a “Multi-Level Fulfillment Center for Unmanned Aerial Vehicles.” The beehive-looking distribution center is designed to facilitate traffic between inbound and outbound delivery drones. The patent illustration details “multiple levels with multiple landing and take-off locations, depending on local zoning regulations.”
This is all part of Amazon’s larger plan to grow its robotic workforce over the next two to three years. Instead of human truck drivers, the patent displays delivery bays that open and close automatically based on the direction of the drones and interior platforms that cycle around the hive. CB Insights reports that the patent describes “impact dampeners,” such as nets, for receiving inbound drones and “launch assist mechanisms,” such as fans, for launching outbound drones. It appears that Amazon will be looking again to technology like D’Andrea’s research to revolutionize its global network of warehouses with synchronized swarms of drones that safely soar above human workers.
Amazon’s competitor Walmart announced last summer its plans to utilize swarms of indoor drones for inventory management, replacing the need for people to climb dangerous ladders and manually scan labels. The New York Times first reported last July that the retailer applied for an FAA exemption to begin testing drones inside its massive distribution centers. Shekar Natarajan, the vice president of last mile and emerging science for Walmart, demonstrated for the newspaper how swarms of drones could easily move up and down aisles, from floor to ceiling, to scan 30 barcode images a second (an efficiency that would be impossible for even the most agile humans). Walmart has publicly boasted that it will spend close to $3 billion on new technology and other cost-saving investments to bolster its e-commerce business, which is growing, but at a slower pace than that of its nemesis Amazon.
The race to dominate the warehouse has led to increased investments in the logistics sector, and even an accelerator dedicated solely to technology for distribution centers. Chattanooga, Tennessee-based Dynamo Accelerator showcased its second cohort of startups last May. One of the most successful showings was Chicago-based Corvus Robotics, a software company that uses indoor aerial drones to scan inventory (similar to the Walmart example above). According to Dynamo’s managing directors, Corvus is building enabling tools that allow operators to fly drones autonomously, scan and sync barcodes, and enter the SKU data into the existing warehouse management system.
Santosh Sankar, the director at Dynamo, explained his accelerator’s mission succinctly in a recent blog post: “We believe our focus and hands-on approach is one of our value-adds. As such, we’re leaning into being seed investors and upholding our commitment to transforming our industry by focusing on our founders and our corporate partners. We’ve opted to not hold a quota for our programs and hone in on companies we can truly help because that ultimately makes for good seed investments.” Sankar added that several of the program participants “are already well on their way to generate ($1 million or more) in annual revenue and/or have raised their initial round of capital.”
Corvus may be the latest indoor drone startup to enter an already crowded warehouse market, which includes established players like the Hardis Group, Smartx, and DJI. Drones continue to amuse, amaze, and evolve, and the growing need for unmanned systems in our lives appears to be almost insatiable. Next month, we plan to dig deeper into the drone market with our RobotLabNYC event series on September 19th at 6pm at WeWork Grand Central. Joining the discussion will be thought leaders from NASA, AUVSI, and Genius NY. RSVP today, as space is limited.
Two U.S. airstrikes targeted members of al-Shabab in Somalia. In a statement, the U.S. Africa Command said the strikes were “conducted within the parameters of the proposal approved by the President in March 2017.” A spokesperson said the strikes were carried out by drones. (ABC News)
In a classified guidance issued last month, the Department of Defense authorized the U.S. military to seize or destroy drones that appear to endanger the airspace or pose a threat to military installations. Pentagon spokesperson Captain Jeff Davis told reporters that the move was a response to the “increase of commercial and private drones” in the U.S. (Reuters)
An Iranian drone interfered with a U.S. F/A-18E Super Hornet jet in the Persian Gulf. In a statement, the U.S. military said that the drone flew within 100 feet of the manned fighter as it was preparing to land on the USS Nimitz. (Washington Post)
Turkish authorities have detained a Russian citizen with ties to ISIS for allegedly planning to use a drone to attack a U.S. military base. Police in Adana claim that Renad Bakiev admitted to reconnoitering Incirlik Air Base in southern Turkey. (Associated Press)
Commentary, Analysis, and Art
At the Financial Times, Louise Lucas writes that China-based drone manufacturer DJI is considering making a move toward the commercial sector.
The Los Angeles Times editorial board argues that the Los Angeles County Sheriff should adopt a more transparent process for integrating drones into police work.
At the Wall Street Journal, Nicole Friedman writes that insurance companies are increasingly relying on drones to inspect physical damage to properties.
At the San Francisco Chronicle, Carolyn Said looks at how robots and drones are taking on more roles in food delivery.
At WFTV, Lauren Seabrook writes that a new state law in Florida that offers more opportunities for drone businesses conflicts with local drone ordinances.
Kratos Defense and Security Solutions announced that a classified military drone that it is developing for an unnamed program will enter into production this year. (FlightGlobal)
Northrop Grumman is using an X-47B drone as a testbed for the Navy’s MQ-25A Stingray aerial refueling drone. (Aviation Week) For more on the X-47B, click here.
The San Francisco Public Utilities Commission approved a policy to use drones for construction management, environmental monitoring, and inspection. (San Francisco Examiner)
The Worldview International Foundation is planning to use drones to plant tree seeds as part of a reforestation effort in Myanmar. (Fast Company)
The Kansas Department of Transportation and app maker AirMap are partnering to develop a drone air traffic management system. (The Wichita Eagle)
The Israeli Defence Ministry is investigating reports that defense firm Aeronautics was asked by Azerbaijan to carry out a live demonstration of a loitering munition drone against Armenian forces. (Jerusalem Post)
The Muriwai Surf Life Saving Club in New Zealand is planning to use drones for patrolling a popular beach. (News Hub)
Researchers who used a drone last year to collect pollutant samples from the open burning of waste at an Army ammunition facility in Virginia found arsenic and other pollutants. (Phys.org)
In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with Katherine Heller of Duke.
The U.S. Army has ordered its members to stop using drones made by Chinese manufacturer SZ DJI Technology because of “cyber vulnerabilities.” The directive applies to all DJI drones and systems that use DJI components or software. It requires service members to “cease all use, uninstall all DJI applications, remove all batteries and storage media and secure equipment for follow-on direction.”
DJI has about 70% of the global commercial and consumer drone market according to Goldman Sachs analysts. The market, including military, is expected to be worth more than $100 billion over the next five years.
The Army's move appears to follow studies conducted by the Army Research Laboratory and the Navy, which found risks and vulnerabilities in DJI products. The directive cites a classified Army Research Laboratory report and a Navy memo as references for the order to cease use of DJI drones and related equipment.
DJI responded with the following statement on their website:
Some recent news stories have claimed DJI routinely shares customer information and drone video with authorities in China, where DJI is headquartered. This is false. A junior DJI staffer misspoke during an impromptu interview with reporters who were touring the DJI headquarters; we have attempted to correct the facts since then, but inaccurate stories are still posted online.
We want to emphasize that DJI does not routinely share customer information or drone video with Chinese authorities — or any authorities. Any claims to the contrary are false.
In other DJI-related news, 3D Robotics (3DR), a previous camera drone competitor, announced a product partnership with DJI. Its 'Site Scan' aerial data analytics software platform now works with DJI drones, and is aimed at large construction and engineering companies using drones.
DJI director of strategic partnerships Michael Perry stated: “This integration is a significant milestone for the AEC industry. We’re excited that 3DR Site Scan users can now use DJI drones to convert images into actionable data that helps project stakeholders save time and manage costs.”
As AI surpasses human abilities in Go and poker – two decades after Deep Blue trounced chess grandmaster Garry Kasparov – it is seeping into our lives in ever more profound ways. It affects the way we search the web, the medical advice we receive, and whether we receive financing from our banks.
The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.
We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead, otherwise they will become increasingly irrelevant and eventually redundant.
Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.
DeepMind was created by three scientists, two of whom met while working at University College London. Demis Hassabis, one of DeepMind’s founders, who has a PhD in cognitive neuroscience from UCL and has undertaken postdoctoral studies at MIT and Harvard, is one of many scientists convinced that AI and machine learning will improve the process of scientific discovery.
It is already eight years since scientists at Aberystwyth University created a robotic system that carried out an entire scientific process on its own: formulating hypotheses, designing and running experiments, analysing data, and deciding which experiments to run next.
Complex data sets
Applied in science, AI can autonomously create hypotheses, find unanticipated connections, and reduce both the cost of gaining insights and the cost of making predictions.
AI is being used by publishers such as Reed Elsevier for automating systematic academic literature reviews, and can be used for checking plagiarism and misuse of statistics. Machine learning can potentially flag unethical behaviour in research projects prior to their publication.
AI can combine ideas across scientific boundaries. There are strong academic pressures to deepen intelligence within particular fields of knowledge, and machine learning helps facilitate the collision of different ideas, joining the dots of problems that need collaboration between disciplines.
As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.
The implications of AI for university research extend beyond science and technology.
Philosophical questions
In a world where so many activities and decisions that were once undertaken by people will be replaced or augmented by machines, profound philosophical questions arise about what it means to be human. Computing pioneer Douglas Engelbart – whose inventions include the mouse, windows and cross-file editing – saw this in 1962 when he wrote of “augmenting human intellect”.
Expertise in fields such as psychology and ethics will need to be applied to thinking about how people can more rewardingly work alongside intelligent machines and systems.
Research is needed into the consequences of AI on the levels and quality of employment and the implications, for example, for public policy and management.
When it comes to AI in teaching and learning, many of the more routine academic tasks (and the least rewarding for lecturers), such as grading assignments, can be automated. Chatbots, intelligent agents that use natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help them plan their course of studies.
Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.
Global classroom
Mixed reality and computer vision can provide a high-fidelity, immersive environment to stimulate interest and understanding. Simulations and games technology encourage student engagement and enhance learning in ways that are more intuitive and adaptive. They can also engage students in co-developing knowledge, involving them more in university research activities. The technologies also allow people outside of the university and from across the globe to participate in scientific discovery through global classrooms and participative projects such as Galaxy Zoo.
As well as improving the quality of education, AI can make courses available to many more people. Previously access to education was limited by the size of the classroom. With developments such as Massive Open Online Courses (MOOCs) over the last five years, tens of thousands of people can learn about a wide range of university subjects.
It remains the case, however, that much advanced learning, and its assessment, requires personal and subjective attention that cannot be automated. Technology has ‘flipped the classroom’, forcing universities to think about where we can add real value – such as personalised tuition, and more time with hands-on research, rather than traditional lectures.
Monitoring performance
University administrative processes will benefit from utilising AI on the vast amounts of data they produce during their research and teaching activities. This can be used to monitor performance against their missions, be it in research, education or promotion of diversity, and can be produced frequently to assist more responsive management. It can enhance the quality of performance league tables, which are often based on data with substantial time lags. It can allow faster and more efficient applicant selection.
AI allows the tracking of individual student performance, and universities such as Georgia State and Arizona State are using it to predict marks and indicate when interventions are needed to allow students to reach their full potential and prevent them from dropping out.
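As a hypothetical sketch of this kind of learning analytics, the snippet below trains a simple classifier on an invented past cohort and flags a current student whose predicted dropout risk crosses a threshold. The features, threshold, and data are made up; production systems, and the codes of practice around them discussed next, are considerably more involved:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Invented columns: attendance rate, average grade, assignments submitted (0-1).
past_students = rng.random((200, 3))
# Toy ground truth: low attendance plus low grades correlates with dropout.
dropped_out = (past_students[:, 0] + past_students[:, 1] < 0.8).astype(int)

model = LogisticRegression().fit(past_students, dropped_out)

current_student = np.array([[0.35, 0.40, 0.55]])
risk = model.predict_proba(current_student)[0, 1]
if risk > 0.5:  # illustrative threshold
    print(f"Flag for advising intervention (predicted risk {risk:.2f})")
```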
Such data analytics of students and staff raises weighty questions about how to respect privacy and confidentiality, questions that require judicious codes of practice.
The blockchain is being used to record grades and qualifications of students and staff in an immediately available and incorruptible format, helping prevent unethical behaviour, and could be combined with AI to provide new insights into student and career progression.
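As a hypothetical sketch of the tamper-evidence idea behind such records, each entry below stores a hash of the previous entry, so altering any past record invalidates every hash after it. A real credential ledger would add digital signatures, distributed consensus, and much more:

```python
import hashlib
import json

def add_record(chain, record):
    """Append a record whose hash covers both the record and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to a past record breaks the chain."""
    for i, entry in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
    return True

ledger = []
add_record(ledger, {"student": "A123", "course": "ML101", "grade": "A"})
add_record(ledger, {"student": "A123", "course": "ML102", "grade": "B+"})
print(verify(ledger))                 # True
ledger[0]["record"]["grade"] = "A+"   # tampering with an old grade...
print(verify(ledger))                 # False
```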
Universities will need to be attuned to the new opportunities AI produces for supporting multidisciplinarity. In research this will require creating new academic departments and jobs, with particular demands for data scientists. Curricula will need to be responsive, educating the scientists and technologists who are creating and using AI, and preparing students in fields as diverse as medicine, accounting, law and architecture, whose future work and careers will depend on how successfully they ally their skills with the capabilities of machines.
New curricula should allow for the unpredictable path of AI’s development, and should be based on deep understanding, not on the immediate demands of companies.
Addressing the consequences
Universities are the drivers of disruptive technological change, like AI and automation. It is the duty of universities to reflect on their broader social role, and create opportunities that will make society resilient to this disruption.
We must address the consequences of technological unemployment, and universities can help provide skills and opportunities for people whose jobs have been adversely affected.
There is stiff competition for people skilled in the development and use of AI, and universities see many of their talented staff attracted to work in the private sector. One of the most pressing AI challenges for universities is the need for them to develop better employment conditions and career opportunities to retain and incentivize their own AI workers. They need to create workplaces that are flexible, agile and responsive to interactions with external sources of ideas, and are open to the mixing of careers as people move between universities and business.
The fourth industrial revolution is profoundly affecting all elements of contemporary societies and economies. Unlike the previous revolutions, where the structure and organization of universities were relatively unaffected, the combination of technologies in AI is likely to shake them to their core. The very concept of ‘deep learning’, central to progress in AI, clearly impinges on the purpose of universities, and may create new competition for them.
If done right, AI can augment and empower what universities already do; but continuing their missions of research, teaching and external engagement will require fundamental reassessment and transformation. Are universities up to the task?
This article was originally posted on the World Economic Forum. Click here to view the original.
Top 3 Robotic Applications in Primary Food Processing
Primary processing involves handling raw food products, which are cleaned, sorted, chopped, packaged, etc. Some foods, like raw vegetables, will only undergo primary processing before they are packaged for the consumer. Other foods will undergo secondary processing before packaging.
Until quite recently, robotic processing at this stage has been limited or non-existent. Raw foods are variable in size, weight, and shape, which makes it difficult for robots to handle them. However, recent developments in sensing and soft gripping have made it possible for robots to handle many raw foods.
1. Robotic Butchery
Butchery is a very difficult task to automate. Every animal carcass is different. A skilled butcher will adapt each cut to the shape and position of bones and meat. Some butchery tasks are simpler to automate than others. For example, high-volume chicken leg deboning is an established part of the meat processing industry.
Beef butchery has traditionally been very difficult to automate. Recently, beef manufacturer JBS has started looking for ways to introduce robots into its factories. Parts of the process are very dangerous for human workers: rib cutting, for example, involves operating a high-speed circular saw for several hours. JBS has managed to automate this action using robot manipulators and various vision sensors. The application has improved safety and product consistency.
2. Fruit and Vegetable Pick and Place
Fruits and vegetables are challenging to handle with a robot due to their variable sizes and shapes. They also require delicate handling to avoid damage. For these reasons they have traditionally been handled by human workers. However, recent developments in gripping technologies look to change all that. Soft Robotics Inc has introduced a flexible gripper which can handle very delicate foods, even individual lettuce leaves!
Another example is Lacquey’s gripper, which uses paddles to lift soft fruits and vegetables.
3. Robotic Cutting and Slicing
Some cutting and slicing tasks are easy to automate. For example, even kitchen food processors can slice vegetables into uniform shapes. Robots are not needed for this type of simple automation.
For more advanced cutting and slicing, however, the food industry has relied on human workers, but robotics is starting to make its way into the industry. Fish cutting, for example, involves detecting and removing defects from the fish as well as cutting fillets to uniform shapes and sizes.
Top 3 Robotic Applications in Secondary Food Processing
Secondary processing involves handling products which have already undergone primary processing. Robots have been used for several applications for a long time, particularly pick and place. However, recent developments have opened the door to even more advanced applications.
1. Product Pick and Place
You may be familiar with the high speed delta robots which are used to move food products around a production line. If not, here is a video:
This is an example of secondary processing pick and place. It is distinct from the vegetable pick and place mentioned above because the products are more uniform in shape and size. Uniform foods are much easier to handle robotically, so this application has been available in the food industry for many years.
2. Cake Decorating
One impressive application is robotic cake decoration. This involves using a robotic arm much like a 3D printer to pipe icing onto a cake. The Deco-Bot from Unifiller can pipe hand-drawn decorations onto cakes on a moving conveyor.
Cake cutting can also be done robotically, like the Katana waterjet cutting robot which can cut out intricate shapes in cakes using high pressure water.
3. Pizza Making
Artisan food producers sometimes worry that adding robots to their process will make their products less “hand-made.” However, Silicon Valley pizza producer Zume is showing how robot-made products can still look like they have the human touch. Its pizzeria uses two robots: a delta robot to spread the tomato sauce and an ABB manipulator to tend the pre-baking ovens. While the system is far from fully automated, Zume’s goal is to make pizza delivery a labor-free business.
Finally… Washing Up!
Contaminated food causes 48 million people in the USA to become sick annually. Robotic food processing has the potential to reduce this, by removing human workers from parts of the process, but this is only possible if the robots themselves do not cause contamination.
One of the more challenging issues for food automation is the fact that every piece of machinery must be thoroughly cleaned to avoid contamination. Robot manufacturers have been working to make their robot casings smoother, with better ingress ratings and no loose wires. This allows them to be thoroughly washed down at the end of each cycle.
In this system from JMP Automation, the two robots wash down the workcell with high-pressure water, and even wash down each other: