Germany reportedly intends to acquire the Northrop Grumman MQ-4C Triton high-altitude surveillance drone, according to a story in Sueddeutsche Zeitung. In 2013, Germany cancelled a similar program to acquire Northrop Grumman’s RQ-4 Global Hawk, a surveillance drone on which the newer Triton is based, due to cost overruns. The Triton is a large, long-endurance system that was originally developed for maritime surveillance by the U.S. Navy. (Reuters)
The U.S. Army released a report outlining its strategy for obtaining and using unmanned ground vehicles. The Robotics and Autonomous Systems strategy outlines short, medium, and long-term goals for the service’s ground robot programs. The Army expects a range of advanced unmanned combat vehicles to be fielded in the 2020 to 2030 timeframe. (IHS Jane’s 360)
The U.S. Air Force announced that there are officially more jobs available for MQ-1 Predator and MQ-9 Reaper pilots than for any manned aircraft pilot position. Following a number of surges in drone operations, the service had previously struggled to recruit and retain drone pilots. The Air Force is on track to have more than 1,000 Predator and Reaper pilots operating its fleet. (Military.com)
At FlightGlobal, Dominic Perry writes that France’s Dassault is not concerned that the U.K. decision to leave the E.U. will affect a plan to develop a combat drone with BAE Systems.
At the Los Angeles Times, Bryce Alderton looks at how cities in California are addressing the influx of drones with new regulations.
At CBS News, Larry Light looks at how Bill Gates has reignited a debate over taxes on companies that use robots.
In an interview with the Wall Street Journal, Andrew Ng and Neil Jacobstein argue that artificial intelligence will bring about significant changes to commerce and society in the next 10 to 15 years.
In testimony before the House Armed Services Committee’s subcommittee on seapower, panelists urged the U.S. Navy to develop and field unmanned boats and railguns. (USNI News)
At DefenseTech.org, Richard Sisk looks at how a U.S.-made vehicle-mounted signals “jammer” is helping Iraqi forces prevent ISIS drone attacks in Mosul.
In a Drone Radio Show podcast, Steven Flynn discusses why prioritizing drone operators who comply with federal regulations is important for the drone industry.
At ABC News, Andrew Greene examines how a push by the Australian military to acquire armed drones has reignited a debate over targeted killings.
At Smithsonian Air & Space, Tim Wright profiles the NASA High Altitude Shuttle System, a glider drone that is being used to test communications equipment for future space vehicles.
Researchers at Virginia Tech are flying drones into crash-test dummies to evaluate the potential harm that a drone could cause if it hits a human. (Bloomberg)
Meanwhile, researchers at École Polytechnique Fédérale de Lausanne are developing flexible multi-rotor drones that absorb the impact of a collision without breaking. (Gizmodo)
Recent satellite images of Russia’s Gromov Flight Research Institute appear to show the country’s new Orion, a medium-altitude long-endurance military drone. (iHLS)
The Fire Department of New York used its tethered multi-rotor drone for the first time during an apartment fire in the Bronx. (Crain’s New York)
The Michigan State Police Bomb Squad used an unmanned ground vehicle to inspect the interior of two homes that were damaged by a large sinkhole. (WXYZ)
A video posted to YouTube appears to show a woman in Washington State firing a gun at a drone that was flying over her property. (Huffington Post)
Meanwhile, a bill being debated in the Oklahoma State Legislature would remove civil liability for anybody who shoots a drone down over their private property. (Ars Technica)
An Arizona man who leads an anti-immigration vigilante group is using a drone to patrol the U.S. border with Mexico in search of undocumented crossings. (Voice of America)
A man who attempted to use a drone to smuggle drugs into a Scottish prison has been sentenced to five years in prison. (BBC)
Industry Intel
The Turkish military has taken delivery of six Bayraktar TB-2 military drones, two of which are armed, for its air campaigns against ISIL and Kurdish forces. (Defense News)
General Atomics Aeronautical Systems awarded Hughes Network Systems a contract for satellite communications for the U.K.’s Predator B drones. (Space News)
Schiebel awarded CarteNav Solutions a contract for its AIMS-ISR software for the S-100 Camcopter unmanned helicopters destined for the Royal Australian Navy. (Press Release)
Defence Research and Development Canada awarded Ontario Drive & Gear a $1 million contract for trials of the Atlas J8 unmanned ground vehicle. (Canadian Manufacturing)
Deveron UAS will provide Thompsons, a subsidiary of Lansing Trade Group and The Andersons, with drone data for agricultural production through 2018. (Press Release)
Precision Vectors Aerial selected the Silent Falcon UAS for its beyond visual line-of-sight operations in Canada. (Shephard Media)
Rolls-Royce won a grant from Tekes, a Finnish government research funding agency, to continue developing remote and autonomous shipping technologies. (Shephard Media)
Israeli drone manufacturer BlueBird is submitting an updated MicroB UAV system for the Indian army small UAV competition. (FlightGlobal)
A Romanian court has suspended a planned acquisition of Aeronautics Defense Systems Orbiter 4 drones for the Romanian army. (FlightGlobal)
Deere & Co.—a.k.a. John Deere—announced that it will partner with Kespry, a drone startup, to market drones for the construction and forestry industries. (TechCrunch)
Intel announced plans to acquire Israel-based Mobileye, a developer of vision technology used in autonomous driving applications, for $15.3 billion. Mobileye share prices jumped from $47 to $61 (the tender offering price is $63.54) on the news, a 30% premium. The purchase marks the largest acquisition of an Israeli hi-tech company ever.
This transaction jumpstarts Intel’s efforts to enter the emerging autonomous driving marketplace, an arena quite different from Intel’s present business. The process of designing a chip and bringing it to market involves multiple levels of safety checks and approvals as well as incorporation into car companies’ design plans – a process that often takes 4 to 5 years – which is why it makes sense to acquire a company already versed in those activities. According to Frost & Sullivan, cars with Level 2 and Level 3 automated systems are in production today; Intel wants to be a strategic partner going forward to fully automated and driverless Level 4 and Level 5 cars.
Mobileye is a pioneer in the development of vision systems for on-board driving assistance, providing data for decision-making applications such as Mobileye’s Adaptive Cruise Control, Lane Departure Warning, Forward Collision Warning, Headway Monitoring, High Beam Assist and more. Mobileye technology is already included in BMW 5-Series, 6-Series and 7-Series, Volvo S80, XC70 and V70, Buick Lucerne, and Cadillac DTS and STS models.
Last year, Intel reorganized and created a new Autonomous Driving Division, which included strategic partnerships with, and investments in, Delphi, Mobileye and several smaller companies involved in chipmaking and sensing. With this acquisition, Intel gains the ability to offer automakers a larger package of the components they will need as vehicles become autonomous, and perhaps gains ground on its competitors in the field: NXP Semiconductors, Freescale Semiconductor, Cypress Semiconductor, and STMicroelectronics, the company that makes Mobileye’s chips.
Mobileye’s newest chip, the EyeQ4, designed for computer vision processing in ADAS applications, is a low-power supercomputer on a chip. The design features are described in this article by Imagination Technologies.
Bottom line:
“They’re paying a huge premium in order to catch up, to get into the front of the line, rather than attempt to build from scratch,” said Mike Ramsey, an analyst with technology researcher Gartner, in a Bloomberg Technology article.
You probably know the Sphero robot: a small, ball-shaped robot. If you have one, you may also know that it can be controlled using ROS by installing the Sphero ROS packages developed by Melonee Wise on your computer and connecting to the robot over the computer’s Bluetooth.
Now, you can use the ROS Development Studio to create ROS control programs for that robot, testing as you go by using the integrated simulation.
The ROS Development Studio (RDS) provides an off-the-shelf simulation of Sphero in a maze environment. The simulation provides the same interface as the ROS module created by Melonee, so you can develop and test your programs in the simulated environment and, once they work properly, transfer them to the real robot.
We created the simulation to teach ROS to the students of the Robot Ignite Academy. They have to learn enough ROS to get the Sphero out of the maze using odometry and the IMU.
Using the simulation
To use the Sphero simulation on RDS, go to rds.theconstructsim.com and sign in. If you select the Public simulations, you will quickly spot the Sphero simulation.
Press the red Play button. A new screen will appear with details about the simulation, asking which launch file you want to launch. The main.launch file selected by default is the correct one, so just press Run.
After a few seconds the simulation will appear, together with the development environment for creating and testing programs for Sphero.
On the left-hand side you have a notebook containing information about the robot and how to program it with ROS. The notebook contains just some examples, but you can complete or modify it as you wish. It is an IPython notebook and follows that standard, so you are free to modify it, add new information, and so on. Any change you make to the notebook is saved with the simulation in your private area of RDS, so you can come back later and launch it with your modifications.
The code included in the notebook is directly executable: select a code cell (single click on it) and press the small play button at the top of the notebook. The code will then run and control the simulated Sphero for a few time steps (remember to have the simulation running, i.e. its Play button activated, to see the robot move).
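As a concrete illustration, a notebook cell that drives the simulated Sphero forward for a few seconds could look like the minimal rospy sketch below. This is our own example, not part of the notebook; the /cmd_vel topic name and the speed value are assumptions for this kind of simulation, so check the notebook or rostopic list for the exact topic your setup exposes.

# Minimal sketch: drive the simulated Sphero forward for ~5 seconds.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('sphero_forward_test')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)  # assumed topic name

cmd = Twist()
cmd.linear.x = 0.2                     # forward speed in m/s (assumed value)

rate = rospy.Rate(10)                  # publish at 10 Hz
start = rospy.Time.now()
while not rospy.is_shutdown() and rospy.Time.now() - start < rospy.Duration(5.0):
    pub.publish(cmd)
    rate.sleep()

pub.publish(Twist())                   # publish a zero command to stop the robot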
In the center area you can see the IDE, the environment where you write your code. There you can browse all the packages related to the simulation, as well as any other packages you may create.
On the right-hand side you can see the simulation and, beneath it, the shell. The simulation shows the Sphero robot as well as the maze environment. In the shell you can issue commands on the computer that runs the simulation. For instance, you can use the shell to launch the keyboard controller and move the Sphero around. Try typing the following:
$ roslaunch sphero_gazebo keyboard_teleop.launch
You should now be able to move the robot around the maze by pressing keys on the keyboard (instructions are provided on screen).
You can also launch Rviz there, and then watch the robot, its frames, and any other information you may want about the robot. Type the following:
$ rosrun rviz rviz
Then press the red screen icon at the bottom-left of the screen (named the graphical tools). A new tab should appear, showing Rviz loading. After a while, you can configure Rviz to show the information you want.
There are many ways you can configure the screen to provide more focus to what interests you the most.
To end this post, I would like to point out that you can download the simulation to your computer at any time by right-clicking on the directories and selecting Download. You can also clone The Construct simulations repository to download it (among other available simulations).
Today’s self-driving car isn’t exactly autonomous – the driver has to be able to take over in a pinch, and therein lies the roadblock researchers are trying to overcome. Automated cars are hurtling towards us at breakneck speed, with all-electric Teslas already running limited autopilot systems on roads worldwide and Google trialling its own autonomous pod cars.
However, before we can reply to emails while being driven to work, we have to have a foolproof way to determine when drivers can safely take control and when it should be left to the car.
‘Even in a limited number of tests, we have found that humans are not always performing as required,’ explained Dr Riender Happee, from Delft University of Technology in the Netherlands, who is coordinating the EU-funded HFAuto project to examine the problem and potential solutions.
‘We are close to concluding that the technology always has to be ready to resolve the situation if the driver doesn’t take back control.’
But in these car-to-human transitions, how can a computer decide whether it should hand back control?
‘Eye tracking can indicate driver state and attention,’ said Dr Happee. ‘We’re still to prove the practical usability, but if the car detects the driver is not in an adequate state, the car can stop in the safety lane instead of giving back control.’
Next level
It’s all a question of the level of automation. According to the scale of US-based standards organisation SAE International, Level 1 automation already exists in the form of automated braking and self-parking.
Level 4 & 5 automation, where you punch in the destination and sit back for a nap, is still on the horizon.
But we’ll soon reach Level 3 automation, where drivers can hand over control in situations like motorway driving and let their attention wander, as long as they can safely intervene when the car asks them to.
HFAuto’s 13 PhD students have been researching this human-machine transition challenge since 2013.
Backed with Marie Skłodowska-Curie action funding, the students have travelled Europe for secondments, to examine carmakers’ latest prototypes, and to carry out simulator and on-road tests of transition takeovers.
Alongside further trials of their transition interface, HFAuto partner Volvo has already started testing 100 highly automated Level 3 cars on Swedish public roads.
Another European research group is approaching the problem with a self-driving system that uses external sensors together with cameras inside the cab to monitor the driver’s attentiveness and actions.
Blink
‘Looking at what’s happening in the scene outside of the cars is nothing without the perspective of what’s happening inside the car,’ explained Dr Oihana Otaegui, head of the Vicomtech-IK4 applied research centre in San Sebastián, Spain.
She coordinates the work as part of the EU-funded VI-DAS project. The idea is to avoid high-risk transitions by monitoring factors like a driver’s gaze, blinking frequency and head pose — and combining this with real-time on-road factors to calculate how much time a driver needs to take the wheel.
Its self-driving system uses external cameras as affordable sensors, collecting data for the underlying artificial intelligence system, which tries to understand road situations like a human would.
VI-DAS is also studying real accidents to discern challenging situations where humans fail and using this to help train the system to detect and avoid such situations.
The group aims to have its first interface prototype working by September, with iterated prototypes appearing at the end of 2018 and 2019.
Dr Otaegui says the system could have potential security sector uses given its focus on creating artificial intelligence perception in any given environment, and hopes it could lead to fully automated driving.
‘It could even go down the path of Levels 4 and 5, depending on how well we can teach our system to react — and it will indeed be improving all the time we are working on this automation.’
The question of transitions is so important because it has an impact on liability – who is responsible in the case of an accident.
It’s clear that Level 2 drivers can be held liable if they cause a fender bender, while carmakers will take the rap once Level 4 is deployed. However, with Level 3 transitions, liability remains a burning question.
HFAuto’s Dr Happee believes the solution lies in specialist insurance options that will emerge.
‘Insurance solutions are expected (to emerge) where a car can be bought with risk insurance covering your own errors, and those which can be blamed on carmakers,’ he said.
Yet it goes further than that. Should a car choose to hit pedestrians in the road, or swerve into the path of an oncoming lorry, killing its occupants?
‘One thing coming out of our discussions is that no one would buy a car which will sacrifice its owner for the lives of others,’ said Dr Happee. ‘So it comes down to making these as safe as possible.’
The five levels of automation:
Level 1 – Driver Assistance: the car can either steer or regulate speed on its own.
Level 2 – Partial Automation: the vehicle can handle both steering and speed selection on its own in specific controlled situations, such as on a motorway.
Level 3 – Conditional Automation: the vehicle can be instructed to handle all aspects of driving, but the driver needs to be on standby to intervene if needed.
Level 4 – High Automation: the vehicle can be instructed to handle all aspects of driving, even if the driver is not available to intervene.
Level 5 – Full Automation: the vehicle handles all aspects of driving, all the time.
Beginning in 2006 beekeepers became aware that their honeybee populations were dying off at increasingly rapid rates. Scientists are also concerned about the dwindling populations of monarch butterflies. Researchers have been scrambling to come up with explanations and an effective strategy to save both insects or replicate their pollination functions in agriculture.
Although the Plan Bee drones pictured above are just one SCAD (Savannah College of Art and Design) student’s concept for how a swarm of drones could pollinate an indoor crop, scientists are considering different options for dealing with the crisis, using modern technology to replace living bees with robotic ones. Researchers from the Wyss Institute and the School of Engineering and Applied Sciences at Harvard introduced the first RoboBees in 2013, and other scientists around the world have been researching and designing their own solutions ever since.
Honeybees pollinate almost a third of all the food we consume and, in the U.S., account for more than $15 billion worth of crops every year. Apples, berries, cucumbers and almonds rely on bees for their pollination. Butterflies also pollinate, but less efficiently than bees and mostly they pollinate wildflowers.
The National Academy of Sciences said:
“Honey bees enable the production of no fewer than 90 commercially grown crops as part of the large, commercial, beekeeping industry that leases honey bee colonies for pollination services in the United States.
Although overall honey bee colony numbers in recent years have remained relatively stable and sufficient to meet commercial pollination needs, this has come at a cost to beekeepers who must work harder to counter increasing colony mortality rates.”
Florida and California have been hit especially hard by decreasing bee colony populations. In 2006, California produced nearly twice as much honey as the next state, but in 2011 California’s honey production fell by nearly half. The recent severe drought in California has become an additional factor driving both its honey yield and bee numbers down, as less rain means fewer flowers available to pollinate.
In the U.S., the Obama Administration created a task force which developed The National Pollinator Health Strategy plan to:
Restore honey bee colony health to sustainable levels by 2025.
Increase Eastern monarch butterfly populations to 225 million butterflies by year 2020.
Restore or enhance seven million acres of land for pollinators over the next five years.
For this story, I wrote to the EPA specialist for bee pollination asking whether funding was continuing under the Trump Administration or whether the program itself was to be continued. No answer.
Japan’s National Institute of Advanced Industrial Science and Technology scientists have invented a drone that transports pollen between flowers using horsehair coated in a special sticky gel. And scientists at the Universities of Sheffield and Sussex (UK) are attempting to produce the first accurate model of a honeybee brain, particularly those portions of the brain that enable vision and smell. Then they intend to create a flying robot able to sense and act as autonomously as a bee.
Bottom Line:
As novel and technologically interesting as these inventions may be, their economics will need to come close to the present costs of pollination. Or, as biologist Dave Goulson said to a Popular Science reporter, “Even if bee bots are really cool, there are lots of things we can do to protect bees instead of replacing them with robots.”
Saul Cunningham, of the Australian National University, echoed that sentiment, arguing that today’s concepts are far from economically feasible:
“If you think about the almond industry, for example, you have orchards that stretch for kilometres and each individual tree can support 50,000 flowers,” he says. “So the scale on which you would have to operate your robotic pollinators is mind-boggling.”
“Several more financially viable strategies for tackling the bee decline are currently being pursued including better management of bees through the use of fewer pesticides, breeding crop varieties that can self-pollinate instead of relying on cross-pollination, and the use of machines to spray pollen over crops.”
The National Science Foundation (NSF) announced a $6.1 million, five-year award to accelerate fundamental research on wireless communication and networking technologies through the foundation’s Platforms for Advanced Wireless Research (PAWR) program.
Through the PAWR Project Office (PPO), award recipients US Ignite, Inc. and Northeastern University will collaborate with NSF and industry partners to establish and oversee multiple city-scale testing platforms across the United States. The PPO will manage nearly $100 million in public and private investments over the next seven years.
“NSF is pleased to have the combined expertise from US Ignite, Inc. and Northeastern University leading the project office for our PAWR program,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering. “The planned research platforms will provide an unprecedented opportunity to enable research in faster, smarter, more responsive, and more robust wireless communication, and move experimental research beyond the lab — with profound implications for science and society.”
Over the last decade, the use of wireless, internet-connected devices in the United States has nearly doubled. As the momentum of this exponential growth continues, the need for increased capacity to accommodate the corresponding internet traffic also grows. This surge in devices, including smartphones, connected tablets and wearable technology, places an unprecedented burden on conventional 4G LTE and public Wi-Fi networks, which may not be able to keep pace with the growing demand.
NSF established the PAWR program to foster use-inspired, fundamental research and development that will move beyond current 4G LTE and Wi-Fi capabilities and enable future advanced wireless networks. Through experimental research platforms that are at the scale of small cities and communities and designed by the U.S. academic and industry wireless research community, PAWR will explore robust new wireless devices, communication techniques, networks, systems and services that will revolutionize the nation’s wireless systems. These platforms aim to support fundamental research that will enhance broadband connectivity and sustain U.S. leadership and economic competitiveness in the telecommunications sector for many years to come.
“Leading the PAWR Project Office is a key component of US Ignite’s mission to help build the networking foundation for smart communities,” said William Wallace, executive director of US Ignite, Inc., a public-private partnership that aims to support ultra-high-speed, next-generation applications for public benefit. “This effort will help develop the advanced wireless networks needed to enable smart and connected communities to transform city services.”
Establishing the PPO with this initial award is the first step in launching a long-term, public-private partnership to support PAWR. Over the next seven years, PAWR will take shape through two multi-stage phases:
Design and Development. The PPO will assume responsibility for soliciting and vetting proposals to identify the platforms for advanced wireless research and work closely with sub-awardee organizations to plan the design, development, deployment and initial operations of each platform.
Deployment and Initial Operations. The PPO will establish and manage each platform and document best practices as each platform progresses through its lifecycle.
“We are delighted that our team of wireless networking researchers has been selected to take the lead of the PAWR Project Office in partnership with US Ignite, Inc.,” said Dr. Nadine Aubry, dean of the college of engineering and university distinguished professor at Northeastern University. “I believe that PAWR, by bringing together academia, industry, government and communities, has the potential to make a transformative impact through advances spanning fundamental research and field platforms in actual cities.”
The PPO will work closely with NSF, industry partners and the wireless research community in all aspects of PAWR planning, implementation and management. Over the next seven years, NSF anticipates investing $50 million in PAWR, combined with approximately $50 million in cash and in-kind contributions from over 25 companies and industry associations. The PPO will disperse these investments to support the selected platforms.
Additional information can be found on the PPO webpage.
This announcement will also be highlighted this week during the panel discussion, “Wireless Network Innovation: Smart City Foundation,” at the South by Southwest conference in Austin, Texas.
UgCS is easy-to-use software for planning and flying UAV survey missions. It supports almost any UAV platform, provides convenient tools for area and linear surveys, and enables direct drone control. What’s more, UgCS enables professional land-survey mission planning using photogrammetry techniques.
How to plan a photogrammetry mission with UgCS
Standard land-surveying photogrammetry mission planning with UgCS can be divided into the following steps:
Obtain input data
Plan the mission
Deploy ground control points
Fly the mission
Geotag the images
Process the data
Import the map to UgCS (optional)
Step one: Obtain input data
Firstly, to reach the desired result, input settings have to be defined:
Required GSD (ground sampling distance – size of single pixel on ground),
Survey area boundaries,
Required forward and side overlap.
GSD and area boundaries are usually defined by the customer’s requirements for output material parameters, for example by scale and resolution of digital map. Overlap should be chosen according to specific conditions of surveying area and requirements of data processing software.
Each data processing package (e.g., Pix4D, Agisoft Photoscan, DroneDeploy, Acute3D) has specific requirements for side and forward overlap on different surfaces. To choose correct values, please refer to the documentation of the chosen software. In general, 75% forward and 60% side overlap is a good choice. Overlap should be increased for areas with few visual cues, for example deserts or forests.
Often, aerial photogrammetry beginners are excited about the option to produce digital maps with extremely high resolution (1-2cm/pixel), and to use very small GSD for mission planning. This is very bad practice. Small GSD will result in longer flight time, hundreds of photos for each acre, tens of hours of processing and heavy output files. GSD should be set according to the output requirements of the digital map.
Other limitations can also apply. For example, suppose a GSD of 10cm/pixel is required and the survey is planned with a Sony A6000 camera. Based on that GSD and the camera’s parameters, the flight altitude would have to be about 510 meters. In most countries, the maximum allowed altitude for UAVs (without special permission) is limited to 120m/400ft AGL (above ground level). Taking the maximum allowed altitude into account, the largest achievable GSD in this case would be no more than about 2.3cm.
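For a quick sanity check of those numbers, the relationship between GSD, flight altitude, pixel pitch and focal length can be worked out in a few lines of Python. This is our own illustrative sketch, not part of UgCS; the ~3.9 µm pixel pitch and 20 mm lens below are assumed values for an A6000-class setup, so substitute your camera’s real parameters:

# Rough GSD / flight-altitude check (assumed camera parameters, not UgCS output).
PIXEL_PITCH_M = 3.92e-6    # physical size of one sensor pixel, in metres (assumed)
FOCAL_LENGTH_M = 0.020     # lens focal length, in metres (assumed 20 mm lens)

def altitude_for_gsd(gsd_m):
    """Flight altitude (m AGL) needed to achieve the requested GSD."""
    return gsd_m * FOCAL_LENGTH_M / PIXEL_PITCH_M

def gsd_at_altitude(altitude_m):
    """GSD (m/pixel) obtained when flying at the given altitude."""
    return altitude_m * PIXEL_PITCH_M / FOCAL_LENGTH_M

print(altitude_for_gsd(0.10))    # ~510 m for a 10 cm/pixel GSD
print(gsd_at_altitude(120.0))    # ~0.0235 m, i.e. ~2.3 cm at the 120 m ceiling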
Step two: Plan your mission
Mission planning consists of two stages:
Initial planning,
Route optimisation.
-Initial planning:
The first step is to define the survey area using the UgCS Photogrammetry tool. The area can be set using visual cues on the underlying map or by entering the exact coordinates of its edges. As a result, the survey area is marked with yellow boundaries (Figure 1).
The next step is to set the GSD and overlap for the camera in the Photogrammetry tool’s settings window (Figure 2).
To take photos, define the camera control action in the Photogrammetry tool’s settings window (Figure 3). Set the camera-by-distance triggering action with its default values.
At this point, initial route planning is completed. UgCS will automatically calculate photogrammetry route (see Figure 4).
-Route optimisation
To optimise the route, its calculated parameters should be known: altitude, estimated flight time, number of shots, etc.
Some of the route’s calculated information can be found in the elevation profile window. To access the elevation profile window (if it is not visible on screen), click the parameters icon on the route card (lower-right corner, see Figure 5) and select Show elevation from the drop-down menu:
The elevation profile window will present an estimated route length, duration, waypoint count and min/max altitude data:
To get the other calculated values, open the route log by clicking the route status indicator: the green check-mark (upper-right corner, see Figure 7) on the route card:
Using these parameters, the route can be optimised to be more efficient and safe.
-Survey line direction
By default, UgCS will trace survey lines from south to north. In most cases, however, it is more efficient to fly parallel to the longest boundary of the survey area. To change the survey line direction, edit the direction angle field in the Photogrammetry tool. In this example, changing the angle to 135 degrees reduces the number of passes from five (Figure 4) to four (Figure 8) and the route length from 1.3km to 1km.
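To see why the direction matters, here is an illustrative back-of-the-envelope estimate (not UgCS’s internal algorithm) of how the number of passes depends on which side of a rectangular area the survey lines are spread across. The field dimensions and line spacing below are made-up values:

import math

def passes(width_across_track_m, line_spacing_m):
    """Number of parallel survey lines needed to cover the given width."""
    return math.ceil(width_across_track_m / line_spacing_m) + 1

LINE_SPACING = 60.0                      # from side overlap and image footprint (assumed)
AREA_LONG, AREA_SHORT = 300.0, 220.0     # hypothetical field dimensions in metres

# Flying parallel to the short side: lines are spread along the long side.
print(passes(AREA_LONG, LINE_SPACING))   # 6 passes
# Flying parallel to the long side: lines are spread along the short side.
print(passes(AREA_SHORT, LINE_SPACING))  # 5 passes, i.e. a shorter route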
-Altitude type
The UgCS Photogrammetry tool can trace the route at a constant altitude above ground level (AGL) or above mean sea level (AMSL). Please refer to your data processing software’s requirements as to which altitude tracking method it recommends.
In the UgCS team’s experience, the choice of altitude type depends on the desired result. For an orthophotomap (the standard aerial land-survey output format) it is better to choose AGL, to ensure a constant GSD for the entire map. If the aim is to produce a DEM or a 3D reconstruction, use AMSL, so that the data processing software has more data to correctly determine ground elevation from the photos and can provide higher-quality output.
In this case, UgCS will calculate flight altitude based on the lowest point of the survey area.
If AGL is selected in the Photogrammetry tool’s settings, UgCS will calculate the altitude for each waypoint. In this case, however, terrain following will be rough if no “additional waypoints” are added (see Figure 10).
Therefore, if AGL is used, add some “additional waypoints” flags and UgCS will calculate a flight plan that follows the elevation profile accordingly (see Figure 11).
-Speed
In general, increasing flight speed minimises flight time, but high speed combined with a long camera exposure can result in blurred images. In most cases 10m/s is the best choice.
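As a rough check (our own example, with assumed shutter speed and GSD values, not UgCS recommendations), the blur caused by forward motion is simply speed multiplied by exposure time, and it should stay well below one ground pixel:

def motion_blur_m(speed_m_s, exposure_s):
    """Distance the drone travels while the shutter is open, in metres."""
    return speed_m_s * exposure_s

GSD = 0.023                                 # ~2.3 cm/pixel, as in the altitude example (assumed)
blur = motion_blur_m(10.0, 1.0 / 1000.0)    # 10 m/s flight speed, 1/1000 s shutter (assumed)
print(blur, blur / GSD)                     # 0.01 m, i.e. roughly half a ground pixel of blur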
-Camera control method
UgCS supports 3 camera control methods (actions):
Make a shot (trigger camera) in waypoint,
Make shot every N seconds,
Make shot every N meters.
Not all autopilots support all three camera control options. For example, the (quite old) DJI A2 supports all three, but newer DJI drones (from the Phantom 3 up to the M600) support only triggering in waypoints and by time. DJI has promised to implement triggering by distance, but it is not available yet.
Here are some benefits and drawbacks for all three methods:
In conclusion:
Trigger in waypoints should be preferred when possible
Trigger by time should be used only if no other method is possible
Trigger by distance should be used when triggering in waypoints is not possible
To select the triggering method in the UgCS Photogrammetry tool, use one of the three available icons:
Set camera mode
Set camera by time
Set camera by distance
-Gimbal control
Drones with an integrated gimbal, e.g. the DJI Phantom 3, Phantom 4, Inspire, M100 or M600, can control the camera position as part of an automatic route plan.
It is advisable to set the camera to the nadir position at the first waypoint, and back to the horizontal position before landing, to protect the lens from potential damage.
To set camera position, select the waypoint preceding the photogrammetry area and click set camera attitude/zoom (Figure 12) and enter “90” in the “Tilt” field (Figure 13).
As described previously, this waypoint should be of the Stop&Turn type, otherwise the drone could skip this action.
To set camera to horizontal position, select last waypoint of survey route and click set camera attitude/zoom and enter “0” in the “Tilt” field.
-Turn types
Most autopilots or multirotor drones support different turn types in waypoints. Most popular DJI drones have three turn-types:
Stop and Turn: the drone flies to the waypoint, stops there precisely, and then flies on to the next waypoint.
Bank Turn: the drone flies at constant speed from one waypoint to the next without stopping.
Adaptive Bank Turn: almost the same behaviour as Bank Turn mode (Figure 13), but the actual flight path follows the planned route more accurately.
It is advisable not to use Bank Turn for photogrammetry missions. The drone treats a Bank Turn waypoint as a “recommended destination”: it will fly towards it but will almost never pass exactly through it. Because the drone does not pass through the waypoint, no action will be executed there, meaning the camera will not be triggered, etc.
Adaptive Bank Turn should be used with caution, because the drone can still miss waypoints and, again, no camera triggering will be initiated.
Sometimes Adaptive Bank Turn has to be used to achieve a shorter flight time than Stop and Turn. When using Adaptive Bank Turns, it is recommended to use overshot (see below) for the photogrammetry area.
-Overshot
Initially, overshot was implemented for fixed-wing (airplane) drones in order to give them enough space to manoeuvre a U-turn.
Overshot can be set in photogrammetry tool to add an extra segment to both ends of each survey line.
In the example (Figure 15) it can be seen that UgCS added 40m segments to both ends of each survey line (compare with Figure 8).
Adding overshot is useful for copter-UAVs in two situations:
When Adaptive Bank Turns are used (or a similar mode on non-DJI drones), adding overshot will increase the chance that the drone enters the survey line precisely and that the camera control action is triggered. The UgCS team recommends specifying an overshot approximately equal to the distance between the parallel survey lines.
When Stop and Turn is used in combination with the action to trigger the camera in waypoints, there is a possibility that the drone will start rotating towards the next waypoint before taking the shot, which can result in photos with the wrong orientation or blurred photos. To avoid that, set a shorter overshot, for example 5m. Don’t specify too short a value (< 3m), because some drones may ignore waypoints that are too close together.
-Takeoff point
It is important to check the takeoff area on site before flying any mission! To explain best practice for setting the takeoff point, let’s first discuss an example of how it should not be done. Suppose the takeoff point in our example mission (Figure 17) is the point marked with the airplane icon, and the pilot uploads the route on the ground and starts an automatic mission with automatic takeoff.
Most drones in automatic takeoff mode will climb to a low altitude of about 3-10 meters and then fly straight towards the first waypoint. Other drones will fly towards the first waypoint straight from the ground. Looking closely at the example map (Figure 17), you can see some trees between the takeoff point and the first waypoint. In this example, the drone will most likely not reach a safe altitude and will hit the trees.
The surroundings are not the only factor that affects takeoff planning. Drone manufacturers can change a drone’s climb behaviour in firmware, so after firmware updates it is recommended to check the drone’s automatic takeoff mode.
Another very important consideration is that most small UAVs use relative altitude for mission planning. Because altitude is counted relative to the first waypoint, this is a second reason why the actual takeoff point should be near the first waypoint, and on the same terrain level.
The UgCS team recommends placing the first waypoint as close as possible to the actual takeoff point and specifying a safe takeoff altitude (≈30m will be above any trees in most situations, see Figure 18). This is the only method that guarantees a safe takeoff for any mission. It also protects against odd drone behaviour, unpredictable firmware updates, etc.
-Entry point to the survey grid
In the previous example (see Figure 18), notice that after adding the takeoff point, the route’s survey grid entry point changed. This is because, when an additional waypoint precedes the photogrammetry area, UgCS plans the survey grid to start from the corner nearest to that previous waypoint.
To change the entry point of the survey grid, set an additional waypoint close to the desired starting corner (see Figure 19).
-Landing point
If no landing point is added outside the photogrammetry area, the drone will fly to the last waypoint after the survey mission and hover there. There are two options for landing:
Take manual control over the drone and fly to landing point manually,
Activate the Return Home command in UgCS or from Remote Controller (RC).
In situations where the radio link with the drone is lost, for example if the survey area is large or there are problems with the remote controller, one of the following can occur, depending on the drone and its settings:
The drone will return to the home location automatically after losing the radio link with the ground station,
The drone will fly to the last waypoint of the survey area and hover for as long as the battery allows, then either perform an emergency landing or try to fly to the home location.
The recommendation is to add an explicit landing point to the route in order to avoid relying on unpredictable drone behaviour or settings.
If the drone doesn’t support automatic landing, or the pilot prefers to land manually, place the route’s last waypoint over the planned landing point, at an altitude comfortable for manual descent and landing and above any obstacles in the surrounding area. In general, 30m is the best choice.
-Action execution
The Photogrammetry tool has a magic parameter, “Action Execution”, with three possible values:
Every point
At start
Forward passes
This parameter defines how and where camera actions specified for photogrammetry tool will be executed.
The most useful option for photogrammetry/survey missions is Forward passes: the drone will take photos only on the survey lines and will not take excess photos on the perpendicular connecting lines.
-Complex survey areas
UgCS enables photogrammetry/survey mission planning for irregular areas: any number of photogrammetry areas can be combined in one route, avoiding the need to split the area into separate routes.
For example, suppose a mission has to be planned for two fields joined in a T-shape. If these two fields are marked as one photogrammetry area, the route will not be optimal, regardless of the direction of the survey lines.
If the survey area is marked as two photogrammetry areas within one route, survey lines for each area can be optimised individually (see Figure 21).
Step three: deploy ground control points
Ground control points are mandatory if the survey output map has to be precisely aligned to coordinates on Earth.
There is much discussion about whether ground control points are necessary when a drone is equipped with a Real Time Kinematic (RTK) GPS receiver with centimeter-level accuracy. RTK is useful, but the drone coordinates are not in themselves sufficient, because precise map alignment requires the image center coordinates.
Data processing software such as Agisoft Photoscan, DroneDeploy, Pix4D, Icarus OneButton and others will produce very accurate maps using geotagged images, but the real precision of the map will not be known without ground control points.
Conclusion: ground control points have to be used to create a survey-grade result. For a map with only approximate precision, it is sufficient to rely on RTK GPS and the capabilities of the data processing software.
Step four: fly your mission
For a carefully planned mission, flying it is the most straightforward step. Mission execution differs according to the type of UAV and equipment used, so it will not be described in detail in this article (please refer to your equipment’s and UgCS’s documentation).
Important issues before flying:
In most countries there are strict regulations for UAV usage. Always comply with the regulations! Usually these rules can be found on the website of the local aviation authority.
In some countries special permission for any kind of aerial photo/video shooting is needed. Please check local regulations.
In most cases, missions are planned before arriving at the flying location (e.g., in the office or at home) using satellite imagery from Google Maps, Bing, etc. Before flying, always check the actual circumstances at the location. You may need to adjust the takeoff/landing points, for example to avoid tall obstacles (e.g., trees, masts, power lines) in the survey area.
Step five: image geotagging
Image geotagging is optional if ground control points were used, but almost any data processing software will require less time to process geotagged images.
Some of the latest and professional drones with integrated cameras can geotag images automatically during flight. In other cases images can be geotagged in UgCS after flight.
Very important: UgCS uses the telemetry log received from the drone via the radio channel to extract the drone’s altitude at any given moment (i.e. when the pictures were taken). To geotag pictures using UgCS, ensure robust telemetry reception during the flight.
For detailed information on how to geotag images using UgCS, refer to the UgCS User Manual.
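To make the idea concrete, here is an illustrative sketch (our own example, not UgCS’s geotagging implementation) of how positions from a telemetry log could be matched to photo capture times. It assumes a hypothetical telemetry CSV with timestamp,lat,lon,alt columns and ISO-8601 times, and writes a filename,lat,lon,alt CSV of the kind many processing packages can import:

import csv
import bisect
from datetime import datetime

def load_telemetry(path):
    """Read a telemetry CSV with timestamp,lat,lon,alt columns (assumed format)."""
    rows = []
    with open(path) as f:
        for r in csv.DictReader(f):
            t = datetime.fromisoformat(r["timestamp"]).timestamp()
            rows.append((t, float(r["lat"]), float(r["lon"]), float(r["alt"])))
    rows.sort()
    return rows

def interpolate(telemetry, t):
    """Linearly interpolate lat/lon/alt at time t between the two nearest samples."""
    times = [row[0] for row in telemetry]
    i = min(max(bisect.bisect_left(times, t), 1), len(telemetry) - 1)
    (t0, *p0), (t1, *p1) = telemetry[i - 1], telemetry[i]
    w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
    return [a + w * (b - a) for a, b in zip(p0, p1)]

def write_geotags(telemetry_csv, photos, out_csv):
    """photos is a list of (filename, ISO-8601 capture time) pairs."""
    telemetry = load_telemetry(telemetry_csv)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "lat", "lon", "alt"])
        for name, capture_time in photos:
            t = datetime.fromisoformat(capture_time).timestamp()
            writer.writerow([name] + interpolate(telemetry, t))

# Hypothetical usage:
# write_geotags("telemetry.csv", [("IMG_0001.JPG", "2017-03-14T10:15:02")], "geotags.csv")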
Step six: data processing
For data processing, use third party software or services available on the market.
From the UgCS team’s experience, the most powerful and flexible software is Agisoft Photoscan (http://www.agisoft.com/), but sometimes it requires a lot of user input to get the necessary results. The most uncomplicated solution for users is the online service DroneDeploy (https://www.dronedeploy.com/). All other software packages and services fall somewhere between these two in terms of complexity and power.
Step seven (optional): import created map to UgCS
Should the mission need to be repeated in the future, UgCS can import a GeoTiff file as a map layer and use it for mission planning. More detailed instructions can be found in the UgCS User Manual. Figure 22 shows the result: a map created using the UgCS Photogrammetry tool and imported back into UgCS as a GeoTiff file.
Yesterday, the UK government announced their budget plans to invest in robotics, artificial intelligence, driverless cars, and faster broadband. The spending commitments include:
£16m to create a 5G hub to trial the forthcoming mobile data technology. In particular, the government wants there to be better mobile network coverage over the country’s roads and railway lines
£200m to support local “full-fibre” broadband network projects that are designed to bring in further private sector investment
£270m towards disruptive technologies to put the UK “at the forefront” including cutting-edge artificial intelligence and robotics systems that will operate in extreme and hazardous environments, including off-shore energy, nuclear energy, space and deep mining; batteries for the next generation of electric vehicles; and biotech.
£300m to further develop the UK’s research talent, including through creating an additional 1,000 PhD places.
Several experts in the robotics community agree that things are moving in the right direction; however, more needs to happen if the UK is to remain competitive in the robotics sector:
“The UK understand the very real positive impact that RAS [robotics & autonomous systems] will have on our society from now, of all time. It continues to see the big picture and today’s announcement by the Chancellor is a clear indication of that. We can have better roads, cleaner cities, healthier oceans and bodies, safer skies, deeper mines, better jobs and more opportunity. That’s what machines are for.”
“We are at a real inflection point in the development of autonomous technology. The UK has a number of nascent world class companies in the area of self-driving vehicles, which have a huge potential to change the world, whilst creating jobs and producing exportable UK goods and services. We have a head start and now we need to take advantage of it.” [from FT]
“Some of the great robotics companies of the future are being launched by British entrepreneurs and the support announced in today’s budget will strengthen their impact and global competitiveness. We’re currently seeing strong appetite from private investors to back locally-grown robotics businesses and this money will help bring even more interest in this space”
“This is welcome news for the many research organisations developing robotics applications. As a leading UK robotics research group specialising in extreme and challenging environments, we welcome the allocation of significant funding in this field as part of the Government’s evolving Industrial Strategy. RACE and the rest of the robotics R&D sector are looking forward to working with industry to fully utilise this funding.”
“Robotics and AI is set to be a driving force in increasing productivity, but also in solving societal and environmental challenges. It’s opening new frontiers in off-shore and nuclear energy, space and deep mining. Investment from government will be key in helping the UK stay at the forefront of this field.” [from BBC]
“We lost our best machine learning group to Amazon just recently. The money means there will be more resources for universities, which may help them retain their staff. But it’s not nearly enough for all of the disruptive technologies being developed in the UK. The government says it want this to be the leading robotics country in the world, but Google and others are spending far more, so it’s ultimately chicken feed by comparison.” [from BBC]
“I’m pleased by the additional funding, and, in fact, my group is a partner in a new £4.6M EPSRC grant to develop robots for nuclear decommissioning announced last week.
But having just returned from Tokyo (from AI in Asia: AI for Social Good), I’m well aware that other countries are investing much more heavily than the UK. China was for instance described as an emerging powerhouse of AI. A number of colleagues at that meeting also made the same point as Noel, that universities are haemorrhaging star AI/robotics academics to multi-national companies with very deep pockets.”
“I, like many others, was pleased to hear more money going into robotics and AI research, but I was disappointed – though completely unsurprised – to see nothing about how to restructure the economy to deal with the consequences of increasing research into and use of robots and AI. Hammond’s blunder on the relationship of productivity to wages – and it can’t be seen as anything other than a blunder – means that he doesn’t even seem to appreciate that there is a problem.
The truth is that increased automation means fewer jobs and lower wages and this needs to be addressed with some concrete measures. There will be benefits to society with increased automation, but we need to start thinking now (and taking action now) to ensure that those benefits aren’t solely economic gain for the already-wealthy. The ‘robot dividend’ needs to be shared across society, as it can have far-reaching consequences beyond economics: improving our quality of life, our standard of living, education, health and accessibility.”
“America has the American Manufacturing Initiative which, in 2015, was expanded to establish Fraunhofer-like research facilities around the US (on university campuses) that focus on particular aspects of the science of manufacturing.
Robotics were given $50 million of the $500 million for the initiative and one of the research facilities was to focus on robotics. Under the initiative, efforts from the SBIR, NSF, NASA and DoD/DARPA were to be coordinated in their disbursement of fundings for science in robotics. None of these fundings comes anywhere close to the coordinated funding programs and P-P-Ps found in the EU, Korea and Japan, nor the top-down incentivized directives of China’s 5-year plans. Essentially American robotic funding is (and has been) predominantly entrepreneurial with token support from the government.
In the new Trump Administration, there is no indication of any direction nor continuation (funding) of what little existing programs we have. At a NY Times editorial board sit-down with Trump after his election, he was quoted as saying that “Robotics is becoming very big and we’re going to do that. We’re going to have more factories. We can’t lose 70,000 factories. Just can’t do it. We’re going to start making things.” Thus far there is no followup to those statements nor has Trump hired replacements for the top executives at the Office of Science and Technology Policy, all of which are presently vacant.”
Whether an A.I. ought to be granted patent rights is a timely question given the increasing proliferation of A.I. in the workplace. Daimler-Benz has tested self-driving trucks on public roads,[1] A.I. technology has been applied effectively in medical advancements, psycholinguistics, tourism and food preparation,[2] a film written by an A.I. recently debuted online,[3] and A.I. has even found its way into the legal profession.[4] There is also growing interest in the question of whether an A.I. can enjoy copyright rights, with several articles having already been published on the subject.[5]
In 2014 the U.S. Copyright Office updated its Compendium of U.S. Copyright Office Practices with, inter alia, a declaration that the Office will not register “works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”[6]
To grant or not to grant: A human prerequisite?
One might argue that Intellectual Property (IP) laws and IP Rights were designed to exclusively benefit human creators and inventors[7] and thus would exclude non-humans from holding IP rights. The U.S. Copyright Office’s December 2014 update to the Compendium of U.S. Copyright Office Practices that added requirements for human authorship[8] certainly adds weight to this view.
However, many IP laws were drafted well before the emergence of A.I. and, in any case, do not explicitly require that a creator or inventor be ‘human.’ The World Intellectual Property Organization’s (WIPO’s) definition of intellectual property speaks of ‘creations of the mind’[9] but does not specify whether it must be a human mind. Similarly, provisions in laws promoting innovation and IP rights, such as the so-called Intellectual Property Clause of the U.S. Constitution,[10] also do not explicitly mention a ‘human’ requirement. Finally, it ought to be noted that while the U.S. Copyright Office declared it would not register works produced by a machine or mere mechanical process without human creative input, it did not explicitly state that an A.I. could not have copyright rights.[11]
Legal personhood
One might also argue that an A.I. is not human, is therefore not a legal person and, thus, is not entitled to apply for, much less be granted, a patent. New Zealand’s Patents Act, for example, refers to a patent ‘applicant’ as a ‘person’.[12]
Yet this line of argument could be countered by an assertion that a legal ‘person’ need not be ‘human’ as is the case of a corporation and there are many examples of patents assigned to corporations.[13]
The underlying science
To answer the question of patent rights for an A.I. we need to examine how modern A.I. systems work and, as an example, consider how machine translation applications such as Google Translate function.
While such systems are marketed as if they were “magic brains that just understand language”,[14] the problem is that there is currently no definitive scientific description of language[15] or of language processing[16]. Thus, such language translation systems cannot function by mimicking the processes of the brain.
Rather, they employ a scheme known as Statistical Machine Translation (SMT), whereby online systems search the Internet for documents that have already been translated by human translators – for example books, material from organizations like the United Nations, or websites. The system scans these texts for statistically significant patterns and, once it finds a pattern, uses it to translate similar text in the future.[17] This, as Jaron Lanier and others note, means that the people who created the translations, and who make translation systems possible, are not paid for their contributions.[18]
Many modern A.I. systems are essentially big data models. They operate by defining a real-world problem to be solved, then conceiving a conceptual model to solve it, typically a statistical analysis that falls into one of three categories: regression, classification or missing data. Data is then fed into the model and used to refine and calibrate it. As the model is refined, it is used to guide the collection of further data and, after a number of rounds of refinement, finally results in a model with some predictive capability.[19]
Big data models can be used to discover patterns in large data sets[20] but also can, as in the case of translation systems, exploit statistically significant correlations in data.
None of this, however, suggests that current A.I. systems are capable of inventive or creative capacity.
Patentability?
So to get a patent, an invention must:
Be novel, in that it does not form part of the prior art[21]
Involve an inventive step, in that it is not obvious to a person skilled in the art[22]
Be useful[23]
Not fall into an excluded category, which can include discoveries, presentations of information, and mental processes or rules or methods for performing a mental act.[24]
The reason discoveries are not inventions is tied to the issue of obviousness, as noted by Buckley J. in Reynolds v. Herbert Smith & Co., Ltd,[25] who stated:
“Discovery adds to the amount of human knowledge, but it does so only by lifting the veil and disclosing something which before had been unseen or dimly seen. Invention also adds to human knowledge, but not merely by disclosing something. Invention necessarily involves also the suggestion of an act to be done, and it must be an act which results in a new product, or a new result, or a new process, or a new combination for producing an old product or an old result.”
Therefore, in order to get a patent, an A.I. must first be capable of producing a patentable invention. But, given current technology, is this even possible?
A thought exercise
Consider the following:
You believe that as a person exercises more, he/she consumes more oxygen, and you have tasked your A.I. with analyzing the relationship between oxygen consumption and exercise.
You provide the A.I. with a model suggesting that oxygen consumption increases with physical exertion and data that shows oxygen consumption among people performing little, moderate (e.g. walking briskly) and heavy exercise (e.g. running).
The A.I. reviews the data, refines the model, collects more data and comes up with a predictive model (e.g. when a person exercises X amount, he/she consumes Y amount of oxygen, and when the person doubles his/her exertion, his/her oxygen consumption rate triples).
As this is essentially a statistical regression, the model will not always be completely accurate in its predictions due to differences between individuals (i.e. for some persons the model will predict oxygen consumption fairly accurately, while for others its results will be far off).
However, this particular model has another, more fundamental limitation: it fails to consider that a human cannot exercise beyond a certain point, because his/her heart would be incapable of sustaining such levels of exertion[26] or because over-exercise may trigger an unexpected reaction (e.g. death).[27]
If one were to feed this model data from persons who have collapsed or died during exercise (and who thus, in the latter case, consume no oxygen), would the A.I. be able to ‘think outside its box’ and:
Question the cause of these data discrepancies and have the initiative to conduct further investigation?
Note and correct the limitation in the original model (which would require a significant amendment)?
Or would it simply alter the existing model by changing the slope of the regression line (as the sketch after this list illustrates)?
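A minimal sketch of this scenario, assuming the A.I.’s ‘model’ is a plain least-squares regression and using invented numbers, suggests the likely answer: the fitting procedure absorbs the anomalous data by tilting the line rather than restructuring the model.

```python
# Toy sketch of the thought exercise; all numbers are invented and the
# "A.I." is reduced to an ordinary least-squares fit.
import numpy as np

# Ordinary observations: oxygen consumption rises with exertion.
exertion = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
oxygen   = np.array([2, 4, 6, 8, 10, 12, 14, 16], dtype=float)

slope, intercept = np.polyfit(exertion, oxygen, deg=1)
print(f"original model: oxygen = {slope:.2f}*exertion + {intercept:.2f}")

# Now feed in the anomalous cases: people who collapsed at high exertion
# and consumed no oxygen at all.
exertion_all = np.append(exertion, [9.0, 10.0])
oxygen_all   = np.append(oxygen,   [0.0, 0.0])

slope2, intercept2 = np.polyfit(exertion_all, oxygen_all, deg=1)
print(f"'refined' model: oxygen = {slope2:.2f}*exertion + {intercept2:.2f}")

# The fit simply changes the slope and intercept to accommodate the
# discrepancy. Nothing here asks *why* the anomalies occurred or proposes
# a structurally different model (e.g. one with an upper physiological
# limit); that amendment would have to come from the human modeller.
```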
SMT and other A.I. systems have similar limitations. In the case of SMT, once the system is built, linguistic knowledge becomes necessary to achieve perfect translation at all grammatical levels.[28] SMT systems presently cannot translate the cultural components of a source text into the target language; they provide very literal, word-for-word translations that do not recognize idioms, slang, or terms that are not in the machine’s memory, and they lack human creativity.[29] Addressing these shortcomings would require a change to the underlying machine translation model, and the question arises whether this would have to be done by the human creators of the SMT or whether the SMT itself would be able to make the necessary corrections and adjustments to the model.
If the SMT, or the A.I. in the earlier example, is unable to improve on and innovate beyond its existing model, does it have the creative or inventive capacity to conceive an invention that is truly inventive? And if either the SMT or the A.I. can produce something that appears novel and inventive, then given the nature of how A.I. presently operates (i.e. as big data models), would such a product simply be the result of an analysis of existing data to uncover hitherto unseen relationships, in other words, a discovery?
Returning to the original question about patent rights for an A.I., perhaps the question we should ask is not whether an A.I. should be able to get a patent, but whether an A.I., given current technology, can create a patentable invention in the first place. If the answer to that question is ‘no’, then the question of granting patent rights to an A.I. is moot.
[6] Copyright Office, Compendium of U.S. Copyright Office Practices (3d ed. 2014). § 313.2
[7] Hettinger argued that ‘the most powerful intuition supporting property rights is that people are entitled to the fruits of their labor’. See: Edwin Hettinger, ‘Justifying Intellectual Property’, Philosophy & Public Affairs, Vol. 18, No. 1, Winter 1989, pp. 31-52.
[8] Copyright Office, Compendium of U.S. Copyright Office Practices (3d ed. 2014). § 306.
[9] According to WIPO, ‘Intellectual property (IP) refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce.’
[10] Article I, Section 8, Clause 8 of the United States Constitution states: The Congress shall have Power To…. promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
[11] According to the U.S. Copyright Office, ‘Copyright exists from the moment the work is created. You will have to register, however, if you wish to bring a lawsuit for infringement of a U.S. work’
[13] See, for example, U.S. Patent 5,953,441, ‘Fingerprint sensor having spoof reduction features and related methods’, which was assigned to Harris Corporation.
[15] This was noted by Jaron Lanier, who delivered the keynote address at the opening of the Conference on the Global Digital Content Market, held April 20-22, 2016 at WIPO Headquarters in Geneva, Switzerland.
International Women’s Day is prompting discussion about the lack of diversity and role models in STEM, and about the potential negative outcomes of bias and stereotyping in robotics and AI. Let’s balance the words with positive actions. Here’s what we can all do to support women in robotics and AI, and thus improve diversity, boost innovation, and reduce skills shortages in robotics and AI.
Join WomeninRobotics.org – a network of women working in robotics (or who aspire to work in robotics). We are a global discussion group supporting local events that bring women together for peer networking. We recognize that lack of support and mentorship in the workplace holds women back, particularly if there is only one woman in an organization/company.
Although the main group is only for women, we are going to start something for male ‘Allies’ or ‘Champions’. So men, you can join women in robotics too! Women need champions, and while it would be ideal to have an equal number of women in leadership roles, until then companies can improve their hiring and retention by having visible and vocal male allies. We all need mentors as our careers progress.
Women also need visibility and high-profile projects for their careers to progress on par. One way of improving that is to showcase the achievements of women in robotics. Read and share all four years’ worth of our annual “25 Women in Robotics you need to know about” lists. That’s more than 100 women already, because some of the entries are groups. (There have always been a lot of women on the core team at Robohub.org, so we love showing our support.) Our next edition will come out on October 10, 2017 to celebrate Ada Lovelace Day.
Change starts at the top of an organization. It’s very hard to hire women if you don’t have any women, or if they can’t see pathways for advancement in your organization. However, there are many things you can do to improve your hiring practices. Some are surprisingly simple, yet effective. I’ve collected a list and posted it at Silicon Valley Robotics – How to hire women.
And you can invest in women entrepreneurs. Studies show that you get a higher rate of return, and a higher likelihood of success, from investments in female founders; yet, proportionately, the investment they receive is much less. You don’t need to be a VC to invest in women, either. Kiva.org is matching loans today, and $25 can empower an entrepreneur anywhere in the world. #InvestInHer
And our next Silicon Valley/ San Francisco Women in Robotics event will be on March 22 at SoftBank Robotics – we’d love to see you there – or in support!