
UgCS photogrammetry technique for UAV land surveying missions

 


UgCS is easy-to-use software for planning and flying UAV drone-survey missions. It supports almost any UAV platform, providing convenient tools for areal and linear surveys and enabling direct drone control. What’s more, UgCS enables professional land survey mission planning using photogrammetry techniques.

How to plan a photogrammetry mission with UgCS

Standard land-survey photogrammetry mission planning with UgCS can be divided into the following steps:

  1. Obtain input data
  2. Plan the mission
  3. Deploy ground control points
  4. Fly the mission
  5. Geotag the images
  6. Process the data
  7. Import the map into UgCS (optional)

Step one: Obtain input data

First, to reach the desired result, the input parameters have to be defined:

  • Required GSD (ground sampling distance – size of single pixel on ground),
  • Survey area boundaries,
  • Required forward and side overlap.

GSD and area boundaries are usually defined by the customer’s requirements for the output material, for example the scale and resolution of the digital map. Overlap should be chosen according to the specific conditions of the survey area and the requirements of the data processing software.

Each data processing package (e.g., Pix4D, Agisoft Photoscan, Dronedeploy, Acute 3d) has specific requirements for side and forward overlap on different surfaces. To choose correct values, refer to the documentation of the chosen software. In general, 75% forward and 60% side overlap is a good choice. Overlap should be increased for areas with few visual cues, for example deserts or forests.

Aerial photogrammetry beginners are often excited by the option to produce digital maps with extremely high resolution (1-2cm/pixel) and plan missions with a very small GSD. This is bad practice: a small GSD results in longer flight time, hundreds of photos per acre, tens of hours of processing and heavy output files. GSD should be set according to the output requirements of the digital map.

Other limitations can occur. For example, suppose a GSD of 10cm/pixel is required and the mission is designed to use a Sony A6000 camera. Based on this GSD and the camera’s parameters, the flight altitude would have to be 510 meters. In most countries, the maximum allowed UAV altitude (without special permission) is limited to 120m/400ft AGL (above ground level). Taking the maximum allowed altitude into account, the maximum possible GSD in this case is no more than 2.3cm.
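These altitude figures follow from simple camera geometry: GSD = altitude × sensor width / (focal length × image width). A quick sketch of the calculation; the 20mm focal length is an assumption for illustration, so substitute your actual lens and sensor values:

```python
# Relationship between GSD, flight altitude and camera geometry:
#   GSD = altitude * sensor_width / (focal_length * image_width)

def altitude_for_gsd(gsd_m, sensor_w_m, focal_m, image_w_px):
    """Flight altitude (m) needed to achieve the requested GSD."""
    return gsd_m * focal_m * image_w_px / sensor_w_m

def gsd_at_altitude(alt_m, sensor_w_m, focal_m, image_w_px):
    """GSD (m/px) obtained when flying at the given altitude."""
    return alt_m * sensor_w_m / (focal_m * image_w_px)

SENSOR_W = 0.0235   # Sony A6000 APS-C sensor width: 23.5 mm
FOCAL    = 0.020    # 20 mm lens (assumed for illustration)
IMG_W    = 6000     # image width in pixels

print(altitude_for_gsd(0.10, SENSOR_W, FOCAL, IMG_W))  # ~510 m for 10 cm/px
print(gsd_at_altitude(120, SENSOR_W, FOCAL, IMG_W))    # ~0.0235 m/px, i.e. ~2.3 cm at 120 m
```

Running the numbers both ways reproduces the example above: 10cm/pixel demands roughly 510m of altitude, while the 120m legal ceiling caps the GSD near 2.3cm.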

Step two: Plan your mission

Mission planning consists of two stages:

  • Initial planning,
  • Route optimisation.

-Initial planning:

The first step is to set the survey area using the UgCS Photogrammetry tool. The area can be set using visual cues on the underlying map or using the exact coordinates of its edges. As a result, the survey area is marked with yellow boundaries (Figure 1).

Figure 1: Setting the survey area

The next step is to set the GSD and overlap for the camera in the Photogrammetry tool’s settings window (Figure 2).

Figure 2: Setting camera’s Ground Sampling Distance and overlapping

To take photos, define the camera control action in the Photogrammetry tool’s settings window (Figure 3): set the camera-by-distance triggering action with default values.

Figure 3: Setting camera’s control action

At this point, initial route planning is complete. UgCS will automatically calculate the photogrammetry route (see Figure 4).

Figure 4: Calculated photogrammetry survey route before optimisation

-Route optimisation

To optimise the route, its calculated parameters should be known: altitude, estimated flight time, number of shots, etc.

Part of the route’s calculated information can be found in the elevation profile window. To access it (if it is not visible on screen), click the parameters icon on the route card (lower-right corner, see Figure 5) and select show elevation from the drop-down menu:

Figure 5: Accessing elevation window from Route cards Parameters settings

The elevation profile window will present an estimated route length, duration, waypoint count and min/max altitude data:

Figure 6: Route values in elevation profile window

To see the other calculated values, open the route log by clicking the route status indicator: the green check-mark in the upper-right corner of the route card (see Figure 7):

Figure 7: Route card and status indicator, Route log

Using these parameters, the route can be optimised to be more efficient and safer.

-Survey line direction

By default, UgCS traces survey lines from south to north, but in most cases it is more efficient to fly parallel to the longest boundary of the survey area. To change the survey line direction, edit the direction angle field in the Photogrammetry tool. In the example, changing the angle to 135 degrees reduces the number of passes from five (Figure 4) to four (Figure 8) and shortens the route from 1.3km to 1km.
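The effect of line direction on pass count can be estimated from the image footprint and side overlap: the spacing between parallel lines is the footprint width times (1 − side overlap), and the pass count follows from the area width measured across the flight direction. A rough sketch with made-up numbers, not UgCS’s internal planner:

```python
import math

def line_spacing(footprint_w_m, side_overlap):
    """Distance (m) between parallel survey lines."""
    return footprint_w_m * (1 - side_overlap)

def passes_needed(area_width_m, footprint_w_m, side_overlap):
    """Number of survey lines needed to cover the area width."""
    return math.ceil(area_width_m / line_spacing(footprint_w_m, side_overlap))

# 60 m image footprint with 60% side overlap -> 24 m between lines
print(line_spacing(60, 0.60))        # 24.0
# A 250 x 100 m field: flying across the long side needs ~11 passes,
# flying parallel to it only ~5 - hence the direction-angle optimisation.
print(passes_needed(250, 60, 0.60))  # 11
print(passes_needed(100, 60, 0.60))  # 5
```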

Figure 8: Changed survey line angle to be parallel to longest boundary

-Altitude type

The UgCS Photogrammetry tool has the option to define how to trace the route with respect to altitude: at constant altitude above ground level (AGL) or above mean sea level (AMSL). Please refer to your data processing software’s requirements as to which altitude tracking method it recommends.

In the UgCS team’s experience, the choice of altitude type depends on the desired result. For an orthophotomap (the standard aerial land-survey output format) it is better to choose AGL, to ensure constant GSD across the entire map. If the aim is to produce a DEM or 3D reconstruction, use AMSL, so the data processing software has more data to determine ground elevation correctly from the photos and can provide higher-quality output.

Figure 9: Elevation profile with constant altitude above mean sea level (AMSL)

In this case, UgCS will calculate flight altitude based on the lowest point of the survey area.

If AGL is selected in the Photogrammetry tool’s settings, UgCS will calculate the altitude for each waypoint. In this case, however, terrain following will be rough if no “additional waypoints” are added (see Figure 10).

Figure 10: Elevation profile with AGL without additional waypoints

Therefore, if AGL is used, add some “additional waypoints” flags and UgCS will calculate a flight plan with elevation profile accordingly (see Figure 11).

Figure 11: Elevation profile with AGL with additional waypoints
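The AGL behaviour can be sketched numerically: each waypoint’s absolute (AMSL) altitude is the terrain elevation beneath it plus the requested height above ground, and the drone flies straight lines between those waypoints, which is why more waypoints give smoother terrain following. The terrain figures below are illustrative, not from any real survey:

```python
# With the AGL altitude type, each waypoint's absolute altitude is the
# terrain elevation at that point plus the requested height above ground.
def waypoint_altitudes(terrain_elevations_m, agl_m):
    return [e + agl_m for e in terrain_elevations_m]

terrain = [12.0, 18.5, 25.0, 21.0]      # sampled ground elevations (assumed), m
print(waypoint_altitudes(terrain, 50))  # [62.0, 68.5, 75.0, 71.0]
```

Sampling the terrain at only a few waypoints leaves the drone flying straight segments over any bumps in between; the “additional waypoints” flags add intermediate samples so the profile tracks the ground more closely.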

-Speed

In general, increasing flight speed minimises flight time, but high speed combined with a long camera exposure can result in blurred images. In most cases, 10m/s is the best choice.
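A quick way to sanity-check speed against exposure: during the exposure the image smears across the ground by roughly speed × exposure time, and keeping that smear below one GSD keeps the blur under a pixel. The exposure and GSD values below are illustrative assumptions:

```python
# Ground smear during one exposure is approximately speed * exposure time.
def ground_blur_m(speed_ms, exposure_s):
    return speed_ms * exposure_s

gsd = 0.03  # 3 cm/px (assumed)
for exposure in (1/2000, 1/1000, 1/250):
    blur = ground_blur_m(10, exposure)  # at the recommended 10 m/s
    ok = "under one pixel" if blur < gsd else "too much blur at 10 m/s"
    print(f"1/{round(1/exposure)} s exposure -> {blur*100:.1f} cm smear, {ok}")
```

At 10m/s, a 1/1000s exposure smears about 1cm, which is fine for a 3cm GSD, while a 1/250s exposure smears about 4cm and would blur the image.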

-Camera control method

UgCS supports 3 camera control methods (actions):

  1. Make a shot (trigger camera) in waypoint,
  2. Make shot every N seconds,
  3. Make shot every N meters.

Not all autopilots support all three camera control options. For example, the (quite old) DJI A2 supports all three, but newer models (from the Phantom 3 up to the M600) support only triggering in waypoints and by time. DJI has promised to implement triggering by distance, but it is not available yet.

Here are some benefits and drawbacks for all three methods:

Table 1: Benefits and drawbacks of camera triggering methods

In conclusion:

  • Trigger in waypoints should be preferred when possible
  • Trigger by time should be used only if no other method is possible
  • Trigger by distance should be used when triggering in waypoints is not possible

To select the triggering method in the UgCS Photogrammetry tool, use one of the three available icons:

  • Set camera mode
  • Set camera by time
  • Set camera by distance
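For triggering by distance or by time, the required spacing between shots follows from the along-track image footprint and the forward overlap. A small sketch with assumed numbers; the actual footprint depends on your camera and altitude:

```python
# Shot spacing needed to achieve a given forward overlap.
def shot_distance_m(footprint_h_m, forward_overlap):
    """Distance (m) between consecutive shots along the survey line."""
    return footprint_h_m * (1 - forward_overlap)

def shot_interval_s(footprint_h_m, forward_overlap, speed_ms):
    """Time (s) between shots when triggering by time instead of distance."""
    return shot_distance_m(footprint_h_m, forward_overlap) / speed_ms

# 40 m along-track footprint, 75% forward overlap, 10 m/s flight speed:
print(shot_distance_m(40, 0.75))      # 10.0 m between shots
print(shot_interval_s(40, 0.75, 10))  # 1.0 s between shots
```

This also shows why triggering by time is fragile: the one-second interval above only yields the intended overlap if the drone actually holds 10m/s, e.g. with no headwind.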

-Gimbal control

Drones with an integrated gimbal, e.g., the DJI Phantom 3, Phantom 4, Inspire, M100 or M600, have the option to control camera position as part of an automatic route plan.

It is advisable to set the camera to nadir position at the first waypoint, and to horizontal position before landing to protect the lens from potential damage.

To set camera position, select the waypoint preceding the photogrammetry area and click set camera attitude/zoom (Figure 12) and enter “90” in the “Tilt” field (Figure 13).

Figure 12: Setting camera attitude
Figure 13: Setting camera position

This waypoint should be of the Stop&Turn type (see Turn types below), otherwise the drone could skip this action.

To set camera to horizontal position, select last waypoint of survey route and click set camera attitude/zoom and enter “0” in the “Tilt” field.

-Turn types

Most autopilots for multirotor drones support different turn types at waypoints. The most popular DJI drones have three:

  • Stop and Turn: the drone flies accurately to the waypoint, stops there, then flies on to the next waypoint.
  • Bank Turn: the drone flies at constant speed from one waypoint to the next without stopping.
  • Adaptive Bank Turn: almost the same behaviour as Bank Turn (Figure 14), but the actual flight path follows the planned route more accurately.

It is advisable not to use Bank Turn for photogrammetry missions. The drone interprets a Bank Turn waypoint as a “recommended destination”: it will fly in that direction but will almost never pass exactly through the waypoint. Because the drone does not pass through the waypoint, no action will be executed there, meaning the camera will not be triggered.

Adaptive Bank Turn should also be used with caution, because the drone can miss waypoints and, again, no camera triggering will be initiated.

Figure 14: Illustration of typical DJI drone trajectories for Bank Turn and Adaptive Bank Turn types

Sometimes the Adaptive Bank Turn type has to be used to achieve a shorter flight time than Stop and Turn allows. When using Adaptive Bank Turns, it is recommended to add an overshot (see below) to the photogrammetry area.

-Overshot

Initially, overshot was implemented for fixed-wing drones, to give them enough space to manoeuvre a U-turn.

Overshot can be set in photogrammetry tool to add an extra segment to both ends of each survey line.

Figure 15: Adding 40m overshot to both ends of each survey line

In the example (Figure 15), it can be seen that UgCS added 40m segments to both ends of each survey line (compare with Figure 8).

Adding overshot is useful for copter-UAVs in two situations:

  1. When Adaptive Bank Turns are used (or a similar method for non-DJI drones), adding overshot increases the chance that the drone will enter the survey line precisely and that the camera control action will be triggered. The UgCS team recommends specifying an overshot approximately equal to the distance between parallel survey lines.
  2. When Stop and Turn is used in combination with triggering the camera in waypoints, there is a possibility that the drone will start rotating towards the next waypoint before taking the shot, which can result in wrongly oriented or blurred photos. To avoid this, set a short overshot, for example 5m. Don’t specify too small a value (< 3m), because some drones ignore waypoints that are too close together.

Figure 16: Example of a blurred image taken by a drone rotating towards the next waypoint

-Takeoff point

It is important to check the takeoff area on site before flying any mission! To explain best practice for setting the takeoff point, first consider an example of how it should not be done. Suppose the takeoff point of our example mission (Figure 17) is the point marked with the airplane icon, and the pilot uploads the route on the ground with an automatic mission set for automatic takeoff.

Figure 17: Take-off point example

In automatic takeoff mode, most drones climb to a low altitude of about 3-10 meters and then fly straight towards the first waypoint. Other drones fly towards the first waypoint straight from the ground. Looking closely at the example map (Figure 17), there are trees between the takeoff point and the first waypoint. In this example, the drone will most likely not reach a safe altitude and will hit the trees.

The surroundings are not the only factor that affects takeoff planning. Drone manufacturers can change climb behaviour in firmware, so after a firmware update it is recommended to re-check the drone’s automatic takeoff mode.

Another very important consideration is that most small UAVs use relative altitude for mission planning. Because altitude is counted relative to the first waypoint, this is a second reason why the actual takeoff point should be near the first waypoint, and on the same terrain level.

The UgCS team recommends placing the first waypoint as close as possible to the actual takeoff point and specifying a safe takeoff altitude (≈30m will clear any trees in most situations, see Figure 18). This is the only method that guarantees a safe takeoff for any mission. It also protects against odd drone behaviour, unpredictable firmware updates, etc.

Figure 18: Route with safe take-off

-Entry point to the survey grid

In the previous example (see Figure 18), it can be noticed that after adding the takeoff point, the route’s survey grid entry point changed. This is because, when a waypoint is added before the photogrammetry area, UgCS plans to fly the survey grid starting from the corner nearest to that waypoint.

To change the entry point to survey grid, set additional waypoint close to the desired starting corner (see Figure 19).

Figure 19: Changing survey grid entry point by adding additional waypoint

-Landing point

If no landing point is added outside the photogrammetry area, the drone will fly to the last waypoint after the survey mission and hover there. There are two options for landing:

  1. Take manual control over the drone and fly to landing point manually,
  2. Activate the Return Home command in UgCS or from Remote Controller (RC).

In situations where the radio link with the drone is lost, for example if the survey area is large or there are problems with the remote controller, one of the following will occur, depending on the drone and its settings:

  • The drone returns to the home location automatically on losing the radio link with the ground station,
  • The drone flies to the last waypoint of the survey area and hovers for as long as battery capacity allows, then either performs an emergency landing or tries to fly to the home location.

The recommendation is to add an explicit landing point to the route, to avoid relying on unpredictable drone behaviour or settings.

If the drone doesn’t support automatic landing, or the pilot prefers to land manually, place the route’s last waypoint over the planned landing point, at an altitude comfortable for manual descent and above any obstacles in the surrounding area. In general, 30m is the best choice.

-Action execution

The Photogrammetry tool has a magic parameter, “Action Execution”, with three possible values:

  • Every point
  • At start
  • Forward passes

This parameter defines how and where camera actions specified for photogrammetry tool will be executed.

The most useful option for photogrammetry/survey missions is forward passes: the drone takes photos only on the survey lines and does not take excess photos on the perpendicular connecting lines.

-Complex survey areas

UgCS enables photogrammetry/survey mission planning for irregular areas: any number of photogrammetry areas can be combined in one route, avoiding splitting the area into separate routes.

For example, if a mission has to be planned for two fields connected in a T-shape, and these two fields are marked as one photogrammetry area, the route as a whole will not be optimal, regardless of the direction of the survey lines.

Figure 20: Complex survey area before optimisation

If the survey area is marked as two photogrammetry areas within one route, survey lines for each area can be optimised individually (see Figure 21).

Figure 21: Optimised survey flight passes for each part of a complex photogrammetry area

Step three: deploy ground control points

Ground control points are mandatory if the survey output map has to be precisely aligned to coordinates on Earth.

There is much discussion about the necessity of ground control points when a drone is equipped with a Real Time Kinematic (RTK) GPS receiver with centimeter-level accuracy. RTK is useful, but the drone’s coordinates are not in themselves sufficient, because precise map alignment requires the coordinates of the image centres.

Data processing software like Agisoft Photoscan, Dronedeploy, Pix4d, Icarus OneButton and others will produce very accurate maps from geotagged images, but the real precision of the map cannot be known without ground control points.

Conclusion: ground control points have to be used to achieve a survey-grade result. For a map with approximate precision, it is sufficient to rely on RTK GPS and the capabilities of the data processing software.

Step four: fly your mission

For a carefully planned mission, flying is the most straightforward step. Mission execution differs according to the type of UAV and equipment used, and therefore is not described in detail in this article (please refer to your equipment’s and UgCS’s documentation).

Important issues before flying:

  • In most countries there are strict regulations for UAV usage. Always comply with them! These rules can usually be found on the website of the local aviation authority.
  • In some countries, special permission is needed for any kind of aerial photo/video shooting. Please check local regulations.
  • Missions are usually planned before arriving at the flying location (e.g., in the office or at home) using satellite imagery from Google Maps, Bing, etc. Before flying, always check the actual circumstances at the location. Take-off/landing points may need to be adjusted, for example, to avoid tall obstacles (e.g., trees, masts, power lines) in the survey area.

Step five: image geotagging

Image geotagging is optional if ground control points were used, but almost any data processing software will require less time to process geotagged images.

Some of the latest and professional drones with integrated cameras can geotag images automatically during flight. In other cases images can be geotagged in UgCS after flight.

Very important: UgCS uses the telemetry log received from the drone via the radio channel to extract the drone’s position at any given moment (i.e. when each picture was taken). To geotag pictures using UgCS, ensure robust telemetry reception during the flight.
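A sketch of what geotagging from a telemetry log involves: each photo’s timestamp is matched against the logged positions and the coordinates are interpolated between the two nearest telemetry samples. The log layout below is a simplification for illustration, not the actual UgCS log format:

```python
import bisect

def geotag(photo_ts, telemetry):
    """telemetry: time-sorted list of (timestamp, lat, lon, alt) samples.
    Returns (lat, lon, alt) linearly interpolated at photo_ts."""
    times = [t for t, *_ in telemetry]
    i = bisect.bisect_left(times, photo_ts)
    if i == 0:                    # photo before the first sample
        return telemetry[0][1:]
    if i == len(times):           # photo after the last sample
        return telemetry[-1][1:]
    (t0, *p0), (t1, *p1) = telemetry[i - 1], telemetry[i]
    f = (photo_ts - t0) / (t1 - t0)
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))

log = [(0.0, 56.9700, 24.1300, 80.0),
       (2.0, 56.9702, 24.1304, 82.0)]
print(geotag(1.0, log))  # midpoint between the two samples
```

This also makes clear why telemetry gaps matter: a photo taken during a radio dropout can only be interpolated across the whole gap, so its tag degrades accordingly.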

For detailed information on how to geotag images using UgCS, refer to the UgCS User Manual.

Step six: data processing

For data processing, use third party software or services available on the market.

In the UgCS team’s experience, the most powerful and flexible software is Agisoft Photoscan (http://www.agisoft.com/), though it sometimes requires a lot of user input to get the necessary results. The most uncomplicated solution is the online service Dronedeploy (https://www.dronedeploy.com/). All other software packages and services fall somewhere between these two in terms of complexity and power.

Step seven (optional): import created map to UgCS

Should the mission need to be repeated in the future, UgCS enables importing a GeoTiff file as a map layer and using it for mission planning. More detailed instructions can be found in the UgCS User Manual. The result of importing a map created with the UgCS Photogrammetry tool as a GeoTiff file is shown in Figure 22.

Figure 22: Imported GeoTiff map as a layer. The map is the output of a photogrammetry survey mission planned with UgCS


Reactions from experts: Robotics and tech to receive funding boost from UK government

Yesterday, the UK government announced their budget plans to invest in robotics, artificial intelligence, driverless cars, and faster broadband. The spending commitments include:

  • £16m to create a 5G hub to trial the forthcoming mobile data technology. In particular, the government wants there to be better mobile network coverage over the country’s roads and railway lines
  • £200m to support local “full-fibre” broadband network projects that are designed to bring in further private sector investment
  • £270m towards disruptive technologies to put the UK “at the forefront” including cutting-edge artificial intelligence and robotics systems that will operate in extreme and hazardous environments, including off-shore energy, nuclear energy, space and deep mining; batteries for the next generation of electric vehicles; and biotech.
  • £300m to further develop the UK’s research talent, including creating an additional 1,000 PhD places.

The entire budget can be reviewed here.

Several experts in the robotics community agree that progress is shifting in the right direction; however, more needs to happen if the UK is to remain competitive in the robotics sector:

Prof Paul Newman, Founder, Oxbotica:

“The UK understand the very real positive impact that RAS [robotics & autonomous systems] will have on our society from now, of all time. It continues to see the big picture and today’s announcement by the Chancellor is a clear indication of that. We can have better roads, cleaner cities, healthier oceans and bodies, safer skies, deeper mines, better jobs and more opportunity. That’s what machines are for.”

Dr Graeme Smith, CEO, Oxbotica:

“We are at a real inflection point in the development of autonomous technology. The UK has a number of nascent world class companies in the area of self-driving vehicles, which have a huge potential to change the world, whilst creating jobs and producing exportable UK goods and services. We have a head start and now we need to take advantage of it.” [from FT]

Dominic Keen, Founder of Britbots:

“Some of the great robotics companies of the future are being launched by British entrepreneurs and the support announced in today’s budget will to strengthen their impact and global competitiveness.  We’re currently seeing strong appetite from private investors to back locally-grown robotics businesses and this money will help bring even more interest in this space”

Dr Rob Buckingham, Director of the UK Atomic Energy Authority’s RACE robotics centre:

“This is welcome news for the many research organisations developing robotics applications. As a leading UK robotics research group specialising in extreme and challenging environments, we welcome the allocation of significant funding in this field as part of the Government’s evolving Industrial Strategy. RACE and the rest of the robotics R&D sector are looking forward to working with industry to fully utilise this funding.”

Dr Sabine Hauert, University of Bristol:

“Robotics and AI is set to be a driving force in increasing productivity, but also in solving societal and environmental challenges. It’s opening new frontiers in off-shore and nuclear energy, space and deep mining. Investment from government will be key in helping the UK stay at the forefront of this field.” [from BBC]

Prof Noel Sharkey, University of Sheffield:

“We lost our best machine learning group to Amazon just recently. The money means there will be more resources for universities, which may help them retain their staff. But it’s not nearly enough for all of the disruptive technologies being developed in the UK. The government says it want this to be the leading robotics country in the world, but Google and others are spending far more, so it’s ultimately chicken feed by comparison.” [from BBC]

Prof Alan Winfield, UWE Bristol:

“I’m pleased by the additional funding, and, in fact, my group is a partner in a new £4.6M EPSRC grant to develop robots for nuclear decommissioning announced last week.

But having just returned from Tokyo (from AI in Asia: AI for Social Good), I’m well aware that other countries are investing much more heavily than the UK. China was for instance described as an emerging powerhouse of AI. A number of colleagues at that meeting also made the same point as Noel, that universities are haemorrhaging star AI/robotics academics to multi-national companies with very deep pockets.”

Michael Szollosy, Research Fellow at Sheffield Centre for Robotics, Dept of Psychology:

“I, like many others, was pleased to hear more money going into robotics and AI research, but I was disappointed – though completely unsurprised – to see nothing about how to restructure the economy to deal with the consequences of increasing research into and use of robots and AI. Hammond’s blunder on the relationship of productivity to wages – and it can’t be seen as anything other than a blunder – means that he doesn’t even seem to appreciate that there is a problem.

The truth is that increased automation means fewer jobs and lower wages and this needs to be addressed with some concrete measures. There will be benefits to society with increased automation, but we need to start thinking now (and taking action now) to ensure that those benefits aren’t solely economic gain for the already-wealthy. The ‘robot dividend’ needs to be shared across society, as it can have far-reaching consequences beyond economics: improving our quality of life, our standard of living, education, health and accessibility.”

Frank Tobe, Editor at The Robot Report:

“America has the American Manufacturing Initiative which, in 2015, was expanded to establish Fraunhofer-like research facilities around the US (on university campuses) that focus on particular aspects of the science of manufacturing.

Robotics were given $50 million of the $500 million for the initiative and one of the research facilities was to focus on robotics. Under the initiative, efforts from the SBIR, NSF, NASA and DoD/DARPA were to be coordinated in their disbursement of fundings for science in robotics. None of these fundings comes anywhere close to the coordinated funding programs and P-P-Ps found in the EU, Korea and Japan, nor the top-down incentivized directives of China’s 5-year plans. Essentially American robotic funding is (and has been) predominantly entrepreneurial with token support from the government.

In the new Trump Administration, there is no indication of any direction nor continuation (funding) of what little existing programs we have. At a NY Times editorial board sit-down with Trump after his election, he was quoted as saying that “Robotics is becoming very big and we’re going to do that. We’re going to have more factories. We can’t lose 70,000 factories. Just can’t do it. We’re going to start making things.” Thus far there is no followup to those statements nor has Trump hired replacements for the top executives at the Office of Science and Technology Policy, all of which are presently vacant.”

And finally, a few comments from the business sector on Twitter:

 

Should an artificial intelligence be allowed to get a patent?

Whether an A.I. ought to be granted patent rights is a timely question, given the increasing proliferation of A.I. in the workplace. Examples abound: Daimler-Benz has tested self-driving trucks on public roads;[1] A.I. technology has been applied effectively in medical advancements, psycholinguistics, tourism and food preparation;[2] a film written by an A.I. recently debuted online;[3] and A.I. has even found its way into the legal profession.[4] There is also growing interest in whether an A.I. can enjoy copyright rights, with several articles having already been published on the subject.[5]

In 2014, the U.S. Copyright Office updated its Compendium of U.S. Copyright Office Practices with, inter alia, a declaration that “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”[6]

To grant or not to grant: A human prerequisite?

One might argue that Intellectual Property (IP) laws and IP Rights were designed to exclusively benefit human creators and inventors[7] and thus would exclude non-humans from holding IP rights. The U.S. Copyright Office’s December 2014 update to the Compendium of U.S. Copyright Office Practices that added requirements for human authorship[8] certainly adds weight to this view.

However, many IP laws were drafted well before the emergence of A.I. and, in any case, do not explicitly require that a creator or inventor be ‘human.’ The World Intellectual Property Organization’s (WIPO’s) definition of Intellectual Property talks about creations of the mind[9] but does not specify whether it must be a human mind. Similarly, provisions in laws promoting innovation and IP rights, such as the so-called Intellectual Property Clause of the U.S. Constitution[10], do not explicitly mention a ‘human’ requirement. Finally, it should be noted that while the U.S. Copyright Office declared it would not register works produced by a machine or mere mechanical process without human creative input, it did not explicitly state that an A.I. could not have copyright rights.[11]

Legal personhood

One might also argue that an A.I. is not human, and therefore not a legal person, and thus not entitled to apply for, much less be granted, a patent. New Zealand’s Patents Act, for example, refers to a patent ‘applicant’ as a ‘person’.[12]

Yet this line of argument can be countered by noting that a legal ‘person’ need not be ‘human’, as in the case of a corporation, and there are many examples of patents assigned to corporations.[13]

The underlying science

To answer the question of patent rights for an A.I. we need to examine how modern A.I. systems work and, as an example, consider how machine translation applications such as Google Translate function.

While such systems are marketed as if they were “magic brains that just understand language”,[14] the problem is that there is currently no definitive scientific description of language[15] or language processing[16]. Thus, language translation systems cannot function by mimicking the processes of the brain.

Rather, they employ a scheme known as Statistical Machine Translation (SMT), whereby online systems search the Internet for documents that have already been translated by human translators – for example, books, material from organizations like the United Nations, or websites. The system scans these texts for statistically significant patterns, and once the computer finds a pattern, it uses it to translate similar text in the future.[17] This, as Jaron Lanier and others note, means that the people who created the translations and make translation systems possible are not paid for their contributions[18].

Many modern A.I. systems are largely big-data models. They operate by defining a real-world problem that needs to be solved, then conceiving a conceptual model to solve it – typically a statistical analysis that falls into one of three categories: regression, classification or missing data. Data is then fed into the model and used to refine and calibrate it. As the model is refined, it is used to guide the collection of further data and, after a number of rounds of refinement, finally results in a model with some predictive capability.[19]

Big data models can be used to discover patterns in large data sets[20] but also can, as in the case of translation systems, exploit statistically significant correlations in data.

None of this, however, suggests that current A.I. systems are capable of inventive or creative capacity.

Patentability?

So, to get a patent, an invention must:

  • Be novel, in that it does not form part of the prior art[21]
  • Involve an inventive step, in that it is not obvious to a person skilled in the art[22]
  • Be useful[23]
  • Not fall into an excluded category, which can include discoveries, presentations of information and mental processes or rules or methods for performing a mental act.[24]

Why discoveries are not inventions is tied to the issue of obviousness, as noted by Buckley J. in Reynolds v. Herbert Smith & Co., Ltd,[25] who stated:

“Discovery adds to the amount of human knowledge, but it does so only by lifting the veil and disclosing something which before had been unseen or dimly seen. Invention also adds to human knowledge, but not merely by disclosing something. Invention necessarily involves also the suggestion of an act to be done, and it must be an act which results in a new product, or a new result, or a new process, or a new combination for producing an old product or an old result.”

Therefore, in order to get a patent, an A.I. must first be capable of producing a patentable invention. But given current technology, is this even possible?

A thought exercise

Consider the following:

You believe that as a person exercises more, he/she consumes more oxygen, and you have tasked your A.I. with analyzing the relationship between oxygen consumption and exercise.

You provide the A.I. with a model suggesting that oxygen consumption increases with physical exertion, together with data showing oxygen consumption among people performing little, moderate (e.g. brisk walking) and heavy (e.g. running) exercise.

The A.I. reviews the data, refines the model, collects more data and arrives at a predictive model (e.g. when a person exercises X amount, he/she consumes Y amount of oxygen, and when the person doubles his/her exertion, his/her oxygen consumption rate triples).

As this is essentially a statistical regression, the model will not always be completely accurate in its predictions, due to differences between individuals (i.e. for some persons the model will predict oxygen consumption fairly accurately, while for others its results will be far off).

However, this particular model has another, more fundamental limitation – it fails to consider that a human cannot exercise beyond a certain point because his/her heart would be incapable of sustaining such levels of exertion[26] or because over-exercise may trigger an unexpected reaction (e.g. death).[27]

If one were to feed this model data from persons who have collapsed or died during exercise (and thus, in the latter case, no longer consume any oxygen), would the A.I. be able to ‘think outside its box’ and:

  • Question the cause of these data discrepancies and have the initiative to conduct further investigation?
  • Note and correct the limitation in the original model (which would require a significant amendment)?

Or would it simply alter the existing model by changing the slope of the regression line?
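The limitation can be made concrete with a toy sketch. All coefficients and the exertion ceiling below are hypothetical: the point is that the fitted line extrapolates indefinitely, and only a human-supplied constraint from domain knowledge, external to the statistics, stops it.

```python
# Hypothetical fitted model: litres of oxygen per minute as a linear
# function of exertion level. Coefficients are invented for illustration.
SLOPE, INTERCEPT = 3.0, 0.5
MAX_SUSTAINABLE_EXERTION = 10.0  # physiological ceiling the model ignores

def naive_predict(exertion):
    """What the regression does: extend the fitted line indefinitely."""
    return SLOPE * exertion + INTERCEPT

def informed_predict(exertion):
    """What a human modeller would add: a domain-knowledge constraint
    that the statistical model cannot derive from its own refinement loop."""
    if exertion > MAX_SUSTAINABLE_EXERTION:
        raise ValueError("exertion exceeds sustainable physiological limits")
    return naive_predict(exertion)
```

Feeding the model data from collapsed subjects would change `SLOPE` and `INTERCEPT`, but nothing in the fitting procedure itself would introduce the `MAX_SUSTAINABLE_EXERTION` constraint; that amendment lies outside the model's box.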

SMT and other A.I. systems have similar limitations. In the case of SMT, once the system is built, linguistic knowledge becomes necessary to achieve perfect translation at all grammatical levels.[28] SMT systems presently cannot translate cultural components of the source text into the target language; they produce very literal, word-for-word translations that fail to recognize idioms, slang, and terms not in the machine’s memory; and they lack human creativity.[29] Overcoming this would require a change to the underlying machine translation model, and the question arises whether that change would have to be made by the SMT’s human creators or whether the SMT itself could make the necessary corrections and adjustments to the model.

Should the SMT or, in the earlier example, the A.I. be unable to improve and innovate on the existing model, does it have the creative or inventive capacity to conceive an invention that is truly inventive? And if either the SMT or the A.I. can produce something that appears novel and inventive, then given how A.I. presently operates (i.e. as big data models), would such a product merely be the result of analysing existing data to uncover hitherto unseen relationships – in other words, a discovery?

Returning to the original question of patent rights for an A.I., perhaps the question we should ask is not whether an A.I. should be able to get a patent, but whether an A.I., given current technology, can create a patentable invention in the first place. If the answer to that question is ‘no’, then the question of granting patent rights to an A.I. is moot.

References:

[1]  Jon Fingas, ‘Daimler tests a self-driving, mass-produced truck on real roads’, Engadget, Oct. 4, 2016.

[2] Sophie Curtis, ‘Cognitive Cooking: How Is A.I. Changing Foodtech?’ RE•WORK, April 19 2016.

[3] Analee Newitz, ‘Movie written by algorithm turns out to be hilarious and intense’ ArsTechnica, June 9, 2016.

[4] Susan Beck, ‘AI Pioneer ROSS Intelligence Lands Its First Big Law Clients’, The American Lawyer, May 6, 2016.

[5] See for example:

[6] Copyright Office, Compendium of U.S. Copyright Office Practices (3d ed. 2014). § 313.2

[7] Hettinger argued that ‘the most powerful intuition supporting property rights is that people are entitled to the fruits of their labor’

  • See: Edwin Hettinger, ‘Justifying Intellectual Property’, Philosophy & Public Affairs, Vol. 18, No. 1, Winter 1989, pp. 31-52.

[8] Copyright Office, Compendium of U.S. Copyright Office Practices (3d ed. 2014). § 306.

[9] According to WIPO, ‘Intellectual property (IP) refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce.’

[10] Article I, Section 8, Clause 8 of the United States Constitution states: The Congress shall have Power To…. promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

[11] According to the U.S. Copyright Office, ‘Copyright exists from the moment the work is created. You will have to register, however, if you wish to bring a lawsuit for infringement of a U.S. work’

See ‘Copyright in General’,

[12] s.5(1) Patents Act 2013

[13] See for example, U.S. Patent 5,953,441 ‘Fingerprint sensor having spoof reduction features and related methods’ which was assigned to Harris Corporation.

[14] Nilagia McCoy, ‘Jaron Lanier: The Digital Economy Since Who Owns The Future?’, October 8, 2015.

[15] This was noted by Jaron Lanier who delivered the keynote address at the opening of the Conference on the Global Digital Content Market taking place from April 20-22, 2016 at WIPO Headquarters in Geneva, Switzerland.

[16] See for example:

[17] Inside Google Translate, July 9, 2010.

[18] Catherine Jewell, ‘Digital pioneer, Jaron Lanier, on the dangers of “free” online culture,’ WIPO Magazine, April 2016.

[19] Noah Silverman.

[20] i.e. ‘data mining’.

[21] s 6, New Zealand Patents Act 2013

[22] s 7, New Zealand Patents Act 2013

[23] s 10, New Zealand Patents Act 2013

[24] See for example: s. 93(2) Hong Kong Patents Ordinance, Cap 514, or the Canadian Patent Act, R.S.C., 1985, c. P-4.

[25] (1903), 20 R.P.C. 123

[26] Wikipedia, ‘Heart Rate’.

[27] This may be caused by a congenital condition but some also believe that prolonged excessive exercise may exacerbate matters. See:

[28] Mireia Farrús, Marta R. Costa-jussà, José B. Mariño, Marc Poch, Adolfo Hernández, Carlos Henríquez, José A. R. Fonollosa, ‘Overcoming statistical machine translation limitations: error analysis and proposed solutions for the Catalan–Spanish language pair’, Language Resources & Evaluation, DOI 10.1007/s10579-011-9137-0.

[29] ‘Machine Translation vs. Human Translators’ 


Supporting Women in Robotics on International Women’s Day… and beyond.

International Women’s Day is raising discussion about the lack of diversity and role models in STEM, and about the potential negative outcomes of bias and stereotyping in robotics and AI. Let’s balance the words with positive actions. Here’s what we can all do to support women in robotics and AI, and thus improve diversity, boost innovation and reduce skills shortages in robotics and AI.

Join WomeninRobotics.org – a network of women working in robotics (or who aspire to work in robotics). We are a global discussion group supporting local events that bring women together for peer networking. We recognize that lack of support and mentorship in the workplace holds women back, particularly if there is only one woman in an organization/company.

Although the main group is only for women, we are going to start something for male ‘Allies’ or ‘Champions’. So men, you can join women in robotics too! Women need champions and while it would be ideal to have an equal number of women in leadership roles, until then, companies can improve their hiring and retention by having visible and vocal male allies. We all need mentors as our careers progress.

Women also need visibility and high-profile projects for their careers to progress on par. One way of improving that is to showcase the achievements of women in robotics. Read and share all four years’ worth of our annual “25 Women in Robotics you need to know about” lists – that’s more than 100 women already, because we have some groups in there. (There have always been a lot of women on the core team at Robohub.org, so we love showing our support.) Our next edition will come out on October 10, 2017 to celebrate Ada Lovelace Day.

Change starts at the top of an organization. It’s very hard to hire women if you don’t have any women, or if they can’t see pathways for advancement in your organization. However, there are many things you can do to improve your hiring practices. Some are surprisingly simple, yet effective. I’ve collected a list and posted it at Silicon Valley Robotics – How to hire women.

And you can invest in women entrepreneurs. Studies show that investments in female founders yield a higher rate of return and a higher likelihood of success, and yet, proportionately, they receive far less investment. You don’t need to be a VC to invest in women, either. Kiva.org is matching loans today, and $25 can empower an entrepreneur anywhere in the world. #InvestInHer

And our next Silicon Valley/ San Francisco Women in Robotics event will be on March 22 at SoftBank Robotics – we’d love to see you there – or in support!

Why businesses must get ready for the era of robotic things


We’ve entered a period of epic technological transformation that is impacting society in ways that leave even veteran tech observers speechless. In some ways, it might seem like 1998 all over again. The internet was then in its infancy and cyberspace was uncharted territory for much of the population. The Dot-com boom and eventual bust were inevitable, reflecting markets expanding and contracting with the newfound surge of interest in, and obviously overinflated speculation around, the potential of the Internet to transform society. Following a tremendous growth spurt in its first 5-6 years, the World Wide Web ended its first decade by learning to become sociable.

Indeed, the rise of digital social networks through channels like Facebook, Twitter, Instagram, and Snapchat has transformed how businesses and individuals work, think, and communicate. And now that the Internet has crossed the threshold into young adulthood, it continues to grow and drive massive transformations throughout all parts of business and society. Soon-to-be-released autonomous cars, wearable tech, drones, 3-D printing, smart machines, home automation, virtual assistants like Siri, you name it . . . the pace of change is staggering.

Graphic by Predikto on company blog.

Many of the breakthroughs and innovations we’re seeing right now are the result of the confluence of three major technologies over the past 8 years: mobile, cloud, and Big Data. Collectively, these have enabled quicker and more efficient collaboration, development, and production, which in turn have allowed businesses to achieve unprecedented levels of growth and expansion. New ways of working, often remotely, mean that more people can do their jobs outside of traditional corporate structures. We’ve entered not only the freelancer economy, but the startup one as well. Now everyone is an entrepreneur, and innovation and new business growth are through the roof. What’s more, technology processes and production cycles have become commoditized, meaning that anyone with the right idea, system, skills, and network in place today can effectively build a billion-dollar business with very low overhead costs.

This crazy rate of change is certainly great from the consumer standpoint, but it is also a bit unnerving for businesses simply trying to keep their heads above water. How are companies today to keep up with the market, their competitors, and consumer expectations? What does all this change mean for startups struggling to gain traction, for more traditional brick-and-mortar businesses, and even for established enterprises that might be too big to pivot quickly?

These are broader questions that we’ll try to answer in another blog post. But for now, here is what we do know about the massive impacts coming over the next few years. First, the Internet of Things is taking the world by storm, with projections that 21 billion objects will be connected by the year 2020. That’s about 3 for every man, woman, and child on the planet! A few years ago, Cisco estimated that the IoT market would create $19 trillion of economic value in the next decade.

What’s more, the global robotics industry is also undergoing a major transformation. Market intelligence firm Tractica released a report in November 2015 forecasting that the global robotics market will grow from $28.3 billion worldwide in 2015 to $151.7 billion by 2020. What’s especially significant is that this market will consist mostly of non-industrial robots, including segments like consumer, enterprise, medical, military, UAVs, and autonomous vehicles. Tractica anticipates an increase in annual robot shipments from 8.8 million in 2015 to 61.4 million by 2020; in fact, by 2020 over half of this volume will come from consumer robots.


Putting these two major industry trends together, it doesn’t take a rocket scientist to figure out that the two industries – the Internet of Things and robotics – will together create a “perfect storm” of global market disruption, opportunity, and growth over the next 4 years and beyond. This confluence is part of a larger epic transformation, which has appropriately been called the Second Machine Age. Here is how this FastCompany article sums it up:

The fact is we’re now on the cusp of a “Second Machine Age,” one powered not by clanging factory equipment but by automation, artificial intelligence, and robotics. Self-driving cars are expected to be widespread in the coming decade. Already, automated checkout technology has replaced cashiers, and computerized check-in is the norm at airports. Just like the Industrial Revolution more than 200 years ago, the AI and robotics revolution is poised to touch virtually every aspect of our lives—from health and personal relations to government and, of course, the workplace.

This is a mouthful, but in case it’s not clear, let me spell it out: there’s never been a better time than now to get onboard with robotics and the Internet of Things!

If you’re a startup or small business owner, and especially if you’re feeling behind the technology curve, you’re certainly not alone. But instead of commiserating about all of the changes, proactively start today by asking yourself what it will take to get your organization to the next level of innovation. Set yourself up with a 6-month, 12-month, 18-month and 2-year innovation plan that maps to a broader 2020 strategy. Time is of the essence, but it’s not too late to pivot and get onboard with the robotics and IoT revolution. As the famous saying goes, “The journey of a thousand miles begins with a single step.”
