Archive 07.09.2020


RESPONSIBLE AI CAN EFFECTIVELY DEPLOY HUMAN-CENTERED MACHINE LEARNING MODELS

 Human-Centered Machine Learning

Artificial intelligence (AI) is developing quickly into a powerful technology with seemingly limitless applications. It has shown its capacity to automate routine tasks, such as the daily commute, while also augmenting human ability with new insight. Combining human imagination and creativity with the adaptability of machine learning is advancing our knowledge base and understanding at a remarkable pace.

However, with great power comes great responsibility. AI raises concerns on many fronts because of its potentially disruptive impact. These concerns include workforce displacement, loss of privacy, potential bias in decision-making, and lack of control over automated systems and robots. While these issues are significant, they are also addressable with the right planning, oversight, and governance.

Many AI systems that come into contact with people will need to understand how people behave and what they want. This will make them more useful and also safer to use. There are at least two ways in which understanding people can benefit intelligent systems. First, the intelligent system must infer what a person wants. For the foreseeable future, we will design AI systems that take their instructions and goals from humans. However, people don't always say exactly what they mean. Misunderstanding a person's intent can lead to perceived failure. Second, beyond simply failing to understand human speech or written language, even perfectly understood instructions can lead to failure if parts of the instructions or goals are implicit or assumed.

Human-centered AI also acknowledges that people can be equally inscrutable to intelligent systems. When we think of intelligent systems understanding people, we usually think of natural language and speech processing: whether an intelligent system can respond appropriately to utterances. Natural language processing, speech processing, and activity recognition are significant challenges in building helpful intelligent systems. To be truly effective, AI and ML systems need a theory of mind about humans.

Responsible AI research is an emerging field that advocates better practices and techniques for deploying machine learning models. The goal is to build trust while minimizing potential risks, not only for the organizations deploying these models but also for the users they serve.

Responsible AI is a framework for bringing many of these essential practices together. It focuses on ensuring the ethical, transparent, and accountable use of AI technologies in a manner consistent with user expectations, organizational values, and societal laws and norms. Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy. By providing clear rules of engagement, responsible AI lets companies under public and regulatory scrutiny innovate while realizing the transformative potential of AI in a way that is both compelling and accountable.

Human-centric machine learning is one of the more significant concepts in the industry to date. Leading institutions such as Stanford and MIT are setting up labs specifically to advance this science. MIT defines the concept as "the design, development and deployment of information systems that learn from and collaborate with humans in a deep, significant way."

The future of work is often depicted as dominated by robotic machinery and armies of algorithms posing as people. In reality, AI adoption has largely been aimed at making processes more efficient, enhancing existing products and services, and creating new ones, according to Deloitte's recent survey of corporate executives, who rated reducing headcount as their least important objective.

It is easy to construct common-sense failures in robotics and autonomous agents. For example, suppose a robot is sent to a drugstore to pick up a prescription. Because the human is sick, they would like the robot to return as quickly as possible. If the robot goes directly to the drugstore, walks behind the counter, grabs the medication, and returns home, it will have succeeded while minimizing execution time and cost. We would also say it robbed the drugstore, because it did not take part in the social construct of exchanging money for the product.

Commonsense knowledge, the procedural form of which can act as a basis for a theory of mind when interacting with humans, can make human collaboration more natural. Even though ML and AI decision-making algorithms work differently from human decision-making, the behavior of the system becomes more recognizable to people. It also makes interaction with people safer: it can reduce common-sense goal failures, because the agent fills in an under-specified goal with commonsense procedural details, and an agent that acts according to a person's expectations will inherently avoid conflict with a person who is applying their theory of mind of human behavior to intelligent agents.

In radiology, for instance, artificial intelligence can quickly draw attention to findings and highlight the far more subtle areas that might not be readily caught by the human eye. The human-centricity of responsible AI comes into play when doctors and patients, not machines, make the final decision on treatment. Even so, augmenting medical professionals with deep quantitative insight gives them invaluable data to factor into the decision.

By keeping humans in the loop, organizations can better determine the degree of automation and augmentation they need and control the ultimate impact of AI on their workforce. As a result, companies can greatly mitigate their risk and develop a deeper understanding of which kinds of situations may be the most challenging for their AI deployments and machine learning applications.

How Is Artificial Intelligence Used in Analytics?

Analytics powers your marketing program, but how much value are you really getting out of your data?

Artificial intelligence can help.

AI is a collection of technologies that excel at extracting insights and patterns from large sets of data, then making predictions based on that information.

That includes your analytics data from places like Google Analytics, automation platforms, content management systems, CRMs, and more.

In fact, AI exists today that can help you get much more value out of the data you already have, unify that data, and actually make predictions about customer behaviors based on it.

That sounds great. But how do you actually get started?

This article is here to help you take your first step.

At Marketing AI Institute, we’ve spent years researching and applying AI. Since 2016, we've published more than 400 articles on the subject. And we've published stories on 50+ AI-powered vendors with more than $1 billion in total funding. We’re also tracking 1,500+ sales and marketing AI companies with combined funding north of $6.2 billion.

This article leans on that expertise to demystify AI.

And, it'll give you ideas on how to use AI for analytics and offer some tools to explore further.

What Is Artificial Intelligence?

Ask 10 different experts what AI is, and you'll get 10 different answers. A good definition comes from Demis Hassabis, CEO of DeepMind, an AI company that Google bought.

Hassabis calls AI the "science of making machines smart." Today, we can teach machines to be like humans. We can give them the ability to see, hear, speak, write, and move.

Your smartphone has tons of AI-powered capabilities. These include facial recognition that unlocks your phone with your face (AI that sees). They also include voice assistants (AI that hears and speaks). And, don't forget, predictive text (AI that writes).

Other types of AI systems even give machines the ability to move, like you see in self-driving cars.

Your favorite services, like Amazon and Netflix, use AI to offer product recommendations.

And email clients like Gmail even use AI to automatically write parts of emails for you.

In fact, you probably use AI every day, no matter where you work or what you do.

"Machine learning" powers AI's most impressive capabilities. Machine learning is a type of AI that identifies patterns based on large sets of data. The machine uses these patterns to make predictions. Then, it uses more and more data to improve those predictions over time.

The result?

Technology powered by machine learning gets better over time, often without human involvement.

This is very different from traditional software.

A typical non-AI system, like your accounting software, relies on human inputs to work. The system is hard-coded with rules by people. Then, it follows those rules exactly to help you do your taxes. The system only improves if human programmers improve it.

But machine learning tools can improve on their own. This improvement comes from a machine assessing its own performance and new data.

For instance, an AI tool exists that writes email subject lines for you. Humans train the tool's machine learning using samples of a company's marketing copy. But then the tool drafts its own email subject lines. Split-testing occurs, then the machine learns on its own what to improve based on the results. Over time, the machine gets better and better with little human involvement. This unlocks possibly unlimited performance potential.
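As a rough sketch of how such a tool learns from split tests, here is a toy epsilon-greedy loop in Python. The subject lines and their click-through rates are invented for the simulation; a real tool would train on live campaign results rather than a simulator.

```python
import random

# Hypothetical subject-line variants with made-up "true" click-through
# rates, used only to simulate recipients in a split test.
TRUE_CTR = {"Last chance: 20% off": 0.12,
            "Your weekly digest": 0.05,
            "We picked these for you": 0.09}

def simulate_open(subject):
    """Simulate one recipient: did they open the email?"""
    return random.random() < TRUE_CTR[subject]

sends = {s: 0 for s in TRUE_CTR}
opens = {s: 0 for s in TRUE_CTR}

def choose(epsilon=0.1):
    """Mostly send the best-performing subject; sometimes explore."""
    if random.random() < epsilon or not any(sends.values()):
        return random.choice(list(TRUE_CTR))
    return max(sends, key=lambda s: opens[s] / sends[s] if sends[s] else 0.0)

random.seed(42)
for _ in range(5000):
    subject = choose()
    sends[subject] += 1
    if simulate_open(subject):
        opens[subject] += 1

best = max(sends, key=sends.get)
print("Most-sent subject after learning:", best)
```

The loop steadily shifts traffic toward whichever subject line the results favor, which is the "gets better with little human involvement" behavior described above.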

Now, imagine this power applied to any piece of marketing technology that uses data. AI can actually make everything, from ads to analytics to content, more intelligent.

How Is AI Used in Analytics?

Here are just a few of the top use cases we’ve found for artificial intelligence in analytics today.

1. Find new insights from your analytics.

Artificial intelligence excels at finding insights and patterns in large datasets that humans just can't see. It also does this at scale and at speed.

Today, AI-powered tools exist that will answer questions you ask about your website data. (Think "Which channel had the highest conversion rate?") AI can also recommend actions based on opportunities it's seeing in your analytics.
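A question like the conversion-rate one above reduces to a small aggregation over your analytics export. The channel figures below are made up for illustration:

```python
# Hypothetical analytics export: sessions and conversions by channel.
rows = [
    {"channel": "organic", "sessions": 12000, "conversions": 340},
    {"channel": "paid",    "sessions": 8000,  "conversions": 290},
    {"channel": "email",   "sessions": 3000,  "conversions": 150},
]

def conversion_rate(row):
    return row["conversions"] / row["sessions"]

# Pick the channel with the highest conversion rate.
best = max(rows, key=conversion_rate)
print(best["channel"], round(conversion_rate(best), 4))  # email 0.05
```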

Some tools to check out here include:

2. Use analytics to predict outcomes.

AI systems exist that use analytics data to help you predict outcomes and successful courses of action.

AI-powered systems can analyze data from hundreds of sources and offer predictions about what works and what doesn't. They can also dive deep into your customer data and offer predictions about consumer preferences, product development, and marketing channels.
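A minimal sketch of such a prediction, assuming a tiny invented dataset of visitor engagement features, is a logistic-regression-style model trained by plain gradient descent. The feature names and numbers are hypothetical:

```python
import math

# Hypothetical training data: (pages_viewed, email_opens) -> converted (1/0).
data = [((1, 0), 0), ((2, 1), 0), ((3, 0), 0), ((8, 4), 1),
        ((10, 5), 1), ((7, 3), 1), ((2, 0), 0), ((9, 4), 1)]

w = [0.0, 0.0]
b = 0.0

def predict(x):
    """Probability of conversion via the logistic function."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the logistic loss.
lr = 0.1
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# Score new visitors: high engagement should score far above low engagement.
print(round(predict((9, 5)), 3), round(predict((1, 0)), 3))
```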

 

3. Unify analytics and customer data.

AI is also used to unify data across platforms. That includes using the speed and scale of AI to pull together all your customer data into a single, unified view. AI is also capable of unifying data across different sources, even hard-to-track ones like call data. 
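A sketch of what "unifying" means in practice: merge records for the same customers from two hypothetical systems (a CRM and a web analytics export) into a single view keyed by email address. All field names and values here are made up:

```python
# Hypothetical records for the same customers from two systems.
crm = {
    "ana@example.com": {"name": "Ana", "lifetime_value": 1200},
    "ben@example.com": {"name": "Ben", "lifetime_value": 300},
}
web_analytics = {
    "ana@example.com": {"sessions": 42, "last_channel": "email"},
    "ben@example.com": {"sessions": 7,  "last_channel": "paid"},
}

# Build one unified record per customer by merging on the shared key.
unified = {}
for email in crm.keys() | web_analytics.keys():
    record = {"email": email}
    record.update(crm.get(email, {}))
    record.update(web_analytics.get(email, {}))
    unified[email] = record

print(unified["ana@example.com"])
```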

What’s the Difference Between Robotics and Artificial Intelligence?

Is robotics part of AI? Is AI part of robotics? What is the difference between the two terms? We answer this fundamental question.

Robotics and artificial intelligence (AI) serve very different purposes. However, people often get them mixed up. 

A lot of people wonder if robotics is a subset of artificial intelligence. Others wonder if they are the same thing.

Since the first version of this article, which we published back in 2017, the question has gotten even more confusing. The rise in the use of the word "robot" in recent years to mean any sort of automation has cast even more doubt on how robotics and AI fit together (more on this at the end of the article).  

It's time to put things straight once and for all. 


Are robotics and artificial intelligence the same thing?

The first thing to clarify is that robotics and artificial intelligence are not the same thing at all. In fact, the two fields are almost entirely separate.

A Venn diagram of the two fields would show two mostly separate circles with just a small region of overlap.

As you can see, there is one small area where the two fields overlap: artificially intelligent robots. It is within this overlap that people sometimes confuse the two concepts.

To understand how these three terms relate to each other, let's look at each of them individually.

What is robotics?

Robotics is a branch of technology that deals with physical robots. Robots are programmable machines that are usually able to carry out a series of actions autonomously, or semi-autonomously.

In my opinion, there are three important factors which constitute a robot:

  1. Robots interact with the physical world via sensors and actuators.
  2. Robots are programmable.
  3. Robots are usually autonomous or semi-autonomous.

I say that robots are "usually" autonomous because some robots aren't. Telerobots, for example, are entirely controlled by a human operator but telerobotics is still classed as a branch of robotics. This is one example where the definition of robotics is not very clear.
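The three factors above can be sketched as the classic sense-decide-act control loop. The sensor and actuator below are stand-in stubs, not a real robot API:

```python
# A minimal sense-decide-act loop: the control structure behind the
# three factors above. Sensor and actuator are hypothetical stubs.

def read_distance_sensor(t):
    """Stub sensor: pretend an obstacle appears 3 steps in (metres)."""
    return 0.2 if t >= 3 else 2.0

log = []

def drive(command):
    """Stub actuator: record the command we would send to the motors."""
    log.append(command)

for t in range(6):                       # the autonomous control loop
    distance = read_distance_sensor(t)   # sense the physical world
    if distance < 0.5:                   # programmed decision rule
        drive("stop")                    # act through an actuator
    else:
        drive("forward")

print(log)
```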

It is surprisingly difficult to get experts to agree on exactly what constitutes a "robot." Some people say that a robot must be able to "think" and make decisions. However, there is no standard definition of "robot thinking." Requiring a robot to "think" suggests that it has some level of artificial intelligence but the many non-intelligent robots that exist show that thinking cannot be a requirement for a robot. 

However you choose to define a robot, robotics involves designing, building and programming physical robots which are able to interact with the physical world. Only a small part of robotics involves artificial intelligence.

Example of a robot: Basic cobot

A simple collaborative robot (cobot) is a perfect example of a non-intelligent robot.

For example, you can easily program a cobot to pick up an object and place it elsewhere. The cobot will then continue to pick and place objects in exactly the same way until you turn it off. This is an autonomous function because the robot does not require any human input after it has been programmed. The task does not require any intelligence because the cobot will never change what it is doing. 
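A sketch of that pick-and-place program, with hypothetical coordinates and stubbed move/grip commands, shows why no intelligence is needed; the robot just replays the same fixed cycle until switched off:

```python
# The pick-and-place routine as a fixed program. Coordinates and the
# move/grip functions are made-up stubs, not a real cobot API.

PICK  = (0.30, 0.10, 0.05)   # x, y, z in metres
PLACE = (0.30, 0.40, 0.05)

trace = []

def move_to(pos):  trace.append(("move", pos))
def grip(closed):  trace.append(("grip", closed))

def pick_and_place_cycle():
    move_to(PICK);  grip(True)    # grab the object
    move_to(PLACE); grip(False)   # release it

powered_on = True
cycles = 0
while powered_on:                 # runs until the robot is switched off
    pick_and_place_cycle()
    cycles += 1
    if cycles == 3:               # simulate someone hitting the off switch
        powered_on = False

print(cycles, len(trace))
```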

Most industrial robots are non-intelligent. 

What is artificial intelligence?

Artificial intelligence (AI) is a branch of computer science. It involves developing computer programs to complete tasks that would otherwise require human intelligence. AI algorithms can tackle learning, perception, problem-solving, language-understanding and/or logical reasoning.

AI is used in many ways within the modern world. For example, AI algorithms are used in Google searches, Amazon's recommendation engine, and GPS route finders. Most AI programs are not used to control robots. 

Even when AI is used to control robots, the AI algorithms are only part of the larger robotic system, which also includes sensors, actuators, and non-AI programming. 

Often — but not always — AI involves some level of machine learning, where an algorithm is "trained" to respond to a particular input in a certain way by using known inputs and outputs. We discuss machine learning in our article Robot Vision vs Computer Vision: What's the Difference?

The key aspect that differentiates AI from more conventional programming is the word "intelligence." Non-AI programs simply carry out a defined sequence of instructions. AI programs mimic some level of human intelligence. 

Example of a pure AI: AlphaGo

One of the most common examples of pure AI can be found in games. The classic example is chess, where the AI Deep Blue beat world champion Garry Kasparov in 1997.

A more recent example is AlphaGo, an AI that beat Lee Sedol, the world champion Go player, in 2016. There were no robotic elements to AlphaGo. The playing pieces were moved by a human operator who watched the program's moves on a screen.

What are Artificially Intelligent Robots?

Artificially intelligent robots are the bridge between robotics and AI. These are robots that are controlled by AI programs.

Most robots are not artificially intelligent. Up until quite recently, all industrial robots could only be programmed to carry out a repetitive series of movements which, as we have discussed, do not require artificial intelligence. However, non-intelligent robots are quite limited in their functionality.

AI algorithms are necessary when you want to allow the robot to perform more complex tasks.

A warehousing robot might use a path-finding algorithm to navigate around the warehouse. A drone might use autonomous navigation to return home when it is about to run out of battery. A self-driving car might use a combination of AI algorithms to detect and avoid potential hazards on the road. These are all examples of artificially intelligent robots.
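As an illustration of the path-finding case, here is a toy breadth-first search over a made-up warehouse grid (a real system would likely use A* with a heuristic, but BFS shows the idea):

```python
from collections import deque

# A toy warehouse floor: '.' is free space, '#' is a shelf.
# The grid, start, and goal are invented for illustration.
GRID = ["....#",
        ".##.#",
        "...#.",
        "#...."]

def shortest_path(start, goal):
    """Breadth-first search: length of the shortest route, or None."""
    rows, cols = len(GRID), len(GRID[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal unreachable

print(shortest_path((0, 0), (3, 4)))
```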

Example: Artificially intelligent cobot

You could extend the capabilities of a collaborative robot by using AI.

Imagine you wanted to add a camera to your cobot. Robot vision comes under the category of "perception" and usually requires AI algorithms.

Say that you wanted the cobot to detect the object it was picking up and place it in a different location depending on the type of object. This would involve training a specialized vision program to recognize the different types of objects. One way to do this is by using an AI algorithm called Template Matching, which we discuss in our article How Template Matching Works in Robot Vision.
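Template matching itself can be sketched in a few lines: slide the template over the image and score each window by the sum of squared differences (SSD). The tiny grayscale values below are invented; a production system would call a library routine such as OpenCV's matchTemplate instead:

```python
# Naive template matching by sum of squared differences (SSD).
# IMAGE and TEMPLATE hold made-up grayscale intensities.

IMAGE = [
    [10, 10, 10, 10],
    [10, 90, 80, 10],
    [10, 85, 95, 10],
    [10, 10, 10, 10],
]
TEMPLATE = [
    [90, 80],
    [85, 95],
]

def ssd(image, template, top, left):
    """Score one window: lower means more similar."""
    return sum(
        (image[top + i][left + j] - template[i][j]) ** 2
        for i in range(len(template))
        for j in range(len(template[0]))
    )

def best_match(image, template):
    """Return (row, col) of the window most similar to the template."""
    h, w = len(template), len(template[0])
    positions = [
        (r, c)
        for r in range(len(image) - h + 1)
        for c in range(len(image[0]) - w + 1)
    ]
    return min(positions, key=lambda p: ssd(image, template, *p))

print(best_match(IMAGE, TEMPLATE))  # the template sits at row 1, col 1
```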

In general, most artificially intelligent robots only use AI in one particular aspect of their operation. In our example, AI is only used in object detection. The robot's movements are not really controlled by AI (though the output of the object detector does influence its movements). 

Where it all gets confusing…

As you can see, robotics and artificial intelligence are really two separate things.

Robotics involves building physical robots, whereas AI involves programming intelligence.

However, there is one area where everything has got rather confusing since I first wrote this article: software robots.

Why software robots are not robots

The term "software robot" refers to a type of computer program which autonomously operates to complete a virtual task. Examples include:

  • Search engine "bots" — aka "web crawlers." These roam the internet, scanning websites and categorizing them for search.
  • Robotic Process Automation (RPA) — These have somewhat hijacked the word "robot" in the past few years, as I explained in a previous article.
  • Chatbots — These are the programs that pop up on websites and talk to you using a set of pre-written responses.

Software bots are not physical robots; they exist only within a computer. Therefore, they are not real robots.

Some advanced software robots may even include AI algorithms. However, software robots are not part of robotics.

Hopefully, this has clarified everything for you. But, if you have any questions at all please ask them in the comments. 

Computer Vision

Computer vision is an AI technology that allows machines to see. It plays a vital role in the domains of safety, security, health, access, and entertainment.

Computer vision automatically extracts, analyzes, and comprehends useful information from a single image or a sequence of images. This involves developing algorithms that accomplish automatic visual understanding.

Hardware of Computer Vision System

This involves −

  • Power supply
  • Image acquisition device, such as a camera
  • A processor
  • Software
  • A display device for monitoring the system
  • Accessories such as camera stands, cables, and connectors

Tasks of Computer Vision

  • OCR − Optical Character Recognition: software, often bundled with a scanner, that converts scanned documents into editable text.

  • Face Detection − Many state-of-the-art cameras come with this feature, which lets them find a face and capture that perfect expression. It is also used to grant a user access to software on a correct match.

  • Object Recognition − Object recognition systems are installed in supermarkets, cameras, and high-end cars from makers such as BMW, GM, and Volvo.

  • Estimating Position − Estimating the position of an object with respect to the camera, as in locating the position of a tumor in the human body.

Application Domains of Computer Vision

  • Agriculture
  • Autonomous vehicles
  • Biometrics
  • Character recognition
  • Forensics, security, and surveillance
  • Industrial quality inspection
  • Face recognition
  • Gesture analysis
  • Geoscience
  • Medical imagery
  • Pollution monitoring
  • Process control
  • Remote sensing
  • Robotics
  • Transport

Applications of Robotics

Robotics has been instrumental in various domains such as −

  • Industries − Robots are used for handling material, cutting, welding, color coating, drilling, polishing, etc.

  • Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by Defense Research and Development Organization (DRDO), is in function to destroy life-threatening objects safely.

  • Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.

  • Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration, to name a few.

  • Entertainment − Disney’s engineers have created hundreds of robots for movie making.

Artificial Intelligence – Robotics

Robotics is a domain of artificial intelligence concerned with creating intelligent and efficient robots.

What are Robots?

Robots are artificial agents acting in a real-world environment.

Objective

Robots aim to manipulate objects by perceiving, picking, moving, and modifying their physical properties, or destroying them, thereby freeing people from repetitive tasks that they would otherwise perform while growing bored, distracted, or exhausted.

What is Robotics?

Robotics is a branch of AI that draws on electrical engineering, mechanical engineering, and computer science for the design, construction, and application of robots.

Aspects of Robotics

  • The robots have mechanical construction, form, or shape designed to accomplish a particular task.

  • They have electrical components which power and control the machinery.

  • They contain some level of computer program that determines what, when and how a robot does something.

Difference in Robot System and Other AI Program

Here is the difference between the two −

AI Programs | Robots
They usually operate in computer-simulated worlds. | They operate in the real physical world.
The input to an AI program is symbols and rules. | Input to robots is analog signals, such as speech waveforms or images.
They need general-purpose computers to operate on. | They need special hardware with sensors and effectors.

Robot Locomotion

Locomotion is the mechanism that makes a robot capable of moving in its environment. There are various types of locomotion −

  • Legged
  • Wheeled
  • Combination of Legged and Wheeled Locomotion
  • Tracked slip/skid

Legged Locomotion

  • This type of locomotion consumes more power while demonstrating walking, jumping, trotting, hopping, and climbing up or down.

  • It requires more motors to accomplish a movement. It is suited to rough as well as smooth terrain, where an irregular or too-smooth surface would make a wheeled robot consume more power. It is somewhat difficult to implement because of stability issues.

  • Legged robots come with one, two, four, or six legs. If a robot has multiple legs, leg coordination is necessary for locomotion.

The total number of possible gaits (periodic sequences of lift and release events for each leg) a robot can use depends on the number of its legs.

If a robot has k legs, then the number of possible events N = (2k-1)!.

In case of a two-legged robot (k=2), the number of possible events is N = (2k-1)! = (2*2-1)! = 3! = 6.

Hence there are six possible different events −

  • Lifting the Left leg
  • Releasing the Left leg
  • Lifting the Right leg
  • Releasing the Right leg
  • Lifting both the legs together
  • Releasing both the legs together

In the case of k = 6 legs, there are (2*6-1)! = 11! = 39,916,800 possible events. Hence the complexity of a robot's gait grows very rapidly with the number of legs.
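The formula is easy to check in code:

```python
import math

# Number of possible lift/release event sequences for a k-legged robot,
# N = (2k - 1)!, as given above.
def possible_events(k):
    return math.factorial(2 * k - 1)

print(possible_events(2), possible_events(6))  # 6 39916800
```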


Wheeled Locomotion

Wheeled locomotion requires fewer motors to accomplish a movement. It is easier to implement, as there are fewer stability issues when there are more wheels, and it is power-efficient compared to legged locomotion.

  • Standard wheel − Rotates around the wheel axle and around the contact point.

  • Castor wheel − Rotates around the wheel axle and the offset steering joint.

  • Swedish 45° and Swedish 90° wheels − Omni-wheels that rotate around the contact point, around the wheel axle, and around the rollers.

  • Ball or spherical wheel − An omnidirectional wheel that is technically difficult to implement.


Slip/Skid Locomotion

In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at different speeds in the same or opposite directions. It offers stability because of the large contact area between the tracks and the ground.


Components of a Robot

Robots are constructed with the following −

  • Power Supply − The robots are powered by batteries, solar power, hydraulic, or pneumatic power sources.

  • Actuators − They convert energy into movement.

  • Electric motors (AC/DC) − They are required for rotational movement.

  • Pneumatic Air Muscles − They contract almost 40% when air is sucked in them.

  • Muscle Wires − They contract by 5% when electric current is passed through them.

  • Piezo Motors and Ultrasonic Motors − Best for industrial robots.

  • Sensors − They provide real-time information about the task environment. Robots are equipped with vision sensors to compute depth in the environment. A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.


 

Are you ready for more?

We are here to serve your needs. And if you’d like to learn more, let’s get started.

About

Our Vision

We can make robots as smart as a human by using a cloud brain. Helpful humanoid robots will be affordable for homes by 2025.

This will be achieved by cloud-connected robots, where diverse models of robots share a brain hosted on a cloud platform.

Your robot will have access to an ever-growing number of skills, similar to your smartphone's access to apps today.

Our Mission

Operating Smart Robots for People.

We make helpful robot services possible, and we make them safe, secure, and affordable.

Our mission is to implement the Vision. As breakthroughs continue along the way to the Vision becoming reality, AIRoboticsPro is preparing to be an operator of diverse models of robots for people with a wide range of interests and needs.

We Make Robots Smarter™

Have a robot?  We can make it smarter. 
Have AI skills?  We can integrate them into ever-expanding cloud brains.

AIRoboticsPro is the creator of an emerging fabric that connects a multitude of AI skills to cloud robots (and other smart devices).

We are a catalyst that increases the value of AI developed anywhere in the world by creating seamless interoperability with robots (and other smart devices).


Let’s build something together!

When to buy and when to build AI

One of the most important questions when starting to work with and implement AI in your organization is also one of the most complicated to answer: Should you buy off-the-shelf AI products, build your own in-house or have it built custom by consultants?

There's no one-size-fits-all answer here, but some considerations can help you understand what's best for you. I'll walk through those considerations and let you decide in the end what suits your business best.

Is AI strategic for your business?

First of all, I believe you should ask yourself: is AI development a strategic capability for my organization? That can be a bit of a vague question, so I'll boil it down to this: will AI solutions provide a competitive advantage that you will try to protect and keep improving to stay ahead?

If the AI is just meant to deliver an improvement that your competition can likely copy, then you should definitely buy the solution off the shelf or have it built by experts you bring in. Building up the know-how and organizational capabilities needed to make an AI that only delivers a small tactical advantage is unnecessary, and it will take your focus away from more important problems. So ask yourself the hard question: if the business needed to make cutbacks, would you keep investing in building your own AI as a strategic priority? If not, consider not doing it in the first place.

On the other hand, if you believe that one or more AI solutions can be a competitive advantage that your competition cannot easily copy, then you should try to build it in-house. In this case you have to be clear on what makes it hard for your competitors to copy. Do you have access to data that they don't? Are you in a better position to build the AI capability, or something else? Make very sure that you are actually in a position to be competitive here. If not, your competition will copy you by buying from an experienced vendor at a lower cost than you paid to build your own AI.

Research the market

You will be surprised how many off-the-shelf AI solutions are out there solving all kinds of problems. In my experience, people tend not to do the research and end up making expensive investments that take forever to finish and still don't compare to the products already on the market. You really need scale to make a business case for building your own solution when there's already a lot available out there.

I once met someone building a solution in-house that was exactly what my AI company was doing. We needed massive scale to get anywhere near a good business case, and yet these guys were trying to do it themselves. We had more than 14,000 business customers at the time, and this one business wanted to build the same AI for their business alone. They eventually had to close the project because it was too big an investment, but they still spent a lot of money. Once a project has been kicked off, it can be hard to pull back, since a lot of ego and prestige can go into corporate projects.

In for a penny in for a pound (of AI)

I have a rule of thumb that never fails me: "When an organization does something it doesn't do regularly, it will execute it poorly." I made this rule of thumb to explain to myself why very competent organizations sometimes completely flop relatively simple endeavours. I suspect the reason is that working in a new domain is not only unsupported by an organization's current processes and culture, but may even require working against them. Whatever the reason, I see it consistently, and I also see it with AI. If you don't do AI projects regularly, you will see massive overhead and will probably fail. So if the frequency of your AI projects is low, you should probably look to outsource as much as possible. This is not an attempt to scare anyone away from AI projects, but building the AI capability takes effort, and that's a conscious choice you have to make.

Size matters

AI projects require a minimum investment that is usually larger than for traditional IT projects. The skills required of engineers, machine learning developers, data scientists, and product managers are quite specialized. As a result, your organization simply has to be a certain size for in-house AI projects to make sense. AI is also usually a trial-and-error workflow that doesn't promise revenue or profit right away.

There's no fixed number of employees or amount of revenue, but if the AI team has to be four or five people at minimum, you probably shouldn't start until you can sustain a team of that size that provides no revenue or cost savings for a good while.

Get your data straight

Data is a big part of many AI projects, and I always recommend that you get your data straight before you go into actual AI development. In my mind it's more important (and more competitive) to have a smooth data operation with low costs and high-quality data. I would always prefer to bring data operations in-house, with AI development as second priority. Getting data operations right is more of a competitive advantage than building the AI. It's like supermarket chains competing: the chain with the best purchasing and lowest-cost warehouse operations can offer cheaper consumer prices and is more competitive. Data works the same way. If you can get better data at higher quality and lower cost, your AI projects will be superior to your competitors' even if their AI capabilities are superior to yours. So make data the priority if you have to choose.

Building AI is getting easier

One last thing to take into account is that AI projects are getting easier and the barrier to entry is getting lower. AI used to be a very difficult domain to work in, requiring PhDs in data science, machine learning engineers, and often thousands of hours of coding to build a useful AI. Today much of that can be done at a much lower buy-in with techniques such as transfer learning and AutoML, and the bar for getting started keeps dropping. As a result, building AI in-house is clearly becoming more accessible, and with time more businesses should have a go at it.


That’s it. From here, the decision is yours.
