ROBOTS WITH COMMON SENSE AND COGNITIVE INTELLIGENCE: ARE WE THERE YET?

 

What Makes Us Superior To Robots When It Comes To Common Intelligence?

The debate about humans versus robots is now an evergreen topic. While robots are often portrayed as enablers of a dystopian future brought on by digital disruption, the question that has long baffled minds is how smart they really are. When it comes to human intelligence, no other living being or mechanical "AI mind" can draw a parallel with us. Yet robots powered by AI can perform trivial, monotonous tasks with far greater accuracy than we can. It is important to note that this does not imply robots have acquired cognitive intelligence or common sense, both of which are intrinsic to humans, despite the recent marvels of robotics.

The main problem is that most of the algorithms written for robots are based on machine learning. The models are trained on a particular type of data under specific test conditions. Hence, when put in a situation covered by neither their code nor their training, robots can fail terribly or draw conclusions that can be catastrophic. This was highlighted in Stanley Kubrick's landmark film 2001: A Space Odyssey. The movie features a supercomputer, HAL 9000, which is informed by its creators of the purpose of the mission: to reach Jupiter and search for signs of extraterrestrial intelligence. When HAL makes an error, it refuses to admit it and alleges that it was caused by human error. The astronauts therefore decide to shut HAL down, but unfortunately the AI discovers their plan by lip-reading. HAL then arrives at a new conclusion that was not part of its original programming, deciding to save itself by systematically killing off the people on board.

Another illustration that experts often mention is that while we can teach a robot how to open a door by training it on data from 500 different types of doors, it will still fail when asked to open the 501st. This example is also the best way to explain why robots don't share the typical thought process and intelligence of humans. Humans don't need to be "trained": they observe and learn, or they experiment out of curiosity. Further, we don't open the door every time someone knocks; there is always an unfriendly neighbor we dislike. Nor do we need to be reminded to lock the door, whereas robots need a clear set of instructions. Consider other aspects of our lives: robots and AI are trained on a particular set of data, so they function effectively when the input is something they have been trained or programmed for, but beyond that the outcome is a different matter. For instance, if someone uses the expression "hit the road" while driving a car, they mean that they or the driver should start the journey. If a robot does not know the phrasal meaning of that expression, it may believe the person is asking it to literally "hit" the road, and that misunderstanding could lead to an accident. While researchers are working hard, devising algorithms and writing code, we are yet to see a robot that understands the way humans converse, with all our accents, dialects, colloquialisms and jargon.
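
To make the door example concrete, here is a purely illustrative Python sketch (not drawn from any real robot system): a "policy" learned as a lookup table over the door types seen in training simply has nothing to say about a novel mechanism.

    # Toy illustration (hypothetical): a robot "policy" learned as a lookup table
    # over door types it has seen during training. It cannot generalize to a new
    # mechanism that was never in the data.
    trained_policy = {
        # (handle_type, opens_inward) -> action learned during training
        ("lever", True): "push_down_then_push",
        ("lever", False): "push_down_then_pull",
        ("knob", True): "twist_then_push",
        ("knob", False): "twist_then_pull",
        # ... imagine ~500 such entries collected from training data
    }

    def open_door(handle_type, opens_inward):
        key = (handle_type, opens_inward)
        if key in trained_policy:
            return trained_policy[key]
        # The 501st door: a sliding door was never in the training data, so the
        # policy can only fail or fall back to an inappropriate learned action.
        return "no_learned_action"

    print(open_door("knob", True))      # seen in training -> "twist_then_push"
    print(open_door("sliding", False))  # novel door type -> "no_learned_action"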

Michio Kaku, a futurist and theoretical physicist, once said that "our robots today have the collective intelligence and wisdom of a cockroach." While today's robots can make salads on command, and systems like Deep Blue or AlphaGo Zero can defeat humans at chess and Go, that does not necessarily qualify as common sense or smartness. And let us not forget that Deep Blue and AlphaGo Zero were following instructions given by a team of "smart" human scientists; these systems were designed by people clever enough to solve a seemingly impossible task. So to sum up, while robots are becoming smart enough to fold laundry or impersonate a person looking for a date online, they still lag when it comes to cognitive intelligence and common sense. It is a long wait until we find a robot like the ones we see in sci-fi movies, such as C-3PO, R2-D2 or WALL-E.

Experiments reveal why human-like robots elicit uncanny feelings

 

Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines—but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite "right." The feeling of affinity can plunge into one of repulsion as a robot's human likeness increases, a zone known as "the uncanny valley."

New insights into the cognitive mechanisms underlying this phenomenon, made by psychologists at Emory University, have been published in the journal Perception.

Since the uncanny valley was first described, a common hypothesis has developed to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically attribute a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.

"We found that the opposite is true," says Wang Shensheng, first author of the new study, who did the work as a graduate student at Emory and recently received his Ph.D. in psychology. "It's not the first step of attributing a mind to an  but the next step of 'dehumanizing' it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it's a dynamic one."

The findings have implications for both the design of robots and for understanding how we perceive one another as humans.

"Robots are increasingly entering the social domain for everything from education to healthcare," Wang says. "How we perceive them and relate to them is important both from the standpoint of engineers and psychologists."

"At the core of this research is the question of what we perceive when we look at a face," adds Philippe Rochat, Emory professor of psychology and senior author of the study. "It's probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships. "

The research may help in unraveling the mechanisms involved in mind-blindness—the inability to distinguish between humans and machines—such as in cases of extreme autism or some psychotic disorders, Rochat says.

Co-authors of the study include Yuk Fai Cheong and Daniel Dilks, both associate professors of psychology at Emory.

Anthropomorphizing, or projecting human qualities onto objects, is common. "We often see faces in a cloud for instance," Wang says. "We also sometimes anthropomorphize machines that we're trying to understand, like our cars or a computer."

Naming one's car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.

To tease apart the potential roles of mind-perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images—human faces, mechanical-looking robot faces and android faces that closely resembled humans—and asked to rate each for perceived animacy or "aliveness." The exposure times of the images were systematically manipulated, within milliseconds, as the participants rated their animacy.

The results showed that perceived animacy decreased significantly as a function of exposure time for android faces, but not for mechanical-looking robot or human faces. For android faces, perceived animacy dropped between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.
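
As a hypothetical illustration of how such ratings might be summarized (the column names and values below are invented, not the study's data), the uncanny-valley signature would show up as a drop in mean animacy for android faces between short and long exposures, while human and mechanical faces stay roughly flat:

    # Hypothetical analysis sketch with made-up ratings; not the Emory dataset.
    import pandas as pd

    ratings = pd.DataFrame({
        "face_type":   ["human", "android", "robot", "human", "android", "robot"],
        "exposure_ms": [100, 100, 100, 500, 500, 500],
        "animacy":     [6.5, 5.8, 2.1, 6.4, 3.9, 2.0],
    })

    # Mean animacy per face type and exposure time: android faces drop with
    # longer viewing, while the other two categories barely change.
    print(ratings.groupby(["face_type", "exposure_ms"])["animacy"].mean())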

A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.

"The whole process is complicated but it happens within the blink of an eye," Wang says. "Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling."

A robot that controls highly flexible tools

 

RoboCut is also able to carve hearts. 

How do you calculate the coordinated movements of two robot arms so they can accurately guide a highly flexible tool? ETH researchers have integrated all aspects of the optimisation calculations into an algorithm. A hot-wire cutter will be used, among other things, to develop building blocks for a mortar-free structure.

A newborn moves its arms and hands largely in an undirected and random manner. It has to learn how to coordinate them step by step. Years of practice are required to master the finely balanced movements of a violinist or calligrapher. It is therefore no surprise that the advanced calculations for the optimal movement of two robot arms to guide a tool precisely involve extremely challenging optimisation tasks. The complexity also increases greatly when the tool itself is not rigid, but flexible in all directions and bends differently depending on its position and movement.

Simon Dünser from Stelian Coros' research group at the Institute for Intelligent Interactive Systems has worked with other researchers to develop a hot-wire cutter robot with a wire that bends flexibly as it works. This allows it to create much more complex shapes in significantly fewer cuts than previous systems, where the electrically heatable wire is rigid and is thus only able to cut ruled surfaces from fusible plastics with a straight line at every point.

Carving rabbits and designing façades

In contrast, the RoboCut from the ETH computer scientists is not limited to planes, cylinders, cones or saddle surfaces, but is also able to create grooves in a plastic block. The biggest advantage, however, is that the targeted bending of the wire means far fewer cuts are necessary than if the target shape had to be approximated using ruled surfaces. As a result, the bendable wire can be used to create the figure of a sitting rabbit from a polystyrene block through just ten cuts with wood carving-like accuracy. The outline of the rabbit becomes clearly recognizable after just two cuts.

In addition to the fundamental improvement on traditional hot-wire methods, the RoboCut project also has other specific application goals in mind. For example, in future the technology could be used in architecture to produce individual polystyrene molds for concrete parts. This would enable a more varied design of façades and the development of new types of modular building systems.

Three linked optimisations simultaneously

For Dünser, the scientific challenges were the focus of the project. "The complex optimisation calculations are what make RoboCut special. These are needed to find the most efficient tool paths possible while melting the desired shape from the polystyrene block as precisely as possible," explains the scientist.



ETH computer scientists have developed a hot-wire cutting robot that guides highly flexible tools so precisely that it is able to carve a rabbit. Credit: ETH Zürich / The Computational Robotics Lab

In order to move the wire in a controlled manner, it was attached to a two-armed Yumi robot from ABB. First, the reaction of the wire to the movements of the robot arms had to be calculated. Positions that would lead to unstable wire placement or where there was a risk of wire breakage were determined by means of simulations and then eliminated.

ETH researchers were then able to develop the actual optimisation on this basis. It had to take into account three linked aspects simultaneously. On the physical level, it was important to predict the controlled bending and movement of the wire in order to carry out the desired cuts. In terms of shape, a cutting sequence had to be determined that would approximate the target surface as precisely as possible in as few steps as possible. Finally, collisions with robot parts or the environment, as well as unintentional cuts, had to be ruled out.

Preventing bad minima

Dünser is one of the first scientists to succeed in integrating all the parameters in this complex task into a global optimisation algorithm. To do this, he designed a structured methodology based on the primary objective that the wire should always cut as close as possible to the surface of the target object. All other restrictions were then assigned costs and these were optimized as a total.

Without additional measures, however, such calculations always fall into local minima, which lead to a useless end result. To prevent this, in a first step Dünser smoothed out the cost function, so to speak, and began the calculation with a cut that was initially only roughly adapted to the target shape. The cutting simulation was then gradually brought closer to the target shape until the desired accuracy was achieved.
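
The general idea can be sketched as a continuation ("coarse-to-fine") scheme: solve a heavily smoothed version of the cost first, then reduce the smoothing while warm-starting from the previous solution. The snippet below is a generic illustration under that assumption, not the RoboCut implementation.

    # Generic continuation sketch: anneal a smoothing parameter and warm-start
    # each stage from the previous solution to avoid bad local minima.
    import numpy as np
    from scipy.optimize import minimize

    def cut_cost(params, smoothing):
        # Placeholder cost: distance of a simulated cut to the target shape,
        # with a wiggly term standing in for local minima. Larger `smoothing`
        # suppresses the wiggles and makes the landscape easier to descend.
        target = np.array([1.0, -2.0, 0.5])
        rough = np.sum((params - target) ** 2)
        wiggles = np.sum(np.sin(5.0 * params) ** 2)
        return rough + wiggles / (1.0 + smoothing)

    x = np.zeros(3)                              # initial cut parameters
    for smoothing in [100.0, 10.0, 1.0, 0.0]:    # gradually sharpen the cost
        x = minimize(cut_cost, x, args=(smoothing,)).x
    print("final parameters:", x)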

Method with versatile potential

The method developed by Dünser is not just limited to hot-wire cutting. The design of tool paths for other cutting and milling technologies could also benefit from it in the future. The method creates a much greater degree of scope for simulation, particularly in the generation of complex non-rotationally symmetrical shapes.

Electrical discharge machining with wires could benefit directly from this, as this technology enables high-precision cutting of electrically conductive materials via spark ablation. In the future, this could involve bendable electrode wires. This means that—as with the hot-wire cutting of plastics—more complicated and thus more efficient cuts can be made more easily than with today's rigid wires.

One specific application for RoboCut is being planned jointly with a research group from EPF Lausanne. With the help of a large-scale version of the hot-wire cutting robot, building blocks for structures free of mortar and fastening technologies will be systematically developed. The elements themselves must hold together in a stable manner. In the future, the robot should also be used to cut the polystyrene molds with which the various bricks are cast in concrete. The clever plastic cutter thus also creates the concrete construction technology of tomorrow.

First-of-a-kind electronic skin mimics human pain response

 

Electronic skins that perform the same sensory functions as human skin could mean big things for the fields of robotics and medical devices, and scientists are not focused solely on the pleasant sensations. Researchers in Australia have succeeded in developing an artificial skin that responds to painful stimuli in the same way real skin does, which they see as an important step towards intelligent machines and prosthetics.

It mightn’t seem like the most practical of goals, but researchers have been working to develop electronic skins that allow robots and prostheses to feel pain for quite some time. These technologies could enable amputees to know if they are picking up something sharp or dangerous, for example, or could make robots more durable and safer for humans to be around.

The researchers behind the latest breakthrough, from Australia's Royal Melbourne Institute of Technology, believe they have created a first-of-a-kind device that can replicate the feedback loop of painful stimuli in unprecedented detail. Just as nerve signals travel to the brain at warp speed to inform it that we've encountered something sharp or hot, the new artificial skin does so with great efficiency, and with an ability to distinguish between less and more severe forms of pain.

“We’ve essentially created the first electronic somatosensors – replicating the key features of the body’s complex system of neurons, neural pathways and receptors that drive our perception of sensory stimuli,” says PhD researcher Md Ataur Rahman. “While some existing technologies have used electrical signals to mimic different levels of pain, these new devices can react to real mechanical pressure, temperature and pain, and deliver the right electronic response. It means our artificial skin knows the difference between gently touching a pin with your finger or accidentally stabbing yourself with it – a critical distinction that has never been achieved before electronically.”

The artificial skin actually incorporates three separate sensing technologies the team has been working on. It consists of a stretchable electronic material made of biocompatible silicone that is as thin as a sticker, temperature-reactive coatings that transform in response to heat, and electronic memory cells designed to mimic the way the brain stores information.

“We’re sensing things all the time through the skin but our pain response only kicks in at a certain point, like when we touch something too hot or too sharp,” says lead researcher Professor Madhu Bhaskaran. “No electronic technologies have been able to realistically mimic that very human feeling of pain – until now. Our artificial skin reacts instantly when pressure, heat or cold reach a painful threshold. It’s a critical step forward in the future development of the sophisticated feedback systems that we need to deliver truly smart prosthetics and intelligent robotics.”
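
A minimal sketch of such a threshold-based response loop might look like the following; the sensor values and thresholds are invented for illustration and do not describe the RMIT device itself.

    # Illustrative only: a graded "pain" signal that kicks in past a threshold.
    PRESSURE_PAIN_KPA = 300.0   # hypothetical thresholds
    HOT_PAIN_C = 50.0
    COLD_PAIN_C = 5.0

    def pain_response(pressure_kpa, temperature_c):
        """Return None for ordinary touch, or graded pain signals past a threshold."""
        signals = []
        if pressure_kpa > PRESSURE_PAIN_KPA:
            signals.append(("mechanical", pressure_kpa / PRESSURE_PAIN_KPA))
        if temperature_c > HOT_PAIN_C:
            signals.append(("heat", temperature_c / HOT_PAIN_C))
        if temperature_c < COLD_PAIN_C:
            signals.append(("cold", (COLD_PAIN_C - temperature_c) / COLD_PAIN_C))
        return signals or None

    print(pain_response(pressure_kpa=50, temperature_c=22))    # gentle touch -> None
    print(pain_response(pressure_kpa=900, temperature_c=22))   # sharp jab -> graded pain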

With further work, the team imagines the electronic skin could one day also be used as an option for non-invasive skin grafts.

A technique allows robots to determine whether they are able to lift a heavy box

 

Humanoid robots, those with bodies that resemble humans, could soon help people to complete a wide variety of tasks. Many of the tasks that these robots are designed to complete involve picking up objects of different shapes, weights and sizes.

While many humanoid robots developed to date are capable of picking up small and light objects, lifting bulky or heavy objects has often proved more challenging. In fact, if an object is too large or heavy, a robot might end up breaking or dropping it.

With this in mind, researchers at Johns Hopkins University and National University of Singapore (NUS) recently developed a technique that allows robots to determine whether or not they will be able to lift a heavy box with unknown physical properties. This technique, presented in a paper pre-published on arXiv, could enable the development of robots that can lift objects more efficiently, reducing the risk that they will pick up things that they cannot support or carry.

"We were particularly interested in how a humanoid robot can reason about the feasibility of lifting a box with unknown physical parameters," Yuanfeng Han, one of the researchers who carried out the study, told TechXplore."To achieve such a complex , the robot usually needs to first identify the physical parameters of the box, then generate a whole body motion trajectory that is safe and stable to lift up the box."

The process through which a robot generates motion trajectories that allow it to lift objects can be computationally demanding. Humanoid robots typically have a large number of degrees of freedom, and the motion their body needs to make to lift an object must satisfy several different constraints. This means that if a box is too heavy or its center of mass is too far from the robot, the robot will most likely be unable to complete the motion.

"Think about us humans, when we try to reason about whether we can lift up a heavy object, such as a dumbbell," Han explained. "We first interact with the dumbbell to get a certain feeling of the object. Then, based on our previous experience, we kind of know if it is too heavy for us to lift or not. Similarly, our method starts by constructing a trajectory table, which saves different valid lifting motions for the robot corresponding to a range of physical parameters of the box using simulations. Then the robot considers this table as the knowledge of its previous experience."

The technique developed by Han, in collaboration with his colleague Ruixin Li and his supervisor Gregory S. Chirikjian (Professor and Head of the Department of Mechanical Engineering at NUS) allows a robot to get a sense of the inertia parameters of a box after briefly interacting with it. Subsequently, the robot looks back at the trajectory table generated by the method and checks whether it includes a lifting motion that would allow it to lift a box with these estimated parameters.

If this motion or trajectory exists, then lifting the box is considered to be feasible and the robot can immediately complete the task. If it does not exist, then the robot considers the task beyond its capacity.

"Essentially, the trajectory table that our method constructs offline saves the valid whole-body lifting motion trajectories according to a box's range of inertia parameters," Han said. "Subsequently, we developed a physical-interaction based algorithm that helps the  interact with the box safely and estimate the inertia parameters of the box."

The new technique allows robots to rapidly determine whether they are able to complete a lifting-related task. It thus saves time and computational power, as it prevents robots from having to generate whole-body motions before every lifting attempt, even unsuccessful ones.
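
The lookup idea can be sketched roughly as follows (an illustration of the approach described above, not the authors' code): an offline table maps discretized inertia parameters to precomputed lifting trajectories, and at run time the robot checks whether its estimate falls on a valid entry.

    # Offline table: (mass bin, center-of-mass offset bin) -> trajectory id
    # produced in simulation. A missing key means no stable lift was found.
    trajectory_table = {
        (1, 0.0): "traj_light_near",
        (1, 0.1): "traj_light_far",
        (3, 0.0): "traj_heavy_near",
        # (3, 0.1) absent: lifting such a box was infeasible in simulation
    }

    def bin_parameters(mass_kg, com_offset_m):
        """Discretize estimated inertia parameters to the table's bins."""
        return (round(mass_kg), round(com_offset_m, 1))

    def can_lift(estimated_mass_kg, estimated_com_offset_m):
        key = bin_parameters(estimated_mass_kg, estimated_com_offset_m)
        return trajectory_table.get(key)   # a trajectory, or None if infeasible

    print(can_lift(1.2, 0.04))   # -> "traj_light_near": feasible, execute it
    print(can_lift(3.1, 0.12))   # -> None: treat the lift as beyond capacity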

Han and his colleagues evaluated the approach they developed in a series of tests using NAO, a renowned humanoid robot developed by SoftBank Robotics. In these trials, NAO quickly and effectively identified, via the new technique, objects that were impossible or very difficult to lift. In the future, the same technique could be applied to other humanoid robots to make them more reliable and efficient in completing tasks that involve lifting large or heavy objects.

"Our method can significantly increase the working efficiency for practical pick-and-place tasks, especially for repeatable tasks," Han said. "In our future work, we plan to apply our approach to different objects or lifting tasks."

RESPONSIBLE AI CAN EFFECTIVELY DEPLOY HUMAN-CENTERED MACHINE LEARNING MODELS


Artificial intelligence (AI) is developing quickly into a remarkably powerful technology with seemingly limitless applications. It has shown its capacity to automate routine tasks, such as our everyday commute, while also augmenting human capability with new insight. Combining human imagination and creativity with the adaptability of machine learning is advancing our knowledge base and understanding at a remarkable pace.

However, with extraordinary power comes great responsibility. In particular, AI raises concerns on numerous fronts because of its potentially disruptive effect. These concerns include workforce displacement, loss of privacy, potential biases in decision-making and lack of control over automated systems and robots. While these issues are significant, they are also addressable with the right planning, oversight, and governance.

Many artificial intelligence systems that will come into contact with people need to understand how people behave and what they want. This will make them more useful and also safer to use. There are at least two ways in which understanding people can benefit intelligent systems. First, the intelligent system must infer what a person wants. For the foreseeable future, we will design AI systems that take their instructions and goals from people. However, people don't always say precisely what they mean, and misunderstanding a person's intent can result in perceived failure. Second, beyond simply failing to comprehend human speech or written language, even perfectly understood instructions can lead to failure if part of the instructions or goals is implicit or unstated.

Human-centered AI also acknowledges that people can be equally inscrutable to intelligent systems. When we think about intelligent systems understanding humans, we usually think of natural language and speech processing: whether an intelligent system can respond appropriately to utterances. Natural language processing, speech processing, and activity recognition are significant challenges in building helpful, intelligent systems. To be truly effective, AI and ML systems need a theory of mind about humans.

Responsible AI research is a rising field that advocates for better practices and techniques in deploying machine learning models. The objective is to build trust while limiting potential risks, not only to the organizations deploying these models but also to the users they serve.

Responsible AI is a framework for bringing many of these essential practices together. It centers on ensuring the ethical, transparent and accountable use of AI technologies in a manner consistent with user expectations, organizational values and societal laws and norms. Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy. By providing clear rules of engagement, responsible AI allows companies under public and congressional scrutiny to innovate and realize the transformative potential of AI in a way that is both compelling and accountable.

Human-centric machine learning is one of the more significant concepts in the industry to date. Leading institutions such as Stanford and MIT are setting up labs specifically to advance this science. MIT defines the concept as “the design, development and deployment of information systems that learn from and collaborate with humans in a deep, significant way.”

The future of work is frequently depicted as being dominated by robotic machinery and a large number of algorithms posing as people. In reality, however, according to Deloitte's recent survey of corporate executives, AI adoption has largely been aimed at making processes more efficient, enhancing existing products and services, and creating new ones; the executives rated reducing headcount as their least important objective.

It is trivial to come up with common sense failures in robotics and autonomous agents. For example, suppose a robot is sent to a drug store to pick up a prescription. Since the human is sick, they would like the robot to return as quickly as possible. If the robot goes directly to the drug store, walks behind the counter, grabs the medication, and comes straight back, it will have succeeded and minimized execution time and money spent. We would also say it robbed the drug store, because it did not take part in the social construct of exchanging money for the product.
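
A toy planner makes the failure mode obvious: if the objective is only time and money, the cheapest plan skips the implicit social step of paying unless that commonsense constraint is stated explicitly. The example below is purely illustrative.

    # Toy illustration: two candidate plans for fetching the medication.
    plans = [
        {"name": "grab_and_run",
         "steps": ["enter", "take_medicine", "leave"],
         "time_min": 8, "cost": 0},
        {"name": "buy_normally",
         "steps": ["enter", "queue", "pay", "take_medicine", "leave"],
         "time_min": 15, "cost": 20},
    ]

    def best_plan(plans, require_payment=False):
        # Minimize time, then money; optionally insist the plan includes paying.
        candidates = [p for p in plans if not require_payment or "pay" in p["steps"]]
        return min(candidates, key=lambda p: (p["time_min"], p["cost"]))

    print(best_plan(plans)["name"])                        # -> "grab_and_run" (robbery)
    print(best_plan(plans, require_payment=True)["name"])  # -> "buy_normally"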

Commonsense knowledge, the procedural form of which can act as a basis for a theory of mind when interacting with humans, can make human collaboration more natural. Even though ML and AI decision-making algorithms work differently from human decision-making, commonsense knowledge makes the system's behavior more recognizable to people. It also makes interaction with people safer: it can reduce common sense goal failures, because the agent fills in an under-determined objective with commonsense procedural details, and an agent that acts according to a person's expectations will inherently avoid conflict with a person who is applying their theory of mind of human behavior to intelligent agents.

Artificial intelligence in radiology, for instance, can rapidly draw attention to findings and highlight the far more subtle areas that might not be readily caught by the human eye. The human-centricity of responsible AI comes into play when doctors and patients, not machines, make the final decision on treatment. Augmenting medical professionals with deep quantitative insight gives them invaluable data to factor into that decision.

By keeping humans in the loop, organizations can better determine the degree of automation and augmentation they need and control the ultimate impact of AI on their workforce. In doing so, companies can greatly mitigate their risk and build a deeper understanding of which kinds of situations are likely to be most challenging for their AI deployments and machine learning applications.

How Is Artificial Intelligence Used in Analytics?




Analytics powers your marketing program, but how much value are you really getting out of your data?

Artificial intelligence can help.

AI is a collection of technologies that excel at extracting insights and patterns from large sets of data, then making predictions based on that information.

That includes your analytics data from places like Google Analytics, automation platforms, content management systems, CRMs, and more.

In fact, AI exists today that can help you get much more value out of the data you already have, unify that data, and actually make predictions about customer behaviors based on it.

That sounds great. But how do you actually get started?

This article is here to help you take your first step.

At Marketing AI Institute, we’ve spent years researching and applying AI. Since 2016, we've published more than 400 articles on the subject. And we've published stories on 50+ AI-powered vendors with more than $1 billion in total funding. We’re also tracking 1,500+ sales and marketing AI companies with combined funding north of $6.2 billion.

This article leans on that expertise to demystify AI.

And, it'll give you ideas on how to use AI for analytics and offer some tools to explore further.

What Is Artificial Intelligence?

Ask 10 different experts what AI is, and you'll get 10 different answers. A good definition comes from Demis Hassabis, CEO of DeepMind, an AI company that Google bought.

Hassabis calls AI the "science of making machines smart." Today, we can teach machines to be like humans. We can give them the ability to see, hear, speak, write, and move.

Your smartphone has tons of AI-powered capabilities. These include facial recognition that unlocks your phone with your face (AI that sees). They also include voice assistants (AI that hears and speaks). And, don't forget, predictive text (AI that writes).

Other types of AI systems even give machines the ability to move, like you see in self-driving cars.

Your favorite services, like Amazon and Netflix, use AI to offer product recommendations.

And email clients like Gmail even use AI to automatically write parts of emails for you.

In fact, you probably use AI every day, no matter where you work or what you do.

"Machine learning" powers AI's most impressive capabilities. Machine learning is a type of AI that identifies patterns based on large sets of data. The machine uses these patterns to make predictions. Then, it uses more and more data to improve those predictions over time.

The result?

Technology powered by machine learning gets better over time, often without human involvement.

This is very different from traditional software.

A typical non-AI system, like your accounting software, relies on human inputs to work. The system is hard-coded with rules by people. Then, it follows those rules exactly to help you do your taxes. The system only improves if human programmers improve it.

But machine learning tools can improve on their own. This improvement comes from a machine assessing its own performance and new data.

For instance, an AI tool exists that writes email subject lines for you. Humans train the tool's machine learning using samples of a company's marketing copy. But then the tool drafts its own email subject lines. Split-testing occurs, then the machine learns on its own what to improve based on the results. Over time, the machine gets better and better with little human involvement. This unlocks possibly unlimited performance potential.
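
A stripped-down version of that learning loop might look like the sketch below, which uses a simple epsilon-greedy choice between two candidate subject lines; the tool, data and numbers are hypothetical.

    # Hypothetical split-testing loop: mostly send the better-performing subject
    # line, occasionally explore, and update the stats from observed opens.
    import random

    open_counts = {"Subject A": 0, "Subject B": 0}
    send_counts = {"Subject A": 0, "Subject B": 0}

    def pick_subject(epsilon=0.1):
        if random.random() < epsilon or not any(send_counts.values()):
            return random.choice(list(open_counts))
        return max(open_counts, key=lambda s: open_counts[s] / max(send_counts[s], 1))

    def record_result(subject, opened):
        send_counts[subject] += 1
        if opened:
            open_counts[subject] += 1

    # Simulated sends: "Subject B" has a higher true open rate, so over time the
    # loop learns to favor it without anyone re-programming the rules.
    for _ in range(1000):
        s = pick_subject()
        record_result(s, opened=random.random() < (0.20 if s == "Subject A" else 0.35))
    print(send_counts)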

Now, imagine this power applied to any piece of marketing technology that uses data. AI can actually make everything, from ads to analytics to content, more intelligent.

How Is AI Used in Analytics?

Here are just a few of the top use cases we’ve found for artificial intelligence in analytics today.

1. Find new insights from your analytics.

Artificial intelligence excels at finding insights and patterns in large datasets that humans just can't see. It also does this at scale and at speed.

Today, AI-powered tools exist that will answer questions you ask about your website data. (Think "Which channel had the highest conversion rate?") AI can also recommend actions based on opportunities it's seeing in your analytics.
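
For a feel of the kind of question such tools answer, here is a hand-rolled sketch in pandas with invented numbers and assumed column names (a real AI analytics tool would answer this from natural language):

    # Illustrative only: "Which channel had the highest conversion rate?"
    import pandas as pd

    sessions = pd.DataFrame({
        "channel":     ["organic", "paid", "email", "organic", "paid", "email"],
        "sessions":    [1200, 800, 300, 1100, 750, 280],
        "conversions": [36, 40, 21, 33, 30, 25],
    })

    rates = (sessions.groupby("channel")[["sessions", "conversions"]].sum()
             .assign(conversion_rate=lambda d: d["conversions"] / d["sessions"])
             .sort_values("conversion_rate", ascending=False))
    print(rates)   # the top row answers the question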

Some tools to check out here include:

2. Use analytics to predict outcomes.

AI systems exist that use analytics data to help you predict outcomes and successful courses of action.

AI-powered systems can analyze data from hundreds of sources and offer predictions about what works and what doesn't. They can also dive deep into data about your customers and offer predictions about consumer preferences, product development, and marketing channels.

 

3. Unify analytics and customer data.

AI is also used to unify data across platforms. That includes using the speed and scale of AI to pull together all your customer data into a single, unified view. AI is also capable of unifying data across different sources, even hard-to-track ones like call data. 
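
At its simplest, unification means joining records from different systems on a shared key. The sketch below (assumed field names, invented rows) merges CRM, web analytics and call data into one view per customer:

    # Minimal illustration of unifying customer data on email address.
    import pandas as pd

    crm   = pd.DataFrame({"email": ["a@x.com", "b@x.com"], "lifetime_value": [1200, 300]})
    web   = pd.DataFrame({"email": ["a@x.com", "c@x.com"], "last_visit": ["2024-05-01", "2024-05-03"]})
    calls = pd.DataFrame({"email": ["b@x.com"], "calls_last_30d": [2]})

    unified = crm.merge(web, on="email", how="outer").merge(calls, on="email", how="outer")
    print(unified)   # one row per customer, combining all three sources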

What’s the Difference Between Robotics and Artificial Intelligence?

 Is robotics part of AI? Is AI part of robotics? What is the difference between the two terms? We answer this fundamental question.

Robotics and artificial intelligence (AI) serve very different purposes. However, people often get them mixed up. 

A lot of people wonder if robotics is a subset of artificial intelligence. Others wonder if they are the same thing.

Since the first version of this article, which we published back in 2017, the question has gotten even more confusing. The rise in the use of the word "robot" in recent years to mean any sort of automation has cast even more doubt on how robotics and AI fit together (more on this at the end of the article).  

It's time to put things straight once and for all. 


Are robotics and artificial intelligence the same thing?

The first thing to clarify is that robotics and artificial intelligence are not the same thing at all. In fact, the two fields are almost entirely separate.

A Venn diagram of the two fields would show just one small area of overlap: artificially intelligent robots. It is within this overlap that people sometimes confuse the two concepts.

To understand how these three terms relate to each other, let's look at each of them individually.

What is robotics?

Robotics is a branch of technology that deals with physical robots. Robots are programmable machines that are usually able to carry out a series of actions autonomously, or semi-autonomously.

In my opinion, there are three important factors which constitute a robot:

  1. Robots interact with the physical world via sensors and actuators.
  2. Robots are programmable.
  3. Robots are usually autonomous or semi-autonomous.

I say that robots are "usually" autonomous because some robots aren't. Telerobots, for example, are entirely controlled by a human operator but telerobotics is still classed as a branch of robotics. This is one example where the definition of robotics is not very clear.

It is surprisingly difficult to get experts to agree on exactly what constitutes a "robot." Some people say that a robot must be able to "think" and make decisions. However, there is no standard definition of "robot thinking." Requiring a robot to "think" suggests that it has some level of artificial intelligence but the many non-intelligent robots that exist show that thinking cannot be a requirement for a robot. 

However you choose to define a robot, robotics involves designing, building and programming physical robots which are able to interact with the physical world. Only a small part of robotics involves artificial intelligence.

Example of a robot: Basic cobot

A simple collaborative robot (cobot) is a perfect example of a non-intelligent robot.

For example, you can easily program a cobot to pick up an object and place it elsewhere. The cobot will then continue to pick and place objects in exactly the same way until you turn it off. This is an autonomous function because the robot does not require any human input after it has been programmed. The task does not require any intelligence because the cobot will never change what it is doing. 
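
In pseudocode terms, that fixed behavior is nothing more than an endless repetition of pre-taught moves; the `cobot` object and its methods below are hypothetical stand-ins, not a specific vendor's API.

    # Sketch of a non-intelligent pick-and-place loop: no sensing, no decisions.
    PICK_POSE = (0.40, 0.10, 0.05)    # fixed, pre-taught coordinates (metres)
    PLACE_POSE = (0.10, -0.30, 0.05)

    def pick_and_place_forever(cobot):
        # Repeats the exact same motion until the robot is switched off.
        while True:
            cobot.move_to(PICK_POSE)
            cobot.close_gripper()
            cobot.move_to(PLACE_POSE)
            cobot.open_gripper()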

Most industrial robots are non-intelligent. 

What is artificial intelligence?

Artificial intelligence (AI) is a branch of computer science. It involves developing computer programs to complete tasks that would otherwise require human intelligence. AI algorithms can tackle learning, perception, problem-solving, language-understanding and/or logical reasoning.

AI is used in many ways within the modern world. For example, AI algorithms are used in Google searches, Amazon's recommendation engine, and GPS route finders. Most AI programs are not used to control robots. 

Even when AI is used to control robots, the AI algorithms are only part of the larger robotic system, which also includes sensors, actuators, and non-AI programming. 

Often — but not always — AI involves some level of machine learning, where an algorithm is "trained" to respond to a particular input in a certain way by using known inputs and outputs. We discuss machine learning in our article Robot Vision vs Computer Vision: What's the Difference?
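
As a minimal sketch of "training on known inputs and outputs" (illustrative data, using scikit-learn's LogisticRegression as the learner):

    # Train on labelled examples, then respond to inputs the model was not shown.
    from sklearn.linear_model import LogisticRegression

    X_train = [[0.5], [1.0], [4.0], [5.0]]   # known inputs (a single toy feature)
    y_train = [0, 0, 1, 1]                   # known outputs (class labels)

    model = LogisticRegression().fit(X_train, y_train)
    print(model.predict([[0.7], [4.5]]))     # -> predictions for new inputs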

The key aspect that differentiates AI from more conventional programming is the word "intelligence." Non-AI programs simply carry out a defined sequence of instructions. AI programs mimic some level of human intelligence. 

Example of a pure AI: AlphaGo

One of the most common examples of pure AI can be found in games. The classic example of this is chess, where the AI Deep Blue beat the world champion, Garry Kasparov, in 1997.

A more recent example is AlphaGo, an AI which beat Lee Sedol, the world champion Go player, in 2016. There were no robotic elements to AlphaGo: the playing pieces were moved by a human who watched the AI's moves on a screen.

What are Artificially Intelligent Robots?

Artificially intelligent robots are the bridge between robotics and AI. These are robots that are controlled by AI programs.

Most robots are not artificially intelligent. Up until quite recently, all industrial robots could only be programmed to carry out a repetitive series of movements which, as we have discussed, do not require artificial intelligence. However, non-intelligent robots are quite limited in their functionality.

AI algorithms are necessary when you want to allow the robot to perform more complex tasks.

A warehousing robot might use a path-finding algorithm to navigate around the warehouse. A drone might use autonomous navigation to return home when it is about to run out of battery. A self-driving car might use a combination of AI algorithms to detect and avoid potential hazards on the road. These are all examples of artificially intelligent robots.
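
As a rough illustration of the path-finding case, the sketch below runs a breadth-first search over a small warehouse-floor grid; it is a generic stand-in, not any particular robot's navigation stack.

    # Breadth-first search for the shortest path on a grid ('#' = shelf, '.' = floor).
    from collections import deque

    def shortest_path(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:          # walk back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == "." and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None                              # goal unreachable

    warehouse = ["....#",
                 ".##.#",
                 "....."]
    print(shortest_path(warehouse, (0, 0), (2, 4)))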

Example: Artificially intelligent cobot

You could extend the capabilities of a collaborative robot by using AI.

Imagine you wanted to add a camera to your cobot. Robot vision comes under the category of "perception" and usually requires AI algorithms.

Say that you wanted the cobot to detect the object it was picking up and place it in a different location depending on the type of object. This would involve training a specialized vision program to recognize the different types of objects. One way to do this is by using an AI algorithm called Template Matching, which we discuss in our article How Template Matching Works in Robot Vision.
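
A rough sketch of that idea using OpenCV's cv2.matchTemplate is shown below; it runs on synthetic images so no camera or robot is needed, and the threshold is arbitrary.

    # Template matching on a synthetic scene: find where the template appears.
    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.integers(0, 30, size=(200, 200), dtype=np.uint8)   # noisy background
    cv2.rectangle(scene, (120, 80), (150, 110), 255, -1)           # a bright "object"
    template = scene[70:120, 110:160].copy()                       # patch containing it

    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > 0.9:                      # confidence threshold
        print("object found at", max_loc)  # top-left corner, near (110, 70)
    else:
        print("object not found")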

In general, most artificially intelligent robots only use AI in one particular aspect of their operation. In our example, AI is only used in object detection. The robot's movements are not really controlled by AI (though the output of the object detector does influence its movements). 

Where it all gets confusing…

As you can see, robotics and artificial intelligence are really two separate things.

Robotics involves building physical robots, whereas AI involves programming intelligence.

However, there is one area where everything has got rather confusing since I first wrote this article: software robots.

Why software robots are not robots

The term "software robot" refers to a type of computer program which autonomously operates to complete a virtual task. Examples include:

  • Search engine "bots" — aka "web crawlers." These roam the internet, scanning websites and categorizing them for search. 
  • Robotic Process Automation (RPA) — These have somewhat hijacked the word "robot" in the past few years, as I explained in this article
  • Chatbots — These are the programs that pop up on websites and talk to you with a set of pre-written responses (a minimal example follows this list). 
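
A minimal example of the chatbot kind of software robot: a script that maps recognized keywords to pre-written responses (the keywords and replies below are invented).

    # A tiny "software robot": no hardware, just canned responses to keywords.
    RESPONSES = {
        "price": "Our plans start at $10/month.",
        "hours": "Support is available 9am-5pm, Monday to Friday.",
    }
    DEFAULT = "Sorry, I didn't understand that. A human will follow up by email."

    def reply(message):
        for keyword, answer in RESPONSES.items():
            if keyword in message.lower():
                return answer
        return DEFAULT

    print(reply("What are your hours?"))   # -> the pre-written hours response
    print(reply("Do you ship to Mars?"))   # -> the fallback response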

Software bots are not physical robots; they exist only within a computer. Therefore, they are not real robots. 

Some advanced software robots may even include AI algorithms. However, software robots are not part of robotics.

Hopefully, this has clarified everything for you. But, if you have any questions at all please ask them in the comments. 

Computer Vision

 Computer vision is the AI technology with which robots can see. It plays a vital role in the domains of safety, security, health, access, and entertainment.

Computer vision automatically extracts, analyzes, and comprehends useful information from a single image or a sequence of images. This process involves the development of algorithms to accomplish automatic visual comprehension.

Hardware of Computer Vision System

This involves −

  • Power supply
  • Image acquisition device such as camera
  • A processor
  • Image-processing software
  • A display device for monitoring the system
  • Accessories such as camera stands, cables, and connectors

Tasks of Computer Vision

  • OCR − Optical Character Recognition software converts scanned documents into editable text and typically accompanies a scanner.

  • Face Detection − Many state-of-the-art cameras come with this feature, which enables them to detect a face and take the picture at that perfect expression. It is also used to grant a user access to software on a correct match (see the sketch after this list).

  • Object Recognition − Object recognition systems are installed in supermarkets, in cameras, and in high-end cars from manufacturers such as BMW, GM, and Volvo.

  • Estimating Position − Estimating the position of an object with respect to the camera, for example the position of a tumor in a human body.
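
As a hedged sketch of the face-detection task above, the snippet below uses OpenCV's bundled Haar-cascade model; photo.jpg is a placeholder input file you would supply yourself.

    # Detect faces in an image with OpenCV's pre-trained Haar cascade.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread("photo.jpg")
    if image is None:
        raise SystemExit("Place an input image at photo.jpg first.")

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("photo_faces.jpg", image)
    print(f"detected {len(faces)} face(s)")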

Application Domains of Computer Vision

  • Agriculture
  • Autonomous vehicles
  • Biometrics
  • Character recognition
  • Forensics, security, and surveillance
  • Industrial quality inspection
  • Face recognition
  • Gesture analysis
  • Geoscience
  • Medical imagery
  • Pollution monitoring
  • Process control
  • Remote sensing
  • Robotics
  • Transport

Applications of Robotics

Robotics has been instrumental in various domains, such as −

  • Industries − Robots are used for handling material, cutting, welding, color coating, drilling, polishing, etc.

  • Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by the Defense Research and Development Organization (DRDO), is used to destroy life-threatening objects safely.

  • Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.

  • Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration, to name a few.

  • Entertainment − Disney’s engineers have created hundreds of robots for movie making.

Artificial Intelligence – Robotics

 Robotics is a domain in artificial intelligence that deals with the study of creating intelligent and efficient robots.

What are Robots?

Robots are artificial agents that act in a real-world environment.

Objective

Robots are aimed at manipulating objects by perceiving them, picking them up, moving them, modifying their physical properties, destroying them, or otherwise having an effect on them, thereby freeing people from repetitive functions, which robots can perform without getting bored, distracted, or exhausted.

What is Robotics?

Robotics is a branch of AI that combines electrical engineering, mechanical engineering, and computer science for the design, construction, and application of robots.

Aspects of Robotics

  • The robots have mechanical construction, form, or shape designed to accomplish a particular task.

  • They have electrical components which power and control the machinery.

  • They contain some level of computer program that determines what, when and how a robot does something.

Difference in Robot System and Other AI Program

Here is the difference between the two −

  • AI programs usually operate in computer-simulated worlds; robots operate in the real physical world.
  • The input to an AI program is symbols and rules; the input to robots is analog signals in the form of speech waveforms or images.
  • AI programs need general-purpose computers to operate on; robots need special hardware with sensors and effectors.

Robot Locomotion

Locomotion is the mechanism that makes a robot capable of moving in its environment. There are various types of locomotion −

  • Legged
  • Wheeled
  • Combination of Legged and Wheeled Locomotion
  • Tracked slip/skid

Legged Locomotion

  • This type of locomotion consumes more power while performing actions such as walking, jumping, trotting, hopping, or climbing up and down.

  • It requires more motors to accomplish a movement. It is suited to rough as well as smooth terrain, where an irregular or very smooth surface would make wheeled locomotion consume more power. It is a little difficult to implement because of stability issues.

  • Legged robots come in varieties with one, two, four, or six legs. If a robot has multiple legs, leg coordination is necessary for locomotion.

The total number of possible gaits (a gait is a periodic sequence of lift and release events for each of the legs) a robot can use depends upon the number of its legs.

If a robot has k legs, then the number of possible events N = (2k-1)!.

In case of a two-legged robot (k=2), the number of possible events is N = (2k-1)! = (2*2-1)! = 3! = 6.

Hence there are six possible different events −

  • Lifting the Left leg
  • Releasing the Left leg
  • Lifting the Right leg
  • Releasing the Right leg
  • Lifting both the legs together
  • Releasing both the legs together

In the case of k=6 legs, there are 39916800 possible events. Hence the complexity of a legged robot grows steeply with the number of legs.
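
A quick check of the event-count formula N = (2k - 1)! for a few leg counts:

    # N = (2k - 1)! possible lift/release events for a robot with k legs.
    from math import factorial

    for k in (2, 4, 6):
        print(k, "legs ->", factorial(2 * k - 1), "possible events")
    # 2 legs -> 6, 4 legs -> 5040, 6 legs -> 39916800 (the figure quoted above)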


Wheeled Locomotion

It requires fewer motors to accomplish a movement. It is a little easier to implement, as there are fewer stability issues when there are more wheels. It is power-efficient compared to legged locomotion.

  • Standard wheel − Rotates around the wheel axle and around the contact point.

  • Castor wheel − Rotates around the wheel axle and the offset steering joint.

  • Swedish 45° and Swedish 90° wheels − Omni-wheel, rotates around the contact point, around the wheel axle, and around the rollers.

  • Ball or spherical wheel − Omnidirectional wheel, technically difficult to implement.


Slip/Skid Locomotion

In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at different speeds in the same or opposite directions. It offers stability because of the large contact area between the tracks and the ground.


Components of a Robot

Robots are constructed with the following −

  • Power Supply − The robots are powered by batteries, solar power, hydraulic, or pneumatic power sources.

  • Actuators − They convert energy into movement.

  • Electric motors (AC/DC) − They are required for rotational movement.

  • Pneumatic Air Muscles − They contract almost 40% when air is sucked in them.

  • Muscle Wires − They contract by 5% when electric current is passed through them.

  • Piezo Motors and Ultrasonic Motors − Best for industrial robots.

  • Sensors − They provide real-time knowledge of the task environment. Robots are equipped with vision sensors to be able to compute depth in the environment. A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.


 

Are you ready for more?

We are here to serve your needs. And if you’d like to learn more, let’s get started.

About

Our Vision

We can make robots as smart as a human by using a cloud brain.
Helpful humanoid robots will be affordable for homes by 2025.

This will be achieved by cloud-connected robots,
where diverse models of robots share a brain hosted on a cloud platform.

Your robot will have access to an ever-growing number of skills
similar to your smartphone’s access to apps today.

Our Mission

Operating Smart Robots for People.

We make helpful robot services possible, and we make them safe, secure and affordable.

Our mission is to implement the Vision. As breakthroughs continue along the way to the Vision becoming reality, AIRoboticsPro is preparing to be an operator of diverse models of robots
for people with a wide range of interests and needs.

We Make Robots Smarter™

Have a robot?  We can make it smarter. 
Have AI skills?  We can integrate them into ever-expanding cloud brains.

AIRoboticsPro is the creator of an emerging fabric to connect a multitude of AI skills to cloud robots (and other smart devices).

We are a catalyst that increases the value of AI developed anywhere in the world
by creating seamless interoperability with robots (and other smart devices).


Let’s build something together!