The AI hoax: The genius algorithm
Sometimes a genuinely impressive algorithmic achievement comes along, and it should be celebrated. GPT-3 is a great example: it is remarkable engineering and data science, and it thoroughly deserves the media attention it gets. But for every GPT-3 there are hundreds of thousands of AI solutions based on standard algorithms: not genius achievements, just textbook approaches.
It might sound like I'm having a go at many of the AI solutions out there, but in fact it's the other way around. For the vast majority of AI use cases, chasing a groundbreaking, genius solution is the wrong way to go. Standard algorithms are in most cases easily sufficient for the task at hand, and any effort beyond that is usually bad business.
Beware when someone claims a genius or even special algorithm
Given all this, I still hear a lot about the "unique", "genius" or "special" algorithm that some company has developed to solve a problem. I often hear terms like this in the media, and their popularity makes a lot of sense: when you have a business to market, such claims help you sell your product at a high price. They also scare off competitors, who come to believe that the barrier to entry for a certain product or solution is very high. But that is what the genius algorithm is, 99 times out of 100: a marketing message.
In reality, much of the AI out there consists of standard algorithms such as CNNs, random forests or even logistic regression, which some would claim isn't even AI. These algorithms can be used by most novice developers through freely available frameworks such as TensorFlow or scikit-learn.
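To underline how standard these building blocks are, here is textbook logistic regression written from scratch in plain Python. The toy dataset and hyperparameters are invented for illustration; in practice you would simply call scikit-learn's `LogisticRegression` instead:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=500):
    """Plain per-sample gradient descent: the 'school book' approach."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = pred - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy data: the label is 1 when the two inputs sum to more than 1.
X = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.3], [0.7, 0.9], [0.2, 0.1], [0.8, 0.6]]
y = [0, 1, 0, 1, 0, 1]
w, b = train_logistic_regression(X, y)
print([predict(w, b, xi) for xi in X])
```

Thirty lines, no magic: this is the level of algorithm behind a great deal of deployed "AI".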
My primary reason for writing this post is the same as for many of my other posts: I want to demystify AI. By killing the narrative of the genius algorithm, I hope more people will get the chance to utilize AI.
So when you hear these claims, be critical and don’t let it be the reason not to get started with AI.
The media is at fault
I'm not usually one to cry "fake media", but in the case of AI I feel that the media has not lived up to its responsibility. It has naively followed the hype and corporate press releases without taking the critical look that, in many ways, is what separates news outlets from other information sources.
I often wonder how the Danish news stations (Denmark is where I live) can have an Egypt correspondent but not an AI or even a deep-tech correspondent. The events in Egypt are probably less important to everyday life in my country, and many others, than AI is starting to be.
I really hope the media will improve here and not keep AI in a mystified aura.
The future of AI-algorithms
I'm pretty sure the future of AI algorithms is a given. The big tech companies and labs like OpenAI, DeepMind, Google, Facebook and Apple will be the ones to develop the genius algorithms, and they will very often release them into the wild for everyone to use. It's already happening, and we will only see more of it. So in the future, claiming to have a genius algorithm is not very likely to be a true claim.
A robot that controls highly flexible tools
RoboCut is also able to carve hearts.
How do you calculate the coordinated movements of two robot arms so they can accurately guide a highly flexible tool? ETH researchers have integrated all aspects of the optimisation calculations into an algorithm. A hot-wire cutter will be used, among other things, to develop building blocks for a mortar-free structure.
A newborn moves its arms and hands largely in an undirected and random manner. It has to learn how to coordinate them step by step. Years of practice are required to master the finely balanced movements of a violinist or calligrapher. It is therefore no surprise that the advanced calculations for the optimal movement of two robot arms to guide a tool precisely involve extremely challenging optimisation tasks. The complexity also increases greatly when the tool itself is not rigid, but flexible in all directions and bends differently depending on its position and movement.
Simon Dünser from Stelian Coros' research group at the Institute for Intelligent Interactive Systems has worked with other researchers to develop a hot-wire cutter robot with a wire that bends flexibly as it works. This allows it to create much more complex shapes in significantly fewer cuts than previous systems, where the electrically heatable wire is rigid and is thus only able to cut ruled surfaces from fusible plastics with a straight line at every point.
Carving rabbits and designing façades
In contrast, the RoboCut from the ETH computer scientists is not limited to planes, cylinders, cones or saddle surfaces, but is also able to create grooves in a plastic block. The biggest advantage, however, is that the targeted bending of the wire means far fewer cuts are necessary than if the target shape had to be approximated using ruled surfaces. As a result, the bendable wire can be used to create the figure of a sitting rabbit from a polystyrene block through just ten cuts with wood carving-like accuracy. The outline of the rabbit becomes clearly recognizable after just two cuts.
In addition to the fundamental improvement on traditional hot-wire methods, the RoboCut project also has other specific application goals in mind. For example, in future the technology could be used in architecture to produce individual polystyrene molds for concrete parts. This would enable a more varied design of façades and the development of new types of modular building systems.
Three linked optimisations simultaneously
For Dünser, the scientific challenges were the focus of the project. "The complex optimisation calculations are what make RoboCut special. These are needed to find the most efficient tool paths possible while melting the desired shape from the polystyrene block as precisely as possible," explains the scientist.
In order to move the wire in a controlled manner, it was attached to a two-armed Yumi robot from ABB. First, the reaction of the wire to the movements of the robot arms had to be calculated. Positions that would lead to unstable wire placement or where there was a risk of wire breakage were determined by means of simulations and then eliminated.
ETH researchers were then able to develop the actual optimisation on this basis. This had to take into account three linked aspects simultaneously. On the physical level, it was important to predict the controlled bending and movement of the wire in order to carry out the desired cuts. In terms of the shape, a cutting sequence had to be determined that would effect a highly precise approximation of the surface to the target shape in as few steps as possible. Finally, collisions with robot parts or its environment and unintentional cuts had to be ruled out.
Preventing bad minima
Dünser is one of the first scientists to succeed in integrating all the parameters in this complex task into a global optimisation algorithm. To do this, he designed a structured methodology based on the primary objective that the wire should always cut as close as possible to the surface of the target object. All other restrictions were then assigned costs and these were optimized as a total.
Without further measures, however, such calculations always fall into local minima, which lead to a useless end result. To prevent this, in a first step Dünser ironed out the cost function, so to speak, and began the calculation with a cut that was initially only roughly adapted to the target shape. The simulated cut was then gradually brought closer to the target shape until the desired accuracy was achieved.
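The idea of ironing out the cost function and then gradually tightening it is known more generally as continuation, or graduated optimisation. Below is a minimal one-dimensional sketch in Python; the cost function, step sizes and schedule are all invented for illustration and have nothing to do with the actual RoboCut solver:

```python
import math

def cost(x, s):
    """A quadratic bowl plus ripples of strength s; s=0 is the 'ironed-out' version."""
    return (x - 3) ** 2 / 10 + s * math.cos(4 * x)

def grad(x, s):
    return (x - 3) / 5 - 4 * s * math.sin(4 * x)

def descend(x, s, lr=0.05, steps=400):
    """Plain gradient descent on the cost with ripple strength s."""
    for _ in range(steps):
        x -= lr * grad(x, s)
    return x

# Direct descent on the full, rippled cost gets trapped in a local minimum
# near the starting point.
x_direct = descend(-5.0, s=1.0)

# Continuation: solve the smooth version first, then re-converge while
# gradually restoring the ripples. The solution tracks the global basin.
x_cont = -5.0
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    x_cont = descend(x_cont, s)

print(x_direct, x_cont)
```

The continuation run ends near the global minimum of the full cost, while the direct run stays stuck far from it; RoboCut applies the same principle to a vastly higher-dimensional cutting problem.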
Method with versatile potential
The method developed by Dünser is not just limited to hot-wire cutting. The design of tool paths for other cutting and milling technologies could also benefit from it in the future. The method creates a much greater degree of scope for simulation, particularly in the generation of complex non-rotationally symmetrical shapes.
Electrical discharge machining with wires could benefit directly from this, as this technology enables high-precision cutting of electrically conductive materials via spark ablation. In the future, this could involve bendable electrode wires. This means that—as with the hot-wire cutting of plastics—more complicated and thus more efficient cuts can be made more easily than with today's rigid wires.
One specific application for RoboCut is being planned jointly with a research group from EPF Lausanne. With the help of a large-scale version of the hot-wire cutting robot, systematic building blocks for building structures free of mortar and fastening technologies will be developed. The elements themselves must hold together in a stable manner. In the future, the robot should also be used to cut the polystyrene molds with which the various bricks are cast in concrete. The clever plastic cutter thus also creates the concrete construction technology of tomorrow.
#318: Humanized Intelligence in Academia and Industry, with Ayanna Howard
In this episode, Lauren Klein interviews Ayanna Howard, Professor and Chair of the School of Interactive Computing at Georgia Tech. Professor Howard describes her wide range of work in robotics, from robots that assist children with special needs to trust in autonomous systems. She also discusses her path through the field of robotics in both academia and business, and the importance of conducting in-the-wild robotics research.
Ayanna Howard
Ayanna Howard is a Professor and Chair of the School of Interactive Computing at Georgia Tech. Professor Howard is the director and founder of the Human-Automation Systems (HumAnS) Laboratory. Her research focuses on humanized intelligence, with a wide range of applications from Human-Robot Interaction to science-driven robotics. Prior to Georgia Tech, she led research projects at NASA’s Jet Propulsion Laboratory. Professor Howard is also a founder and the CTO of the educational robotics company Zyrobotics.
First-of-a-kind electronic skin mimics human pain response
Electronic skins that perform the same sensory functions as human skin could mean big things for the fields of robotics and medical devices, and scientists are not solely focused on just the pleasant ones. Researchers in Australia have succeeded in developing an artificial skin that responds to painful stimuli in the same way real skin does, which they see as an important step towards intelligent machines and prosthetics.
It mightn’t seem like the most practical of goals, but researchers have been working to develop electronic skins that allow robots and prostheses to feel pain for quite some time. These technologies could enable amputees to know if they are picking up something sharp or dangerous, for example, or could make robots more durable and safer for humans to be around.
The researchers behind the latest breakthrough, from Australia’s Royal Melbourne Institute of Technology, believe they have created a first-of-its-kind device that can replicate the feedback loop of painful stimuli in unprecedented detail. Just as nerve signals travel to the brain at warp speed to inform it that we've encountered something sharp or hot, the new artificial skin does so with great efficiency, and with an ability to distinguish between less and more severe forms of pain.
“We’ve essentially created the first electronic somatosensors – replicating the key features of the body’s complex system of neurons, neural pathways and receptors that drive our perception of sensory stimuli,” says PhD researcher Md Ataur Rahman. “While some existing technologies have used electrical signals to mimic different levels of pain, these new devices can react to real mechanical pressure, temperature and pain, and deliver the right electronic response. It means our artificial skin knows the difference between gently touching a pin with your finger or accidentally stabbing yourself with it – a critical distinction that has never been achieved before electronically.”
The artificial skin actually incorporates three separate sensing technologies the team has been working on. It consists of a stretchable electronic material made of biocompatible silicone that is as thin as a sticker, temperature-reactive coatings that transform in response to heat, and electronic memory cells designed to mimic the way the brain stores information.
“We’re sensing things all the time through the skin but our pain response only kicks in at a certain point, like when we touch something too hot or too sharp,” says lead researcher Professor Madhu Bhaskaran. “No electronic technologies have been able to realistically mimic that very human feeling of pain – until now. Our artificial skin reacts instantly when pressure, heat or cold reach a painful threshold. It’s a critical step forward in the future development of the sophisticated feedback systems that we need to deliver truly smart prosthetics and intelligent robotics.”
With further work, the team imagines the electronic skin could one day also be used as an option for non-invasive skin grafts.
A technique allows robots to determine whether they are able to lift a heavy box
Humanoid robots, those with bodies that resemble humans, could soon help people to complete a wide variety of tasks. Many of the tasks that these robots are designed to complete involve picking up objects of different shapes, weights and sizes.
While many of the humanoid robots developed to date are capable of picking up small and light objects, lifting bulky or heavy objects has often proved more challenging. In fact, if an object is too large or heavy, a robot might end up breaking or dropping it.
With this in mind, researchers at Johns Hopkins University and National University of Singapore (NUS) recently developed a technique that allows robots to determine whether or not they will be able to lift a heavy box with unknown physical properties. This technique, presented in a paper pre-published on arXiv, could enable the development of robots that can lift objects more efficiently, reducing the risk that they will pick up things that they cannot support or carry.
"We were particularly interested in how a humanoid robot can reason about the feasibility of lifting a box with unknown physical parameters," Yuanfeng Han, one of the researchers who carried out the study, told TechXplore. "To achieve such a complex task, the robot usually needs to first identify the physical parameters of the box, then generate a whole body motion trajectory that is safe and stable to lift up the box."
The process through which a robot generates motion trajectories that allow it to lift objects can be computationally demanding. Humanoid robots typically have a large number of degrees of freedom, and the motion their body makes to lift an object must satisfy several different constraints. This means that if a box is too heavy or its center of mass is too far from the robot, the robot will most likely be unable to complete the motion.
"Think about us humans, when we try to reason about whether we can lift up a heavy object, such as a dumbbell," Han explained. "We first interact with the dumbbell to get a certain feeling of the object. Then, based on our previous experience, we kind of know if it is too heavy for us to lift or not. Similarly, our method starts by constructing a trajectory table, which saves different valid lifting motions for the robot corresponding to a range of physical parameters of the box using simulations. Then the robot considers this table as the knowledge of its previous experience."
The technique developed by Han, in collaboration with his colleague Ruixin Li and his supervisor Gregory S. Chirikjian (Professor and Head of the Department of Mechanical Engineering at NUS) allows a robot to get a sense of the inertia parameters of a box after briefly interacting with it. Subsequently, the robot looks back at the trajectory table generated by the method and checks whether it includes a lifting motion that would allow it to lift a box with these estimated parameters.
If this motion or trajectory exists, then lifting the box is considered to be feasible and the robot can immediately complete the task. If it does not exist, then the robot considers the task beyond its capacity.
"Essentially, the trajectory table that our method constructs offline saves the valid whole-body lifting motion trajectories according to a box's range of inertia parameters," Han said. "Subsequently, we developed a physical-interaction based algorithm that helps the robot interact with the box safely and estimate the inertia parameters of the box."
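The table-lookup idea can be sketched in a few lines of Python. Everything here is invented for illustration: the torque limit, the feasibility check (a stand-in for the expensive whole-body simulation) and the parameter grid are not from the paper:

```python
MAX_TORQUE = 30.0  # invented actuator limit, in N*m

def is_liftable_in_simulation(mass, com_offset):
    # Stand-in for the offline whole-body simulation: the torque needed grows
    # with the box mass and with how far its centre of mass sits from the robot.
    required_torque = 9.81 * mass * (0.2 + com_offset)
    return required_torque <= MAX_TORQUE

def build_trajectory_table(masses, offsets):
    """Offline phase: store a valid lifting motion for each feasible parameter cell."""
    table = {}
    for m in masses:
        for d in offsets:
            if is_liftable_in_simulation(m, d):
                table[(m, d)] = f"trajectory(mass={m}, offset={d})"  # placeholder motion
    return table

def lift_is_feasible(table, est_mass, est_offset, masses, offsets):
    """Online phase: snap the estimated parameters to the nearest cell and look it up."""
    key = (min(masses, key=lambda m: abs(m - est_mass)),
           min(offsets, key=lambda d: abs(d - est_offset)))
    return key in table

masses = [2, 4, 6, 8, 10]       # kg
offsets = [0.0, 0.1, 0.2, 0.3]  # m
table = build_trajectory_table(masses, offsets)
print(lift_is_feasible(table, 3.8, 0.12, masses, offsets))  # light box, close to the body
print(lift_is_feasible(table, 9.7, 0.28, masses, offsets))  # heavy box, far centre of mass
```

The expensive simulation runs only once, offline; at run time the robot's estimated parameters reduce feasibility checking to a dictionary lookup.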
The new technique allows robots to rapidly determine whether they are able to complete a lifting-related task. It thus saves time and computational power, as it prevents robots from having to generate whole-body motions before every lifting attempt, even unsuccessful ones.
Han and his colleagues evaluated their approach in a series of tests using NAO, a well-known humanoid robot developed by SoftBank Robotics. In these trials, NAO quickly and effectively used the new technique to identify objects that were impossible or very difficult to lift. In the future, the same technique could be applied to other humanoid robots to make them more reliable and efficient in completing tasks that involve lifting large or heavy objects.
"Our method can significantly increase the working efficiency for practical pick-and-place tasks, especially for repeatable tasks," Han said. "In our future work, we plan to apply our approach to different objects or lifting tasks."
Eyes on the world: Drones change our point of view and our truths
Responsible AI Can Effectively Deploy Human-Centered Machine Learning Models
Artificial intelligence (AI) is quickly developing into a remarkably powerful technology with seemingly limitless applications. It has shown its capacity to automate routine tasks, such as the daily commute, while also augmenting human capability with new insight. Combining human imagination and creativity with the adaptability of machine learning is advancing our knowledge and understanding at a remarkable pace.
However, with great power comes great responsibility. AI raises concerns on numerous fronts because of its potentially disruptive impact. These apprehensions include workforce displacement, loss of privacy, potential biases in decision-making and lack of control over automated systems and robots. While these issues are significant, they are also addressable with the right planning, oversight and governance.
Many AI systems that come into contact with people will need to understand how people behave and what they want. This will make them more useful and also safer to use. There are at least two ways in which understanding humans benefits intelligent systems. First, the intelligent system must infer what a person wants. For a long time to come, we will design AI systems that get their instructions and goals from humans, yet people don't always say precisely what they mean, and misunderstanding a person's intent can result in perceived failure. Second, beyond simply failing to comprehend human speech or written language, even perfectly understood instructions can lead to failure if part of the instructions or goals is left implicit.
Human-centered AI also acknowledges that people can be just as inscrutable to intelligent systems. When we think of intelligent systems understanding humans, we usually think of natural language and speech processing: whether an intelligent system can respond appropriately to utterances. Natural language processing, speech processing and activity recognition are significant challenges in building helpful, intelligent systems. To be truly effective, AI and ML systems need a theory of mind about humans.
Responsible AI research is an emerging field that advocates for better practices and techniques in deploying machine learning models. The objective is to build trust while limiting potential risks, not only for the organizations deploying these models but also for the users they serve.
Responsible AI is a framework for bringing many of these basic practices together. It centers on ensuring the ethical, transparent and accountable use of AI technologies in a manner consistent with user expectations, organizational values and societal laws and norms. Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy. By providing clear rules of engagement, responsible AI allows companies under public and congressional scrutiny to realize the transformative potential of AI in a way that is both compelling and accountable.
Human-centric machine learning is one of the more significant concepts in the field to date. Leading institutions such as Stanford and MIT are setting up labs specifically to advance this science. MIT defines the concept as "the design, development and deployment of information systems that learn from and collaborate with humans in a deep, meaningful way."
The future of work is frequently depicted as being dominated by robotic machinery and armies of algorithms posing as people. In reality, however, AI adoption has largely been aimed at making processes more efficient, upgrading products and services and creating new ones, according to Deloitte's recent survey of corporate executives, who rated headcount reduction as their least important objective.
It is easy to come up with common-sense failures in robotics and autonomous agents. Suppose a robot goes to a pharmacy to pick up a prescription. Since the human is sick, that person would like the robot to return as quickly as possible. If the robot goes directly to the pharmacy, walks behind the counter, grabs the medication and heads back, it will have succeeded while minimizing execution time and cost. But we would also say it robbed the pharmacy, because it didn't take part in the social construct of exchanging money for the product.
Commonsense knowledge, the procedural form of which can act as a basis for a theory of mind when interacting with humans, can make human collaboration more natural. Even though ML and AI decision-making algorithms work differently from human decision-making, the behavior of such a system becomes more recognizable to people. It also makes interaction with people safer: it reduces common-sense goal failures, because the agent fills in an under-specified goal with commonsense procedural details, and an agent that acts according to a person's expectations will inherently avoid conflict with a person who is applying their theory of mind of human behavior to intelligent agents.
Artificial intelligence in radiology, for instance, can quickly draw attention to findings and highlight the far more subtle areas that might not be readily caught by the human eye. Responsible AI's human-centricity comes into play when doctors and patients, not machines, make the ultimate decision on treatment. Augmenting medical professionals with deep quantitative insight gives them invaluable data to factor into that decision.
By keeping humans in the loop, organizations can better determine the degree of automation and augmentation they need and control the ultimate impact of AI on their workforce. In turn, companies can greatly mitigate their risk and develop a deeper understanding of which kinds of situations may prove most challenging for their AI deployments and machine learning applications.
How Is Artificial Intelligence Used in Analytics?
What’s the Difference Between Robotics and Artificial Intelligence?
Is robotics part of AI? Is AI part of robotics? What is the difference between the two terms? We answer this fundamental question.
Robotics and artificial intelligence (AI) serve very different purposes. However, people often get them mixed up.
A lot of people wonder if robotics is a subset of artificial intelligence. Others wonder if they are the same thing.
Since the first version of this article, which we published back in 2017, the question has gotten even more confusing. The rise in the use of the word "robot" in recent years to mean any sort of automation has cast even more doubt on how robotics and AI fit together (more on this at the end of the article).
It's time to put things straight once and for all.
Are robotics and artificial intelligence the same thing?
The first thing to clarify is that robotics and artificial intelligence are not the same things at all. In fact, the two fields are almost entirely separate.
A Venn diagram of the two fields would look like this:
As you can see, there is one small area where the two fields overlap: Artificially Intelligent Robots. It is within this overlap that people sometimes confuse the two concepts.
To understand how these three terms relate to each other, let's look at each of them individually.
What is robotics?
Robotics is a branch of technology that deals with physical robots. Robots are programmable machines that are usually able to carry out a series of actions autonomously, or semi-autonomously.
In my opinion, there are three important factors which constitute a robot:
- Robots interact with the physical world via sensors and actuators.
- Robots are programmable.
- Robots are usually autonomous or semi-autonomous.
I say that robots are "usually" autonomous because some robots aren't. Telerobots, for example, are entirely controlled by a human operator but telerobotics is still classed as a branch of robotics. This is one example where the definition of robotics is not very clear.
It is surprisingly difficult to get experts to agree on exactly what constitutes a "robot." Some people say that a robot must be able to "think" and make decisions. However, there is no standard definition of "robot thinking." Requiring a robot to "think" suggests that it has some level of artificial intelligence but the many non-intelligent robots that exist show that thinking cannot be a requirement for a robot.
However you choose to define a robot, robotics involves designing, building and programming physical robots which are able to interact with the physical world. Only a small part of robotics involves artificial intelligence.
Example of a robot: Basic cobot
A simple collaborative robot (cobot) is a perfect example of a non-intelligent robot.
For example, you can easily program a cobot to pick up an object and place it elsewhere. The cobot will then continue to pick and place objects in exactly the same way until you turn it off. This is an autonomous function because the robot does not require any human input after it has been programmed. The task does not require any intelligence because the cobot will never change what it is doing.
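The control flow of such a pre-programmed pick-and-place cycle can be sketched as below. The `Cobot` class and its method names are hypothetical stand-ins for a vendor API; the point is that the sequence is fixed and repeats identically, with no decision-making anywhere:

```python
# Hypothetical controller API for illustration -- not any real vendor's SDK.
class Cobot:
    def __init__(self):
        self.log = []  # record of every commanded action

    def move_to(self, pose):
        self.log.append(("move", pose))

    def close_gripper(self):
        self.log.append(("grip",))

    def open_gripper(self):
        self.log.append(("release",))

def pick_and_place(robot, pick_pose, place_pose, cycles):
    # The entire "program": a fixed action sequence, repeated unchanged.
    for _ in range(cycles):
        robot.move_to(pick_pose)
        robot.close_gripper()
        robot.move_to(place_pose)
        robot.open_gripper()

bot = Cobot()
pick_and_place(bot, pick_pose=(0.4, 0.0, 0.1), place_pose=(0.0, 0.4, 0.1), cycles=3)
print(len(bot.log))  # 4 actions per cycle, 3 cycles
```

Nothing in this loop senses, learns or decides; that is exactly why no AI is needed.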
Most industrial robots are non-intelligent.
What is artificial intelligence?
Artificial intelligence (AI) is a branch of computer science. It involves developing computer programs to complete tasks that would otherwise require human intelligence. AI algorithms can tackle learning, perception, problem-solving, language-understanding and/or logical reasoning.
AI is used in many ways within the modern world. For example, AI algorithms are used in Google searches, Amazon's recommendation engine, and GPS route finders. Most AI programs are not used to control robots.
Even when AI is used to control robots, the AI algorithms are only part of the larger robotic system, which also includes sensors, actuators, and non-AI programming.
Often — but not always — AI involves some level of machine learning, where an algorithm is "trained" to respond to a particular input in a certain way by using known inputs and outputs. We discuss machine learning in our article Robot Vision vs Computer Vision: What's the Difference?
The key aspect that differentiates AI from more conventional programming is the word "intelligence." Non-AI programs simply carry out a defined sequence of instructions. AI programs mimic some level of human intelligence.
Example of a pure AI: AlphaGo
One of the most common examples of pure AI can be found in games. The classic example of this is chess, where the AI Deep Blue beat world champion Garry Kasparov in 1997.
A more recent example is AlphaGo, an AI which beat Lee Sedol, the world champion Go player, in 2016. There were no robotic elements to AlphaGo. The playing pieces were moved by a human who watched the AI's moves on a screen.
What are Artificially Intelligent Robots?
Artificially intelligent robots are the bridge between robotics and AI. These are robots that are controlled by AI programs.
Most robots are not artificially intelligent. Up until quite recently, all industrial robots could only be programmed to carry out a repetitive series of movements which, as we have discussed, do not require artificial intelligence. However, non-intelligent robots are quite limited in their functionality.
AI algorithms are necessary when you want to allow the robot to perform more complex tasks.
A warehousing robot might use a path-finding algorithm to navigate around the warehouse. A drone might use autonomous navigation to return home when it is about to run out of battery. A self-driving car might use a combination of AI algorithms to detect and avoid potential hazards on the road. These are all examples of artificially intelligent robots.
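As a concrete example of the first case, a warehouse path-finding algorithm can be as simple as a breadth-first search over a grid map. This is a generic textbook sketch, not any particular vendor's navigation stack; the floor plan is invented:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a warehouse floor plan (0 = free cell, 1 = shelf)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path  # BFS guarantees this is a shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

warehouse = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = shortest_path(warehouse, (0, 0), (2, 0))
print(path)
```

Real warehouse robots typically use weighted variants such as A*, but the principle (search a map for the best route) is the same.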
Example: Artificially intelligent cobot
You could extend the capabilities of a collaborative robot by using AI.
Imagine you wanted to add a camera to your cobot. Robot vision comes under the category of "perception" and usually requires AI algorithms.
Say that you wanted the cobot to detect the object it was picking up and place it in a different location depending on the type of object. This would involve training a specialized vision program to recognize the different types of objects. One way to do this is by using an AI algorithm called Template Matching, which we discuss in our article How Template Matching Works in Robot Vision.
In general, most artificially intelligent robots only use AI in one particular aspect of their operation. In our example, AI is only used in object detection. The robot's movements are not really controlled by AI (though the output of the object detector does influence its movements).
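To make the object-detection step concrete, here is a bare-bones version of template matching in plain Python: slide the template across the image and keep the position with the lowest sum-of-squared-differences score. Real systems (for example OpenCV's `matchTemplate`) use the same idea with normalised scores and optimised implementations; the tiny image and template here are invented:

```python
def match_template(image, template):
    """Return the top-left corner where the template best matches the image,
    using the sum of squared differences as the (lower-is-better) score."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# A 4x5 grayscale "image" containing the 2x2 pattern we are looking for.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [
    [9, 8],
    [7, 9],
]
print(match_template(image, template))  # best match at row 1, column 1
```

A cobot vision program would run a search like this on each camera frame and route the pick based on which template matched best.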
Where it all gets confusing…
As you can see, robotics and artificial intelligence are really two separate things.
Robotics involves building physical robots, whereas AI involves programming intelligence.
However, there is one area where everything has got rather confusing since I first wrote this article: software robots.
Why software robots are not robots
The term "software robot" refers to a type of computer program that operates autonomously to complete a virtual task. Examples include:
- Search engine "bots" — aka "web crawlers." These roam the internet, scanning websites and categorizing them for search.
- Robotic Process Automation (RPA) — These have somewhat hijacked the word "robot" in the past few years, as I explained in this article.
- Chatbots — These are the programs that pop up on websites and talk to you using a set of pre-written responses.
Software bots are not physical robots; they exist only within a computer. Therefore, they are not real robots.
Some advanced software robots may even include AI algorithms. However, software robots are not part of robotics.
Hopefully, this has clarified everything for you. But, if you have any questions at all please ask them in the comments.