Category: robots in business


The three AI adoption strategies

AI comes in many shapes and sizes. That applies to the use cases, the underlying technologies and the approaches to adopting AI in your organization. As more organizations look to adopt AI, leaders in all industries increasingly ask for tangible frameworks for understanding the technology from a business perspective.

 

Some of the key questions asked by leaders are simple. How much time and money is required to adopt AI and solve business problems with it, and what returns do we get for those efforts? These are more than reasonable questions, but answering them has been difficult for two reasons. First, the answers have been a moving target: with the technology developing exponentially, yesterday's answers seem antique today. Second, the intangible and exploratory nature of AI has made it hard to provide such answers at all.

 

But as AI has matured as a technology and been packaged into products and ready-to-use solutions, these questions can finally be answered. The products and solutions come at different levels of abstraction, but they are nevertheless ready to be applied to business problems without much hassle.

 

The three main AI approaches

 

To make the efforts and outcomes of AI easy to understand, it can be divided into three core approaches: Off-the-shelf AI, AutoAI and Custom AI. The idea is simple. AI has reached a point where some solutions are ready to use out of the box, while others need a lot of work before they can be applied. Each approach comes with its own benefits and drawbacks, so the trick is to understand these properties and know when to apply which kind. These core AI adoption strategies provide a more concrete foundation for predicting costs, risks and returns when applying AI.

 

 

Off-the-shelf-AI



Some AI solutions are ready to use out of the box and need little to no adjustment. Examples include Siri on your iPhone, invoice-capture software or speech-to-text services. These solutions take minutes to get started with, and the business model is often AI-as-a-Service, keeping initial investments low. These services typically use pay-per-use pricing, which implies low risk. The challenge, of course, is that you get what you get and you shouldn't get upset: the options for adjusting the solution to your business problem are usually as low as the costs. More and more of these AI services are blooming in the AI ecosystem, with large cloud providers such as Google and Microsoft taking the lead.
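As a concrete sketch of the "minutes to get started" claim, this is roughly what consuming such a service can look like, here using Google's Cloud Speech-to-Text Python client library (the bucket URI is a placeholder, and the exact call details may differ by version):

from google.cloud import speech

# Off-the-shelf AI: one client call against a pre-trained cloud model,
# billed pay-per-use. The bucket URI below is a placeholder.
client = speech.SpeechClient()
audio = speech.RecognitionAudio(uri="gs://my-bucket/meeting.flac")
config = speech.RecognitionConfig(language_code="en-US")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)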

 

AutoAI



Also known by the more technical name AutoML, this is the hybrid approach: it gives you the freedom to shape the AI as you wish, to a certain extent, without having to reinvent the wheel. With AutoAI, a business can take its own data, such as documents, customer data or even pictures of products. This data is then used to train AIs in pre-made environments that can pick the right algorithms for the job and deploy the result, ready to use, in the cloud. Acquiring data can be costly, so AutoAI does require some effort, but at least it rarely requires a small army of data scientists. The drawback is the inflexibility inherent in standardized tools. AutoAI will also struggle when you are aiming for an AI with the highest possible accuracy.
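To illustrate the core idea, and not any particular vendor's product, here is a toy Python sketch of what AutoAI platforms automate: trying several candidate algorithms on your own data and keeping the best performer. Real platforms add preprocessing, hyperparameter search and one-click cloud deployment on top of this.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # stand-in for "your own data"

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(),
    "svm": SVC(),
}

# Score every candidate with cross-validation and pick the winner.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"selected model: {best} (accuracy {scores[best]:.3f})")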

 

Custom AI


With Custom AI, almost everything is built from scratch. It is a job for data scientists and machine learning engineers, and more a task for R&D than for any other place in the organization. The Custom AI approach is usually the weapon of choice when extremely high accuracy is required. Everything can be built and the possibilities are endless. This also usually means months or even years of work and experiments: costly and time consuming. As AutoAI and Off-the-shelf AI become more available and advanced, Custom AI is most suitable for companies building AI solutions that compete with other AI solutions. With all that extra effort you might gain the small edge that wins you the market.
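For a sense of scale: even this bare-bones PyTorch training loop, sketched here with random stand-in data, is only the starting point of a Custom AI project. Data pipelines, architecture search, evaluation and deployment all still have to be built around it by your own team.

import torch
import torch.nn as nn

# Minimal custom model and training loop -- everything beyond this
# (data, architecture, tuning, serving) is your responsibility too.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in data; a real project spends months here instead.
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()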


A final note

When solving problems with AI, the usual approach is to advance towards Off-the-shelf AI and AutoAI. Even more likely, a combination of the two will become the favorite choice for many organizations adopting AI.

 

The concept behind these approaches is not unique to AI. Almost all other technologies have been through the same natural progression, and now it is AI's turn. It is a sign of maturity: AI has become public property, no longer hidden behind the ivy walls of the top universities.

 

Of course, these approaches are not set in stone, and the boundaries between them are fluid. But applying this framework when discussing AI adoption helps turn the almost magical aura of artificial intelligence into something closer to a tool in the business toolbox. And that is where the value starts mounting.

A robotic ball with playful intentions. What if you would use this?

Imagine a ball. When you step closer, the ball rolls away. When you try to catch him, he escapes. This is Fizzy, an autonomous robotic ball that is programmed to play with children. He is ambiguous: he does not like to be captured, but he does need attention. Little wheels inside the ball make sure that his movement is unconstrained and facilitate his playful character.

In this series of articles, we take robot innovations out of their test lab and bring them to a randomly selected workplace in the outside world. It turns out that Fizzy could flourish in cafés and restaurants, where cheerful attentiveness is a much sought-after quality.

‘Children learn through play.’ With this truism as a starting point, researcher Boudewijn Boon was asked to enhance the benefits of play for children in care at the Máxima hospital in the southeast of the Netherlands. During his research, he found that one thing is essential to playing: it has to be intuitive. This means that all structured approaches to stimulating play must eventually lead to more unstructured and spontaneous behaviour. Such a beautiful paradox. Who would have thought that the golden idea would be found in robotics?

“Children are invited to chase, explore and imagine with this robotic ball, turning a sterile and quiet hospital into a world of playing”

Boudewijn Boon, researcher at the TU Delft

Boon found three core principles for initiating healthy play. First, it must spread out spatially, so kids will explore. The second imperative is for children to use their entire body. And thirdly, there is the need to introduce randomness and spontaneity in the structured environment of a hospital.

Fizzy ticks each box almost perfectly. This autonomous robotic ball does not like to be captured, so children chase after him. Sometimes he begs for attention, which can spark games of hide and seek. All this creates freedom: a big contrast with the structured life that children know during their illness. Boon says: “Children are invited to chase, explore and imagine with this robotic ball; turning a sterile and quiet hospital into a world of play”.

What’s inside a Fizzy. By TU DELFT© Thierry Schut and Guus Schoonewille
The Fizzy team at the faculty of Industrial Design Engineering at Delft University of Technology. By TU DELFT© Thierry Schut and Guus Schoonewille
Little wheels enable unconstrained and playful movement. By TU DELFT© Thierry Schut and Guus Schoonewille

This application of robotics moves us. But at RoboHouse, we also know that innovation often moves in mysterious ways. Once an invention is sold in stores, people may buy it for purposes that researchers could never predict. So we took Fizzy and traveled to a workplace, to seek out someone with a practical focus and an open mind. A professional, but outside of robotics. We asked her: “What would happen if you would use this?”.

Lisan Peddemors
Sandwich maker at brasserie Barbaar in Delft

Lisan Peddemors considers our question. She has been working as a chef for several years in a cosy little brasserie in Delft, called Barbaar. We know her from making sugar-sweet blondies and beautiful beetroot-hummus sandwiches. But today, she helps us to anticipate the future.

 

Idea #1: Cheery and cheap kitchen cleaning

 

She would not trust a little robot to do any real cooking, she says: “Cooking is so subjective; whether you add some more salt depends on thousands of variables”. Her first thoughts go out to all the things she would prefer to no longer do, such as cleaning. The robot could be an excellent solution for that, with some minor adaptations. If you let it roll through soap and water first, Fizzy could clean the parts of the kitchen that Lisan cannot get to. Or, when it is busy and there is a lot of trash, our little friend could take it out.

“Perhaps I could use it as a cheap kitchen assistant that’s never grumpy,” she says. “But then I wouldn’t have anyone to talk to.”

If the ball truly had a mind of its own, it could also be a nuisance in the kitchen. Lisan explains that timing is everything: when it is busy, cooks dance around each other like partners in a tango. Could a robot ever fit in? But then it strikes her: the ball does not have to do everything right, it only has to sense when something is wrong.

 

Idea #2: The subtle alarm

 

Lisan says: “Suppose, when I’m getting something out of the cooler or having a coffee at the bar, Fizzy would give me a little notification that something is burning or a timer is done. Gently, without alarming any customers or colleagues. This cute little ball would bump against my leg and I would know I had to rush back to the kitchen!”

Fizzy, the discreet warning system with a cute character. That’s how a restaurant chef looks at the robotic ball that Boudewijn Boon and his collaborators are developing. Where Boon sees a robot that adds spontaneity to the sterility of the hospital, Lisan sees a solution for distraction. Quite a difference, but that’s what happens when robotics enters the workplace – we are bound to be surprised, and sometimes maybe even delighted.



How tiny machines become capable of learning

Living organisms, from bacteria to animals and humans, can perceive their environment and process, store and retrieve this information. They learn to react to later situations with appropriate actions. A team of physicists at Leipzig University led by Professor Frank Cichos, in collaboration with colleagues at Charles University Prague, has developed a method for giving tiny artificial microswimmers a certain ability to learn using machine learning algorithms. They recently published a paper on this topic in the journal Science Robotics.

Electronics-free DraBot dragonfly signals environmental disruptions

Engineers at Duke University have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.

Automated Monitoring and Control for Water Purification and Storage System

With funding from the Tennessee Valley Authority, Watts Bar Utility District (WBUD) needed to streamline automation of its 32 tank and pump stations. System integrator Quality Controls LLC decided to use PLC, HMI, and SCADA products from CIMON to automate these facilities.

Researchers’ algorithm designs soft robots that sense

MIT researchers have developed a deep learning neural network to aid the design of soft-bodied robots, such as these iterations of a robotic elephant. Image: courtesy of the researchers

By Daniel Ackerman | MIT News Office

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”

The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

“You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
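As a rough illustration of that culling loop, and emphatically not the authors' actual architecture, consider this simplified Python sketch, where "importance" is just how strongly a least-squares model weights each particle's strain signal for the task:

import numpy as np

# Highly simplified sketch of the culling idea -- not the MIT network.
rng = np.random.default_rng(0)
n_particles, n_samples = 100, 500

strain = rng.normal(size=(n_samples, n_particles))  # per-particle strain inputs
target = strain[:, :5].sum(axis=1)                  # toy task using 5 particles

active = np.arange(n_particles)
for _ in range(4):
    # fit the task from the currently active particles
    w, *_ = np.linalg.lstsq(strain[:, active], target, rcond=None)
    # keep the most heavily used half; cull the rest from the next round
    keep = np.abs(w).argsort()[len(active) // 2:]
    active = active[keep]

print("suggested sensor sites (particle indices):", np.sort(active))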

By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”

“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”

This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.

Back to Robot Coding part 3: testing the EBB

In part 2 a few weeks ago I outlined a Python implementation of the ethical black box. I described the key data structure – a dictionary which serves as both specification for the type of robot, and the data structure used to deliver live data to the EBB. I also mentioned the other key robot specific code:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):
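
For reference, here is a sketch of what that dictionary might hold for the e-puck used below. The keys are the ones that appear in getRobotData(); the exact format is the one defined in part 2.

# Sketch of the spec dictionary for the simulated e-puck, using the
# keys that getRobotData() below fills in.
spec = {
    "botTime":    [0.0],            # robot clock, seconds
    "jntDemands": [0.0, 0.0],       # motor commands, two wheels
    "jntAngles":  [0.0, 0.0],       # wheel joint angles, degrees
    "lfSensors":  [0.0, 0.0, 0.0],  # line following sensors
    "irSensors":  [0.0] * 8,        # infra red proximity sensors
}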

Having reached this point I needed a robot – and a way of communicating with it – so that I could both write getRobotData(spec)  and test the EBB. But how to do this? I’m working from home during lockdown, and my e-puck robots are all in the lab. Then I remembered that the excellent robot simulator V-REP (now called CoppeliaSim) has a pretty good e-puck model and some nice demo scenes. V-REP also offers multiple ways of communicating between simulated robots and external programs (see here). One of them – TCP/IP sockets – appeals to me as I’ve written sockets code many times, for both real-world and research applications. Then a stroke of luck: I found that a team at Ensta-Bretagne had written a simple demo which does more or less what I need – just not for the e-puck. So, first I got that demo running and figured out how it works, then used the same approach for a simulated e-puck and the EBB. Here is a video capture of the working demo.

So, what’s going on in the demo? The visible simulation views in the V-REP window show an e-puck robot following a black line which is blocked by both a potted plant and an obstacle constructed from 3 cylinders. The robot has two behaviours: line following and wall following. The EBB requests data from the e-puck robot once per second, and you can see those data in the Python shell window. Reading from left to right you will see first the EBB date and time stamp, then robot time botT, then the 3 line following sensors lfSe, followed by the 8 infra red proximity sensors irSe. The final two fields show the joint (i.e. wheel) angles jntA, in degrees, then the motor commands jntD. By watching these values as the robot follows its line and negotiates the two obstacles you can see how the line and infra red sensor values change, resulting in updated motor commands.

Here is the code – which is custom written both for this robot and the means of communicating with it – for requesting data from the robot.


# Get data from the robot and store it in spec[]
# while returning one of the following result codes
import math
import socket
import struct

ROBOT_DATA_OK  = 0
CANNOT_CONNECT = 1
SOCKET_ERROR   = 2
BAD_DATA       = 3

# address and port of the simulated robot (example values --
# adjust to match the port opened by the V-REP scene)
server_address_port = ("127.0.0.1", 50001)

def getRobotData(spec):
    # This function connects, via TCP/IP, to an e-puck robot running in V-REP

    # create a TCP/IP socket and connect it to the simulated robot
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect(server_address_port)
    except OSError:
        return CANNOT_CONNECT
    sock.settimeout(0.1) # set connection timeout

    # pack a dummy packet that will provoke data in response
    #   this is, in effect, a 'ping' to ask for a data record
    strSend = struct.pack('fff', 1.0, 1.0, 1.0)
    sock.sendall(strSend) # and send it to V-REP
    # wait for data back from V-REP
    #   expect a packet with 1 time, 2 joints, 2 motors, 3 line sensors, 8 irSensors
    #   all floats because V-REP
    #   total packet size = 16 x 4 = 64 bytes
    data = b""
    nch_rx = 64 # expect this many bytes from V-REP
    try:
        while len(data) < nch_rx:
            data += sock.recv(nch_rx)
    except OSError:
        sock.close()
        return SOCKET_ERROR
    # unpack the received data
    if len(data) == nch_rx:
        # V-REP packs and unpacks in floats only so
        vrx = struct.unpack('16f', data)
        # now move data from vrx[] into spec[], while rounding the floats
        spec["botTime"] = [ round(vrx[0],2) ]
        spec["jntDemands"] = [ round(vrx[1],2), round(vrx[2],2) ]
        spec["jntAngles"] = [ round(vrx[3]*180.0/math.pi,2),
                              round(vrx[4]*180.0/math.pi,2) ]
        spec["lfSensors"] = [ round(vrx[5],2), round(vrx[6],2), round(vrx[7],2) ]
        for i in range(8):
            spec["irSensors"][i] = round(vrx[8+i],3)
        result = ROBOT_DATA_OK
    else:
        result = BAD_DATA
    sock.close()
    return result

The structure of this function is very simple: first create a socket then open it, then make a dummy packet and send it to V-REP to request EBB data from the robot. Then, when a data packet arrives, unpack it into spec. The most complex part of the code is data wrangling.

Would a real EBB collect data in this way? Well if the EBB is embedded in the robot then probably not. Communication between the robot controller and the EBB might be via ROS messages, or even more directly, by – for instance – allowing the EBB code to access a shared memory space which contains the robot’s sensor inputs, command outputs and decisions. But an external EBB, either running on a local server or in the cloud, would most likely use TCP/IP to communicate with the robot, so getRobotData() would look very much like the example here.
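For instance, an embedded EBB on a ROS robot might fill spec[] from subscribed topics rather than a socket. The sketch below is illustrative only: the topic names and the one-Range-topic-per-sensor layout are assumptions, not a published interface.

import math
import rospy
from sensor_msgs.msg import JointState, Range

# minimal stand-in for the spec dictionary from part 2
spec = {"jntAngles": [0.0, 0.0], "irSensors": [0.0] * 8}

def jointCallback(msg):
    # convert joint positions from radians to degrees, as getRobotData() does
    spec["jntAngles"] = [round(p * 180.0 / math.pi, 2) for p in msg.position]

def irCallback(msg, i):
    spec["irSensors"][i] = round(msg.range, 3)

rospy.init_node("ebb_logger")
rospy.Subscriber("joint_states", JointState, jointCallback)
for i in range(8):
    # assumed topic naming -- one Range topic per infra red sensor
    rospy.Subscriber("ir_sensor_%d" % i, Range, irCallback, callback_args=i)
rospy.spin()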

Underwater swimming robot responds with feedback from soft ‘lateral line’

A team of scientists from the Max Planck Institute for Intelligent Systems (MPI-IS) in Germany, from Seoul National University in Korea and from Harvard University in the US successfully developed a predictive model and closed-loop controller of a soft robotic fish, designed to actively adjust its undulation amplitude to changing flow conditions and other external disturbances. Their work "Modeling and Control of a Soft Robotic Fish with Integrated Soft Sensing" was published in Wiley's Advanced Intelligent Systems journal, in a special issue on "Energy Storage and Delivery in Robotic Systems."
