Archive 24.03.2021


Automated Monitoring and Control for Water Purification and Storage System

With funding from the Tennessee Valley Authority, Watts Bar Utility District (WBUD) needed to streamline automation of its 32 tank and pump stations. System integrator Quality Controls LLC decided to use PLC, HMI, and SCADA products from CIMON to automate these facilities.

Researchers’ algorithm designs soft robots that sense

MIT researchers have developed a deep learning neural network to aid the design of soft-bodied robots, such as these iterations of a robotic elephant. Image: courtesy of the researchers

By Daniel Ackerman | MIT News Office

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”

The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

“You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
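To make the culling idea concrete, here is a minimal sketch, not the authors’ code, of the selection step it implies: track how often each body “particle” contributes across training trials, then keep the most-used particles as candidate sensor sites. The function name and the usage counts below are hypothetical.

import numpy as np

def suggest_sensor_sites(usage_counts, k):
    """Return the indices of the k most frequently used particles."""
    usage = np.asarray(usage_counts, dtype=float)
    # rank particles by how often the network relied on them, descending
    return np.argsort(usage)[::-1][:k]

# hypothetical usage counts for a 10-particle robot; sensors would be
# concentrated on particles 5, 7 and 2, where activity was highest
print(suggest_sensor_sites([3, 0, 12, 7, 1, 25, 2, 18, 0, 4], k=3))  # [5 7 2]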

By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”

“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”

This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.

Back to Robot Coding part 3: testing the EBB

In part 2 a few weeks ago I outlined a Python implementation of the ethical black box. I described the key data structure – a dictionary which serves as both a specification for the type of robot and the data structure used to deliver live data to the EBB. I also mentioned the other key robot-specific code:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):

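Based on the keys that getRobotData() fills in further down (botTime, jntDemands, jntAngles, lfSensors and irSensors), the spec dictionary for this simulated e-puck might look something like the sketch below. This is a reconstruction for reference only; part 2 defines the authoritative structure, and the robotName field is purely illustrative.

# Sketch of a spec dictionary for the simulated e-puck, reconstructed from
# the keys used in getRobotData() below (see part 2 for the real definition)
spec = {
    "robotName"  : "e-puck",         # illustrative field, not from the original
    "botTime"    : [0.0],            # robot (simulation) time, seconds
    "jntDemands" : [0.0, 0.0],       # motor commands for the two wheels
    "jntAngles"  : [0.0, 0.0],       # wheel joint angles, degrees
    "lfSensors"  : [0.0, 0.0, 0.0],  # 3 line-following sensors
    "irSensors"  : [0.0] * 8,        # 8 infrared proximity sensors
}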
Having reached this point I needed a robot – and a way of communicating with it – so that I could both write getRobotData(spec) and test the EBB. But how to do this? I’m working from home during lockdown, and my e-puck robots are all in the lab. Then I remembered that the excellent robot simulator V-REP (now called CoppeliaSim) has a pretty good e-puck model and some nice demo scenes. V-REP also offers multiple ways of communicating between simulated robots and external programs (see here). One of them – TCP/IP sockets – appeals to me as I’ve written sockets code many times, for both real-world and research applications. Then a stroke of luck: I found that a team at ENSTA Bretagne had written a simple demo which does more or less what I need – just not for the e-puck. So, first I got that demo running and figured out how it works, then used the same approach for a simulated e-puck and the EBB. Here is a video capture of the working demo.

So, what’s going on in the demo? The visible simulation views in the V-REP window show an e-puck robot following a black line which is blocked by both a potted plant and an obstacle constructed from 3 cylinders. The robot has two behaviours: line following and wall following. The EBB requests data from the e-puck robot once per second, and you can see those data in the Python shell window. Reading from left to right you will see first the EBB date and time stamp, then robot time botT, then the 3 line-following sensors lfSe, followed by the 8 infrared proximity sensors irSe. The final two fields show the joint (i.e. wheel) angles jntA, in degrees, then the motor commands jntD. By watching these values as the robot follows its line and negotiates the two obstacles you can see how the line and infrared sensor values change, resulting in updated motor commands.

Here is the code – which is custom written both for this robot and the means of communicating with it – for requesting data from the robot.


import math
import socket
import struct

# server_address_port = (host, port) of the simulated robot's socket server
#   (defined elsewhere in the EBB code to match the V-REP scene)

# Get data from the robot and store it in spec[]
# while returning one of the following result codes
ROBOT_DATA_OK = 0
CANNOT_CONNECT = 1
SOCKET_ERROR = 2
BAD_DATA = 3

def getRobotData(spec):
    # This function connects, via TCP/IP, to an ePuck robot running in V-REP

    # create a TCP/IP socket and connect it to the simulated robot
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect(server_address_port)
    except:
        return CANNOT_CONNECT
    sock.settimeout(0.1) # set connection timeout

    # pack a dummy packet that will provoke data in response
    #   this is, in effect, a 'ping' to ask for a data record
    strSend = struct.pack('fff', 1.0, 1.0, 1.0)
    sock.sendall(strSend) # and send it to V-REP
    # wait for data back from V-REP
    #   expect a packet with 1 time, 2 joints, 2 motors, 3 line sensors, 8 irSensors
    #   all floats because V-REP
    #   total packet size = 16 x 4 = 64 bytes
    data = b''
    nch_rx = 64 # expect this many bytes from V-REP
    try:
        while len(data) < nch_rx:
            data += sock.recv(nch_rx)
    except:
        sock.close()
        return SOCKET_ERROR
    # unpack the received data
    if len(data) == nch_rx:
        # V-REP packs and unpacks in floats only, so unpack 16 floats
        vrx = struct.unpack('ffffffffffffffff', data)
        # now move data from vrx[] into spec[], while rounding the floats
        spec["botTime"] = [ round(vrx[0],2) ]
        spec["jntDemands"] = [ round(vrx[1],2), round(vrx[2],2) ]
        spec["jntAngles"] = [ round(vrx[3]*180.0/math.pi,2),
                              round(vrx[4]*180.0/math.pi,2) ]
        spec["lfSensors"] = [ round(vrx[5],2), round(vrx[6],2), round(vrx[7],2) ]
        for i in range(8):
            spec["irSensors"][i] = round(vrx[8+i],3)
        result = ROBOT_DATA_OK
    else:
        result = BAD_DATA
    sock.close()
    return result

The structure of this function is very simple: first create a socket and connect it, then make a dummy packet and send it to V-REP to request EBB data from the robot. Then, when a data packet arrives, unpack it into spec[]. The most complex part of the code is the data wrangling.
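For completeness, the once-per-second polling loop described above might look roughly like this sketch. Here logEBBRecord() is a hypothetical placeholder for whatever the EBB does with each record; it is not part of the code from part 2.

import time
from datetime import datetime

# Sketch of the EBB polling loop: request a record every second and store it
while True:
    result = getRobotData(spec)
    if result == ROBOT_DATA_OK:
        logEBBRecord(datetime.now(), spec)  # timestamp plus latest robot data
    else:
        print("getRobotData() failed with code", result)
    time.sleep(1.0)  # poll the robot once per second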

Would a real EBB collect data in this way? Well if the EBB is embedded in the robot then probably not. Communication between the robot controller and the EBB might be via ROS messages, or even more directly, by – for instance – allowing the EBB code to access a shared memory space which contains the robot’s sensor inputs, command outputs and decisions. But an external EBB, either running on a local server or in the cloud, would most likely use TCP/IP to communicate with the robot, so getRobotData() would look very much like the example here.
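For the embedded, shared-memory variant mentioned above, a minimal sketch (my assumption, not something from the original post) using Python’s multiprocessing.shared_memory could reuse the same 16-float record that the TCP/IP version receives. The block name "ebb_robot_state" is hypothetical.

import struct
from multiprocessing import shared_memory

# Attach to a shared block that the robot controller is assumed to keep
# updated with the same 16 floats sent over TCP/IP in the example above
N_FLOATS = 16
shm = shared_memory.SharedMemory(name="ebb_robot_state")  # created by the controller
record = struct.unpack("16f", bytes(shm.buf[:N_FLOATS * 4]))
shm.close()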

Underwater swimming robot responds with feedback from soft ‘lateral line’

A team of scientists from the Max Planck Institute for Intelligent Systems (MPI-IS) in Germany, from Seoul National University in Korea and from Harvard University in the US successfully developed a predictive model and closed-loop controller of a soft robotic fish, designed to actively adjust its undulation amplitude to changing flow conditions and other external disturbances. Their work "Modeling and Control of a Soft Robotic Fish with Integrated Soft Sensing" was published in Wiley's Advanced Intelligent Systems journal, in a special issue on "Energy Storage and Delivery in Robotic Systems."

Expressing some doubts: Comparative analysis of human and android faces could lead to improvements

Researchers from the Graduate School of Engineering and Symbiotic Intelligent Systems Research Center at Osaka University used motion capture cameras to compare the expressions of android and human faces. They found that the mechanical facial movements of the robots, especially in the upper regions, did not fully reproduce the curved flow lines seen in the faces of actual people. This research may lead to more lifelike and expressive artificial faces.

Robot Race: The World’s Top 10 automated countries

The average robot density in the manufacturing industry hit a new global record of 113 units per 10,000 employees. By region, Western Europe (225 units) and the Nordic European countries (204 units) have the most automated production, followed by North America (153 units) and South East Asia (119 units).

The world’s top 10 most automated countries are: Singapore (1), South Korea (2), Japan (3), Germany (4), Sweden (5), Denmark (6), Hong Kong (7), Chinese Taipei (8), USA (9) and Belgium and Luxembourg (10). This is according to the latest World Robotics statistics, issued by the International Federation of Robotics (IFR).

“Robot density is the number of operational industrial robots relative to the number of workers,” says Milton Guerry, President of the International Federation of Robotics. “This level measurement allows comparisons of countries with different economic sizes in the dynamic automation race over time.”
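As a quick illustration of that definition (a sketch, not part of the IFR release), robot density is simply the operational robot stock scaled to 10,000 employees; the workforce figure below is hypothetical and chosen only to show the arithmetic.

# Robot density = operational industrial robots per 10,000 manufacturing employees
def robot_density(robots, employees):
    return robots / employees * 10_000

# hypothetical figures for illustration: 9,180 robots across 100,000 employees
# would correspond to a density of 918 units per 10,000 employees
print(robot_density(9_180, 100_000))  # 918.0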

The country with the highest robot density by far remains Singapore, with 918 units per 10,000 employees in 2019. The electronics industry, especially semiconductors and computer peripherals, is the primary customer of industrial robots in Singapore, with a 75% share of the total operational stock.

South Korea comes second with 868 units per 10,000 employees in 2019. Korea is a market leader in LCD and memory chip manufacturing, with companies such as Samsung and LG at the top, and is also a major production site for motor vehicles and for the manufacturing of batteries for electric cars.

Japan (364 robots per 10,000 employees) and Germany (346 units) rank third and fourth respectively. Japan is the world’s predominant robot manufacturing country – where even robots assemble robots: 47% of global robot production is made in Japan. The electrical and electronics industry has a share of 34%, the automotive industry 32%, and the metal and machinery industry 13% of the operational stock. Germany is by far the largest robot market in Europe, with 38% of Europe’s industrial robots operating in German factories. Robot density in the German automotive industry is among the highest in the world. Employment in this sector rose continuously from 720,000 people in 2010 to almost 850,000 people in 2019.

Sweden remains in 5th position with a robot density of 274 units, with 35% of its operational stock in the metal industry and another 35% in the automotive industry.

Robot density in the United States increased to 228 robots per 10,000 employees. In 2019, the US car market was again the second largest in the world, following China, with the second largest production volume of cars and light vehicles. Both the USA and China are considered highly competitive markets for car manufacturers worldwide.

The development of robot density in China continues dynamically: Today, China’s robot density in the manufacturing industry ranks 15th worldwide. Next to car production, China is also a major producer of electronic devices, batteries, semiconductors, and microchips.


Graphs and press releases in other languages are available for download in the original article.

First collaborative robot to work with vehicles in motion

The Ph.D. thesis by Daniel Teso-Fernández de Betoño of the UPV/EHU Faculty of Engineering in Vitoria-Gasteiz has resulted in a mobile, collaborative platform capable of performing tasks in motion at the Mercedes-Benz plant in the capital of Alava. The research opens up a new field for improving the ergonomics of these workstations and for the robot and human to collaborate by performing tasks together.

System detects errors when medication is self-administered

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and it flags potential errors in the patient’s administration method. | Image: courtesy of the researchers

From swallowing pills to injecting insulin, patients frequently administer their own medication. But they don’t always get it right. Improper adherence to doctors’ orders is commonplace, accounting for thousands of deaths and billions of dollars in medical costs annually. MIT researchers have developed a system to reduce those numbers for some types of medications.

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and flags potential errors in the patient’s administration method. “Some past work reports that up to 70% of patients do not take their insulin as prescribed, and many patients do not use inhalers properly,” says Dina Katabi, the Andrew and Erna Viterbi Professor at MIT, whose research group has developed the new solution. The researchers say the system, which can be installed in a home, could alert patients and caregivers to medication errors and potentially reduce unnecessary hospital visits.

The research appears today in the journal Nature Medicine. The study’s lead authors are Mingmin Zhao, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Kreshnik Hoti, a former visiting scientist at MIT and current faculty member at the University of Prishtina in Kosovo. Other co-authors include Hao Wang, a former CSAIL postdoc and current faculty member at Rutgers University, and Aniruddh Raghu, a CSAIL PhD student.

Some common drugs entail intricate delivery mechanisms. “For example, insulin pens require priming to make sure there are no air bubbles inside. And after injection, you have to hold for 10 seconds,” says Zhao. “All those little steps are necessary to properly deliver the drug to its active site.” Each step also presents opportunity for errors, especially when there’s no pharmacist present to offer corrective tips. Patients might not even realize when they make a mistake — so Zhao’s team designed an automated system that can.

Their system can be broken down into three broad steps. First, a sensor tracks a patient’s movements within a 10-meter radius, using radio waves that reflect off their body. Next, artificial intelligence scours the reflected signals for signs of a patient self-administering an inhaler or insulin pen. Finally, the system alerts the patient or their health care provider when it detects an error in the patient’s self-administration.

Wireless sensing technology could help improve patients’ technique with inhalers and insulin pens. | Image: Christine Daniloff, MIT

The researchers adapted their sensing method from a wireless technology they’d previously used to monitor people’s sleeping positions. It starts with a wall-mounted device that emits very low-power radio waves. When someone moves, they modulate the signal and reflect it back to the device’s sensor. Each unique movement yields a corresponding pattern of modulated radio waves that the device can decode. “One nice thing about this system is that it doesn’t require the patient to wear any sensors,” says Zhao. “It can even work through occlusions, similar to how you can access your Wi-Fi when you’re in a different room from your router.”

The new sensor sits in the background at home, like a Wi-Fi router, and uses artificial intelligence to interpret the modulated radio waves. The team developed a neural network to key in on patterns indicating the use of an inhaler or insulin pen. They trained the network to learn those patterns by performing example movements, some relevant (e.g. using an inhaler) and some not (e.g. eating). Through repetition and reinforcement, the network successfully detected 96 percent of insulin pen administrations and 99 percent of inhaler uses.
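As a rough illustration only (the article does not describe the network’s architecture), a classifier of this kind could be sketched as a small 1D convolutional network that maps a window of reflected-signal features to one of a few movement classes; every layer size and class label below is an assumption.

import torch
import torch.nn as nn

class MovementClassifier(nn.Module):
    """Toy sketch: classify a window of radio-reflection features into
    {inhaler use, insulin-pen use, unrelated movement}."""
    def __init__(self, n_features=32, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time dimension
            nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x shape: (batch, n_features, time_steps)
        return self.net(x)

model = MovementClassifier()
logits = model(torch.randn(8, 32, 100))  # 8 windows of 100 time steps each
print(logits.shape)  # torch.Size([8, 3])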

Once it mastered the art of detection, the network also proved useful for correction. Every proper medicine administration follows a similar sequence — picking up the insulin pen, priming it, injecting, etc. So, the system can flag anomalies in any particular step. For example, the network can recognize if a patient holds down their insulin pen for five seconds instead of the prescribed 10 seconds. The system can then relay that information to the patient or directly to their doctor, so they can fix their technique.

“By breaking it down into these steps, we can not only see how frequently the patient is using their device, but also assess their administration technique to see how well they’re doing,” says Zhao.

The researchers say a key feature of their radio wave-based system is its noninvasiveness. “An alternative way to solve this problem is by installing cameras,” says Zhao. “But using a wireless signal is much less intrusive. It doesn’t show people’s appearance.”

He adds that their framework could be adapted to medications beyond inhalers and insulin pens — all it would take is retraining the neural network to recognize the appropriate sequence of movements. Zhao says that “with this type of sensing technology at home, we could detect issues early on, so the person can see a doctor before the problem is exacerbated.”
