Archive 30.06.2017


Can we test robocars the way we tested regular cars?

I’ve written a few times that perhaps the biggest unsolved problem in robocars is how to know we have made them safe enough. While most people think of that in terms of government certification, the truth is that the teams building the cars are very focused on this and know more about it than any regulator, yet they still don’t know enough. The real challenge will be convincing your board of directors that the car is safe enough to release, because if it is not, releasing it could ruin the company, at least if it’s a big company with a reputation.

We don’t even have a good definition of what “safe enough” is, though most people roughly take it to mean “a safety record superior to the average human.” Some think it should be much more; few think it should be less. Tesla, now with the backing of the NTSB, has noted that their Autopilot system, combined with a mix of mostly attentive and some inattentive humans, may have a record superior to the average human, even though with the inattentive humans it is worse.

Last week I attended a conference in Stuttgart devoted to robocar safety testing, part of a larger auto show including an auto testing show. It was interesting to see the main auto testing show — scores of expensive and specialized machines and tools that subject cars to wear and tear, slamming doors thousands of times, baking the surfaces, rattling and vibrating everything. And testing the electronics, too.

In Europe, the focus of testing is very strongly on making sure you are compliant with standards and regulations. That’s true in the USA too, but not quite as much. It was in Europe some time ago that I learned the word “homologation,” which names this process.


There is a lot to be learned from the previous regimes of testing. They have built a lot of tools and learned many techniques. But robocars are different beasts, and will fail in different ways. They will definitely not fail the way human drivers do, where small things are always going wrong and an accident happens when two or three go wrong at once. The conference included a lot of people working on simulation, which I have been promoting for many years. The one good thing in the NHTSA regulations — the open public database of all incidents — may vanish in the new rules, and it would have fed a great simulator: the companies making the simulators (and the academic world) would have put every incident into a shared simulator so every new car could test itself in every known problem situation.

Still, we will see lots of simulators full of scenarios, and also ways to parameterize them. That means that instead of just testing how a car behaves if somebody cuts it off, you test what it does if it gets cut off with a gap of 1cm, or 10cm, or 1m, or 2m, and by different types of vehicles, and by two at once, etc. etc. etc. The nice thing about computers is you can test just about every variation you can think of, and test it in every road situation and every type of weather, at least if your simulator is good enough.
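In code, such a parameter sweep might look like the sketch below. It is purely illustrative: runCutInScenario is a hypothetical stand-in for whatever interface a particular simulator exposes.

%% Sketch of a parameterized cut-in test sweep (illustrative only)
% runCutInScenario is a hypothetical simulator call that returns true
% if the car handled the scenario without a safety incident.
gaps         = [0.01 0.1 0.5 1 2];              % cut-in gap in meters
vehicleTypes = {'car', 'truck', 'motorcycle'};
weathers     = {'dry', 'rain', 'fog'};

failures = {};
for g = gaps
    for v = vehicleTypes
        for w = weathers
            if ~runCutInScenario(g, v{1}, w{1})
                failures{end+1} = sprintf('gap=%.2fm, %s, %s', g, v{1}, w{1}); %#ok<AGROW>
            end
        end
    end
end
fprintf('%d of %d scenario variants failed\n', numel(failures), ...
    numel(gaps)*numel(vehicleTypes)*numel(weathers));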

Yoav Hollander, whom I met when he came as a student to the program at Singularity U, wrote a report on the approaches to testing he saw at the conference that contains useful insights, particularly on this question of new and old thinking, and on what is driven by regulation versus liability and fear of the public. He puts it well: traditional, certification-oriented testing focuses on assuring you don’t have “expected bugs” but is poor at finding unexpected ones; other testing is about finding unexpected bugs. Expected bugs are of the “we’ve seen this sort of thing before, we want to be sure you don’t suffer from it” kind. Unexpected bugs are “something goes wrong that we didn’t know to look for.”

Avoiding old thinking

I believe that we are far from done on the robocar safety question. I think there are startups, not yet founded, that will come up with new techniques for both promoting safety and testing it that nobody has yet thought of. As such, I strongly advise against assuming we already know very much about how to do it.

A classic example of things going wrong is the movement towards “explainable AI.” Here, people are concerned that we don’t really know how “black box” neural network tools make the decisions they do. Car regulations in Europe are moving towards banning software that can’t be explained in cars. In the USA, the draft NHTSA regulations also suggest the same thing, though not as strongly.

We may find ourselves in a situation where we take two systems for robocars, one explainable and the other not. We put them through the best testing we can, both in simulator and, most importantly, in the real world. We find the explainable system has a “safety incident” every 100,000 miles, and the unexplainable system has an incident every 150,000 miles. To me it seems obvious that it would be insane to make a law that demands the former system, which, when deployed, will hurt more people. We’ll know why it hurt them. We might be better at fixing the problems, but we also might not — with the unexplainable system we’ll be able to make sure that particular error does not happen again, but we won’t be sure that others very close to it are eliminated.

Testing in sim is a challenge here. In theory, every car should get no errors in sim, because any error found in sim will be fixed, or judged as not really an error, or so rare as to be unworthy of fixing. Even trained machine learning systems will be retrained until they get no errors in sim. The only way to do this sort of testing in sim will be to have teams generate brand new scenarios in sim that the cars have never seen, and see how they do. We will do this, but it’s hard. Particularly because as the sims get better, there will be fewer and fewer real world situations they don’t contain. At best, the test suite will offer some new highly unusual situations, which may not be the best way to really judge the quality of the cars.

In addition, teams will be willing to pay simulator companies well for new and dangerous scenarios in sim for their testing — more than the government agencies will pay for such scenarios. And of course, once a new scenario displays a problem, every customer will fix it and it will become much less valuable. Eventually, as government regulations become more prevalent, homologation companies will charge to test your compliance rate on their test suites, but again, they will need to generate a new suite every time since everybody will want the data to fix any failure. This is not like emissions testing, where they tell you that you went over the emissions limit, and it’s worth testing the same thing again.

The testing was interesting, but my other main focus was on the connected car and security sessions. More on that to come.

The Robot Academy: Lessons in inverse kinematics and robot motion

The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.

The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; each lesson shows a difficulty rating so you can judge for yourself. Below are lessons on inverse kinematics and robot motion.


Introduction

In this video lecture, we will learn about inverse kinematics, that is, how to compute a robot’s joint angles given the desired pose of its end-effector and knowledge of the dimensions of its links. We will also learn how to generate paths that lead to smooth, coordinated motion of the end-effector.
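To give a flavour of the path-generation part, here is a minimal sketch of a smooth point-to-point joint trajectory using quintic time-scaling (zero velocity and acceleration at both ends). It is purely illustrative and not the lesson’s own code.

%% Sketch: smooth point-to-point joint trajectory via quintic time-scaling
q0 = [0; 0];                          % start joint angles [rad]
qf = [pi/2; pi/4];                    % end joint angles [rad]
t  = linspace(0, 1, 100);             % normalized time
s  = 6*t.^5 - 15*t.^4 + 10*t.^3;      % s(0)=0, s(1)=1, zero velocity/acceleration at ends
q  = q0*ones(1, numel(t)) + (qf - q0)*s;   % 2xN joint trajectory
plot(t, q'); xlabel('normalized time'); ylabel('joint angle [rad]');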

Inverse kinematics for a 2-joint robot arm using geometry

In this lesson, we revisit the simple 2-link planar robot and determine the inverse kinematic function using simple geometry and trigonometry.
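For reference, the geometric solution for a 2-link planar arm with link lengths a1 and a2 and desired end-effector position (x, y) can be sketched in a few lines of MATLAB. This follows the standard textbook derivation rather than the lesson’s own code.

%% Geometric inverse kinematics for a 2-link planar arm (sketch)
a1 = 1; a2 = 1;                % link lengths
x = 0.8; y = 1.2;              % desired end-effector position

c2 = (x^2 + y^2 - a1^2 - a2^2) / (2*a1*a2);   % cosine of the elbow angle
s2 = sqrt(1 - c2^2);                          % negate for the other (elbow-flipped) solution
q2 = atan2(s2, c2);
q1 = atan2(y, x) - atan2(a2*s2, a1 + a2*c2);

% Sanity check via forward kinematics (should reproduce [x; y])
xy = [a1*cos(q1) + a2*cos(q1+q2); a1*sin(q1) + a2*sin(q1+q2)];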

Inverse kinematics for a 2-joint robot arm using algebra

You can watch the entire masterclass on the Robot Academy website.


A robotic doctor is gearing up for action

A new robot under development can send information on the stiffness, look and feel of a patient to a doctor located kilometres away. Image credit: Accrea

A robotic doctor that can be controlled hundreds of kilometres away by a human counterpart is gearing up for action. Getting a check-up from a robot may sound like something from a sci-fi film, but scientists are closing in on this real-life scenario and have already tested a prototype.

‘The robot at the remote site has different force, humidity and temperature sensors, all capturing information that a doctor would get when they are directly palpating (physically examining) a patient,’ explains Professor Angelika Peer, a robotics researcher at the University of the West of England, UK.

Prof. Peer is also the project coordinator of the EU-funded ReMeDi project, which is developing the robotic doctor to allow medical professionals to examine patients over huge distances.

Through a specially designed surface mounted on a robotic arm, stiffness data from the patient’s abdomen is conveyed to the doctor, allowing them to feel what the remote robot feels. This is made possible by a haptic device, which has a soft, skin-like surface that can recreate the sense of touch by applying forces and changing its shape.

During the examination, the doctor sits at a desk facing three screens, one showing the doctor’s hand on the faraway patient and a second for teleconferencing with the patient, which will remain an essential part of the exchange.

The third screen displays a special capability of the robot doctor – ultrasonography. This is a medical technique that sends sound pulses into a patient’s body to create a window into the patient. It reveals areas of different densities in the body and is often used to examine pregnant women.

Ultrasonography is also important for flagging injuries or disease in organs such as the heart, liver, kidneys or spleen and can find indications for some types of cancer, too.

‘The system allows a doctor from a remote location to do a first assessment of a patient and make a decision about what should be done, whether to transfer them to hospital or undergo certain treatments,’ said Prof. Peer.

The robot currently resides in a hospital in Poland but scientists have shown the prototype at medical conferences around the world. And they have already been approached by doctors from Australia and Canada where it can take several hours to transfer rural patients to a doctor’s office or hospital.

With the help of a robot, a doctor can talk to a patient, manoeuvre robotic arms, feel what the robot senses and get ultrasounds. Image credit: ReMeDi

‘This is to support an initial diagnosis. The human is still in the loop, but this allows them to perform an examination remotely,’ said Prof. Peer.

Telemedicine

The ReMeDi project could speed up a medical exam and save time for patients and clinics. Another EU-funded project – United4Health (U4H) – looks at a different technology that could be used to remotely diagnose or treat people.

‘We need to transform how we deliver health and care,’ said Professor George Crooks, director of the Scottish Centre for Telehealth & Telecare, UK, which provides services via telephone, web and digital television and coordinates U4H.

This approach is crucial as Europe faces an ageing population and a rise in long-term health conditions like diabetes and heart disease. Telemedicine empowers these types of patients to take steps to help themselves at home, while staying in touch with medical experts via technology. Previous studies showed those with heart failure can be successfully treated this way.

These patients were given equipment to monitor their vital signs and send data back to a hospital. A trial in the UK comparing this self-care group to the standard-care group showed a reduction in mortality, hospital admissions and bed days, says Prof. Crooks.

A similar result was shown in the demonstration sites of the U4H project, which tested the telemedicine approach in 14 regions for patients with heart failure, diabetes and chronic obstructive pulmonary disease (COPD). Diabetic patients in Scotland kept in touch with the hospital using text messages, while some COPD patients used video consultations.

Prof. Crooks stresses that it is not all about the electronics – what matters is the service wraparound that makes the technology acceptable and easy to use for patients and clinical teams.

‘It can take two or three hours out of your day to go along to a 15 minute medical appointment and then to be told to keep taking your medication. What we do is, by using technology, patients monitor their own parameters, such as blood sugar in the case of diabetes, how they are feeling, diet and so on, and then they upload these results,’ said Prof. Crooks.

‘It doesn’t mean you never go to see a doctor, but whereas you might have gone seven or eight times a year, you may go just once or twice.’

Crucially, previous research has shown these patients fare better and the approach is safe.

‘There can be an economic benefit, but really this is about saving capacity. It frees up healthcare professionals to see the more complex cases,’ said Prof. Crooks.

It also empowers patients to take more responsibility for their health and results in fewer unplanned visits to the emergency room.

‘Patient satisfaction rates were well over 90 %,’ said Prof. Crooks.

Using MATLAB for hardware-in-the-loop prototyping #1 : Message passing systems

MATLAB© is a programming language and environment designed for scientific computing. It is one of the best languages for developing robot control algorithms and is widely used in the research community. While it is often thought of as an offline programming language, there are several ways to interface with it to control robotic hardware ‘in the loop’. As part of our own development we surveyed a number of different projects that accomplish this by using a message passing system and we compared the approaches they took. This post focuses on bindings for the following message passing frameworks: LCM, ROS, DDS, and ZeroMQ.

The main motivation for using MATLAB to prototype directly on real hardware is to dramatically accelerate the development cycle by reducing the time it takes to find out whether an algorithm can withstand ubiquitous real-world problems like noisy and poorly-calibrated sensors, imperfect actuator controls, and unmodeled robot dynamics. Additionally, a workflow that requires researchers to port prototype code to another language before being able to test on real hardware can often lead to weeks or months being lost chasing down new technical bugs introduced by the port. Finally, programming in a language like C++ can pose a significant barrier to controls engineers, who often have a strong electro-mechanical background but are not as strong in computer science or software engineering.

We have also noticed that, over the past few years, several other groups in the robotics community have run into these problems and have started to develop ways to control hardware directly from MATLAB.

The Need for External Languages

The main limitation when trying to use MATLAB to interface with hardware stems from the fact that its scripting language is fundamentally single threaded. It has been designed to allow non-programmers to do complex math operations without needing to worry about programming concepts like multi-threading or synchronization. However, this poses a problem for real-time control of hardware because all communication is forced to happen synchronously in the main thread. For example, if a control loop runs at 100 Hz and a message takes ~8 ms for a round trip, the main thread ends up wasting 80% of its 10 ms time budget waiting for a response without doing any actual work.

A second hurdle is that while MATLAB is very efficient in the execution of math operations, it is not particularly well suited for byte manipulation. This makes it difficult to develop code that can efficiently create and parse binary message formats that the target hardware can understand. Thus, after having the main thread spend its time waiting for and parsing the incoming data, there may not be any time left for performing interesting math operations.
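As a rough illustration of the byte work involved, the sketch below builds and parses a hypothetical fixed-layout message (a uint32 sequence number followed by two doubles) in pure MATLAB; endianness and error handling are ignored.

%% Sketch: building and parsing a hypothetical fixed-layout binary message
% Build an example payload (this would normally arrive off the wire)
msgBytes = [typecast(uint32(42), 'uint8'), ...
            typecast(1.5, 'uint8'), ...
            typecast(-0.25, 'uint8')];

% Parse it field by field (native endianness, no validation)
seq      = typecast(msgBytes(1:4),   'uint32');   % bytes 1-4
position = typecast(msgBytes(5:12),  'double');   % bytes 5-12
velocity = typecast(msgBytes(13:20), 'double');   % bytes 13-20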

Figure 1. Communications overhead in the main MATLAB thread

Pure MATLAB implementations can work for simple applications, such as interfacing with an Arduino to gather temperature data or blink an LED, but they are not feasible for controlling complex robotic systems (e.g. a humanoid) at high rates (e.g. 100 Hz-1 kHz). Fortunately, MATLAB does have the ability to interface with other programming languages, which allows users to create background threads that can offload the communications aspect from the main thread.

Figure 2. Communications overhead offloaded to other threads

Out of the box MATLAB provides two interfaces to other languages: MEX for calling C/C++ code, and the Java Interface for calling Java code. There are some differences between the two, but at the end of the day the choice effectively comes down to personal preference. Both provide enough capabilities for developing sophisticated interfaces and have orders of magnitude better performance than required. There are additional interfaces to other languages, but those require additional setup steps.
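As a flavor of the Java interface, standard JDK classes can be used from MATLAB without any compilation step. The sketch below is illustrative only; the jar file name in the final comment is hypothetical.

%% Sketch: calling Java directly from MATLAB (standard JDK classes)
queue = java.util.concurrent.LinkedBlockingQueue();   % thread-safe queue
queue.offer('hello from MATLAB');                     % char auto-converts to java.lang.String
item = char(queue.poll(100, java.util.concurrent.TimeUnit.MILLISECONDS));
disp(item);

% Custom classes would be added with, e.g., javaaddpath('myBindings.jar')  (hypothetical jar)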

Message Passing Frameworks

Message passing frameworks such as Robot Operating System (ROS) and Lightweight Communication and Marshalling (LCM) have been widely adopted in the robotics research community. At the core they typically consist of two parts: a way to exchange data between processes (e.g. UDP/TCP), as well as a defined binary format for encoding and decoding the messages. They allow systems to be built with distributed components (e.g. processes) that run on different computers, different operating systems, and different programming languages.

The resulting systems are very extensible and provide convenient ways for prototyping. For example, a component communicating with a physical robot can be exchanged with a simulator without affecting the rest of the system. Similarly, a new walking controller could be implemented in MATLAB and communicate with external processes (e.g. robot comms) through the exchange of messages. With ROS and LCM in particular, their flexibility, wide-spread adoption, and support for different languages make them a nice starting point for a MATLAB-hardware interface.

Lightweight Communication and Marshalling (LCM)

LCM was developed in 2006 at MIT for their entry to DARPA’s Urban Challenge. In recent years it has become a popular alternative to ROS messaging, and it was, as far as we know, the first message passing framework for robotics that supported MATLAB as a core language.

The snippet below shows what the MATLAB code for sending a command message could look like. The code creates a struct-like message, sets the desired values, and publishes it on an appropriate channel.

%% MATLAB code for sending an LCM message
% Setup
lc = lcm.lcm.LCM.getSingleton();

% Fill message
cmd = types.command();
cmd.position = [1 2 3];
cmd.velocity = [1 2 3];

% Publish
lc.publish('COMMAND_CHANNEL', cmd);

Interestingly, the backing implementation of these bindings was done in pure Java and did not contain any actual MATLAB code. The exposed interface consisted of two Java classes as well as auto-generated message types.

  • The LCM class provides a way to publish messages and subscribe to channels.
  • The generated Java message classes handle the binary encoding and expose fields that MATLAB can access.
  • The MessageAggregator class provides a way to receive messages on a background thread and queue them for MATLAB.

Thus, even though the snippet looks similar to MATLAB code, all variables are actually Java objects. For example, the struct-like command type is a Java object that exposes public fields as shown in the snippet below. Users can access them the same way as fields of a standard MATLAB struct (or class properties) resulting in nice syntax. The types are automatically converted according to the type mapping.

/**
 * Java class that behaves like a MATLAB struct
 */
public final class command implements lcm.lcm.LCMEncodable
{
    public double[] position;
    public double[] velocity;
    // etc. ...
}

Receiving messages is done by subscribing an aggregator to one or more channels. The aggregator receives messages from a background thread and stores them in a queue that MATLAB can access in a synchronous manner using aggregator.getNextMessage(). Each message contains the raw bytes as well as some meta data for selecting an appropriate type for decoding.

%% MATLAB code for receiving an LCM message
% Setup
lc = lcm.lcm.LCM.getSingleton();
aggregator = lcm.lcm.MessageAggregator();
lc.subscribe('FEEDBACK_CHANNEL', aggregator);

% Continuously check for new messages
timeoutMs = 1000;
while true

    % Receive raw message
    msg = aggregator.getNextMessage(timeoutMs);

    % Ignore timeouts
    if ~isempty(msg)

        % Select message type based on channel name
        if strcmp('FEEDBACK_CHANNEL', char(msg.channel))

            % Decode raw bytes to a usable type
            fbk = types.feedback(msg.data);

            % Use data
            position = fbk.position;
            velocity = fbk.velocity;

        end

    end
end

The snippet below shows a simplified version of the backing Java code for the aggregator class. Since Java is limited to a single return argument, the getNextMessage call returns a Java type that contains the received bytes as well as meta data to identify the type, i.e., the source channel name.

/**
 * Java class for receiving messages in the background
 */
public class MessageAggregator implements LCMSubscriber {

    /**
     * Value type that combines multiple return arguments
     */
    public static class Message {

        final public byte[] data; // raw bytes
        final public String channel; // source channel name

        public Message(String channel_, byte[] data_) {
            data = data_;
            channel = channel_;
        }
    }

    /**
     * Method that gets called from MATLAB to receive new messages
     */
    public synchronized Message getNextMessage(long timeout_ms) {

        if (!messages.isEmpty()) {
            return messages.removeFirst();
        }

        if (timeout_ms == 0) { // non-blocking
            return null;
        }

        // Wait for new message until timeout ...
    }

}

Note that the getNextMessage method requires a timeout argument. In general it is important for blocking Java methods to have a timeout in order to prevent the main thread from getting stuck permanently. Being in a Java call prohibits users from aborting the execution (ctrl-c), so timeouts should be reasonably short, i.e., in the low seconds. Otherwise this could cause the UI to become unresponsive and users may be forced to close MATLAB without being able to save their workspace. Passing in a timeout of zero serves as a non-blocking interface that immediately returns empty if no messages are available. This is often useful for working with multiple aggregators or for integrating asynchronous messages with unknown timing, such as user input.
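For example, the zero-timeout variant makes it easy to poll several aggregators from a single control loop without ever blocking. The sketch below is illustrative; the channel names and the types.joystick message type are made up.

%% Sketch: non-blocking polling of two aggregators in one loop
% (channel names and the types.joystick type are hypothetical)
lc = lcm.lcm.LCM.getSingleton();
fbkAggregator = lcm.lcm.MessageAggregator();
joyAggregator = lcm.lcm.MessageAggregator();
lc.subscribe('FEEDBACK_CHANNEL', fbkAggregator);
lc.subscribe('JOYSTICK_CHANNEL', joyAggregator);

while true

    % Handle sensor feedback if any has arrived (returns immediately)
    msg = fbkAggregator.getNextMessage(0);
    if ~isempty(msg)
        fbk = types.feedback(msg.data);
        % ... run control step ...
    end

    % Check for occasional user input without blocking the loop
    msg = joyAggregator.getNextMessage(0);
    if ~isempty(msg)
        joy = types.joystick(msg.data);   % hypothetical message type
        % ... update setpoints ...
    end

    pause(0.01);   % crude loop timing
end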

Overall, we thought that this was a well thought out API and a great example for a minimum viable interface that works well in practice. By receiving messages on a background thread and by moving the encoding and decoding steps to the Java language, the main thread is able to spend most of its time on actually working with the data. Its minimalistic implementation is comparatively simple and we would recommend it as a starting point for developing similar interfaces.

Some minor points for improvement that we found were:

  • The decoding step fbk = types.feedback(msg.data) forces two unnecessary translations due to msg.data being a byte[], which automatically gets converted to and from int8. This could result in a noticeable performance hit when receiving larger messages (e.g. images) and could be avoided by adding an overload that accepts a non-primitive type that does not get translated, e.g., fbk = types.feedback(msg).
  • The Java classes did not implement Serializable, which could become bothersome when trying to save the workspace.
  • We would prefer to select the decoding type during the subscription step, e.g., lc.subscribe('FEEDBACK_CHANNEL', aggregator, 'types.feedback'), rather than requiring users to instantiate the type manually. This would clean up the parsing code a bit and allow for a less confusing error message if types are missing.

Robot Operating System (ROS)

ROS is by far the most widespread messaging framework in the robotics research community and has been officially supported by MathWorks’ Robotics System Toolbox since 2014. While the Simulink code generation uses ROS C++, the MATLAB implementation is built on the less common RosJava.

The API was designed such that each topic requires dedicated publishers and subscribers, which is different from LCM, where each subscriber may listen to multiple channels/topics. While this may result in more subscriber objects, specifying the expected type at initialization removes much of the boilerplate code necessary for dealing with message types.

%% MATLAB code for publishing a ROS message
% Setup Publisher
chatpub = rospublisher('/chatter', 'std_msgs/String');

% Fill message
msg = rosmessage(chatpub);
msg.Data = 'Some test string';

% Publish
chatpub.send(msg);

Subscribers support three different styles to access messages: blocking calls, non-blocking calls, and callbacks.

%% MATLAB code for receiving a ROS message
% Setup Subscriber
laser = rossubscriber('/scan');

% (1) Blocking receive
scan = laser.receive(1); % timeout [s]

% (2) Non-blocking latest message (may not be new)
scan = laser.LatestMessage;

% (3) Callback (receives the subscriber object and the message)
callback = @(src, msg) disp(msg);
subscriber = rossubscriber('/scan', callback);

In contrast to LCM, all objects that are visible to users are actually MATLAB classes. Even though the implementation uses Java underneath, all exposed functionality is wrapped in MATLAB classes that hide the Java calls. For example, each message type is associated with a generated wrapper class. The code below shows a simplified example of a wrapper for a message that has a Name property.

%% MATLAB code for wrapping a Java message type
classdef WrappedMessage < handle

    properties (Dependent)
        % Exposed to the user like a normal property, backed by the Java object
        Name
    end

    properties (Access = protected)
        % The underlying Java message object (hidden from user)
        JavaMessage
    end

    methods

        function name = get.Name(obj)
            % value = msg.Name;
            name = char(obj.JavaMessage.getName);
        end

        function set.Name(obj, name)
            % msg.Name = value;
            validateattributes(name, {'char'}, {}, 'WrappedMessage', 'Name');
            obj.JavaMessage.setName(name); % Forward to Java method
        end

        function out = doSomething(obj)
            % msg.doSomething() and doSomething(msg)
            try
                out = obj.JavaMessage.doSomething(); % Forward to Java method
            catch javaException
                throw(WrappedException(javaException)); % Hide Java exception
            end
        end

    end
end

Due to the implementation being closed-source, we were only able to look at the public toolbox files as well as the compiled Java bytecode. As far as we could tell they built a small Java library that wrapped RosJava functionality in order to provide an interface that is easier to call from MATLAB. Most of the actual logic seemed to be implemented in MATLAB code, but we also found several calls to various Java libraries for problems that would have been difficult to implement in pure MATLAB, e.g., listing networking interfaces or doing in-memory decompression of images.

Overall, we found that the ROS support toolbox looked very nice and is a great example of how seamlessly external languages can be integrated with MATLAB. We also really liked that they offer a way to load log files (rosbags).

One concern we had was that there did not seem to be a simple non-blocking way to check for new messages, e.g., a hasNewMessage() method or functionality equivalent to LCM’s getNextMessage(0). We often found this useful for applications that combine data from multiple topics arriving at different rates (e.g. sensor feedback and joystick input events). We checked whether this behavior could be emulated by specifying a very small timeout in the receive method (shown in the snippet below), but any value below 0.1 s never seemed to return successfully.

%% Trying to check whether a new message has arrived without blocking
try
    msg = laser.receive(0.1); % values below 0.1s always threw an error
    % ... use message ...
catch ex
    % ignore the timeout
end

Data Distribution Service (DDS)

In 2014 MathWorks also added a support package for DDS, which is the messaging middleware that ROS 2.0 is based on. It supports MATLAB and Simulink, as well as code generation. Unfortunately, we did not have all the requirements to get it set up, and we could not find much information about the underlying implementation. After looking at some of the intro videos, we believe that the resulting code should look as follows.

%% MATLAB code for sending and receiving DDS messages
% Setup
DDS.import('ShapeType.idl','matlab');
dp = DDS.DomainParticipant

% Create message
myTopic = ShapeType;
myTopic.x = int32(23);
myTopic.y = int32(35);

% Send Message
dp.addWriter('ShapeType', 'Square');
dp.write(myTopic);

% Receive message
dp.addReader('ShapeType', 'Square');
readTopic = dp.read();

ZeroMQ

ZeroMQ is another asynchronous messaging library that is popular for building distributed systems. It only handles the messaging aspect, so users need to supply their own wire format. ZeroMQ-matlab is a MATLAB interface to ZeroMQ that was developed at UPenn between 2013 and 2015. We were not able to find much documentation, but as far as we could tell the resulting code should look similar to the following snippet.

%% MATLAB code for sending and receiving ZeroMQ data
% Setup
subscriber = zmq( 'subscribe', 'tcp', '127.0.0.1', 43210 );
publisher = zmq( 'publish', 'tcp', 43210 );

% Publish data
bytes = uint8(rand(100,1));
nbytes = zmq( 'send', publisher, bytes );

% Receive data
receiver = zmq( 'poll', 1000 ); % poll for the next message (timeout in ms)
[recv_data, has_more] = zmq( 'receive', receiver );

disp(char(recv_data));

It was implemented as a single MEX function that selects appropriate sub-functions based on a string argument. State was maintained by using socket IDs that were passed in by the user at every call. The code below shows a simplified snippet of the send action.

// Parsing the selected ZeroMQ action behind the MEX barrier
// Grab command String
if ( !(command = mxArrayToString(prhs[0])) )
	mexErrMsgTxt("Could not read command string. (1st argument)");

// Match command String with desired action (e.g. 'send')
if (strcasecmp(command, "send") == 0){
	// ... (argument validation)

	// retrieve arguments
	socket_id = *( (uint8_t*)mxGetData(prhs[1]) );
	size_t n_el = mxGetNumberOfElements(prhs[2]);
	size_t el_sz = mxGetElementSize(prhs[2]);
	size_t msglen = n_el*el_sz;

	// send data
	void* msg = (void*)mxGetData(prhs[2]);
	int nbytes = zmq_send( sockets[ socket_id ], msg, msglen, 0 );

	// ... check outcome and return
}
// ... other actions

Other Frameworks

Below is a list of APIs to other frameworks that we looked at but could not cover in more detail.

Project Notes:

  • Simple Java wrapper for RabbitMQ with callbacks into MATLAB
  • Seems to be deprecated

Final Notes

In contrast to the situation a few years ago, there now exist interfaces for most of the common message passing frameworks that allow researchers to do at least basic hardware-in-the-loop prototyping directly from MATLAB. However, if none of the available options work for you and you are planning on developing your own, we recommend the following:

  • If there is no clear pre-existing preference between C++ and Java, we recommend starting with a Java implementation. MEX interfaces require a lot of conversion code that Java interfaces handle automatically.
  • We would recommend starting with a minimalistic LCM-like implementation and adding complexity only when necessary.
  • While interfaces that only expose MATLAB code can provide a better and more consistent user experience (e.g. help documentation), there is a significant cost associated with maintaining all of the involved layers. We would recommend holding off on creating MATLAB wrappers until the API is relatively stable.

Finally, even though message passing systems are very widespread in the robotics community, they do have drawbacks and are not appropriate for every application. Future posts in this series will focus on some of the alternatives.

Snake robots slither into our hearts, literally

Snake robot at the Robotics institute. Credit: Jiuguang Wang/Flickr

The biblical narrative of the Garden of Eden describes how the snake became the most cursed of all beasts: “you shall walk on your belly, and you shall eat dust all the days of your life.” The reptile’s eternal punishment is no longer feared but embraced for its versatility and flexibility. The snake is fast becoming one of the most celebrated robotic forms for roboticists worldwide, outmaneuvering rovers and humanoids.

Last week, while General Electric experienced a tumult in its management structure, its Aviation unit completed the acquisition of OC Robotics – a leader in serpent arm design. GE stated that it believes OC’s robots will be useful for jet engine maintenance, enabling repairs to be conducted while the engine is still attached to the wing, with the robot wiggling into spaces where no human hand could reach. This promise translates into huge cost and time savings for maintenance and airline companies alike.

OC’s robots have use cases beyond aviation, including inspections of underground drilling and directional borings tens of feet below the Earth. In addition to acquiring visual data, OC’s snake is equipped with a high-pressure water jet and a laser to measure the sharpness of the cutting surface. According to OC’s founder Andrew Graham, “This is faster and easier, and it keeps people safe.” Graham seems to have hit on the holy grail of robotics by combining profit and safety.

GE plans to expand the use cases for its newest company. Lance Herrington, a leader at GE Aviation Services, says “Aviation applications will just be the starting point for this incredible technology.” Herrington implied that the snake technology could be adapted in the future for power plants, trains, and even healthcare robots. As an example of its versatility, OC Robotics was awarded a prestigious prize by the U.K.’s Nuclear Decommissioning Authority for its LaserSnake. OC’s integrated snake-arm laser cutter was able to disassemble toxic parts of a nuclear fuel processing facility in a matter of weeks, a job that would have taken humans years while risking radiation exposure.

One of the most prolific inventors of robotic snake applications is Dr. Howie Choset of Carnegie Mellon University. Choset is the co-director of CMU’s Biorobotics Lab, which has birthed several startups based upon his snake technology, including Medrobotics (surgical systems), Hebi Robotics (actuators for modular robots), and Bito Robotics (autonomous vehicles). Choset claims that his menagerie of metal reptiles is perfect for urban search and rescue, infrastructure repairs and medicine.

Source: Medrobotics

Recently, Medrobotics received FDA clearance for its Flex Robotic System for colorectal procedures in the United States. According to the company’s press release, “Medrobotics is the first and only company to offer minimally invasive, steerable and shapeable robotic products for colorectal procedures in the U.S.” The Flex system promises a “scar-free” experience in accessing “hard-to-reach anatomy” that is just not possible with straight, rigid instruments.

“The human gastrointestinal system is full of twists and turns, and rigid surgical robots were not designed to operate in that environment. The Flex® Robotic System was. Two years ago Medrobotics started revolutionizing treatment in the head and neck in the U.S. We can now begin doing that in colorectal procedures,” said Dr. Samuel Straface, CEO.

Dr. Alessio Pigazzi, Professor of Surgery at the University of California, Irvine, exclaimed that “Medrobotics is ushering in the first of a new generation of shapeable and steerable robotic surgical systems that offer the potential to reduce the invasiveness of surgical procedures for more patients.” While Medrobotics’ system is currently only approved for use through the mouth and anus, Pigazzi looks forward to future applications whereby any natural orifices could be an entry point for true incision-less surgery.

The Technion ‘Snake Robot’. Photo: Kobi Gideon/GPO

Medrobotics was the brainchild of a collaboration between Choset and Dr. Alon Wolf of Israel’s prestigious Technion (Israel Institute of Technology). One of the earliest use cases for snake robots was by Wolf’s team in 2009, for military surveillance. As director of the Technion’s BioRobotics and BioMechanics Laboratory (BRML), Wolf has led the creation of the next generation of defensive snake robots for the latest terror threat: subterranean tunnels transporting suicide bombers and kidnappers. Since the discovery of tunnels between the Gaza Strip and Israel in 2015, BRML has been working feverishly to deploy snake robots in the field of combat.

The vision for BRML’s hyper-redundant robots is to use their highly maneuverable actuators to sneak through tough terrain into tunnels and buildings. Once inside, the robot will provide instant scans of the environment to the command center and then leave behind sensors for continued surveillance. The robots are equipped with an array of sensors, including thermal imagers, miniature cameras, laser scanners, and laser radar, with the ability to seamlessly stitch together 360-degree views and maps of the targeted subterranean area. The robots, of course, would have dual uses for search & rescue and disaster recovery efforts.

Long term, Wolf would like to deploy his fleet of crawlers on search and rescue missions in urban settings and after earthquakes.

“The robots we are creating at the Technion are extremely flexible and are able to manipulate delicate objects and navigate around walls. Over 400 rescue workers were killed during 9/11 because of the dangerous and unstable environment they were attempting to access and our objective is to ensure that robots are able to replace humans in such precarious situations,” explains Wolf.

It is no wonder that, on his last visit to Israel, President Obama called Wolf’s vision “inspiring.”

Spider webs as computers

Spiders are truly amazing creatures. They have evolved over more than 200 million years and can be found in almost every corner of our planet. They are among the most successful animals. No less impressive are their webs, highly intricate structures that have been optimised through evolution over approximately 100 million years with the ultimate purpose of catching prey.

However, interestingly, the closer you look at spiders’ webs, the more details you can observe, and the structures are much more complicated than one would expect from a simple snare. They are made of a variety of different types of silk, use water droplets to keep the tension [4], and the structure is highly dynamic [4]. Spiders’ webs have a great deal more morphological complexity than you would need simply to catch flies.

Since nature typically does not waste resources, the question arises: why are spiders’ webs so complex? Might they have other functionalities besides being a simple trap? One of the most interesting answers to this question is that spiders might use their webs as computational devices.

How does the spider use the web as a computer?

Despite the fact that most spiders have many eyes (the majority have eight, though some species have fewer), many spiders have poor eyesight. In order to understand what is going on in their webs, they use mechanoreceptors in their legs (lyriform organs) to “listen” to vibrations in the web. Different species of spiders have different preferred places to sit and observe. While some can be found right at the center, others prefer to sit outside the actual web and listen to one single thread. It is quite remarkable that, based only on the information that comes through this single thread, the spider seems to be able to deduce what is going on in its web and where these events are taking place.

For example, they need to know if prey, like a fly, is entangled in their web, or if the vibrations are coming from a dangerous insect like a wasp that they should stay away from. The web is also used to communicate with potential mates, and the spider even excites the web and listens to the echo. This might be a way for the spider to check whether threads are broken or whether the tension in the web has to be increased.

From a computational point of view, the spider needs to classify different vibration patterns (e.g., prey vs. predator vs. mate) and to locate their origin (i.e., where the vibration started).

One way to understand how a spider’s web could help carry out this computational functionality is the concept of morphological computation. This term describes the observation that mechanical structures throughout nature carry out useful computations. For example, they help to stabilise running, facilitate sensory data processing, and help animals and plants interact with complex and unpredictable environments.

One could say computation is outsourced to the physical body (e.g., from the brain to another part of the body).

From this point of view, the spider’s web can be seen as a nonlinear, dynamic filter. It can be understood as a kind of pre-processing unit that makes it easier for the animal to interpret the vibration signals. The web’s dynamic properties and its complex morphological structure mix vibration signals in a nonlinear fashion. It even has some memory. This can easily be seen by pinching the web: it responds with vibrations for some moments after the impact, echoing the original input. The web can also damp unwanted frequencies, which is crucial for getting rid of noise. On the other hand, it might even be able to highlight other signals at certain frequencies that carry more relevant information about the events taking place on the web.

These are all useful computations, and they make it easier for the spider to “read” and understand the vibration patterns. As a result, the brain of the animal has to do less work and can concentrate on other tasks. In effect, the spider seems to devolve computation to the web. This might also be the reason why spiders tend to their webs so intensively: they constantly observe the web, adapt the tension if it has changed, e.g. due to a change in humidity, and repair it as soon as a thread is broken.
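As a toy illustration of this “nonlinear filter with memory” idea (and not a model of a real web), a single lightly damped, weakly nonlinear oscillator driven by a short pulse keeps ringing after the input ends and responds selectively around its resonance.

%% Toy illustration only: a lightly damped, weakly nonlinear oscillator
% driven by a short pulse ("pinch"). It keeps ringing after the input
% ends (memory) and responds selectively around its resonance (filtering).
dt = 1e-3; t = 0:dt:2;
u  = double(t < 0.05);                  % short input pulse
k = 400; c = 2; k3 = 50;                % stiffness, damping, cubic stiffness
x = 0; v = 0; xs = zeros(size(t));
for i = 1:numel(t)
    a = u(i) - c*v - k*x - k3*x^3;      % unit mass
    v = v + a*dt;                       % semi-implicit Euler step
    x = x + v*dt;
    xs(i) = x;
end
plot(t, xs); xlabel('time [s]'); ylabel('deflection');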

From spider webs to sensors

People have speculated for a while that spider webs might have additional functionalities. A great article that discusses this is “The Thoughts of a Spiderweb”.

However, nobody so far has systematically looked into the actual computational capabilities of the web. This is about to change. We recently started a Leverhulme Trust Research project that will investigate naturally spun spider webs of different species to understand how, and what kind of, computation might take place in these structures. Moreover, the project will not only try to understand the underlying computational principles but will also develop morphological computation-based sensor technology to measure flow and vibrations.

The project combines our research expertise in Morphological Computation at the University of Bristol and the expertise on spider webs at the Silk Group in Oxford.

In experimental setups we will use solenoids and laser Doppler vibrometers to measure vibrations in the web with very high precision. The goal is to understand how computation is carried out. We will systematically investigate how filtering capabilities, memory, and signal integration can happen in such structures. In parallel, we will develop a general simulation environment for vibrating structures. We will use this to ask specific questions about how different shapes and materials other than spider webs and silk can help to carry out computations. In addition, we will develop real prototypes of vibration and flow sensors, which will be inspired by these findings. It’s very likely they will look different from spider webs and use various types of materials.

Such sensors can be used in various applications. For example, morphological computation-based flow sensors could be used to detect anomalies in the flow in tubes, while vibration sensors placed at strategic points on buildings could detect earthquakes or structural failure. Highly dynamic machines, such as wind turbines, could also be monitored by such sensors to predict failure.

Ultimately, we hope the project will provide not only a new technology for building sensors, but also a fundamental understanding of how spiders use their webs for computation.

References

[1] Hauser, H.; Ijspeert, A.; Füchslin, R.; Pfeifer, R. & Maass, W. “Towards a theoretical foundation for morphological computation with compliant bodies.” Biological Cybernetics, Springer Berlin / Heidelberg, 2011, 105, 355-370.

[2] Hauser, H.; Ijspeert, A.; Füchslin, R.; Pfeifer, R. & Maass, W. “The role of feedback in morphological computation with compliant bodies.” Biological Cybernetics, Springer Berlin / Heidelberg, 2012, 106, 595-613.

[3] Hauser, H.; Füchslin, R.M. & Nakajima, K. “Morphological Computation – The Physical Body as a Computational Resource.” In Opinions and Outlooks on Morphological Computation, editors Hauser, H.; Füchslin, R.M. and Pfeifer, R., Chapter 20, pp. 226-244, 2014, ISBN 978-3-033-04515-6.

[4] Mortimer, B.; Gordon, S. D.; Holland, C.; Siviour, C. R.; Vollrath, F. & Windmill, J. F. C. (2014). “The Speed of Sound in Silk: Linking Material Performance to Biological Function.” Advanced Materials, 26: 5179-5183. doi:10.1002/adma.201401027.

Drones that drive

Image: Alex Waller, MIT CSAIL

Being able to both walk and take flight is typical in nature – many birds, insects and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: picture machines that could fly into construction areas or disaster zones that aren’t near roads, and then be able to squeeze through tight spaces to transport objects or rescue people.

The problem is that robots that are good at one mode of transportation are usually, by necessity, bad at another. Drones are fast and agile, but generally have too limited a battery life to travel long distances. Ground vehicles, meanwhile, are more energy efficient, but also slower and less mobile.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to develop robots that can do both. In a new paper, the team presented a system of eight quadcopter drones that can both fly and drive through a city-like setting with parking spots, no-fly zones and landing pads.

“The ability to both fly and drive is useful in environments with a lot of barriers, since you can fly over ground obstacles and drive under overhead obstacles,” says PhD student Brandon Araki, lead author on a paper about the system out of CSAIL director Daniela Rus’ group. “Normal drones can’t maneuver on the ground at all. A drone with wheels is much more mobile while having only a slight reduction in flying time.”

Araki and Rus developed the system along with MIT undergraduate students John Strang, Sarah Pohorecky and Celine Qiu, as well as Tobias Naegeli of ETH Zurich’s Advanced Interactive Technologies Lab. The team presented their system at IEEE’s International Conference on Robotics and Automation (ICRA) in Singapore earlier this month.

How it works

The project builds on Araki’s previous work developing a “flying monkey” robot that crawls, grasps, and flies. While the monkey robot could hop over obstacles and crawl about, there was still no way for it to travel autonomously.

To address this, the team developed various “path-planning” algorithms aimed at ensuring that the drones don’t collide. To make them capable of driving, the team put two small motors with wheels on the bottom of each drone. In simulations the robots could fly for 90 meters or drive for 252 meters before their batteries ran out.

Adding the driving component to the drone slightly reduced its battery life, meaning that the maximum distance it could fly decreased 14 percent to about 300 feet. But since driving is still much more efficient than flying, the gain in efficiency from driving more than offsets the relatively small loss in efficiency in flying due to the extra weight.

“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” says Jingjin Yu, a computer science professor at Rutgers University who was not involved in the paper.

The team also tested the system using everyday materials like pieces of fabric for roads and cardboard boxes for buildings. They tested eight robots navigating from a starting point to an ending point on a collision-free path, and all were successful.

Rus says that systems like theirs suggest that another approach to creating safe and effective flying cars is not to simply “put wings on cars,” but to build on years of research in drone development to add driving capabilities to them.

“As we begin to develop planning and control algorithms for flying cars, we are encouraged by the possibility of creating robots with these capabilities at small scale,” says Rus. “While there are obviously still big challenges to scaling up to vehicles that could actually transport humans, we are inspired by the potential of a future in which flying cars could offer us fast, traffic-free transportation.”

Click here to read the paper.

The Drone Center’s Weekly Roundup: 6/24/17

Amazon’s “beehive” concept for future multi-storey fulfillment centers. Credit: Amazon

June 19, 2017 – June 25, 2017

At the Center for the Study of the Drone

In an interview with Robotics Tomorrow, Center for the Study of the Drone Co-Director Arthur Holland Michel discusses the growing use of drones by law enforcement and describes future trends in unmanned systems technology.

News

The U.S. State Department is set to approve the sale of 22 MQ-9B Guardian drones to India, according to Defense News. The sale is expected to be announced during Prime Minister Narendra Modi’s visit to the United States. The Guardian is an unarmed variant of the General Atomics Aeronautical Systems Predator B. If the deal is approved and finalized, India would be the fifth country besides the U.S. and first non-NATO member to operate the MQ-9.

The United States shot down another armed Iranian drone in Syria. A U.S. F-15 fighter jet intercepted the Shahed-129 drone near the town of Tanf, where the U.S.-led coalition is training Syrian rebel forces. The shootdown comes just days after the U.S. downed another Shahed-129 on June 8, as well as a Syrian SU-22 manned fighter jet on June 18. (Los Angeles Times)

Meanwhile, a spokesperson for Pakistan’s Ministry of Foreign Affairs confirmed that the Pakistani air force shot down an Iranian drone. According to Nafees Zakaria, the unarmed surveillance drone was downed 2.5 miles inside Pakistani territory in the southwest Baluchistan province. (Associated Press)

A U.S. Air Force RQ-4 Global Hawk drone crashed in the Sierra Nevada mountains in California. The RQ-4 is a high-altitude long-endurance surveillance drone. (KTLA5)

The U.S. House of Representatives and Senate introduced bills to reauthorize funding for the Federal Aviation Administration. Both bills include language on drones. The Senate bill would require all drone operators to pass an aeronautical knowledge test and would authorize the FAA to require that drone operators be registered. (Law360)

President Trump spoke with the CEOs of drone companies at the White House as part of a week focused on emerging technologies. Participants discussed a number of topics, including state and local drone laws and drone identification and tracking technologies. (TechCrunch)

The Pentagon will begin offering an award for remote weapons strikes to Air Force personnel in a variety of career fields, including cyber and space. The “R” device award was created in 2016 to recognize drone operators. (Military.com)

The U.S. Federal Aviation Administration has formed a committee to study electronic drone identification methods and technologies. The new committee is comprised of representatives from industry, government, and law enforcement. (Press Release)

Commentary, Analysis, and Art

At MarketWatch, Sally French writes that in the meeting at the White House, some CEOs of drone companies argued for more, not fewer, drone regulations. (MarketWatch)

At Air & Space Magazine, James R. Chiles writes that the crowded airspace above Syria could lead to the first drone-on-drone air war.

At Popular Science, Kelsey D. Atherton looks at how fighter jets of the future will be accompanied by swarms of low-cost armed drones.  

At Drone360, Leah Froats breaks down the different drone bills that have recently been introduced in Congress.

At Motherboard, Ben Sullivan writes that drone pilots are “buying Russian software to hack their way past DJI’s no fly zones.”

At Bloomberg Technology, Thomas Black writes that the future of drone delivery hinges on precise weather predictions.

At Aviation Week, James Drew writes that U.S. lawmakers are encouraging the Air Force to conduct a review of the different MQ-9 Reaper models that it plans to purchase.  

Also at Aviation Week, Tony Osborne writes that studies show that European governments are advancing the implementation of drone regulations.

At The Atlantic, Marina Koren looks at how artificial intelligence helps the Curiosity rover navigate the surface of Mars without any human input.

At Phys.org, Renee Cho considers how drones are helping advance scientific research.

At Ozy, Zara Stone writes that drones are helping to accelerate the time it takes to complete industrial painting jobs.

At the European Council on Foreign Relations, Ulrike Franke argues that instead of following the U.S. example, Europe should develop its own approach to acquiring military drones.

At the New York Times, Frank Bures looks at how a U.S. drone pilot is helping give the New Zealand team an edge in the America’s Cup.

At Cinema5D, Jakub Han examines how U.S. drone pilot Robert Mcintosh created an intricate single-shot fly-through video in Los Angeles.

Know Your Drone

Amazon has filed a patent for multi-storey urban fulfilment centers for its proposed drone delivery program. (CNN)

Airbus Helicopters has begun autonomous flight trials of its VSR700 optionally piloted helicopter demonstrator. (Unmanned Systems Technology)

Italian defense firm Leonardo unveiled the M-40, a target drone that can mimic the signatures of a number of aircraft types. (FlightGlobal)

Defense firm Textron Systems unveiled the Nightwarden, a new variant of its Shadow tactical surveillance and reconnaissance drone. (New Atlas)

Israeli defense firm Elbit Systems unveiled the SkEye, a wide-area persistent surveillance sensor that can be used aboard drones. (IHS Jane’s 360)

Researchers at the University of California, Santa Barbara have developed a WiFi-based  system that allows drones to see through solid walls. (TechCrunch)

Israeli drone maker Aeronautics unveiled the Pegasus 120, a multirotor drone designed for a variety of roles. (IHS Jane’s 360)  

U.S. firm Raytheon has developed a new variant of its Coyote, a tube-launched aerial data collection drone. (AIN Online)

Drone maker Boeing Insitu announced that it has integrated a 50-megapixel photogrammetric camera into a variant of its ScanEagle fixed-wing drone. (Unmanned Systems Technology)

Telecommunications giant AT&T is seeking to develop a system to mount drones on ground vehicles. (Atlanta Business Chronicle)

U.S. defense contractor Northrop Grumman demonstrated an unmanned surface vehicle in a mine-hunting exercise in Belgium. (AUVSI)

Israeli firm Rafael Advanced Defense Systems unveiled a new radar and laser-based counter-drone system called Drone Dome. (UPI)

French firm Reflet du Monde unveiled the RDM One, a small drone that can be flown at ranges of up to 300 kilometers thanks to a satellite link. (Defense News)

RE2 Robotics is helping the U.S. Air Force build robots that can take the controls of traditionally manned aircraft. (TechCrunch)

The U.S. Marine Corps is set to begin using its Nibbler 3D-printed drone in active combat zones in the coming weeks. (3D Printing Industry)

U.S. drone maker General Atomics Aeronautical Systems has completed a design review for its Advanced Cockpit Block 50 Ground Control Station for U.S. Air Force drones. (UPI)

Researchers at NASA’s Langley Research Center are developing systems for small drones that allow them to determine on their own whether they are suffering from mechanical issues and to find a place to land safely. (Wired)

The inventor of the Roomba robotic vacuum cleaner has unveiled an unmanned ground vehicle that autonomously finds and removes weeds from your garden. (Business Insider)

Drones at Work

A group of public safety agencies in Larimer County, Colorado has unveiled a regional drone program. (The Coloradoan)

Five marijuana growing operations in California will begin using unmanned ground vehicles for security patrols. (NBC Los Angeles)

The Fargo Fire Department in North Dakota has acquired a drone for a range of operations. (KFGO)

The Rochester Police Department in Minnesota has acquired a drone for monitoring patients suffering from Alzheimer’s and other disorders. (Associated Press)

Drone maker Parrot and software firm Pix4D have selected six researchers using drones to study the impacts of climate change as the winners of an innovation grant. (Unmanned Aerial Online)

The Coconino County Sheriff’s Office and the Flagstaff Police Department used an unmanned ground vehicle to enter the home of a man who had barricaded himself in a standoff. (AZ Central)

Industry Intel

The U.S. Special Operations Command awarded Boeing Insitu and Textron Systems contracts to compete for the Mid-Endurance Unmanned Aircraft Systems III drone program. (AIN Online)

The U.S. Navy awarded Arête Associates an $8.5 million contract for the AN/DVS-1 COBRA, a payload on the MQ-8 Fire Scout. (DoD)

The U.S. Army awarded Raytheon a $2.93 million contract for Kinetic Drone Defense. (FBO)

The Spanish Defense Ministry selected the AUDS counter-drone system for immediate deployments. The contract is estimated to be worth $2.24 million. (GSN Magazine)

The European Maritime Safety Agency selected the UMS Skeldar for border control, search and rescue, pollution monitoring, and other missions. (FlightGlobal)

The Belgian Navy awarded SeeByte, a company that creates software for unmanned maritime systems, a contract for the SeeTrack software system for its autonomous undersea vehicles. (Marine Technology News)

A new company established by the Turkish government will build engines for the armed Anka drone. (DefenseNews)

Italian defense firm Leonardo is seeking to market its Falco UAV for commercial applications. (Shephard Media)  

Thales Alenia Space will acquire a minority stake in Airstar Aerospace, which it hopes will help it achieve its goal of developing an autonomous, high-altitude airship. (Intelligent Aerospace)

The Idaho STEM Action Center awarded 22 schools and libraries in Idaho $147,000 to purchase drones. (East Idaho News)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Survey: Examining perceptions of autonomous vehicles using hypothetical scenarios

Driverless car merging into traffic. How big of a gap between vehicles is acceptable? Image credit: Jordan Collver

I’m examining the perception of autonomous cars using hypothetical scenarios. Each scenario is accompanied by an image that helps illustrate the scene, using grey tones and nondescript human-like figures, along with the option to listen to the question read aloud, so that participants can fully visualise the situation being described.

If you live in the UK, you can take this survey and help contribute to my research!

Public perception has the potential to affect the timescale and adoption of autonomous vehicles (AV). As the development of the technology advances, understanding attitudes and wider public acceptability is critical. It’s no longer a question of if, but when, we will make the transition. Long-range autonomous vehicles are expected between 2020 and 2025, with some estimates suggesting fully autonomous vehicles will take over by 2030. Currently, most modern cars are sold with automated features such as automatic braking, autonomous parking, advanced lane assist, advanced cruise control, and queue assist. Adopting fully autonomous vehicles has the potential to deliver significant societal benefits: improved road safety, reduced pollution and congestion, and another mode of transportation for the mobility impaired.

The project’s aim is to add to the conversation about public perception of AV. Survey experiments can be extremely useful tools for studying public attitudes, especially when researchers are interested in the “effects of describing or presenting a scenario in a particular way.” This unusual and creative method may provide a model for other research surveys in which future technologies are difficult to visualise. An online survey was chosen to reduce small-sample bias and to maximise responses from participants across the UK.

You can take this survey by clicking above, or alternatively, click the following link:

https://uwe.onlinesurveys.ac.uk/visualise-this

CARNAC program researching autonomous co-piloting

Credit: Aurora Flight Sciences.

DARPA, the Defense Advanced Research Projects Agency, is researching autonomous co-piloting so that existing aircraft can fly without a human pilot on board. The robotic system — called the Common Aircraft Retrofit for Novel Autonomous Control (CARNAC) (not to be confused with the old Johnny Carson Carnac routine) — has the potential to reduce costs, enable new missions, and improve performance.

CARNAC, the Johnny Carson version.

Unmanned aircraft are generally built from scratch with robotic systems integrated from the earliest design stages. Existing aircraft require extensive modification to add robotic systems.

RE2, a CMU spin-off located in Pittsburgh, makes mobile manipulators for defense and space. The company just received an SBIR award backed by a US Air Force development contract to develop a retrofit kit that would provide a robotic piloting solution for legacy aircraft.

“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”

Aurora Flight Sciences, a Manassas, VA developer of advanced unmanned systems and aerospace vehicles, is working on a similar DARPA project, the Aircrew Labor In-Cockpit Automation System (ALIAS), which is designed as a drop-in avionics and mechanics package that can be quickly and cheaply fitted to a wide variety of fixed- and rotary-wing aircraft, from a Cessna to a B-52. Once installed, ALIAS is able to analyze the aircraft and adapt itself to the job of the second pilot.

Credit: Aurora Flight Sciences

Assistive robots compete in Bristol

The Bristol Robotics Laboratory (BRL) will host the first European Commission-funded European Robotics League (ERL) tournament for service robots to be held in the UK.

Two teams, from the BRL and the University of Birmingham, will pit their robots against each other in a series of events from 26 to 30 June.

Robots designed to support people with care-related tasks in the home will be put to the test in a simulated home test bed.

The assisted living robots of the two teams will face various challenges, including understanding natural speech and finding and retrieving objects for the user.

The robots will also have to greet visitors at the door appropriately, such as welcoming a doctor on their visit, or turning away unwanted visitors.

Associate Professor Praminda Caleb-Solly, Theme Leader for Assistive Robotics at the BRL, said, “The lessons learned during the competition will contribute to how robots in the future help people, such as those with ageing-related impairments and those with other disabilities, live independently in their own homes for as long as possible.

“This is particularly significant with the growing shortage of carers available to provide support for an ageing population.”

The BRL, the host of the UK’s first ERL Service Robots tournament, is a joint initiative of the University of the West of England and the University of Bristol. Its research areas include swarm robotics, unmanned aerial vehicles, driverless cars, medical robotics, and robotic sensing for touch and vision. BRL’s assisted living research group is developing interactive assistive robots as part of an ambient smart home ecosystem to support independent living.

The ERL Service Robots tournament will be held in the BRL’s Anchor Robotics Personalised Assisted Living Studio, which was set up to develop, test and evaluate assistive robotic and other technologies in a realistic home environment.

The studio was recently certified as a test bed by the ERL, which runs alongside similar competitions for industrial robots and for emergency robots, including vehicles that can search for and rescue people in disaster-response scenarios.

The two teams in the Bristol event will be Birmingham Autonomous Robotics Club (BARC) led by Sean Bastable from the School of Computer Science at the University of Birmingham, and the Healthcare Engineering and Assistive Robotics Technology and Services (HEARTS) team from the BRL led by PhD Student Zeke Steer.

BARC has developed its own robotics platform, Dora, and HEARTS will use a TIAGo Steel robot from PAL Robotics with a mix of bespoke and proprietary software.

The Bristol event will be open for public viewing in the BRL on the afternoon of 29 June 2017 (bookable via EventBrite), and will include short tours of the assisted living studio for attendees. It will be held during UK Robotics Week, 24-30 June 2017, when there will be a nationwide programme of robotics and automation events.

The BRL will also be organising focus groups on 28 and 29 June 2017 (Bookable via EventBrite and here) as part of the UK Robotics Week, to demonstrate assistive robots and their functionality, and seek the views of carers and older adults on these assistive technologies, exploring further applications and integration of such robots into care scenarios.

The European Commission-funded European Robotics League (ERL) is the successor to the RoCKIn, euRathlon and EuRoC robotics competitions, all funded by the EU and designed to foster scientific progress and innovation in cognitive systems and robotics. The ERL is funded by the European Union’s Horizon 2020 research and innovation programme. See: https://www.eu-robotics.net/robotics_league/

The ERL is part of the SPARC public-private partnership set up by the European Commission and the euRobotics association to extend Europe’s leadership in civilian robotics. SPARC’s €700 million of funding from the Commission in 2014-2020 is being combined with €1.4 billion of funding from European industry. See: http://www.eu-robotics.net/sparc

euRobotics is a European Commission-funded non-profit organisation which promotes robotics research and innovation for the benefit of Europe’s economy and society. It is based in Brussels and has more than 250 member organisations. See: www.eu-robotics.net

Robots Podcast #237: Deep Learning in Robotics, with Sergey Levine

In this episode, Audrow Nash interviews Sergey Levine, assistant professor at UC Berkeley, about deep learning in robotics. Levine explains what deep learning is and discusses the challenges of using it in robotics. Lastly, Levine speaks about his collaboration with Google and some of the surprising behavior that emerged from his deep learning approach (how the system grasps soft objects).

In addition to the main interview, Audrow interviewed Levine about his professional path. They spoke about what questions motivate him, why his PhD experience was different from what he had expected, the value of self-directed learning, work-life balance, and what he wishes he’d known in graduate school.

A video of Levine’s work in collaboration with Google.

 

Sergey Levine

Sergey Levine is an assistant professor at UC Berkeley. His research focuses on robotics and machine learning. In his PhD thesis, he developed a novel guided policy search algorithm for learning complex neural network control policies, which was later applied to enable a range of robotic tasks, including end-to-end training of policies for perception and control. He has also developed algorithms for learning from demonstration, inverse reinforcement learning, efficient training of stochastic neural networks, computer vision, and data-driven character animation.

More efficient and safer: How drones are changing the workplace

Photo credit: Pierre-Yves Guernier

Technology-driven automation plays a critical role in the global economy, and its visibility in our lives is growing. As technology impacts more and more jobs, individuals and enterprises find themselves wondering what effect the current wave of automation will have on their future economic prospects.

Advances in robotics and AI have led to modern commercial drone technology, which is changing the fundamental way enterprises interact with the world. Drones bridge the physical and digital worlds. They enable companies to combine the power of scalable computing resources with pervasive, affordable sensors that can go anywhere. This creates an environment in which businesses can make quick, accurate decisions based on enormous datasets derived from the physical world.

Removing dangers

For individuals whose jobs involve a lot of time spent traveling to the far reaches of where enterprises do business, or climbing to a precarious perch to get a good view, such as infrastructure inspection or site management, an opportunity presents itself.

Historically, it’s been a dangerous job to identify the state of affairs in the physical world and analyze and report on that information. It may have required climbing on tall buildings or unstable areas, or travelling to far-flung sites to inspect critical infrastructure, like live power lines or extensive dams.

Commercial drones, as part of the current wave of automation technology, will fundamentally change this process. The jobs involved aren’t going away, but they are going to change.

A January 2017 McKinsey study on automation, employment, and productivity reported that less than 5% of all occupations can be automated entirely using demonstrated technologies, but that two-thirds of all jobs could have 30% of their work automated. Many jobs will not only be more efficient, they are also going to be safer, and the skills required are going to be more mental than physical.

New ways to amass data

Jobs that were once considered gruelling and monotonous will look more like knowledge-worker jobs in the near future. Until now, people in these jobs have had to go to great lengths to collect data for analysis and decision-making. That data can now be collected without putting people in harm’s way. Without the need to don a harness, or climb to dangerous heights, people in these jobs can extend their career.

We’ve seen this firsthand in our own work conducting commercial drone operation training for many of the largest insurers in America, whose teams typically include adjusters in the latter stages of their career.

When you’re 50 years old, the physical demands of climbing on roofs to conduct inspections can make you think about an early retirement, or a career change.

Keeping hard-earned skills in the workplace

But these workers are some of the best in the business, with decades of experience. No one wants to leave hard-earned skills behind due to physical limitations.

We’ve found industry veterans like these to be some of the most enthusiastic adopters of commercial drones for rooftop inspections. After one week-long session, these adjusters could operate a commercial drone to collect rooftop data without requiring any climbing. Their deep understanding of claims adjustment can be brought to bear in the field without the conventional physical demands.

Specialists with knowledge and experience like veteran insurance adjusters are far harder to find than someone who can learn how to use a commercial drone system. Removing the need to physically collect the data means the impact of their expertise can be global, and the talent competition for these roles will be global as well.

Digital skills grow in importance

Workers can come out on top in this shift by focusing on improving relevant digital skills. Their conventional daily-use manual tools will become far less important than those tools that enable them to have an impact digitally.

The tape measure and ladder will go by the wayside as more work is conducted with iPads and cloud software. This transition will also create many more opportunities to do work that simply doesn’t get accomplished today.

Take commercial building inspection as an example.

In the past, the value of a building inspection had to be balanced against many drawbacks, like the cost of stopping business so an inspection could be conducted, the liability of sending a worker to a roof, and the sheer size of sites.

Filling the data gap

The result is a significant data gap. The state of the majority of commercial buildings is simply unknown to their owners and underwriters.

Using drones for inspections dramatically reduces the inherent challenges of data collection, which makes it feasible to inspect far more buildings and creates a demand for human workers to analyze this new dataset. Filling this demand requires specialized knowledge and a niche skillset that the existing workers in this field, like the veterans from our training groups who were on the verge of leaving the field, are best-poised to provide.

This trend is happening in myriad industries, from insurance, to telecoms, to mining and construction.

Preparation now

Enterprises in industries that will be impacted by this technology need to make their preparations for this transformation now. Those that do not will not be around in 10 years.

Workers in jobs where careers are typically cut short due to physical risk need to invest in learning digital skills, so that they can extend the length of their career and increase their value, while reducing the inherent physical toll. Individuals who see their employers falling behind in innovation have the freedom to pursue a career with a more ambitious competitor, or to take a leadership role kickstarting initiatives internally to keep pace.

There’s no shortage of challenges to tackle or problems to solve in the world.

Commercial drones, and the greater wave of automation technology, will enable us to address more of them. This will create many opportunities for the workers who are prepared to capitalize on this technology. That preparation must begin now.

Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

Recent announcements by Elon Musk and Facebook about brain-computer interface (BCI) technology are just the latest headlines in an ongoing science-fiction-becomes-reality story.

BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm just by thinking about it. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (CSNE) we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now.

Picking up on P300 signals

All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.

Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “event-related potentials.” In the lab, they help us identify a reaction to a stimulus.

Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus. Tamara Bonaci, CC BY-ND

In particular, we capitalize on one of these specific signals, called the P300. It’s a positive peak of electricity that occurs toward the back of the head about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.

For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and focusing attention.

Reading your mind using P300s

P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, one character at a time.

It also can be used to determine what you know, in what’s called a “guilty knowledge test.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then shown many images repeatedly of both unrelated and related items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with the image of the item he took.

Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you know that you’re being probed and what the stimuli are.
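To make the calibration and read-out steps concrete, here is a minimal sketch, assuming single-channel EEG epochs that are already time-locked to stimulus onset: the training epochs known to contain a P300 are averaged into a subject-specific template, and new epochs are scored by correlating their post-stimulus window against that template. The sampling rate, time window, and function names are illustrative assumptions, not the CSNE group’s actual pipeline.

```python
# Minimal, illustrative P300 calibration/detection sketch (not the CSNE pipeline).
# Assumes single-channel EEG epochs sampled at 256 Hz, each aligned to stimulus onset.
import numpy as np

FS = 256                # sampling rate in Hz (assumed)
WINDOW = (0.25, 0.45)   # window around the ~300 ms P300 peak, in seconds

def p300_template(training_epochs):
    """Average epochs known to contain a P300 into a subject-specific template."""
    return np.mean(training_epochs, axis=0)

def p300_score(epoch, template):
    """Correlate the post-stimulus window of a new epoch with the template."""
    lo, hi = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    a, b = epoch[lo:hi], template[lo:hi]
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.dot(a, b) / len(a))

# "Guilty knowledge"-style read-out: the stimulus whose epochs score highest
# against the template is the one most likely to have elicited a P300.
def most_salient_stimulus(epochs_by_stimulus, template):
    scores = {name: np.mean([p300_score(e, template) for e in eps])
              for name, eps in epochs_by_stimulus.items()}
    return max(scores, key=scores.get), scores
```

Real systems typically use many electrodes, bandpass filtering, and trained classifiers rather than simple template correlation, which is exactly why the per-subject training data described above matters.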

Techniques like these are still considered unreliable and unproven, and thus U.S. courts have resisted admitting P300 data as evidence.

For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealth. Mark Stone, University of Washington, CC BY-ND

Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has collected data suggesting this is possible. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.

But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of continuing experiments at the University of Washington and elsewhere.

It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.

What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming.

With great power comes great responsibility

The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security be a human right? How do we adequately protect and store all the neural data being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being made with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering biomedical research or health care. Should neural data be treated differently?

Neuroethicists from the UW Philosophy department discuss issues related to neural implants.
Mark Stone, University of Washington, CC BY-ND

These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – as we have done at the CSNE – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their ethical concerns about brain research.

There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as genetics and neuromarketing. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.

Working on ethics while tech’s in its infancy

As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.

First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.

Taken together, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing continue to improve, it will become easier to use devices like these continuously, and it will also become easier to extract personal information from an unknowing individual. The safest advice would be to not use these devices at all.

The goal should be that the ethical standards and the technology will mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.

Shrinking data for surgical training

Image: MIT News

Laparoscopy is a surgical technique in which a fiber-optic camera is inserted into a patient’s abdominal cavity to provide a video feed that guides the surgeon through a minimally invasive procedure. Laparoscopic surgeries can take hours, and the video generated by the camera — the laparoscope — is often recorded. Those recordings contain a wealth of information that could be useful for training both medical providers and computer systems that would aid with surgery, but because reviewing them is so time consuming, they mostly sit idle.

Researchers at MIT and Massachusetts General Hospital hope to change that, with a new system that can efficiently search through hundreds of hours of video for events and visual features that correspond to a few training examples.

In work they presented at the International Conference on Robotics and Automation this month, the researchers trained their system to recognize different stages of an operation, such as biopsy, tissue removal, stapling, and wound cleansing.

But the system could be applied to any analytical question that doctors deem worthwhile. It could, for instance, be trained to predict when particular medical instruments — such as additional staple cartridges — should be prepared for the surgeon’s use, or it could sound an alert if a surgeon encounters rare, aberrant anatomy.

“Surgeons are thrilled by all the features that our work enables,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and senior author on the paper. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real-time.”

Joining Rus on the paper are first author Mikhail Volkov, who was a postdoc in Rus’ group when the work was done and is now a quantitative analyst at SMBC Nikko Securities in Tokyo; Guy Rosman, another postdoc in Rus’ group; and Daniel Hashimoto and Ozanan Meireles of Massachusetts General Hospital (MGH).

Representative frames

The new paper builds on previous work from Rus’ group on “coresets,” or subsets of much larger data sets that preserve their salient statistical characteristics. In the past, Rus’ group has used coresets to perform tasks such as deducing the topics of Wikipedia articles or recording the routes traversed by GPS-connected cars.

In this case, the coreset consists of a couple hundred or so short segments of video — just a few frames each. Each segment is selected because it offers a good approximation of the dozens or even hundreds of frames surrounding it. The coreset thus winnows a video file down to only about one-tenth its initial size, while still preserving most of its vital information.
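The paper’s video coreset construction comes with guarantees that go beyond this article, but the basic idea, keeping a small set of frames that approximates all the frames left out, can be sketched with a simple greedy selection over per-frame feature vectors. Everything below (the feature input, the distance measure, the ten-percent budget) is an illustrative assumption rather than the authors’ algorithm.

```python
# Illustrative sketch of representative-frame selection (not the paper's coreset algorithm).
# Greedy k-center: repeatedly pick the frame that is worst-approximated by the frames
# chosen so far, so that every omitted frame stays close to some kept frame.
import numpy as np

def select_representatives(frame_features, budget):
    """frame_features: (n_frames, n_dims) array; budget: number of frames to keep."""
    n = len(frame_features)
    chosen = [0]                                    # start from the first frame
    dists = np.linalg.norm(frame_features - frame_features[0], axis=1)
    while len(chosen) < min(budget, n):
        nxt = int(np.argmax(dists))                 # frame farthest from all kept frames
        chosen.append(nxt)
        new_d = np.linalg.norm(frame_features - frame_features[nxt], axis=1)
        dists = np.minimum(dists, new_d)            # distance to nearest kept frame
    return sorted(chosen)

# e.g., keep roughly 10% of frames, mirroring the tenfold reduction described above:
# representatives = select_representatives(features, budget=len(features) // 10)
```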

For this research, MGH surgeons identified seven distinct stages in a procedure for removing part of the stomach, and the researchers tagged the beginnings of each stage in eight laparoscopic videos. Those videos were used to train a machine-learning system, which was in turn applied to the coresets of four laparoscopic videos it hadn’t previously seen. For each short video snippet in the coresets, the system was able to assign it to the correct stage of surgery with 93 percent accuracy.
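As a rough picture of that training step, the sketch below fits an off-the-shelf classifier to labeled snippet features and measures accuracy on held-out coreset snippets. The choice of logistic regression and of precomputed per-snippet feature vectors is an assumption for illustration, not the model or features used in the paper.

```python
# Illustrative sketch of surgical-phase classification on coreset snippets
# (feature extraction and model choice are assumptions, not the paper's method).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def train_phase_classifier(train_features, train_phases):
    """train_features: (n_snippets, n_dims) array; train_phases: integer stage labels 0..6."""
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(train_features, train_phases)

def evaluate_on_coreset(clf, coreset_features, coreset_phases):
    """Assign each held-out coreset snippet to a stage and report accuracy."""
    predictions = clf.predict(coreset_features)
    return accuracy_score(coreset_phases, predictions)
```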

“We wanted to see how this system works for relatively small training sets,” Rosman explains. “If you’re in a specific hospital, and you’re interested in a specific surgery type, or even more important, a specific variant of a surgery — all the surgeries where this or that happened — you may not have a lot of examples.”

Selection criteria

The general procedure that the researchers used to extract the coresets is one they’ve previously described, but coreset selection always hinges on specific properties of the data it’s being applied to. The data included in the coreset — here, frames of video — must approximate the data being left out, and the degree of approximation is measured differently for different types of data.

Machine learning, however, can itself be thought of as a problem of approximation. In this case, the system had to learn to identify similarities between frames of video in separate laparoscopic feeds that denoted the same phases of a surgical procedure. The metric of similarity that it arrived at also served to assess how closely the video frames included in the coreset matched those that were omitted.

“Interventional medicine — surgery in particular — really comes down to human performance in many ways,” says Gregory Hager, a professor of computer science at Johns Hopkins University who investigates medical applications of computer and robotic technologies. “As in many other areas of human endeavor, like sports, the quality of the human performance determines the quality of the outcome that you achieve, but we don’t know a lot about, if you will, the analytics of what creates a good surgeon. Work like what Daniela is doing and our work really goes to the question of: Can we start to quantify what the process in surgery is, and then within that process, can we develop measures where we can relate human performance to the quality of care that a patient receives?”

“Right now, efficiency” — of the kind provided by coresets — “is probably not that important, because we’re dealing with small numbers of these things,” Hager adds. “But you could imagine that, if you started to record every surgery that’s performed — we’re talking tens of millions of procedures in the U.S. alone — now it starts to be interesting to think about efficiency.”
