
New Horizon 2020 robotics projects, 2016: An.Dy

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities.

A wide variety of research and innovation themes are represented in the new projects: from healthcare via transportation, industrial- and logistics robotics to events media production using drones. Some deal with complex safety matters on the frontier where robots meet people, to ensure that no one comes to harm. Others will create a sustainable ecosystem in the robotics community, setting up common platforms supporting robotics development. One project deals exclusively with the potentially radical changes facing society with the rise of new autonomous technologies. The projects are either helping humans in their daily lives at home or at work, collaborating with humans to help them with difficult, strenuous tasks, or taking care of dangerous tasks, reducing the risk to humans.

The research and innovation projects focus on a wide variety of Robotics and Autonomous Systems and capabilities, such as navigation, human-robot interaction, recognition, cognition and handling. Many of these capabilities are transferable to other fields as well.

Advancing Anticipatory Behaviors in Dyadic Human-Robot Collaboration: An.Dy

Objective

Obj1. ANDY will develop the ANDYSUIT, a wearable technology for monitoring humans involved in whole-body physical interaction tasks. Obj2. Based on the ANDYSUIT, ANDY will generate the ANDYDATASET, a collection of motion and force captures of humans involved in collaboration tasks. Obj3. From the ANDYDATASET, ANDY will develop ANDYMODEL, a set of models describing human and robot behaviour when engaged in collaborative tasks. Obj4. With ANDYSUIT and ANDYMODEL, ANDY will develop ANDYCONTROL, a reactive and predictive control strategy for human-robot physical collaboration.

Expected impact

Impact on the manufacturing domain: ANDY technologies support this objective in two ways: (1) by increasing productivity through more effective production and service processes in which the strengths of humans and robots are optimally combined, and (2) by maintaining workers' health until retirement age, including reduced costs for healthcare and compensation. Impact on healthcare: the ANDYSUIT will open a completely new field for methodological analysis, with the possibility of monitoring patients outside the clinic as well.

Partners

FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
INŠTITUT JOŽEF ŠTEFAN
DEUTSCHES ZENTRUM FÜR LUFT- UND RAUMFAHRT
XSENS TECHNOLOGIES BV
IMK AUTOMOTIVE GMBH
OTTO BOCK HEALTHCARE GMBH
ANYBODY TECHNOLOGY A/S

Coordinator: Francesco Nori
francesco.nori@iit.it
http://iron76.github.io
Istituto Italiano di Tecnologia – iCub Facility
Via Morego, 30
16163 Genova, Italy
Phone: (+39) 010 71 781 420
Fax: +39 010 71 70 817
Twitter: @AnDy_H2020

Project website: www.andy-project.eu

To err is algorithm: Algorithm fallibility and economic organisation

Algorithmic fails

Dig below the surface of some of today’s biggest tech controversies and you are likely to find an algorithm misfiring.[1]

These errors are not primarily caused by problems in the data that can make algorithms discriminatory, or their inability to improvise creatively. No, they stem from something more fundamental: the fact that algorithms, even when they are generating routine predictions based on non-biased data, will make errors. To err is algorithm.

The costs and benefits of algorithmic decision-making

We should not stop using algorithms simply because they make errors.[2] Without them, many popular and useful services would be unviable.[3] However, we need to recognise that algorithms are fallible and that their failures have costs. This points to an important trade-off between more (algorithm-enabled) beneficial decisions and more (algorithm-caused) costly errors. Where does the balance lie?

Economics is the science of trade-offs, so why not think about this topic like economists? This is what I have done in preparation for this blog, creating three simple economics vignettes that look at key aspects of algorithmic decision-making.[4] These are the key questions:

  • Risk: When should we leave decisions to algorithms, and how accurate do those algorithms need to be?
  • Supervision: How do we combine human and machine intelligence to achieve desired outcomes?
  • Scale: What factors enable and constrain our ability to ramp up algorithmic decision-making?

The two sections that follow give the gist of the analysis and its implications. The appendix at the end describes the vignettes in more detail (with equations!).

Modelling the modelling

1. Risk: go with the odds

As the American psychologist and economist Herbert Simon once pointed out:

in an information rich world, attention becomes a scarce resource.

This applies to organisations as much as it does to individuals.

The ongoing data revolution risks overwhelming our ability to process information and make decisions, and algorithms can help address this. They are machines that automate decision-making, potentially increasing the number of good decisions that an organisation can make.[5] This explains why they have taken off first in industries where the volume and frequency of potential decisions go beyond what a human workforce can process.[6]

What drives this process? For an economist, the main question is how much value the algorithm will create with its decisions. Rational organisations will adopt algorithms with high expected values.

An algorithm’s expected value depends on two factors: its accuracy (the probability that it will make a correct decision), and the balance between the reward from a correct decision and the penalty from an erroneous one.[7] Riskier decisions (where penalties are big compared to rewards) should be made by highly accurate algorithms. You would not want a flaky robot running a nuclear power station, but one might be fine if it is simply advising you about what TV show to watch tonight.

2. Supervision: watch out

We could bring in human supervisors to check the decisions made by the algorithm and fix any errors they find. This makes more sense if the algorithm is not very accurate (so supervisors do not spend a lot of time checking correct decisions), and the net benefits from correcting wrong decisions (i.e. extra rewards plus avoided penalties) are high. Costs matter too. A rational organisation has more incentive to hire human supervisors if they do not get paid a lot, and if they are highly productive (i.e. it only takes a few of them to do the job).

Following on from the example before, if a human supervisor fixes a silly recommendation on a TV website, this is unlikely to create a lot of value for the owner. The situation in a nuclear power station is completely different.

3. Scale: a race between machines and reality

What happens when we scale up the number of algorithmic decisions? Are there any limits to their growth?

This depends on several things, including whether algorithms gain or lose accuracy as they make more decisions, and the costs of ramping up algorithmic decision-making. In this situation, there are two interesting races going on.

1. There is a race between an algorithm’s ability to learn from the decisions it makes, and the amount of information that it obtains from new decisions. New machine learning techniques help algorithms ‘learn from experience’, making them more accurate as they make more decisions.[8] However, more decisions can also degrade an algorithm’s accuracy. Perhaps it is forced to deal with weirder cases, or new situations it is not trained to deal with.[9] To make things worse, when an algorithm becomes very popular (makes more decisions), people have more reasons to game it.

My prior is that the ‘entropic forces’ that degrade algorithm accuracy will win out in the end: no matter how much more data you collect, it is just impossible to make perfect predictions about a complex, dynamic reality.

2. The second race is between the data scientists creating the algorithms and the supervisors checking these algorithms’ decisions. Data scientists are likely to ‘beat’ the human supervisors because their productivity is higher: a single algorithm, or an improvement in an algorithm, can be scaled up over millions of decisions. By contrast, supervisors need to check each decision individually. This means that as the number of decisions increases, most of the organisation’s labour bill will be spent on supervision, with potentially spiralling costs as the supervision process gets bigger and more complicated.

What happens at the end?

When considered together, the decline in algorithmic accuracy and the increase in labour costs I just described are likely to limit the number of algorithmic decisions an organisation can make economically. But whether and when this happens depends on the specifics of the situation.

Implications for organisations and policy

The processes I discussed above have many interesting organisational and policy implications. Here are some of them:

1. Finding the right algorithm-domain fit

As I said, algorithms making decisions in situations where the stakes are high need to be very accurate to make up for the high penalties when things go wrong.[10] On the flip side, if the penalty from making an error is low, even inaccurate algorithms might be up to the task.

For example, the recommendation engines in platforms like Amazon or Netflix often make irrelevant recommendations, but this is not a big problem because the penalty from these errors is relatively low – we just ignore them. Data scientist Hilary Parker picked up on the need to consider the fit between model accuracy and decision context in a recent edition of the ‘Not So Standard Deviations’ podcast:

Most statistical methods have been tuned for the clinical trial implementation where you are talking about people’s lives and people dying with the wrong treatment, whereas in business settings the trade-offs are completely different.

One implication from this is that organisations in ‘low-stakes’ environments can experiment with new and unproven algorithms, including some with low accuracy early on. As these are improved, they can be transferred to ‘high-stakes’ domains. The tech companies that develop these algorithms often release them as open source software for others to download and improve, making these spill-overs possible.

2. There are limits to algorithmic decision-making in high stakes domains

Algorithms need to be applied much more carefully in domains where the penalties from errors are high, such as health or the criminal justice system, and when dealing with groups who are more vulnerable to algorithmic errors.[11] Only highly accurate algorithms are suitable for these risky decisions, unless they are complemented with expensive human supervisors who can find and fix errors. This will create natural limits to algorithmic decision-making: how many people can you hire to check an expanded number of decisions? Human attention remains a bottleneck to more decisions.

If policymakers want more and better use of algorithms in these domains, they should invest in R&D to improve algorithmic accuracy, encourage the adoption of high-performing algorithms from other sectors, and experiment with new ways of organising that help algorithms and their supervisors work better as a team.

Commercial organisations are not immune to some of these problems: YouTube, for example, has started blocking adverts in videos with fewer than ten thousand views. In those videos, the rewards from correct algorithmic ad-matching are probably low (they have low viewership) and the penalties could be high (many of these videos are of dubious quality). In other words, these decisions have low expected value, so YouTube has decided to stop making them. Meanwhile, Facebook just announced that it is hiring 3,000 human supervisors (almost a fifth of its current workforce) to moderate the content on its network. You can imagine how the need to supervise more decisions might put some brakes on its ability to scale up algorithmic decision-making indefinitely.

3. The pros and cons of crowdsourced supervision

One way to keep supervision costs low and coverage of decisions high is to crowdsource supervision to users, for example by giving them tools to report errors and problems. YouTube, Facebook and Google have all done this in response to their algorithmic controversies. Alas, getting users to police online services can feel unfair and upsetting. As Sarah T. Roberts, a law professor, pointed out in a recent interview about the Facebook violent video controversy:

The way this material is often interrupted is because someone like you or me encounters it. This means a whole bunch of people saw it and flagged it, contributing their own labour and non-consensual exposure to something horrendous. How are we going to deal with community members who may have seen that and are traumatized today?

4. Why you should always keep a human in the loop

Even when the penalties from error are low, it still makes sense to keep humans in the loop of algorithmic decision-making systems.[12] Their supervision provides a buffer against sudden declines in performance if (and when) the accuracy of algorithms decreases. When this happens, the number of erroneous decisions detected by humans and the net benefit from fixing them both increase. Humans can also ring the alarm, letting everyone know that there is a problem with the algorithms that needs fixing.[13]

This could be particularly important in situations where errors create penalties with a delay, or penalties that are hard to measure or hidden (say if erroneous recommendations result in self-fulfilling prophecies, or costs that are incurred outside the organisation).

There are many examples of this. In the YouTube advertising controversy, the big accumulated penalty from previous errors only became apparent with a delay, when brands noticed that their adverts were being posted against hate videos. The controversy over fake news after the US election is an example of hard-to-measure costs: algorithms’ inability to discriminate between real news and hoaxes creates costs for society, potentially justifying stronger regulations and more human supervision. Politicians have made this point when calling on Facebook to step up its fight against fake news in the run-up to the UK election:

Looking at some of the work that has been done so far, they don’t respond fast enough or at all to some of the user referrals they can get. They can spot quite quickly when something goes viral. They should then be able to check whether that story is true or not and, if it is fake, blocking it or alerting people to the fact that it is disputed. It can’t just be users referring the validity of the story. They [Facebook] have to make a judgment about whether a story is fake or not.

5. From abstract models to real systems

Before we use economic models to inform action, we need to define and measure model accuracy, penalties and rewards, changes in algorithmic performance due to environmental volatility, levels of supervision and their costs, and that is only the beginning.[14]

This is hard but important work that could draw on existing technology assessment and evaluation tools, including methods to quantify non-economic outcomes (e.g. in health).[15] One could even use rich data from an organisation’s information systems to simulate the impact of algorithmic decision-making and its organisation before implementing it. We are seeing more examples of these applications, such as the financial ‘regtech’ pilots that the European Commission is running, or the ‘collusion incubators’ mentioned in a recent Economist article on price discrimination.

Coda: Piecemeal social engineering in an age of algorithms

In a Nature article last year, US researchers Ryan Calo and Kate Crawford called for

a practical and broadly applicable social-systems analysis [that] thinks through all the possible effects of AI systems on all parties [drawing on] philosophy, law, sociology, anthropology and science-and-technology studies, among other disciplines.

Calo and Crawford did not include economists in their list. Yet as this blog suggests, economic thinking has much to contribute to these important analyses and debates. Thinking about algorithmic decisions in terms of their benefits and costs, the organisational designs we can use to manage their downsides, and the impact of more decisions on the value that algorithms create can help us make better decisions about when and how to use them.

This reminds me of a point that Jaron Lanier made in his 2013 book, Who Owns the Future?:

With every passing year, economics must become more and more about the design of the machines that mediate human social behaviour. A networked information system guides people in a more direct, detailed and literal way than does policy. Another way to put it is that economics must turn into a large-scale, systemic version of user interface design.

Designing organisations where algorithms and humans work together to make better decisions will be an important part of this agenda.

Acknowledgements

This blog benefited from comments from Geoff Mulgan, and was inspired by conversations with John Davies. The image above represents a precision-recall curve in a multi-label classification problem. It shows the propensity of a random forests classification algorithm to make mistakes when one sets different rules (probability thresholds) for putting observations in a category.

Appendix: Three economics vignettes about algorithmic decision-making

The three vignettes below are very simplified formalisations of algorithmic decision-making situations. My main inspiration was Human Fallibility and Economic Organization, a 1985 paper by Joe Stiglitz and Raj Sah in which the authors model how two organisational designs – hierarchies and ‘polyarchies’ (flat organisations) – cope with human error. Their analysis shows that hierarchical organisations, where decision-makers lower in the hierarchy are supervised by people further up, tend to reject more good projects, while polyarchies, where agents make decisions independently of each other, tend to accept more bad projects. A key lesson from their model is that errors are inevitable, and the optimal organisational design depends on context.

Vignette 1: Algorithm says maybe

Let’s imagine an online video company that matches adverts with videos in its catalogue. The company hosts millions of videos, so it would be economically unviable to rely on human labour to do the job. Instead, its data scientists develop algorithms to do this automatically.[16] The company looks for the algorithm that maximises the expected value of the matching decisions. This value depends on three factors:[17]

-Algorithm accuracy (a): the probability (between 0 and 1) that the algorithm will make the correct decision.[18]

-Decision reward (r): the reward when the algorithm makes the right decision.

-Error penalty (p): the cost of making the wrong decision.

We can combine accuracy, reward and penalty to calculate the expected value of the decision:

E = ar – (1-a)p [1]

This value is positive when the expected benefits from the algorithm’s decision outweigh the expected costs (or risks):

ar > (1-a)p [2]

Which is the same as saying that:

a/(1-a) > p/r [3]

The odds of making the right decision should be higher than the ratio between penalty and benefit.
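This adoption rule is easy to sketch in code. A minimal sketch, with purely illustrative numbers; only the formulas in [1] and [3] come from the vignette:

```python
# Expected value of one algorithmic decision (equation [1]) and the
# adoption rule (equation [3]). All numbers below are hypothetical.

def expected_value(a, r, p):
    """E = a*r - (1-a)*p for accuracy a, reward r, penalty p."""
    return a * r - (1 - a) * p

def worth_adopting(a, r, p):
    """Adopt when the odds of a correct decision beat the penalty/reward ratio."""
    return a / (1 - a) > p / r

# Low-stakes TV recommender: a mediocre algorithm still clears the bar.
print(worth_adopting(a=0.6, r=1.0, p=0.5))    # odds 1.5 vs ratio 0.5 -> True

# High-stakes setting: the same accuracy is nowhere near good enough.
print(worth_adopting(a=0.6, r=1.0, p=100.0))  # odds 1.5 vs ratio 100 -> False
```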

Enter human

We can reduce the risk of errors by bringing a human supervisor into the situation. This human supervisor can recognise and fix errors in algorithmic decisions. The impact of this strategy on the expected value of a decision depends on two parameters:

-Coverage ratio (k): k is the probability that the human supervisor will check a decision by the algorithm. If k is 1, this means that all algorithmic decisions are checked by a human.

-Supervision cost (cs(k)): this is the cost of supervising the decisions of the algorithm. The cost depends on the coverage ratio k because checking more decisions takes time.

The expected value of an algorithmic decision with human supervision is the following:[19]

Es = ar + (1-a)kr – (1-a)(1-k)p – cs(k) [4]

This equation captures the fact that some errors are detected and rectified while others are not. We subtract [1] from [4] to obtain the extra expected value from supervision. After some algebra, supervision adds value when:

(r+p)(1-a)k > cs(k) [5]

Supervision only makes economic sense when its expected benefit (which depends on the probability that the algorithm has made a mistake, that this mistake is detected, and the net benefits from flipping a mistake into a correct decision) is larger than the cost of supervision.
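The supervision condition can be sketched the same way. The linear cost function and all parameter values below are assumptions of this sketch, not from the blog:

```python
# Expected value with a supervisor who checks a share k of decisions:
# caught errors are flipped into rewards, unchecked errors incur the
# penalty. The linear cost function and all numbers are hypothetical.

def supervised_value(a, r, p, k, cost):
    return a * r + (1 - a) * k * r - (1 - a) * (1 - k) * p - cost(k)

def supervision_pays(a, r, p, k, cost):
    """Condition [5]: extra benefit (r+p)(1-a)k exceeds the supervision cost."""
    return (r + p) * (1 - a) * k > cost(k)

cost = lambda k: 0.3 * k  # hypothetical: checking every decision costs 0.3

# Inaccurate algorithm, costly errors: supervision is worth paying for.
print(supervision_pays(a=0.7, r=1.0, p=2.0, k=0.5, cost=cost))  # -> True
```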

Scaling up

Here, I consider what happens when we start increasing n, the number of decisions being made by the algorithm.

The expected value is:

E(n) = nar + n(1-a)kr – n(1-a)(1-k)p [6]

And the costs are C(n).

How do these things change as n grows?

I make some assumptions to simplify things: the organisation wants to hold k constant, and the rewards r and penalties p remain constant as n increases.[20]

This leaves us with two variables that change as n increases: a and C.

  • I assume that algorithmic accuracy a declines with the number of decisions, because the processes that degrade accuracy are stronger than those that improve it.
  • I assume that C, the production costs, only depends on the labour of data scientists and supervisors, who are paid salaries w_ds and w_s respectively.

Based on this, and some calculus, the marginal expected benefit of an additional decision is:

∂E(n)/∂n = kr + (a + n ∂a/∂n)(1-k)(r+p) – p(1-k) [7]

This means that as more decisions are made, the aggregated expected benefits grow in a way that is modified by changes in the marginal accuracy of the algorithm. On the one hand, more decisions mean scaled up benefits from more correct decisions. On the other, the decline in accuracy generates an increasing number of errors and penalties. Some of these are offset by human supervisors.

This is what happens with costs:

∂C/∂n = (∂C/∂L_ds)(∂L_ds/∂n) + (∂C/∂L_s)(∂L_s/∂n) [8]

As the number of decisions increases, costs grow because the organisation has to recruit more data scientists and supervisors.

[8] is the same as saying:

∂C/∂n = w_ds/z_ds + w_s/z_s [9]

where z_ds = ∂n/∂L_ds and z_s = ∂n/∂L_s are the marginal productivities of the two occupations.

The labour costs of each occupation are directly related to its salary, and inversely related to its marginal productivity. If we assume that data scientists are more productive than supervisors, this means that most of the increases in costs with n will be caused by increases in the supervisor workforce.

The expected value (benefits minus costs) from decision-making for the organisation is maximised with an equilibrium number of decisions ne where the marginal value of an extra decision equals its marginal cost:

kr + (a + n ∂a/∂n)(1-k)(r+p) – p(1-k) = w_ds/z_ds + w_s/z_s [10]
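A small simulation can illustrate how this equilibrium emerges. Everything here is hypothetical: the linear accuracy decay, the wages, the marginal productivities and the grid of decision volumes are invented for illustration:

```python
# Net expected value of n decisions under a linearly decaying accuracy
# a(n) = a0 - decay*n, with labour costs of wage/productivity per
# decision for each occupation. All parameters are hypothetical.

def net_value(n, r=1.0, p=2.0, k=0.5, a0=0.95, decay=1e-6,
              w_ds=3.0, z_ds=50_000, w_s=1.0, z_s=100):
    a = max(a0 - decay * n, 0.0)          # accuracy declines with n
    benefit = n * (a * r + (1 - a) * k * r - (1 - a) * (1 - k) * p)
    cost = n * (w_ds / z_ds + w_s / z_s)  # wage over marginal productivity
    return benefit - cost

# Scan a grid of decision volumes: net value rises, peaks, then falls
# as decaying accuracy and supervisor wages eat into the benefits.
best_n = max(range(0, 1_000_001, 10_000), key=net_value)
print(best_n)  # an interior optimum: more decisions stop paying off
```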

Extensions

Above, I have kept things simple by making some strong assumptions about each of the situations being modelled. What would happen if we relaxed these assumptions?

Here are some ideas:

Varieties of error

First, the analysis does not take into account that different types of errors (e.g. false positives and negatives, errors made with different degrees of certainty etc.) could have different rewards and penalties. I have also assumed certainty in rewards and penalties, when it would be more realistic to model them as random draws from probability distributions. This extension would help incorporate fairness and bias into the analysis. For example, if errors are more likely to affect vulnerable people (who suffer higher penalties), and these errors are less likely to be detected, this could increase the expected penalty from errors.
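A quick Monte Carlo sketch shows how this extension might look, with rewards and penalties drawn from distributions and a hypothetical vulnerable group that faces higher penalties and a lower error-detection rate. Every distribution and parameter below is invented for illustration:

```python
# Monte Carlo sketch: random rewards and penalties, plus a vulnerable
# group whose errors cost more and are harder to detect. All
# distributions and parameters are hypothetical.
import random

def average_value(n=100_000, a=0.9, k=0.5, share_vulnerable=0.2, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        vulnerable = rng.random() < share_vulnerable
        r = rng.gauss(1.0, 0.2)                         # random reward
        p = rng.gauss(5.0 if vulnerable else 1.0, 0.2)  # higher penalty if vulnerable
        detect = k * (0.5 if vulnerable else 1.0)       # these errors are harder to spot
        if rng.random() < a:
            total += r            # correct decision
        elif rng.random() < detect:
            total += r            # error caught and flipped into a reward
        else:
            total -= p            # undetected error incurs the penalty
    return total / n

print(average_value())  # roughly 0.83 under these assumptions
```

Because the vulnerable group combines higher penalties with lower detection, it drags the expected value down by far more than its population share suggests.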

Humans are not perfect either

All of the above assumes that algorithms err but humans do not. This is clearly not the case. In many domains, algorithms can be a desirable alternative to humans with deep-rooted biases and prejudices. In those situations, humans’ ability to detect and address errors is impaired, and this reduces the incentives to recruit them (the equivalent of a decline in their productivity). Organisations deal with all this by investing in technologies (e.g. crowdsourcing platforms) and quality assurance systems (including extra layers of human and algorithmic supervision) that manage the risks of human and algorithmic fallibility.

Scaling up rewards and penalties

Before, I assumed that marginal penalties and rewards remain constant as the number of algorithmic decisions increases. This need not be the case. Here are examples of situations where these parameters change with the number of decisions being made:

  • Rewards may increase with more decisions if the organisation gains market power, or is able to use price discrimination in more transactions; they may decrease if the organisation runs out of valuable decisions to make.
  • Penalties may increase if the organisation becomes more prominent and its mistakes receive more attention; they may decrease if users get accustomed to errors.

Getting an empirical handle on these processes is very important, as they could determine if there is a natural limit to the number of algorithmic decisions that an organisation can make economically in a domain or market, with potential implications for its regulation.

Endnotes

[1] I use the term ‘algorithm’ in a restricted sense, to refer to technologies that turn information into predictions (and depending on the system receiving the predictions, decisions). There are many processes to do this, including rule-based systems, statistical systems, machine learning systems and Artificial Intelligence (AI). These systems vary on their accuracy, scalability, interpretability, and ability to learn from experience, so their specific features should be considered in the analysis of algorithmic trade-offs.

[2] One could even say that machine learning is the science that manages trade-offs caused by the impossibility of eliminating algorithmic error. The famous ‘bias-variance’ trade-off between fitting a model to known observations and predicting unknown ones is a good example of this.

[3] Some people would say that personalisation is undesirable because it can lead to discrimination and ‘filter bubbles’, but that is a question for another blog post.

[4] Dani Rodrik’s ‘Economics Rules’ makes a compelling case for models as simplistic but useful formalisations of complex reality.

[5] In a 2016 Harvard Business Review article, Ajay Agrawal and colleagues sketched out an economic analysis of machine learning as a technology that lowers the costs of prediction. My way of looking at algorithms is similar because predictions are inputs into decision-making.

[6] This includes personalised experiences and recommendations in e-commerce and social networking sites, or fraud detection and algorithmic trading in finance.

[7] For example, if YouTube shows me an advert which is highly relevant to my interests, I might buy the product, and this generates income for the advertiser, the video producer and YouTube. If it shows me a completely irrelevant or even offensive advert, I might stop using YouTube, or kick up a fuss in my social network of choice.

[8] Reinforcement learning builds agents that use the rewards and penalties from previous actions to make new decisions.

[9] This is what happened with the Google Flu Trends system used to predict flu outbreaks based on Google searches – people changed their search behaviour, and the algorithm broke down.

[10] In many cases, the penalties might be so high that we decide that an algorithm should never be used, unless it is supervised by humans.

[11] Unfortunately, care is not always taken when implementing algorithmic systems in high-stakes situations. Cathy O’Neil’s ‘Weapons of Math Destruction’ gives many examples of this, ranging from the criminal justice system to university admissions.

[12] Mechanisms for accountability and due process are another example of human supervision.

[13] Using Albert Hirschman’s model of exit, voice and loyalty, we could say that supervision plays the role of ‘voice’, helping organisations detect a decline in quality before users begin exiting.

[14] The appendix flags up some of my key assumptions, and suggests extensions.

[15] This includes rigorous evaluation of algorithmic decision-making and its organisation using Randomised Controlled Trial methods like those proposed by Nesta’s Innovation Growth Lab.

[16] This decision could be based on how well similar adverts perform when matched with different types of videos, on demographic information about the people who watch the videos, or other things.

[17] The analysis in this blog assumes that the results of algorithmic decisions are independent from each other. This assumption might be violated in situations where algorithms generate self-fulfilling prophecies (e.g. a user is more likely to click an advert she is shown than one she is not). This is a hard problem to tackle, but researchers are developing methods based on randomisation of algorithmic decisions to address it.

[18] This does not distinguish between different types of error (e.g. false positives and false negatives). I come back to this at the end.

[19] Here, I am assuming that human supervisors are perfectly accurate. As we know from behavioural economics, this is a very strong assumption. I consider this issue at the end.

[20] I consider the implications of making different assumptions about marginal rewards and penalties at the end.

This post was originally published on Nesta. Click here to view the original.

The Force was strong in this robot competition

An Imperial Snowtrooper inspects a competitor’s entry at the 2017 MIT Mechanical Engineering 2.007 Student Design Final Robot Competition. Photo: Tony Pulsone

Thursday night, dozens of robots designed and built by undergraduates in a mechanical engineering class endured hours of intense, boisterous, and often jubilant competition as they scrambled to rack up points in one-on-one clashes on special “Star Wars”-themed playing arenas.

As has often happened in these contests — which have been going on, and constantly evolving, since 1970 — the ultimate winner of the single-elimination tournament was not the robot that had most consistently racked up the highest scores all evening. Rather, it was a high-scoring bot that triumphed when its competitor missed a crucial scoring opportunity because its starting position was just slightly out of alignment.

The class, 2.007 (Design and Manufacturing I), which has 165 mostly sophomore students, begins by giving each student an identical kit of parts, from which they each have to create a robot to carry out a variety of tasks to score points. This year, in a nod to the 40th anniversary of the first “Star Wars” film, released in 1977, the robots crawled around and over a replica of a “Star Wars” X-wing Starfighter. Students could earn points by pulling up a sliding frame to rescue prisoners trapped in carbonite; by dumping Imperial stormtroopers into a trash trench; by activating a cantina band; or by spinning up one or both of two large cylindrical thrusters on the wings. Students could choose which tasks to have their robot try to accomplish, and had just one semester to design, test, and operate their bot.

The devices could be pre-programmed to carry out set tasks, but could also be manually controlled through a radio-linked controller. As in past years, the open-ended nature of the assignment — and the variety of different ways to score — led to a wide range of strategies and designs, spanning from tall towers that would extend by telescoping out or with hinged sections, to elevator-like lifting devices, to small and nimble bots that scurried around to carry out multiple tasks, to an array of arms and devices for grasping or turning the different pieces. They sported names like Dodocopter, Bonnie and Clyde, Pitfall, Torque Toilet, Spinit to Winit, and Nicki Spinaj.

Students could earn extra points by accomplishing any of the tasks during an initial period when the robot had to perform autonomously, before the start of a manually remote-controlled round. The students were allowed to create multiple robots to carry out different tasks, as long as they were all made from the basic kit of parts, and all fit within a designated starting area. Most of the students opted to build two devices, and some even made three.

Second-place finisher Richard Moyer, with his small but powerful and robust robot called Tornado, scored 960.5 points in every round (the highest score achieved by any of the bots) by spinning both the lower and upper thrusters to their maximum speeds, and by using the lower thruster during the high-scoring autonomous period. But in the final matchup, Tornado was just slightly out of place in the starting box, and missed the thruster, losing out on that big initial score.

The robot used a simple but reliable design, with a single horizontally mounted drive wheel that it used to spin both the lower and upper thrusters, and also to activate an elevator mechanism that carried it from one wing to the other. It was "like the Swiss army knife of robots," thanks to this multifunction device, said Sangbae Kim, an associate professor of mechanical engineering and co-instructor of the course, who was dressed as the "Star Wars" Wookiee, Chewbacca.

The grand-prize winner, Tom Frejowski, also built a compact, powerful robot that concentrated on the spinning task, and scored 640 in the final round to take home the top trophy (a replica of the MIT dome). To ensure a straight shot from the starting position to the thruster, lining up just right to spin the heavy cylinder, Frejowski's robot used a single motor to drive both of its front wheels, which helped him earn consistently high scores. "That's how he goes dead straight every time," said co-instructor Amos Winter, an assistant professor of mechanical engineering, who was dressed as Darth Vader and shared the emcee duties with Kim.

During the tournament, which took place in the Johnson Ice Rink, all of the course teachers and assistants were dressed in various “Star Wars” costumes, and a packed audience of fellow students, families, and visitors of all ages cheered their encouragement with great enthusiasm. During a break, each of the teaching assistants was presented with a special memento: a beaver-cut twig from a beaver dam in Nova Scotia, symbolizing MIT’s beaver mascot, and nature’s original mechanical engineer.

Echoing the sentiments of many students in the class, sophomore James Li said of the class in a pre-taped video: “I had a bit of building experience, but I never had to design and build anything of this complexity. … It was a great experience.”

RoboCup video series: 20 years of history

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. Established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

The competition has since grown into an international movement with a variety of leagues that go beyond soccer. Teams compete to build robots for rescue missions, the home, and industry. And it's not just researchers; kids have their own league too. Last year, almost 3,000 participants and 1,200 robots competed.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

This week, we take a whirlwind tour of the RoboCup competition, spanning all the leagues. You’ll hear about the history and ambitions of RoboCup from the trustees, and inspiring teams from around the world.

Short Version

Long Version

Can’t wait to watch the rest? You can view all the videos on the RoboCup playlist below:
https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.

Watch this omnicopter fetch a ball

We have developed a computationally efficient trajectory generator for six degrees-of-freedom multirotor vehicles, i.e. vehicles that can independently control their position and attitude. The trajectory generator can produce approximately 500,000 trajectories per second, each guiding the vehicle from any initial state (position, velocity, and attitude) to any desired final state in a given time. In this video, we show an example application that requires evaluating a large number of trajectories in real time.
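The core idea, scoring a large batch of candidate trajectories and flying the cheapest feasible one, can be sketched in a few lines. The snippet below is a simplified stand-in, not the authors' algorithm: it uses a one-dimensional rest-to-rest quintic per candidate goal position and a peak-acceleration bound as the feasibility check, whereas the real generator handles full 6-DOF states and jerk-based costs.

```python
def accel(x0, xf, T, t):
    # Acceleration of the rest-to-rest quintic
    # x(s) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T.
    s = t / T
    return (xf - x0) / T**2 * (60*s - 180*s**2 + 120*s**3)

def peak_accel(x0, xf, T, n=200):
    # Sampled bound on |acceleration|, used as a cheap feasibility test.
    return max(abs(accel(x0, xf, T, i * T / n)) for i in range(n + 1))

def pick_trajectory(x0, candidate_goals, T, a_max):
    # Score every candidate goal and keep the cheapest feasible one,
    # mirroring the generate-many / select-one pattern described above.
    scored = [(peak_accel(x0, xf, T), xf) for xf in candidate_goals]
    feasible = [sx for sx in scored if sx[0] <= a_max]
    return min(feasible) if feasible else None
```

A real-time loop would rerun `pick_trajectory` every control cycle with the vehicle's current state as `x0`, which is where evaluating hundreds of thousands of candidates per second pays off.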

Multirotor vehicle

The multirotor vehicle used in the demonstration is an omni-directional eight-rotor vehicle. Its unique actuator configuration gives it full force and torque authority in all three dimensions, allowing it to fly novel maneuvers. For more details, please refer to the YouTube video or the research paper: "Design, Modeling and Control of an Omni-Directional Aerial Vehicle", IEEE International Conference on Robotics and Automation (ICRA), 2016.

Researchers

Dario Brescianini and Raffaello D’Andrea
Institute for Dynamic Systems and Control (IDSC), ETH Zurich, Switzerland – http://www.idsc.ethz.ch

Location

Flying Machine Arena, ETH Zurich, Switzerland.

Acknowledgements

This work is supported by and builds upon prior contributions by numerous collaborators in the Flying Machine Arena project. See the list here. This research was supported by the Swiss National Science Foundation (SNSF).

Kids celebrate robotics at RoboFes 2017

Robo Done, the robotic academy franchise for kids from Osaka, Japan, celebrated Japan’s Day of the Children on the 5th of May at their annual event, Robot Festival 2017 or RoboFes. The event welcomed over 1,000 attendees, including children and their parents.

This was the second time Robo Done has celebrated the festival. In just one year, attendance nearly tripled, from 350 attendees in 2016 to over 1,012 in 2017. The festival was held at the KANDAI MeRise Campus of Kansai University in Osaka, Japan and has become the biggest event at the campus.

The main activity was the Robot Contest, using LEGO Mindstorms, with morning and afternoon leagues. Over 200 children, from 6 years and up, participated in the championship. The kids built robots in pairs and programmed their creations, repeating a process of trial and error against a time limit. Several IT and robotics companies had booths, as did students of the university, offering a variety of activities for the kids to enjoy.

Robo Done will hold RoboFes again in 2018, hoping to inspire even more kids to enjoy robotics and programming. We hope RoboFes will become a regular event during Japan’s “Golden Week!”

On the future of human-centered robotics

“The new frontier is learning how to design the relationships between people, robots, and infrastructure,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing, and a professor of aeronautics and astronautics. “We need new sensors, new software, new ways of architecting systems.” Photo: Len Rubenstein

Science and technology are essential tools for innovation, and to reap their full potential, we also need to articulate and solve the many aspects of today’s global issues that are rooted in the political, cultural, and economic realities of the human world. With that mission in mind, MIT’s School of Humanities, Arts, and Social Sciences has launched The Human Factor — an ongoing series of stories and interviews that highlight research on the human dimensions of global challenges. Contributors to this series also share ideas for cultivating the multidisciplinary collaborations needed to solve the major civilizational issues of our time.

David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing and Professor of Aeronautics and Astronautics at MIT, researches the intersections of human behavior, technological innovation, and automation. Mindell is the author of five acclaimed books, most recently “Our Robots, Ourselves: Robotics and the Myths of Autonomy” (Viking, 2015). He is also the co-founder of Humatics Corporation, which develops technologies for human-centered automation. SHASS Communications recently asked him to share his thoughts on the relationship of robotics to human activities, and the role of multidisciplinary research in solving complex global issues.

Q: A major theme in recent political discourse has been the perceived impact of robots and automation on the United States labor economy. In your research into the relationship between human activity and robotics, what insights have you gained that inform the future of human jobs, and the direction of technological innovation?

A: In looking at how people have designed, used, and adopted robotics in extreme environments like the deep ocean, aviation, or space, my most recent work shows how robotics and automation carry with them human assumptions about how work gets done, and how technology alters those assumptions. For example, the U.S. Air Force’s Predator drones were originally envisioned as fully autonomous — able to fly without any human assistance. In the end, these drones require hundreds of people to operate.

The new success of robots will depend on how well they situate into human environments. As in chess, the strongest players are often the combinations of human and machine. I increasingly see that the three critical elements are people, robots, and infrastructure — all interdependent.

Q: In your recent book “Our Robots, Ourselves,” you describe the success of a human-centered robotics, and explain why it is the more promising research direction — rather than research that aims for total robotic autonomy. How is your perspective being received by robotic engineers and other technologists, and do you see examples of research projects that are aiming at human-centered robotics?

A: One still hears researchers describe full autonomy as the only way to go; often they overlook the multitude of human intentions built into even the most autonomous systems, and the infrastructure that surrounds them. My work describes situated autonomy, where autonomous systems can be highly functional within human environments such as factories or cities. Autonomy as a means of moving through physical environments has made enormous strides in the past ten years. As a means of moving through human environments, we are only just beginning. The new frontier is learning how to design the relationships between people, robots, and infrastructure. We need new sensors, new software, new ways of architecting systems.

Q: What can the study of the history of technology teach us about the future of robotics?

A: The history of technology does not predict the future, but it does offer rich examples of how people build and interact with technology, and how it evolves over time. Some problems just keep coming up over and over again, in new forms in each generation. When the historian notices such patterns, he can begin to ask: Is there some fundamental phenomenon here? If it is fundamental, how is it likely to appear in the next generation? Might the dynamics be altered in unexpected ways by human or technical innovations?

One such pattern is how autonomous systems have been rendered less autonomous when they make their way into real world human environments. Like the Predator drone, future military robots will likely be linked to human commanders and analysts in some ways as well. Rather than eliding those links, designing them to be as robust and effective as possible is a worthy focus for researchers’ attention.

Q: MIT President L. Rafael Reif has said that the solutions to today’s challenges depend on marrying advanced technical and scientific capabilities with a deep understanding of the world’s political, cultural, and economic realities. What barriers do you see to multidisciplinary, sociotechnical collaborations, and how can we overcome them?

A: I fear that as our technical education and research continues to excel, we are building human perspectives into technologies in ways not visible to our students. All data, for example, is socially inflected, and we are building systems that learn from those data and act in the world. As a colleague from Stanford recently observed, go to Google image search and type in “Grandma” and you’ll see the social bias that can leak into data sets — the top results all appear white and middle class.

Now think of those data sets as bases of decision making for vehicles like cars or trucks, and we become aware of the social and political dimensions that we need to build into systems to serve human needs. For example, should driverless cars adjust their expectations for pedestrian behavior according to the neighborhoods they’re in?

Meanwhile, too much of the humanities has developed islands of specialized discourse that are inaccessible to outsiders. I used to be more optimistic about multidisciplinary collaborations to address these problems. Departments and schools are great for organizing undergraduate majors and graduate education, but the old two-cultures divides remain deeply embedded in the daily practices of how we do our work. I've long believed MIT needs a new school to address these synthetic, far-reaching questions and train students to think in entirely new ways.

Interview prepared by MIT SHASS Communications
Editorial team: Emily Hiestand (series editor), Daniel Evans Pritchard

The Drone Center’s Weekly Roundup 5/15/17

Sailors assigned to Explosive Ordnance Disposal Mobile Unit 5 (EODMU5) Platoon 142 recover an unmanned underwater vehicle onto a Coastal Riverine Group 1 Detachment Guam MK VI patrol boat in the Pacific Ocean May 10, 2017. Credit: Mass Communication Specialist 1st Class Torrey W. Lee/ U.S. Navy

May 8, 2017 – May 14, 2017

If you would like to receive the Weekly Roundup in your inbox, please subscribe at the bottom of the page.

News

The International Civil Aviation Organization announced that it plans to develop global standards for small unmanned aircraft traffic management. In a statement at the Association of Unmanned Vehicle Systems International’s Xponential trade conference, the United Nations agency said that as part of the initiative it has issued a Request for Information on air traffic management systems for drones. (GPS World)

Virginia Governor Terry McAuliffe has created a new office dedicated to drones and autonomous systems. According to Gov. McAuliffe, the Autonomous Systems Center for Excellence will serve as a “clearinghouse and coordination point” for research and development programs related to autonomous technologies. (StateScoop)

Commentary, Analysis, and Art

At the Telegraph, Alan Tovey writes that the U.K.’s exit from the European Union is unlikely to affect cross-channel cooperation on developing fighter drones.

At the Dead Prussian Podcast, Ulrike Franke discusses the role that drones currently play in the military.

At IHS Jane’s 360, Daniel Wasserbly writes that the U.S. Marine Corps will slow its acquisition of the Boeing Insitu Blackjack drone.

At the Bulletin of Atomic Scientists, James Rogers argues that the Trump administration policy on drones is “likely to prove counterproductive.”

At IEEE Spectrum, David Schneider examines state and local drone regulations.

In the Journal of Archaeological Science, Sean Field, Matt Waite, and LuAnn Wandsnider consider the utility of drones for archeological surveys.

At RJI Online, Jennifer Nelson looks at what a television station in Idaho is learning about using drones for news coverage.

A report by the European Center for Constitutional and Human Rights considers the “impact of drone attacks on law, warfare and society.”

At The New York Times, William Grimes visits “Drones: Is the Sky the Limit?,” a new exhibition at the Intrepid Sea, Air & Space Museum.

In a paper in the International Organization journal, Matthew Fuhrmann and Michael C. Horowitz consider the reasons that states acquire drones.

At Bloomberg, Justin Bachman looks at how different companies are seeking an advantage in managing data from drones for commercial purposes.

At the Associated Press, Dario Lopez and Joshua Goodman write about a U.S. Coast Guard program using drones to counter maritime smuggling.

In a speech at the Xponential 2017 trade show, Intel Corporation CEO Brian Krzanich argued that data will be the most significant aspect of the drone industry. (AUVSI)

At the South China Morning Post, Li Tao writes that China’s popular consumer drone brands are increasingly turning to the commercial sector.

At Defense One, Marcus Weisgerber writes that the Pentagon is using machine-learning to help identify ISIS targets.

Know Your Drone

Saudi Arabia’s King Abdulaziz City for Science and Technology unveiled the Saqr 1, an armed drone with a range of up to 2,500 km. (IHS Jane’s 360)  

U.S. drone maker AeroVironment unveiled the Snipe, a nano quadcopter that weighs just 150 grams. (New Atlas)

In a test, startup Volans-i flew a delivery drone along a 100-mile route in Texas, a new record for a drone delivery. (Tech Crunch)

Energy startup TwingTec is developing a tethered drone that harvests power from the wind. (Design Boom)

The U.S. Army is seeking a midsize cargo drone that could operate with a high level of autonomy. (FlightGlobal)

Nautilus, a California startup, is developing a cargo drone that could carry thousands of pounds of goods over long distances. (Air & Space Magazine)

Drone maker Pulse Aerospace unveiled two new rotorcraft drones for military and commercial applications, the Radius 65 and the Vapor 15. (Press Release)

Piasecki Aircraft will likely submit its ARES demonstrator drone for the U.S. Marine Corps' Unmanned Expeditionary Capabilities program. (FlightGlobal)

Turkish defense firm Aselsan has unveiled two new counter-drone systems. (IHS Jane’s 360)

Defense firm Kratos confirmed that it has conducted several demonstration flights of a high performance jet drone for an undisclosed customer. (FlightGlobal)

Technology firm Southwest Research Institute has been granted a patent for a system by which military drones can collaborate with unmanned ground vehicles. (Unmanned Aerial Online)

The U.S. Army is interested in developing a mid-size unmanned cargo vehicle that could carry up to 800 pounds of payload. (FlightGlobal)

A student at the Milwaukee Institute of Art and Design has created a drone designed to help parents track their children. (Milwaukee Journal Sentinel)

French drone maker Parrot is set to begin developing a line of prosumer drones. (Recode)

Defense firm Qinetiq has announced that it will pursue the U.S. Army’s Lightweight Reconnaissance Robot program. (IHS Jane’s 360)

The U.S. Army is seeking a replacement engine for the RQ-7 Shadow tactical drone. (FlightGlobal) 

Researchers at Carnegie Mellon have been crashing autonomous drones repeatedly in order to teach them how to avoid crashing. (IEEE)

An Air Force investigation found that the cause of the crash of an MQ-9 Reaper drone in Nevada last summer was pilot error. (Press Release)

A Defense Advanced Research Projects Agency press release describes in detail its recent military academy swarming competition.

Raytheon announced that it has installed ground-based sense-and-avoid systems at a number of air bases in the U.S. (IHS Jane’s 360)

The Digital Circuit has put together a compilation of images of some of the more interesting drones at this year’s Xponential drone conference.

Drones at Work

A drone flying over a bike race in Rancho Cordova, California crashed into a cyclist. (Market Watch)

Meanwhile, a consumer drone crashed into a car crossing the Sydney Harbor Bridge in Australia. It is the second time a drone has crashed at the site of the bridge in the past nine months. (Sydney Morning Herald)

Insurance company Travelers has trained over 150 drone operators to use drones for insurance appraisals over properties. (Insurance Journal)

Kazakhstan’s armed forces displayed a number of its recently acquired unmanned aircraft during a military parade. (IHS Jane’s 360)

A Latvian technology firm used a large multirotor drone to carry a skydiver to altitude before he parachuted back down to earth. (Phys.org)

Clear Flight Solutions and AERIUM Analytics are set to begin integrating the Robird drone system, a falcon-like drone that scares birds away from air traffic, at Edmonton International Airport. (Unmanned Systems Technology)

Industry Intel

The U.S. Army awarded General Atomics Aeronautical Systems a $221.6 million contract modification for 20 extended range Gray Eagle drones and associated equipment. (DoD)

The U.S. Air Force awarded General Electric a $14 million contract for work that includes the Thermal Management System for unmanned aircraft. (DoD)

The U.S. Navy awarded Boeing Insitu an $8.1 million contract for spare parts for the RQ-21A Blackjack. (DoD)

The United Arab Emirates awarded Canada-based CAE a contract estimated at $40.9 million to train drone operators. (UPI)

Airbus opened a subsidiary in Atlanta that will sell imagery from satellites and drones to commercial clients. (AIN Online)

Turkish Aerospace Industries will begin cooperating with ANTONOV Company on the development of unmanned systems. (Press Release)

Aker, a company that develops drones for agriculture, won $950,000 in funding from the Clean Energy Trust Challenge. (Chicago Tribune)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Drones land back to Earth at Xponential 2017

PhoneDrone Ethos, Kickstarter campaign. Credit: xCraft/YouTube

JD Claridge's story epitomizes the current state of the drone industry. Claridge, founder of xCraft, is best known for being the first contestant on Shark Tank to receive money from all the Sharks – even Kevin O'Leary! Walking the floor of Xponential 2017, the annual convention of the Association for Unmanned Vehicle Systems International (AUVSI), Claridge remarked to me how the drone industry has grown up since his TV appearance.

Claridge has gone from pitching cellphone cases that turn into drones (aka the PhoneDrone) to solving mission-critical problems. The age of fully autonomous flight is near, and the drone industry is finally recovering from the hangover of overhyped Kickstarter videos (see the Lily drone's $34 million fraud). xCraft's pivot to lightweight, power-efficient enterprise drones is an example of this evolved marketplace. During the three days of Xponential 2017, several far-reaching partnerships were announced between stalwarts of the tech industry and aviation startups. Claridge introduced me to his new partner, Rajant, a leader in industrial wireless networks. xCraft's latest models use Rajant's mesh networks to launch swarms of drones with one controller. Flying more drones simultaneously lets users work around the flight-time limits of lithium batteries by covering greater areas within a single mission.

Bob Schena, Rajant’s CEO, said, “Rajant’s network technology now makes it possible for one pilot to operate many aircrafts concurrently, with flight times of 45 minutes. We’re pleased to partner with xCraft and bring more intelligence, mobility and autonomy to UAV communication infrastructures covering greater aerial distances while supporting various drone payloads.”

The battery has been the Achilles heel of the small drone industry since its inception. While large winged craft rely heavily on fossil fuels, battery-operated multirotor drones have been limited to missions of under 45 minutes. Innovators like Claridge are leading the way for a new wave of creative solutions:

Solar Powered Wings

Airbus showcased its Zephyr drone products, or HAPS (High Altitude Pseudo-Satellite) UAVs, which use solar-powered wings for power. Zephyr UAVs can fly for months at a time, saving thousands of tons of fuel. The HAPS also offers a number of lightweight payload options, from voice communications to persistent internet to real-time surveillance. Airbus was not the only solar solution on display; there were a handful of Chinese upstarts and solar cell purveyors for retrofitting existing aircraft.

Hybrid Fuel Solutions  

In the Startup Pavilion, William Fredericks of the Advanced Aircraft Company (AAC) demoed a novel technology using a hybrid of diesel fuel and lithium batteries with flexible fixed wings and multirotors, resulting in over 3 hours of flying time. AAC's prototype, the Hercules, is remarkably lightweight and fast. Fredericks is an aircraft designer by trade with 12 designs flying in the air, including NASA's Greased Lightning, which looks remarkably similar to Boeing's Osprey. The Hercules is available for sale on the company's website for multiple use cases, including agriculture, first responders, and package delivery. It is interesting to note that a few rows from Fredericks's booth was his former employer, NASA, promoting its new Autonomy Incubator for "intelligent flight systems" and its "autonomy innovation lab" (definitely an incubator to watch).

Vertical Take Off & Landing

In addition to hybrid fuel strategies, entrepreneurs are also rethinking launch procedures. AAC's Hercules and xCraft's commercial line of drones take off vertically to reduce wind resistance and conserve energy. Australian startup Iridium Dynamics takes this approach to a new level, with striking results. Its winged craft, Halo, uses a patent-pending "hover thrust" of the entire craft, so its wings actually create the vertical lift to hover with minimal power. The drone also has two rotors for horizontal flight. According to Dion Gonano, Control Systems Engineer, it can fly for over 2 hours. The Halo also lands vertically onto a stationary mechanical arm. While the website lists a number of commercial applications for this technology, it was unclear from my discussions with Gonano whether they have deployed it in real tests.

New Charging Efficiencies

Prior to Xponential, Seattle-based WiBotic announced the closing of its $2.5 million seed round to fund its next generation of battery charging technologies. The company has created a novel approach to wireless inductive charging for robotics. Its platform includes a patent-pending auto-detect feature that can begin recharging as soon as the robot enters the proximity of the base station, even during flight. According to CEO Dr. Ben Waters, its charge is faster than traditional solutions presently on the market. Dr. Waters demonstrated for me the company's suite of software tools that monitor battery performance, giving clients a complete power-management analytics platform. WiBotic is already piloting its technology with leading commercial customers in the energy and security sectors. While WiBotic is the first inductive charging platform, other companies have created innovative battery-swapping techniques. Airobotics' drone storage box, currently deployed at power plants in Israel, houses a robotic arm that services the robot post-flight by switching out the payload and battery.
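The auto-detect behavior described above, where charging begins as soon as the vehicle comes within range of the base station, amounts to a small proximity-gated state machine. The sketch below is a toy model with a made-up detection threshold, not WiBotic's actual protocol:

```python
class InductiveCharger:
    """Toy proximity-gated charger (hypothetical 0.3 m detection
    range; not WiBotic's real detection logic)."""

    def __init__(self, range_m=0.3):
        self.range_m = range_m
        self.charging = False

    def update(self, distance_m):
        # Auto-detect: charge whenever the vehicle is in range,
        # and stop the instant it leaves, even mid-flight.
        self.charging = distance_m <= self.range_m
        return self.charging
```

In practice the base station would poll `update` with a measured coupling strength rather than a distance, but the in-range/out-of-range gating is the same.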

Reducing Payload Weight

In addition to aircraft design, payload weight is a big factor in battery drain. A growing trend within the industry is miniaturizing the size and cost of components. Ultimately, the mission of a drone is directly related to the type of payload, from cameras for collecting images to precise measurements using Light Detection and Ranging (Lidar) sensors. Lidar is typically deployed in autonomous vehicles to provide the most precise position for the robot in a crowded area, like a self-driving car on the road. However, Lidar is currently too expensive and large for many multirotor surveys. Chris Brown of Z-Senz, a former scientist with the National Institute of Standards and Technology (NIST), hopes to change the landscape of drones with his miniaturized Lidar sensor. Brown's shrunken sensor, SKY1, offers major advantages in size, weight, and power consumption without losing the accuracy of long-distance sensing. A recent study estimates that the Lidar market will exceed $5 billion by 2022, with Velodyne and Quanergy already gaining significant investment. Z-Senz is aiming to be commercially available by 2018.

Lidar is not the only measurement technology; the Global Positioning System (GPS) is also widely deployed. Two of the finalists of the Xponential Startup Showdown were startups focused on shrinking GPS chips while increasing functionality. Inertial Sense has produced a chip the size of a dime that houses an Inertial Measurement Unit (IMU), Attitude Heading Reference System (AHRS), and GPS-aided Inertial Navigation System (INS). Their website claims that their "advanced algorithms fuse output from MEMs inertial sensors, magnetometers, barometric pressure, and a high-sensitivity GPS (GNSS) receiver to deliver fast, accurate, and reliable attitude, velocity, and position even in the most dynamic environments." The chips and micro navigation accessories are available on the company's e-store.
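The quoted fusion pipeline is proprietary, but the basic trick behind any IMU attitude estimate can be shown with a one-axis complementary filter: integrate the gyro for smooth short-term motion, and lean on the accelerometer's gravity vector to cancel long-term drift. A minimal sketch with an illustrative gain, not Inertial Sense's algorithm:

```python
import math

def complementary_pitch(gyro_rates, accels, dt, alpha=0.98):
    """Fuse gyro pitch rates (rad/s) with accelerometer readings
    (ax, az, in g) into a drift-corrected pitch angle (rad)."""
    angle = 0.0
    for omega, (ax, az) in zip(gyro_rates, accels):
        gyro_angle = angle + omega * dt    # smooth, but drifts over time
        accel_angle = math.atan2(ax, az)   # noisy, but unbiased (gravity)
        # Blend: trust the gyro short-term, the accelerometer long-term.
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
    return angle
```

Production units replace this scalar blend with magnetometer- and GNSS-aided Kalman filtering, but the short-term/long-term division of labor between sensors is the same.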

The winner of the Showdown, uAvionix, is a leading developer of avionics for both manned and unmanned flight. Their new transceivers and transponders claim to be "the smallest, and lightest and most affordable on the market" (GPS is already a commodity). uAvionix presented its "Ping Network System that reduces weight on average by 40% as compared to the two-piece installations." The Ping products also claim precise barometric altitude readings at altitudes beyond 80,000 ft.

Paul Beard, CEO of uAvionix, said, “our customers have asked for even smaller and lighter solutions; integrating the transceivers, GPS receivers, GPS antennas, and barometric pressure sensors into a single form factor facilitates easier installation and lowers weight and power draw requirements resulting in a longer usable flight time.”

As I rushed to the airport to catch my manned flight, I felt reenergized about the drone industry, although follies will persist. I mean who wouldn’t want a pool deckchair drone this summer?

This and all other autonomous subjects will be explored at RobotLabNYC’s next event with Dr. Howard Morgan (FirstRound Capital) and Tom Ryden (MassRobotics) – RSVP.

Classroom robotics: Training teachers to code

*All images credit: ROBBO

Thirty teachers arrived, excited to learn. They rolled up their sleeves and placed laptops and Robot kits on the floor. The room filled with excitement (and laughter!) as everyone tried to come up with different solutions for creating different programs. The results were hilarious: a robot inspired by Darth Vader, a robot that asked everyone to turn the lights off when it was too bright in the room, and a robot that tricked the teacher into leaving the classroom during an exam.

Not bad for a day of “work!”

Training like this is what we’re all about at ROBBO. ROBBO is a fun and simple way for absolutely anyone to get introduced to the world of robotics and coding. As part of one of our many projects, we organized a training weekend with the single purpose of introducing teachers to programming and robotics. The teachers started with simple exercises in RobboScratch, a visual programming environment: moving the character, chaining multiple commands into sequences, and learning the advantages of the infinite loop when making programs.

So, what do we mean by classroom robotics? Our educational robotics line consists of two robots: the Robot kit and the Lab. Both are ideal for learning programming and robotics, as well as skills in problem-solving, mathematics and physics, while working in interactive teams. The Lab includes a microphone, LED lights, a light sensor and a slider, and is great for experimenting with elements such as sound, light and numeric values. Our other robot, the Robot kit, is equipped with a motor and is a fun way to explore everyday technology using a touch sensor, proximity sensor, light sensor, line sensors and an LED light. Our robots are programmed in the visual programming environment RobboScratch, an adapted version of Scratch, the visual language developed at MIT.
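To give a feel for what such a program looks like in text form, here is a hypothetical equivalent of a RobboScratch “forever loop” that drives the Robot kit forward until its proximity sensor spots an obstacle. The `Robot` class and its method names are invented for illustration; they are not the real ROBBO API:

```python
class Robot:
    """A toy stand-in for the Robot kit, fed with canned sensor readings."""

    def __init__(self, readings):
        self._readings = iter(readings)

    def proximity(self):
        # Simulated proximity sensor reading, in centimetres.
        return next(self._readings)

    def drive(self, speed):
        # A real robot would spin its motor; here we just report the command.
        return f"drive {speed}"

def step(robot, threshold=10):
    """One pass of the forever loop: reverse if too close, else go forward."""
    if robot.proximity() < threshold:
        return robot.drive(-50)   # obstacle ahead: back away
    return robot.drive(50)        # clear path: move forward

bot = Robot([30, 8, 25])
print([step(bot) for _ in range(3)])  # ['drive 50', 'drive -50', 'drive 50']
```

In RobboScratch the same logic is assembled visually: the infinite loop block wraps a sensor-check block, which is exactly the pattern the teachers practised in the workshop.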

In our earlier example, teachers were divided into separate workshops, working in pairs or teams of three. We believe it is important to communicate and discuss with others to better understand different programs and come up with alternative solutions if a program doesn’t work in the desired way. The workshops are all based on the exercises from our pedagogical guide, and teachers were given a copy of the guide for their own use. Our guide provides instructions and multiple exercise cards (with solutions!) and is free to download here (http://robbo.world/support/).

Our teaching guide is for anyone who wants to learn the basics of programming with the help of ROBBO™ robotics and RobboScratch. The pedagogical guide is a comprehensive educational tool with instructions, exercise cards and ideas for creating the ultimate learning experience. It has been developed together with the Innokas Network at the University of Helsinki and Finnish teachers and students. The majority of the teachers who participated in the training had only limited knowledge of Scratch or Scratch Junior, so we started from the beginning.

The pedagogical guide includes an introduction to RobboScratch, the Lab and the Robot kit, as well as 28 exercise cards to help you along the way. The exercises are designed to develop programming skills step by step, teaching children to think logically as a software developer would: to grasp a problem as a whole, to split it into smaller parts, and to develop a simple program that performs each operation. These skills are also useful in many everyday situations. In the initial exercises, students build a program from a predefined model, but as the practice progresses, they get more and more space for their own ideas.

By developing new skills, users are encouraged to plan and develop innovations in robotics. The goal of the training is to learn to understand and use technology to invent something new. As the final assignment of the teachers’ training, we asked teachers to form teams of four and come up with a small prank using the different capabilities and sensors of either the Robot kit or the Lab, or both robots at once.

If you’d like to learn more about ROBBO or download our free guide, visit our website: http://robbo.world/support/

Comments from teachers:

“The teaching guide is a great support when learning coding. And I can just hand out these ready-made exercise cards to my students as well!”

“The exercises in the guide are good for understanding the different possibilities you have with the robots, because when you start doing an exercise you come up with more ideas on how to develop a more complicated program.”

“The robots emphasized practicality in the learning process. In addition to programming, ROBBO teaches environmental studies and all-around useful skills, in particular when the exercises of the pedagogical guide are being utilized.”

If you’d like to learn more about classroom robotics, check out these articles:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Living and working with robots: Live coverage of #ERF2017

Over 800 leading scientists, companies, and policymakers working in robotics will convene at the European Robotics Forum (#ERF2017) in Edinburgh, 22-24 March. This year’s theme is “Living and Working With Robots” with a focus on applications in manufacturing, disaster relief, agriculture, healthcare, assistive living, education, and mining.

The 3-day programme features keynotes, panel discussions, workshops, and plenty of robots roaming the exhibit floor.

We’ll be updating this post regularly with live tweets and videos. You can also follow all the Robohub coverage here.

Engineers design “tree-on-a-chip”

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants.

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing small-scale hydraulic robots that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.
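The osmotic pressures involved can be surprisingly large. As a rough back-of-the-envelope illustration, using the van ’t Hoff relation Π = cRT and an assumed 0.5 M sugar concentration (an illustrative order of magnitude, not a figure from the paper):

```python
R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # room temperature, K

def osmotic_pressure(c_molar):
    """Van 't Hoff estimate Pi = c*R*T, with c given in mol/L.

    The factor of 1000 converts mol/L to mol/m^3; the result is in pascals.
    """
    return c_molar * 1000 * R * T

# An assumed 0.5 M sugar solution across a semipermeable membrane:
print(round(osmotic_pressure(0.5) / 1e6, 2), "MPa")
```

That is on the order of a megapascal, roughly ten atmospheres, from dissolved sugar alone, which is why a purely passive osmotic mechanism is attractive as a pump with no moving parts.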

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advance Research Projects Agency.

Worm-inspired material strengthens, changes shape in response to its environment

The Nereis virens worm inspired new research out of the MIT Laboratory for Atomistic and Molecular Mechanics. Its jaw is made of soft organic material, but is as strong as harder materials such as human dentin. Photo: Alexander Semenov/Wikimedia Commons

A new material that naturally adapts to changing environments was inspired by the strength, stability, and mechanical performance of the jaw of a marine worm. The protein material, which was designed and modeled by researchers from the Laboratory for Atomistic and Molecular Mechanics (LAMM) in the Department of Civil and Environmental Engineering (CEE), and synthesized in collaboration with the Air Force Research Lab (AFRL) at Wright-Patterson Air Force Base, Ohio, expands and contracts based on changing pH levels and ion concentrations. It was developed by studying how the jaw of Nereis virens, a sand worm, forms and adapts in different environments.

The resulting pH- and ion-sensitive material is able to respond and react to its environment. Understanding this naturally occurring process can be particularly helpful for actively controlling the motion or deformation of actuators in soft robotics and sensors, without an external power supply or complex electronic control devices. It could also be used to build autonomous structures.

“The ability of dramatically altering the material properties, by changing its hierarchical structure starting at the chemical level, offers exciting new opportunities to tune the material, and to build upon the natural material design towards new engineering applications,” wrote Markus J. Buehler, the McAfee Professor of Engineering, head of CEE, and senior author of the paper.

The research, recently published in ACS Nano, shows that depending on the ions and pH levels in the environment, the protein material expands and contracts into different geometric patterns. When the conditions change again, the material reverts to its original shape. This makes it particularly useful for smart composite materials with tunable mechanics and for self-powered robotics that use pH and ion conditions to change material stiffness or generate functional deformations.

Finding inspiration in the strong, stable jaw of a marine worm

In order to create bio-inspired materials that can be used for soft robotics, sensors, and other applications, such as the material inspired by the Nereis, engineers and scientists at LAMM and AFRL first needed to understand how these materials form in the Nereis worm and how they behave in various environments. This required developing a model that encompasses length scales from the atomic level upwards and can predict the material’s behavior. The model helps to fully explain the Nereis worm’s exceptional strength.

“Working with AFRL gave us the opportunity to pair our atomistic simulations with experiments,” said CEE research scientist Francisco Martin-Martinez. AFRL experimentally synthesized a hydrogel, a gel-like material made mostly of water, which is composed of recombinant Nvjp-1 protein responsible for the structural stability and impressive mechanical performance of the Nereis jaw. The hydrogel was used to test how the protein shrinks and changes behavior based on pH and ions in the environment.

The Nereis jaw is mostly made of organic matter, meaning it is a soft protein material with a consistency similar to gelatin. In spite of this, its strength, which has been reported to have a hardness ranging between 0.4 and 0.8 gigapascals (GPa), is similar to that of harder materials like human dentin. “It’s quite remarkable that this soft protein material, with a consistency akin to Jell-O, can be as strong as calcified minerals that are found in human dentin and harder materials such as bones,” Buehler said.

At MIT, the researchers looked at the makeup of the Nereis jaw on a molecular scale to see what makes it so strong and adaptive. At this scale, metal-coordinated crosslinks (the presence of metal in the molecular structure) provide a molecular network that makes the material stronger while making the molecular bonds more dynamic, and ultimately able to respond to changing conditions. At the macroscopic scale, these dynamic metal-protein bonds produce the expansion and contraction behavior.

Combining the protein structural studies from AFRL with the molecular understanding from LAMM, Buehler, Martin-Martinez, CEE Research Scientist Zhao Qin, and former PhD student Chia-Ching Chou ’15, created a multiscale model that is able to predict the mechanical behavior of materials that contain this protein in various environments. “These atomistic simulations help us to visualize the atomic arrangements and molecular conformations that underlay the mechanical performance of these materials,” Martin-Martinez said.

Specifically, using this model the research team was able to design, test, and visualize how different molecular networks change and adapt to various pH levels, taking into account the biological and mechanical properties.

By looking at the molecular and biological makeup of Nereis virens and using the predictive model of the mechanical behavior of the resulting protein material, the LAMM researchers were able to understand the protein material at different scales and build a comprehensive picture of how such protein materials form and behave in differing pH settings. This understanding guides new material designs for soft robots and sensors.

Identifying the link between environmental properties and movement in the material

The predictive model explained how the pH-sensitive material changes shape and behavior, which the researchers used to design new pH-responsive geometric structures. Depending on the original geometric shape of the protein material and the properties of its surroundings, the LAMM researchers found that the material either spirals or takes on a Cypraea-shell-like shape when pH levels change. These are only some examples of the potential this new material holds for developing soft robots, sensors, and autonomous structures.

Using the predictive model, the research team found that the material not only changes form, but it also reverts back to its original shape when the pH levels change. At the molecular level, histidine amino acids present in the protein bind strongly to the ions in the environment. This very local chemical reaction between amino acids and metal ions has an effect in the overall conformation of the protein at a larger scale. When environmental conditions change, the histidine-metal interactions change accordingly, which affect the protein conformation and in turn the material response.

“Changing the pH or changing the ions is like flipping a switch. You switch it on or off, depending on what environment you select, and the hydrogel expands or contracts” said Martin-Martinez.

LAMM found that at the molecular level, the structure of the protein material is strengthened when the environment contains zinc ions and certain pH levels. This creates more stable metal-coordinated crosslinks in the material’s molecular structure, which makes the molecules more dynamic and flexible.

This insight into the material’s design and flexibility is extremely useful for environments with changing pH levels. Its shape-changing response to varying acidity could be used in soft robotics. “Most soft robotics require power supply to drive the motion and to be controlled by complex electronic devices. Our work toward designing of multifunctional material may provide another pathway to directly control the material property and deformation without electronic devices,” said Qin.

By studying and modeling the molecular makeup and the behavior of the primary protein responsible for the mechanical properties ideal for Nereis jaw performance, the LAMM researchers are able to link environmental properties to movement in the material and have a more comprehensive understanding of the strength of the Nereis jaw.

The research was funded by the Air Force Office of Scientific Research and the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) for the simulations.

Living and working with robots: European Robotics Forum to focus on robotics markets and future of work

Over 800 leading scientists, companies, and policymakers working in robotics will convene at the European Robotics Forum (#ERF2017) in Edinburgh, 22-24 March. This year’s theme is “Living and Working With Robots” with a focus on applications in manufacturing, disaster relief, agriculture, healthcare, assistive living, education, and mining.

The 3-day programme features keynotes, panel discussions, workshops, and plenty of robots roaming the exhibit floor. Visitors may encounter a humanoid from Pal Robotics, a bartender robot from KUKA, Shadow’s human-like hands, or the latest state-of-the-art robots from European research. Success stories from Horizon 2020, the European Union’s framework programme for research and innovation, and FP7 European projects will be on display.

Dr Cécile Huet, Deputy Head of the European Commission’s Robotics & Artificial Intelligence Unit, said, “A set of EU projects will demonstrate the broad impact of the EU funding programme in robotics: from progress in foundational research in robot learning, to touch sensing for a new dimension in intuitive Human-Robot cooperation, to inspection in the oil-and-gas industry, security, care, manufacturing for SMEs, and the vast applications enabled by progress in autonomous drone navigation.”

Reinhard Lafrenz, Secretary General of euRobotics said, “A rise in sales in robotics is driving the industry forward, and it’s not just benefiting companies who sell robots, but also SMEs and larger industries that use robots to increase their productivity and adopt new ways of thinking about their business. Around 80 robotics start-ups were created last year in Europe, which is truly remarkable. At euRobotics, we nurture the robotics industry ecosystem in Europe; keep an eye out for the Tech Transfer award and the Entrepreneurship award we’ll be giving out at ERF.”

Projects presented will include:

  • FUTURA – Focused Ultrasound Therapy Using Robotic Approaches
  • PETROBOT – Use cases for inspection robots opening up the oil-, gas- and petrochemical markets
  • sFly – Swarm of Micro Flying Robots
  • SMErobotics – The European Robotics Initiative for Strengthening the Competitiveness of SMEs in Manufacturing by Integrating aspects of Cognitive Systems
  • STRANDS – Spatio-Temporal Representations and Activities For Cognitive Control in Long-Term Scenarios
  • WEARHAP – WEARable HAPtics for Humans and Robots
  • Xperience – Robots Bootstrapped through Learning from Experience

The increased use of Artificial Intelligence and Machine Learning in robotics will be highlighted in two keynote presentations. Raia Hadsell, Senior Research Scientist at DeepMind will focus on deep learning, and strategies to make robots that can continuously learn and improve over time. Stan Boland, CEO of FiveAI, will talk about his company’s aim to accelerate the arrival of fully autonomous vehicles.

Professor David Lane, ERF2017 General Chair and Director of the Edinburgh Centre for Robotics, said,  “We’re delighted this year to have two invited keynotes of outstanding quality and relevance from the UK, representing both research and disruptive industrial application of robotics and artificial intelligence. EURobotics and its members are committed to the innovation that translates technology from research to new products and services. New industries are being created, with robotics providing the essential arms, legs and sensors that bring big data and artificial intelligence out of the laboratory and into the real world.”

Throughout ERF2017, emphasis will be given to the impact of robots on society and the economy. Keith Brown MSP, Cabinet Secretary for Economy, Jobs and Fair Work, who will open the event, said, “The European Robotics Forum provides an opportunity for Scotland to showcase our world-leading research and expertise in robotics, artificial intelligence and human-robot interaction. This event will shine a light on some of the outstanding developments being pioneered and demonstrates Scotland’s vital role in this globally significant area.”

In discussing robots and society, Dr Patricia A. Vargas, ERF2017 General Chair and Director of the Robotics Laboratory at Heriot-Watt University, said, “As robots gradually move to our homes and workplace, we must make sure they are fully ethical. A potential morality code for robots should include human responsibilities, and take into account how humans can interact with robots in a safe way. The European Robotics Forum is the ideal place to drive these discussions.”

Ultimately, the forum aims to understand how robots can benefit small and medium-sized businesses, and how links between industry and academia can be improved to better exploit the strength of European robotics and AI research. As robots start leaving the lab to enter our home and work environments, it becomes increasingly important to understand how they will best work alongside human co-workers and users. Issues of policy, the law, and ethics will be debated during dedicated workshops.

Dr Katrin Lohan, General Chair and Deputy Director of the Robotics Laboratory at Heriot-Watt University, said, “It is important to integrate robotics into the workflow so that it supports, rather than disrupts, human workers. The potential of natural interaction interfaces and non-verbal communication cues needs to be further explored. The synergies of robots and human workers could make all the difference for small and medium-sized businesses, and the European Robotics Forum is the ideal place to discuss this, as it brings together the industry and academic communities.”

______________________

Confirmed keynote speakers include:
Keith Brown, Cabinet Secretary for the Economy, Jobs and Fair Work, Member of the Scottish Parliament
Raia Hadsell, Senior Research Scientist at DeepMind
Stan Boland, CEO of FiveAI

The full programme can be found here.

Dates: 22 – 24 March
Venue: EICC, The Exchange, 150 Morrison St., EH3 8EE Edinburgh, Scotland
Participants: 800+ participants expected
Website: http://www.erf2017.eu/

Press Passes:
Journalists may request free press badges, or support with interviews, by emailing publicity.chairs@erf2017.eu. Please see the website for additional information.

Organisers
The European Robotics Forum is organised by euRobotics under SPARC, the Public-Private partnership for Robotics in Europe. This year’s conference is hosted by the Edinburgh Centre for Robotics.

About euRobotics and SPARC
euRobotics is a non-profit organisation based in Brussels with the objective to make robotics beneficial for Europe’s economy and society.  With more than 250 member organisations, euRobotics also provides the European Robotics Community with a legal entity to engage in a public/private partnership with the European Commission, named SPARC.

SPARC, the public-private partnership (PPP) between the European Commission and euRobotics, is a European initiative to maintain and extend Europe’s leadership in civilian robotics. Its aim is to strategically position European robotics in the world thereby securing major benefits for the European economy and the society at large.

SPARC is the largest research and innovation programme in civilian robotics in the world, with 700 million euro in funding from the European Commission between 2014 and 2020, tripled by European industry to yield a total investment of 2.1 billion euro. SPARC will stimulate an ever more vibrant and effective robotics community that collaborates in the successful development of technical transfer and commercial exploitation.

www.eu-robotics.net
www.eu-robotics.net/sparc

Press contact details:

Sabine Hauert, Robohub President
Sabine.Hauert@robohub.org

OR

Kassie Perlongo, Managing Editor
Kassie.Perlongo@robohub.org

RoboThespian stars in UK play Spillikin, a love story

In a poignant play travelling throughout the UK, a robot co-stars as companion to the wife of a (now deceased) robot builder, as she develops early Alzheimer’s. The play explores very human themes of love, death, and disease, all handled extremely sensitively, with RoboThespian playing a large role.

Jon Welch, the writer and director, said of the play:

“It’s a story about a robot maker. All of his life he builds robots, but he develops degenerative illness in mid-life and realizes he’s not going to live to remain a companion to his wife. His wife, by now, is developing early Alzheimer’s, so he builds his final creation, his final robot to be a companion to his wife.”

The robot is from Engineered Arts, a 12-year-old UK company that develops an ever expanding range of humanoid and semi-humanoid robots featuring natural human-like movement and advanced social behaviours. RoboThespian, Socibot and Byrun are their most prominent robot creations.

“We have pre-programmed every single thing the robot says and every single thing the robot does: all the moves. There are nearly 400 separate cues, but they are made up of other files, all stuck together, so there are probably a couple of thousand cues in reality. So the robot will always say the same thing and move the same way, depending on what cue has been triggered at what particular time.”
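That structure (named cues assembled from smaller speech and motion files, which themselves may reference other cues) can be sketched as a simple recursive expansion. This is purely illustrative and not Engineered Arts’ actual cue engine; all cue and file names below are invented:

```python
def expand_cue(cue, library):
    """Flatten a named cue into its ordered list of primitive files.

    A library entry may reference either primitive files or other cues,
    so a few hundred top-level cues can expand into thousands of files.
    """
    out = []
    for item in library[cue]:
        if item in library:                  # a nested cue: expand it in place
            out.extend(expand_cue(item, library))
        else:
            out.append(item)                 # a primitive speech/motion file
    return out

library = {
    "greeting": ["wave.motion", "hello.speech"],
    "act1_open": ["greeting", "turn_head.motion"],
}
print(expand_cue("act1_open", library))
# ['wave.motion', 'hello.speech', 'turn_head.motion']
```

Triggering a cue at the right moment in the performance then reduces to playing back its flattened file list in order, which is why the robot always says and does the same thing for a given cue.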

This promotional video for the play is well worth watching:
