Page 364 of 433

AdaSearch: A successive elimination approach to adaptive search

By Esther Rolf∗, David Fridovich-Keil∗, and Max Simchowitz

In many tasks in machine learning, it is common to want to answer questions given fixed, pre-collected datasets. In some applications, however, we are not given data a priori; instead, we must collect the data we require to answer the questions of interest.

This situation arises, for example, in environmental contaminant monitoring and census-style surveys. Collecting the data ourselves allows us to focus our attention on just the most relevant sources of information. However, determining which of these sources of information will yield useful measurements can be difficult. Furthermore, when data is collected by a physical agent (e.g. a robot, satellite, or human), we must plan our measurements so as to reduce the costs associated with the motion of the agent over time. We call this abstract problem embodied adaptive sensing.

We introduce a new approach to the embodied adaptive sensing problem, in which a robot must traverse its environment to identify locations or items of interest. Adaptive sensing encompasses many well-studied problems in robotics, including the rapid identification of accidental contamination leaks and radioactive sources, and finding individuals in search and rescue missions. In such settings, it is often critical to devise a sensing trajectory that returns a correct solution as quickly as possible.
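As its name suggests, AdaSearch builds on the successive elimination strategy from the best-arm identification literature. As a rough illustration only (not the authors' algorithm, which additionally plans sensing trajectories), a minimal successive elimination loop over candidate source locations might look like this:

```python
import math
import random

def successive_elimination(sample, n_arms, delta=0.05, max_rounds=2000):
    """Identify the candidate with the highest mean signal.

    Repeatedly measure every surviving candidate; eliminate any whose
    empirical mean falls provably below the current leader's, using a
    Hoeffding-style confidence radius. `sample(i)` returns a noisy
    measurement in [0, 1] for candidate i.
    """
    active = set(range(n_arms))
    sums = [0.0] * n_arms
    for t in range(1, max_rounds + 1):
        for i in active:
            sums[i] += sample(i)
        # Confidence radius shrinks as measurements accumulate.
        radius = math.sqrt(math.log(4 * n_arms * t * t / delta) / (2 * t))
        best = max(active, key=lambda i: sums[i] / t)
        active = {i for i in active
                  if sums[i] / t >= sums[best] / t - 2 * radius}
        if len(active) == 1:
            break
    return max(active, key=lambda i: sums[i])

# Demo: three candidate locations with different (hypothetical) emission rates.
random.seed(0)
rates = [0.2, 0.9, 0.3]
noisy = lambda i: 1.0 if random.random() < rates[i] else 0.0
print(successive_elimination(noisy, 3))  # should identify candidate 1
```

The appeal for embodied sensing is that eliminated candidates need never be visited again, so the measurement (and travel) budget concentrates on the remaining contenders.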

Robotically fabricated concrete façade mullions installed in DFAB House

To celebrate the installation of the concrete façade mullions digitally fabricated using Smart Dynamic Casting in the DFAB House, we have released a new extended video showing the entire process from start to finish.

Smart Dynamic Casting (SDC) is a continuous robotic slip-forming process that enables the prefabrication of material-optimised load-bearing concrete structures using a formwork significantly smaller than the structure produced. Read more about the project on the DFAB House website.

Smart Dynamic Casting is a collaborative research project of Gramazio Kohler Research, ETH Zurich, the Institute for Building Materials, ETH Zurich, and the Institute of Structural Engineering, ETH Zurich. As part of the DFAB HOUSE project at the Empa and Eawag NEST research and innovation construction site in Dübendorf, Smart Dynamic Casting is used for the automated prefabrication of material-optimised load-bearing concrete mullions.

#273: Presented work at IROS 2018 (Part 1 of 3), with Alexandros Kogkas, Katie Driggs-Campbell and Martin Karlsson



In this episode, Audrow Nash interviews Alexandros Kogkas, Katie Driggs-Campbell, and Martin Karlsson about the work they presented at the 2018 International Conference on Intelligent Robots and Systems (IROS) in Madrid, Spain.

Alexandros Kogkas is a PhD Candidate at Imperial College London, and he speaks about an eye-tracking framework to understand where a person is looking. This framework can be used to infer a person's intentions, for example to hand a surgeon the correct tool or to help a person who is paraplegic. Kogkas discusses how the framework works, possible applications, and his future plans for it.

Katie Driggs-Campbell is a Postdoctoral Researcher at Stanford's Intelligent Systems Laboratory and soon to be an Assistant Professor at the University of Illinois Urbana-Champaign (UIUC). She speaks about making inferences about the world from human actions, specifically in the context of autonomous cars. In the work she discusses, a model of a human driver is used to infer what is happening in the world, for example a pedestrian using a crosswalk. Driggs-Campbell also talks about how they evaluate this work.

Martin Karlsson is a PhD student at Lund University in Sweden, and he speaks about a haptic interface for mirroring robotic arms that requires no force sensing. He discusses a feedback law that enables the mirroring of forces, as well as his future work on handling joint friction.

Robotics Flagship: What the community thinks

In February we asked for input from the robotics community regarding a potential Robotics Flagship, a pan-European interdisciplinary effort with 1B EUR in funding if successful. The goal of the flagship is to drive the development of future robots and AIs that are ethically, socially, economically, energetically, and environmentally responsible and sustainable.

This is the first of many activities we will host to engage the community. You can read more about the Robotics Flagship in a nutshell here.

We received 125 replies (120 from Europe) from roboticists.

In what areas does robotics have the highest potential to benefit society?

Overall, replies show the potential of robotics to benefit society across all sectors: every sector received an average score above 3 out of 5 (high potential). The sectors with the highest average scores were industry, logistics, agriculture, inspection of infrastructure, healthcare, exploration, and transport, in that order, all with an average above 4. Other sectors highlighted by respondents included ecology and environmental protection, tourism, construction, and the use of robots for human understanding or for scientific investigation of body and brain.

What are the main challenges to achieving this potential?

The main challenge to achieving this potential was seen as technological, with an average score of 4.35 out of 5 (very challenging), followed by societal and regulatory challenges (average scores of 3.69) and economic ones (average score of 3.52). Respondents also highlighted ethical, ideological and political challenges.

What are the key abilities that need to be developed for the robots of tomorrow?

Central to the flagship proposal is the need for new robot abilities that will make robots a reality in our everyday lives. All the abilities surveyed were seen as central to developing the robots of tomorrow, with average scores above 2.9 out of 5. The abilities with the highest average scores were learning, advanced sensing, and cognition, in that order, all with an average above 4. This clearly shows the need to develop robotics and AI hand in hand. Other abilities highlighted by respondents included reliability, security and safety, reconfigurability, modularity and customisation, advanced actuation, and efficient energy usage.

What resources would you need to make your robots a reality?

Finally, we asked the community what resources they would need to make their robots a reality. Not surprisingly, funding came out on top with an average score of 4.7 out of 5 (very important), followed by networking opportunities (3.75), experimental sites (3.72), fabrication facilities (3.58) and standards (3.31).

So what else did the community think would be helpful? Time, software and hardware aggregators, integrators, and maintainers, ethical and legal support, as well as a better understanding of user requirements and social attitudes.

What would you like to see in a robotics flagship?

Lastly, we asked what the community would like to see in a robotics flagship. There were too many suggestions to list here, but recurring themes were high-risk projects and big ideas, the need for cross-disciplinary research, and the hope that robots will finally leave the lab to work alongside humans.

Worried about AI taking over the world? You may be making some rather unscientific assumptions

Eleni Vasilaki, Professor of Computational Neuroscience, University of Sheffield

Phonlamai Photo/Shutterstock

Should we be afraid of artificial intelligence? For me, this is a simple question with an even simpler, two-letter answer: no. But not everyone agrees – many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end for humanity.

Clearly, your view on whether AI will take over the world will depend on whether you think it can develop intelligent behaviour surpassing that of humans – something referred to as “super intelligence”. So let’s take a look at how likely this is, and why there is much concern about the future of AI.

Humans tend to be afraid of what they don’t understand. Fear is often blamed for racism, homophobia and other sources of discrimination. So it’s no wonder it also applies to new technologies – they are often surrounded with a certain mystery. Some technological achievements seem almost unrealistic, clearly surpassing expectations and in some cases human performance.

No ghost in the machine

But let us demystify the most popular AI techniques, known collectively as “machine learning”. These allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky but the truth is it is all down to some rather mundane statistics.

The machine, which is a program, or rather an algorithm, is designed with the ability to discover relationships within the data it is given. There are many different methods for achieving this. For example, we can present the machine with images of handwritten letters (a-z), one by one, and ask it to tell us which letter we are showing each time. We have already provided the possible answers – it can only be one of (a-z). At first the machine says a letter at random, and we correct it by providing the right answer. We have also programmed the machine to reconfigure itself so that, the next time it is presented with the same letter, it is more likely to answer correctly. As a consequence, the machine improves its performance over time and “learns” to recognise the alphabet.
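The error-driven loop described above can be sketched in a few lines. This is a toy illustration: the “letters” are small invented feature vectors rather than real images, and the prototypes, noise level and learning rate are all made up for the example.

```python
import random

random.seed(1)

# Toy stand-ins for handwritten letters: each "image" is a 3-number
# feature vector drawn near a per-letter prototype (invented for
# illustration; a real system would use pixel data).
prototypes = {"a": [1, 0, 0], "b": [0, 1, 0], "c": [0, 0, 1]}

def noisy_sample(letter):
    return [v + random.gauss(0, 0.2) for v in prototypes[letter]]

# The "machine": one weight vector per letter, nudged whenever its
# guess is corrected -- the reconfiguration step the text describes.
weights = {k: [0.0] * 3 for k in prototypes}

def guess(x):
    return max(weights, key=lambda k: sum(w * v for w, v in zip(weights[k], x)))

def show_and_correct(x, answer, lr=0.1):
    g = guess(x)
    if g != answer:                      # the teacher supplies the right letter
        weights[answer] = [w + lr * v for w, v in zip(weights[answer], x)]
        weights[g] = [w - lr * v for w, v in zip(weights[g], x)]

for _ in range(300):                     # present letters one by one
    letter = random.choice("abc")
    show_and_correct(noisy_sample(letter), letter)

# Generalisation: classify fresh samples the machine has never seen.
accuracy = sum(guess(noisy_sample(l)) == l for l in "abc" * 30) / 90
```

Nothing mysterious happens here: each correction is a small arithmetic adjustment, yet the program ends up recognising samples it was never shown, which is exactly the “mundane statistics” point.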

In essence, we have programmed the machine to exploit common relationships in the data in order to achieve the specific task. For instance, all versions of “a” look structurally similar, but different to “b”, and the algorithm can exploit this. Interestingly, after the training phase, the machine can apply the obtained knowledge on new letter samples, for example written by a person whose handwriting the machine has never seen before.

We do give AI answers.
Chim/Shutterstock

Humans, however, are good at reading. Perhaps a more interesting example is Google DeepMind’s artificial Go player, which has surpassed every human player of the game. It clearly learns in a way different from humans – playing more games against itself than any human could play in a lifetime. It has been specifically instructed to win, told that the actions it takes determine whether it wins or not, and given the rules of the game. By playing again and again it can discover the best action in each situation – inventing moves that no human has played before.
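Self-play learning of this kind can be demonstrated on a game far simpler than Go. The sketch below is my own toy example, not DeepMind’s method: the program plays both sides of a small take-away game, is told only the rules and who won, and learns good moves purely from the outcomes.

```python
import random

random.seed(0)

# Toy game: a pile of 10 stones, players alternately remove 1-3,
# and whoever takes the last stone wins.
N, ACTIONS = 10, (1, 2, 3)
Q = {}  # Q[(stones, action)]: estimated outcome for the player to move

def best_action(stones, eps=0.1):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)          # occasional exploration
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def play_one_game(alpha=0.5):
    stones, history = N, []
    while stones > 0:                        # one policy plays both seats
        a = best_action(stones)
        history.append((stones, a))
        stones -= a
    # The player who took the last stone won; propagate +1/-1 back
    # through the alternating moves (a Monte-Carlo-style value update).
    reward = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward - old)
        reward = -reward

for _ in range(5000):
    play_one_game()
```

After training, the greedy policy tends to leave its opponent a multiple of four stones, which is the game’s known winning strategy – discovered from win/loss signals alone, with no strategic hints.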

Toddlers versus robots

Now does that make the AI Go player smarter than a human? Certainly not. AI is highly specialised to particular types of tasks and does not display the versatility that humans do. Humans develop an understanding of the world over years that no AI has achieved, or seems likely to achieve anytime soon.

The fact that AI is dubbed “intelligent” ultimately comes down to the fact that it can learn. But even when it comes to learning, it is no match for humans. Toddlers can learn by watching somebody solve a problem just once. An AI, on the other hand, needs tonnes of data and loads of tries to succeed at very specific problems, and it is difficult to generalise its knowledge to tasks very different from those it was trained on. So while humans develop breathtaking intelligence rapidly in the first few years of life, the key concepts behind machine learning are not so different from what they were one or two decades ago.

Toddler brains are amazing.
Mcimage/Shutterstock

The success of modern AI is less due to a breakthrough in new techniques and more due to the vast amount of data and computational power available. Importantly, though, even an infinite amount of data won’t give AI human-like intelligence – we first need to make significant progress on developing artificial “general intelligence” techniques. Some approaches to doing this involve building a computer model of the human brain – something we’re not even close to achieving.

Ultimately, just because an AI can learn, it doesn’t really follow that it will suddenly learn all aspects of human intelligence and outsmart us. There is no simple definition of what human intelligence even is and we certainly have little idea how exactly intelligence emerges in the brain. But even if we could work it out and then create an AI that could learn to become more intelligent, that doesn’t necessarily mean that it would be more successful.

Personally, I am more concerned by how humans use AI. Machine learning algorithms are often treated as black boxes, and little effort is made to pinpoint the specifics of the solution an algorithm has found. This is an important and frequently neglected aspect: we are often obsessed with performance and less with understanding. Understanding the solutions that these systems discover is important, because only then can we evaluate whether they are correct or desirable.

If, for instance, we train our system in the wrong way, we can end up with a machine that has learned relationships that do not hold in general. Say we want to design a machine to evaluate the ability of potential engineering students. Probably a terrible idea, but let us follow it through for the sake of argument. Traditionally this is a male-dominated discipline, which means that training samples are likely to come from previous male students. If we don’t make sure, for instance, that the training data are balanced, the machine might conclude that engineering students are male, and incorrectly apply that conclusion to future decisions.
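This failure mode is easy to reproduce. In the invented sketch below, positive examples come from a 90%-male historical sample while negatives reflect a balanced population, so a naive Bayes classifier learns to penalise female candidates of identical aptitude (all data, features and numbers here are made up for illustration).

```python
import math
import random

random.seed(0)

# Invented toy data. Positives are past engineering students, drawn
# from a 90%-male historical sample; negatives come from the general
# population (50% male). "Aptitude" is the feature that should matter.
def past_student():                      # label 1
    return (random.gauss(1.0, 1.0), 1 if random.random() < 0.9 else 0, 1)

def non_student():                       # label 0
    return (random.gauss(0.0, 1.0), 1 if random.random() < 0.5 else 0, 0)

data = [past_student() for _ in range(500)] + [non_student() for _ in range(500)]

def rows(label):
    return [r for r in data if r[2] == label]

# Naive Bayes fit: per-class male rate and mean aptitude (unit
# variance and equal class priors assumed, so both are omitted).
p_male = {c: sum(r[1] for r in rows(c)) / len(rows(c)) for c in (0, 1)}
mu = {c: sum(r[0] for r in rows(c)) / len(rows(c)) for c in (0, 1)}

def score(aptitude, male, label):
    g = p_male[label] if male else 1 - p_male[label]
    return math.exp(-(aptitude - mu[label]) ** 2 / 2) * g

def admits(aptitude, male):
    return score(aptitude, male, 1) > score(aptitude, male, 0)

# Two candidates with identical aptitude get different decisions:
print(admits(0.5, male=1), admits(0.5, male=0))  # -> True False
```

Balancing the training sample, or dropping the gender feature, removes the effect – but nothing in the learning step itself flags the problem, which is why inspecting what was learned matters.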

Machine learning and artificial intelligence are tools. They can be used in a right or a wrong way, like everything else. It is the way they are used that should concern us, not the methods themselves. Human greed and human unintelligence scare me far more than artificial intelligence.

The Conversation
