Archive 15.11.2018


AdaSearch: A successive elimination approach to adaptive search

By Esther Rolf∗, David Fridovich-Keil∗, and Max Simchowitz

In many tasks in machine learning, it is common to want to answer questions given fixed, pre-collected datasets. In some applications, however, we are not given data a priori; instead, we must collect the data we require to answer the questions of interest.

This situation arises, for example, in environmental contaminant monitoring and census-style surveys. Collecting the data ourselves allows us to focus our attention on just the most relevant sources of information. However, determining which of these sources of information will yield useful measurements can be difficult. Furthermore, when data is collected by a physical agent (e.g. robot, satellite, human, etc.) we must plan our measurements so as to reduce costs associated with the motion of the agent over time. We call this abstract problem embodied adaptive sensing.

We introduce a new approach to the embodied adaptive sensing problem, in which a robot must traverse its environment to identify locations or items of interest. Adaptive sensing encompasses many well-studied problems in robotics, including the rapid identification of accidental contamination leaks and radioactive sources, and finding individuals in search and rescue missions. In such settings, it is often critical to devise a sensing trajectory that returns a correct solution as quickly as possible.
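To make the idea concrete, here is a minimal sketch of a successive-elimination loop of the kind AdaSearch builds on: every still-plausible location is measured, confidence intervals shrink, and candidates that provably cannot be among the strongest sources are dropped. The grid size, noise model and confidence radius below are illustrative assumptions for this post, not the authors' implementation.

import numpy as np

def successive_elimination(true_means, n_rounds=50, noise_std=1.0, delta=0.05, k=1):
    """Illustrative successive elimination over candidate locations.

    Repeatedly measures every still-plausible location, then discards
    candidates whose upper confidence bound falls below the k-th best
    lower confidence bound. Returns the surviving candidate indices.
    """
    rng = np.random.default_rng(0)
    n = len(true_means)
    active = np.arange(n)            # candidate locations still in play
    sums = np.zeros(n)
    counts = np.zeros(n)

    for t in range(1, n_rounds + 1):
        # "Visit" every active location once and record a noisy reading.
        readings = true_means[active] + rng.normal(0, noise_std, size=len(active))
        sums[active] += readings
        counts[active] += 1

        means = sums[active] / counts[active]
        # Hoeffding-style confidence radius (illustrative constant).
        radius = noise_std * np.sqrt(2 * np.log(4 * n * t**2 / delta) / counts[active])
        lcb, ucb = means - radius, means + radius

        # Keep a candidate only if it could still be among the top k.
        kth_best_lcb = np.sort(lcb)[-k]
        active = active[ucb >= kth_best_lcb]
        if len(active) <= k:
            break
    return active

# Example: 20 candidate locations, one strong emitter at index 7.
signal = np.full(20, 1.0)
signal[7] = 5.0
print(successive_elimination(signal, k=1))   # typically -> [7]

The point of the adaptive schedule is that weak candidates stop consuming measurement time early, so the agent can concentrate its remaining visits on the few locations that still matter.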


Robotically fabricated concrete façade mullions installed in DFAB House

To celebrate the installation of the concrete façade mullions digitally fabricated using Smart Dynamic Casting in the DFAB House, we have released a new extended video showing the entire process from start to finish.

Smart Dynamic Casting (SDC) is a continuous robotic slip-forming process that enables the prefabrication of material-optimised load-bearing concrete structures using a formwork significantly smaller than the structure produced. Read more about the project on the DFAB House website.

Smart Dynamic Casting is a collaborative research project of Gramazio Kohler Research, ETH Zurich, the Institute for Building Materials, ETH Zurich, and the Institute of Structural Engineering, ETH Zurich. As part of the DFAB HOUSE project at the Empa and Eawag NEST research and innovation construction site in Dübendorf, Smart Dynamic Casting is used for the automated prefabrication of material-optimised load-bearing concrete mullions.

#273: Presented work at IROS 2018 (Part 1 of 3), with Alexandros Kogkas, Katie Driggs-Campbell and Martin Karlsson



In this episode, Audrow Nash interviews Alexandros Kogkas, Katie Driggs-Campbell, and Martin Karlsson about the work they presented at the 2018 International Conference on Intelligent Robots and Systems (IROS) in Madrid, Spain.

Alexandros Kogkas is a PhD candidate at Imperial College London, and he speaks about an eye-tracking framework for understanding where a person is looking. This framework can be used to infer a person’s intentions, for example to hand a surgeon the correct tool or to help a person who is paraplegic. Kogkas discusses how the framework works, possible applications, and his future plans for it.

Katie Driggs-Campbell is a postdoctoral researcher at Stanford’s Intelligent Systems Laboratory and will soon be an Assistant Professor at the University of Illinois Urbana-Champaign (UIUC). She speaks about making inferences about the world from human actions, specifically in the context of autonomous cars. In the work she discusses, they use a model of a human driver to infer what is happening in the world, for example a human using a crosswalk. Driggs-Campbell also talks about how they evaluate this work.

Martin Karlsson is a PhD student at Lund University in Sweden, and he speaks about a haptic interface for mirroring robotic arms that requires no force sensing. He discusses a feedback law that allows forces to be mirrored, and his future work on handling joint friction.


Robotics Flagship: What the community thinks

In February we asked for input from the robotics community regarding a potential Robotics Flagship, a pan-European interdisciplinary effort with 1B EUR in funding, if successful! The goal of the flagship is to drive the development of future robots and AIs that are ethically, socially, economically, energetically, and environmentally responsible and sustainable.

This is the first of many activities we will host to engage the community. You can read more about the Robotics Flagship in a nutshell here.

We received 125 replies (120 from Europe) from roboticists.

In what areas does robotics have the highest potential to benefit society?

Overall, replies show the potential of robotics in all sectors to benefit society, since they all received an average score above 3 out of 5 (high potential). The sectors with the highest average scores were industry, logistics, agriculture, inspection of infrastructure, healthcare, exploration, and transport, in that order, all with an average above 4. Other sectors highlighted by respondents included ecology and environmental protection, tourism, construction, and the use of robots for human understanding, or for scientific investigation of body and brain.

What are the main challenges to achieving this potential?

The main challenge to achieving this potential was seen as technological, with an average score of 4.35 out of 5 (very challenging), followed by societal and regulatory (average scores of 3.69), and finally economic (average score of 3.52). Respondents also highlighted ethical, ideological and political challenges.

What are the key abilities that need to be developed for the robots of tomorrow?

Central to the flagship proposal is the need for new robot abilities that will make robots a reality in our everyday lives. All of the abilities surveyed were seen as central to developing the robots of tomorrow, with average scores above 2.9 out of 5 (very important). The abilities with the highest average scores were learning, advanced sensing, and cognition, in that order, all with an average above 4. This clearly shows the need to develop robotics and AI hand in hand. Other abilities highlighted by respondents included reliability, security and safety, reconfigurability, modularity and customisation, advanced actuation, and efficient energy usage.

What resources would you need to make your robots a reality?

Finally, we asked the community what resources they would need to make their robots a reality. Not surprisingly, funding came out on top with an average score of 4.7 out of 5 (very important); next came networking opportunities (3.75), experimental sites (3.72), fabrication facilities (3.58) and standards (3.31).

So what else did the community think would be helpful? Time, software and hardware aggregators, integrators, and maintainers, ethical and legal support, as well as a better understanding of user requirements and social attitudes.

What would you like to see in a robotics flagship?

Lastly, we asked what the community would like to see in a robotics flagship. There were too many suggestions to post here, but recurring themes were high-risk projects and big ideas, the need for cross-disciplinary research, and the hope that robots will finally leave the lab to work alongside humans.

Worried about AI taking over the world? You may be making some rather unscientific assumptions

Eleni Vasilaki, Professor of Computational Neuroscience, University of Sheffield

Phonlamai Photo/Shutterstock

Should we be afraid of artificial intelligence? For me, this is a simple question with an even simpler, two-letter answer: no. But not everyone agrees – many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end for humanity.

Clearly, your view on whether AI will take over the world will depend on whether you think it can develop intelligent behaviour surpassing that of humans – something referred to as “super intelligence”. So let’s take a look at how likely this is, and why there is much concern about the future of AI.

Humans tend to be afraid of what they don’t understand. Fear is often blamed for racism, homophobia and other sources of discrimination. So it’s no wonder it also applies to new technologies – they are often surrounded with a certain mystery. Some technological achievements seem almost unrealistic, clearly surpassing expectations and in some cases human performance.

No ghost in the machine

But let us demystify the most popular AI techniques, known collectively as “machine learning”. These allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky but the truth is it is all down to some rather mundane statistics.

The machine, which is a program, or rather an algorithm, is designed with the ability to discover relationships within provided data. There are many different methods that allow us to achieve this. For example, we can present to the machine images of handwritten letters (a-z), one by one, and ask it to tell us which letter we show each time in sequence. We have already provided the possible answers – it can only be one of (a-z). At the beginning the machine says a letter at random, and we correct it by providing the right answer. We have also programmed the machine to reconfigure itself so that, the next time it is presented with the same letter, it is more likely to give us the correct answer. As a consequence, the machine over time improves its performance and “learns” to recognise the alphabet.

In essence, we have programmed the machine to exploit common relationships in the data in order to achieve the specific task. For instance, all versions of “a” look structurally similar, but different to “b”, and the algorithm can exploit this. Interestingly, after the training phase, the machine can apply the obtained knowledge on new letter samples, for example written by a person whose handwriting the machine has never seen before.
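For readers who want to see how mundane this really is, here is a minimal sketch of such a supervised learning loop. The article names no library or dataset, so this uses scikit-learn's handwritten digits as a stand-in for letters and a simple linear classifier updated one example at a time; those choices are assumptions made purely for illustration.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Handwritten digits stand in for the letters (a-z) in the article's example.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classes = np.unique(y)               # the machine is told the set of possible answers
model = SGDClassifier(random_state=0)

# Show examples one at a time; each "correction" nudges the model's parameters.
for xi, yi in zip(X_train, y_train):
    model.partial_fit(xi.reshape(1, -1), [yi], classes=classes)

# After training, it generalises to samples it has never seen before.
print("accuracy on unseen samples:", model.score(X_test, y_test))

The "learning" here is nothing more than repeatedly adjusting a set of weights so that the statistics of the training examples are reflected in future answers.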

We do give AI answers. (Chim/Shutterstock)

Humans, however, are good at reading. Perhaps a more interesting example is Google DeepMind’s artificial Go player, which has surpassed every human player at the game. It clearly learns in a way different to humans – playing a number of games with itself that no human could play in their lifetime. It has been specifically instructed to win and told that the actions it takes determine whether it wins or not. It has also been told the rules of the game. By playing the game again and again it can discover the best action in each situation – inventing moves that no human has played before.
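AlphaGo itself relies on deep networks and tree search, but the underlying loop of self-play plus a win/lose reward can be shown on a toy game. The sketch below uses Nim and a simple value table purely as an assumed stand-in; nothing here reflects DeepMind's actual system.

import random
from collections import defaultdict

# Toy self-play learner for Nim (take 1-3 sticks; whoever takes the last stick
# wins). Nothing like AlphaGo's scale, but the loop is the same idea: play
# yourself, get rewarded only for winning, and improve the policy.
Q = defaultdict(float)          # Q[(sticks_left, action)] -> estimated value
ALPHA, EPSILON = 0.2, 0.1

def choose(sticks):
    moves = [a for a in (1, 2, 3) if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda a: Q[(sticks, a)])   # exploit

def play_one_game(n_sticks=15):
    history = {0: [], 1: []}                          # (state, action) per player
    player = 0
    while n_sticks > 0:
        action = choose(n_sticks)
        history[player].append((n_sticks, action))
        n_sticks -= action
        if n_sticks == 0:
            winner = player                           # took the last stick
        player = 1 - player
    # Reward +1 for the winner's moves, -1 for the loser's.
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state, action in history[p]:
            Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

for _ in range(20000):
    play_one_game()

# With 15 sticks the learned policy should prefer taking 3 (leaving a multiple of 4).
print({a: round(Q[(15, a)], 2) for a in (1, 2, 3)})

The program is never told what a good move looks like; it simply discovers which actions tend to precede winning, exactly the kind of brute statistical discovery described above.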

Toddlers versus robots

Now does that make the AI Go player smarter than a human? Certainly not. AI is very specialised to particular types of tasks and it doesn’t display the versatility that humans do. Humans develop an understanding of the world over years that no AI has achieved or seems likely to achieve anytime soon.

The fact that AI is dubbed “intelligent” is ultimately down to the fact that it can learn. But even when it comes to learning, it is no match for humans. In fact, toddlers can learn by just watching somebody solve a problem once. An AI, on the other hand, needs tonnes of data and loads of tries to succeed on very specific problems, and it is difficult to generalise its knowledge to tasks very different from those it was trained on. So while humans develop breathtaking intelligence rapidly in the first few years of life, the key concepts behind machine learning are not so different from what they were one or two decades ago.

Toddler brains are amazing. (Mcimage/Shutterstock)

The success of modern AI is less due to a breakthrough in new techniques and more due to the vast amount of data and computational power available. Importantly, though, even an infinite amount of data won’t give AI human-like intelligence – we need to make significant progress on developing artificial “general intelligence” techniques first. Some approaches to doing this involve building a computer model of the human brain – which we’re not even close to achieving.

Ultimately, just because an AI can learn, it doesn’t really follow that it will suddenly learn all aspects of human intelligence and outsmart us. There is no simple definition of what human intelligence even is and we certainly have little idea how exactly intelligence emerges in the brain. But even if we could work it out and then create an AI that could learn to become more intelligent, that doesn’t necessarily mean that it would be more successful.

Personally, I am more concerned by how humans use AI. Machine learning algorithms are often thought of as black boxes, and little effort is made to pinpoint the specifics of the solution our algorithms have found. This is an important and frequently neglected aspect, as we are often obsessed with performance and less with understanding. Understanding the solutions that these systems have discovered is important, because only then can we evaluate whether they are correct or desirable solutions.

If, for instance, we train our system in the wrong way, we can also end up with a machine that has learned relationships that do not hold in general. Say for instance that we want to design a machine to evaluate the ability of potential students in engineering. Probably a terrible idea, but let us follow it through for the sake of the argument. Traditionally, this is a male-dominated discipline, which means that training samples are likely to be from previous male students. If we don’t make sure, for instance, that the training data are balanced, the machine might end up with the conclusion that engineering students are male, and incorrectly apply it to future decisions.
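This failure mode is easy to reproduce on synthetic data. In the toy sketch below, "success" depends only on a skill score, but the historical sample is collected so that gender happens to correlate with the label; a standard classifier duly picks up the spurious relationship and does worse once that correlation disappears. Every feature, number and weight here is invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, p_male_given_success, p_male_given_failure):
    # True success depends only on a skill score, never on gender.
    skill = rng.normal(0, 1, n)
    success = (skill + rng.normal(0, 1.0, n) > 0).astype(int)
    # Sampling artefact: in the historical records, successful students are
    # overwhelmingly male, so gender correlates with the label anyway.
    p_male = np.where(success == 1, p_male_given_success, p_male_given_failure)
    gender = (rng.random(n) < p_male).astype(float)   # 1 = male, 0 = female
    X = np.column_stack([skill, gender])
    return X, success

# Historical (skewed) training data vs. balanced deployment data.
X_train, y_train = make_data(5000, p_male_given_success=0.95, p_male_given_failure=0.60)
X_test,  y_test  = make_data(5000, p_male_given_success=0.50, p_male_given_failure=0.50)

model = LogisticRegression().fit(X_train, y_train)
print("learned weights [skill, gender]:", model.coef_.round(2))   # gender gets real weight
print("accuracy on skewed data:  ", round(model.score(X_train, y_train), 3))
print("accuracy on balanced data:", round(model.score(X_test, y_test), 3))

The model is not malicious; it simply exploits whatever correlations the training data offers, which is why curating and balancing that data matters so much.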

Machine learning and artificial intelligence are tools. They can be used in a right or a wrong way, like everything else. It is the way that they are used that should concern us, not the methods themselves. Human greed and human unintelligence scare me far more than artificial intelligence.

The Conversation

The race for robot clairvoyance

This week a Harvard Business School student challenged me to name a startup capable of producing an intelligent robot – TODAY! At first I did not understand the question, as artificial intelligence (AI) is an implement like any other in a roboticist’s toolbox. The student persisted; she demanded to know if I thought that the current co-bots working in factories could one day evolve to perceive the world like humans. It’s a good question that I didn’t appreciate at the time, as robots, even with deep learning systems, are best deployed for specific repeatable tasks. By contrast, mortals comprehend their surroundings (and other organisms) using a sixth sense: intuition.


As an avid tennis player, I also enjoyed meeting Tennibot this week. The autonomous ball-gathering robot sweeps the court like a Roomba sucking up dust off a rug. To accomplish this task without knocking over players, it navigates around the cage using six cameras on each side. This is a perfect example of the type of job that an unmanned system excels at performing, freeing athletes from wasting precious court time on tedious cleanup. Yet Tennibot, at the end of the day, is a dumb appliance. While it gobbles up balls quicker than any person, it is unable to discern the quality of the game or the health of the players.

No one expects Tennibot to save Roger Federer’s life, but what happens when a person has a heart attack inside a self-driving car on a two-hour journey? While autonomous vehicles are packed with sensors to identify obstacles and safely steer around cities and highways, few are able to perceive human intent. As Ann Cheng of Hyundai explains, “We [drivers] think about what that other person is doing or has the intent to do. We see a lot of AI companies working on more classical problems, like object detection [or] object classification. Perceptive is trying to go one layer deeper—what we do intuitively already.” Hyundai joined Jim Adler’s Toyota AI Ventures this month in investing in Perceptive Automata, an “intuitive self-driving system that is able to recognize, understand, and predict human behavior.”


As stated in Adler’s Medium post, Perceptive’s technology uses “behavioral science techniques to characterize the way human drivers understand the state-of-mind of other humans and then train deep learning models to acquire that human ability. These deep learning models are designed for integration into autonomous driving stacks and next-generation driver assistance systems, sandwiched between the perception and planning layers. These deep learning, predictive models provide real-time information on the intention, awareness, and other state-of-mind attributes of pedestrians, cyclists and other motorists.”
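To show roughly where such a module would sit, here is an illustrative sketch of an intent estimator sandwiched between perception and planning. Every type, threshold and name below is hypothetical; none of it comes from Perceptive Automata's actual stack or API.

from dataclasses import dataclass
from typing import List

# Hypothetical types showing where an "intent" layer could sit in an autonomy stack.

@dataclass
class Detection:                 # output of the perception layer
    track_id: int
    kind: str                    # "pedestrian", "cyclist", "vehicle"
    position: tuple              # (x, y) in the ego frame, metres
    velocity: tuple              # (vx, vy), m/s

@dataclass
class IntentEstimate:            # extra state-of-mind attributes per agent
    track_id: int
    crossing_intent: float       # 0..1, wants to enter the roadway
    awareness: float             # 0..1, has noticed the ego vehicle

class IntentModel:
    """Stand-in for a learned model; a real system would run a neural net here."""
    def infer(self, detections: List[Detection]) -> List[IntentEstimate]:
        estimates = []
        for d in detections:
            moving_toward_road = d.kind == "pedestrian" and d.velocity[1] < -0.3
            estimates.append(IntentEstimate(
                track_id=d.track_id,
                crossing_intent=0.9 if moving_toward_road else 0.1,
                awareness=0.5,
            ))
        return estimates

def plan_speed(current_speed: float, intents: List[IntentEstimate]) -> float:
    """Toy planner: slow down when anyone looks likely to cross and unaware."""
    risk = max((i.crossing_intent * (1 - i.awareness) for i in intents), default=0.0)
    return current_speed * (1.0 - 0.8 * risk)

detections = [Detection(1, "pedestrian", (12.0, 2.5), (0.0, -1.2))]
intents = IntentModel().infer(detections)
print(plan_speed(13.9, intents))   # roughly 8.9 m/s: the planner backs off

The value of the intermediate layer is that the planner no longer reasons only about boxes and velocities, but about what the people behind those boxes are likely to do next.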

While Perceptive Automata is creating “predictive models” for outside the vehicle, few companies are focused on the conditions inside the cabin. The closest implementations are a number of eye-tracking cameras that alert occupants to distracted driving. While these technologies observe the general conditions of passengers, they rely on direct eye contact to distinguish between emotions (fatigue, excitability, stress, etc.), which is impossible if one is passed out. Furthermore, none of these vision systems have the ability to predict human actions before they become catastrophic.

Isaac Litman, formerly of Mobileye, understands full well the dilemma presented by computer vision systems in delivering on the promise of autonomous travel. When I spoke with Litman this week about his newest venture, Neteera, he declared that in today’s automotive landscape “the only unknown variable is the human.” Unfortunately, the recent wave of Tesla and Uber autopilot crashes has glaringly illustrated the importance of tracking the attention of vehicle occupants when handing off between autopilot systems and human drivers. Litman further explains that Waymo and others are collecting data on occupant comfort, as AI-enabled drivers have reportedly led to high levels of nausea by driving too consistently. Litman describes this as the indigestion problem, clarifying that after eating a big meal one may want to drive more slowly than on an empty stomach. In the future, Litman professes, autonomous cars will be marketed “not by the performance of their engines, but on the comfort of their rides.”


Litman’s view is further endorsed by the recent patent application filed this summer by Apple’s Project Titan team for developing “Comfort Profiles” for autonomous driving. According to AppleInsider, the application “describes how an autonomous driving and navigation system can move through an environment, with motion governed by a number of factors that are set indirectly by the passengers of the vehicle.” The Project Titan system would utilize a fusion of sensors (LIDAR, depth cameras, and infrared) to monitor the occupants’ “eye movements, body posture, gestures, pupil dilation, blinking, body temperature, heart beat, perspiration, and head position.” The application details how the data would integrate into the vehicle systems to automatically adjust the acceleration, turning rate, performance, suspension, traction control and other factors to the personal preferences of the riders. While Project Titan is taking the first step toward developing an autonomous comfort system, Litman notes that it is limited by the inherent shortcomings of vision-based systems, which are susceptible to light, dust, line of sight, condensation, motion, resolution, and safety concerns.
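As a rough illustration of the idea (and emphatically not Apple's patented design), one can imagine occupant measurements being folded into a discomfort score that shrinks the limits handed to the motion planner. The profiles, signals and weights below are all assumptions.

# Illustrative sketch only: folding occupant measurements into limits the
# motion planner must respect. All profiles, signals and weights are invented.

COMFORT_PROFILES = {
    # hypothetical baseline limits per profile, m/s^2
    "relaxed": {"max_accel": 1.5, "max_lat_accel": 1.2},
    "sporty":  {"max_accel": 3.0, "max_lat_accel": 2.5},
}

def discomfort_score(heart_rate_bpm, blink_rate_hz, posture_shift_rate):
    """Crude weighted sum of the kinds of occupant signals named in the patent summary."""
    hr = max(0.0, (heart_rate_bpm - 70) / 50)        # elevated heart rate
    blink = max(0.0, (blink_rate_hz - 0.3) / 0.5)    # rapid blinking
    fidget = min(1.0, posture_shift_rate / 5.0)      # frequent posture changes
    return min(1.0, 0.5 * hr + 0.25 * blink + 0.25 * fidget)

def adjusted_limits(profile, heart_rate_bpm, blink_rate_hz, posture_shift_rate):
    """Shrink the planner's acceleration limits as occupant discomfort rises."""
    base = COMFORT_PROFILES[profile]
    d = discomfort_score(heart_rate_bpm, blink_rate_hz, posture_shift_rate)
    scale = 1.0 - 0.6 * d               # never drop below 40% of the baseline
    return {k: round(v * scale, 2) for k, v in base.items()}

print(adjusted_limits("sporty", heart_rate_bpm=95, blink_rate_hz=0.6, posture_shift_rate=4))

The interesting design question is exactly the one the patent raises: the rider never sets these limits directly; the vehicle infers them from how the rider's body responds to the ride.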

Unlike vision sensors, Neteera is a cost-effective micro-radar on a chip that leverages its own network of proprietary algorithms to provide “the first contact free vital sign detection platform.” Its FDA-level accuracy is being utilized not only by the automotive sector, but also by healthcare systems across the United States for monitoring such elusive conditions as sleep apnea and sudden infant death syndrome. To date, the challenge of monitoring vital signs through micro-skin motion in the automotive industry has been the displacement caused by a moving vehicle. However, Litman’s team has developed a patent-pending “motion compensation algorithm” that tracks “quasi-periodic signals in the presence of massive random motions,” providing near-perfect accuracy.
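Neteera's algorithm is proprietary, but the general problem of pulling a quasi-periodic signal out of much larger random motion can be illustrated with a simple band-pass-and-spectrum toy example. The simulated signal, filter band and sample rate below are assumptions for illustration, not Neteera's method.

import numpy as np
from scipy.signal import butter, filtfilt

# Toy illustration: recover a heartbeat-like quasi-periodic component from a
# displacement signal dominated by slow, random body/vehicle motion.
fs = 100.0                                   # sample rate, Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

heartbeat = 0.05 * np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm micro-motion
drift = np.cumsum(rng.normal(0, 0.01, t.size))            # slow drift, much larger than the heartbeat
measured = heartbeat + drift + rng.normal(0, 0.02, t.size)

# Band-pass around plausible heart rates (0.8-3 Hz) to reject the drift.
b, a = butter(4, [0.8 / (fs / 2), 3.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, measured)

# Estimate the dominant rate from the spectrum of the filtered signal.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
rate_hz = freqs[np.argmax(spectrum)]
print(f"estimated rate: {rate_hz * 60:.0f} beats per minute")   # ~72

In a real vehicle the interfering motion is far less benign than this toy drift, which is precisely why a dedicated motion-compensation algorithm, rather than a plain filter, is the hard part of the problem.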


While the automotive industry races to launch fleets of autonomous vehicles, Litman estimates that the most successful players will be the ones that install empathic engines into their machines’ framework. Unlike the crowded field of AI and computer vision startups enabling robocars to safely navigate city streets, Neteera, with its “intuition on a chip,” is probably one of the only mechatronic ventures that actually reports on the psychological state of drivers and passengers. Litman’s innovation has wider societal implications, as social robots begin to augment humans in the workplace and support the infirm and elderly in coping with the fragility of life.

As scientists improve artificial intelligence, it is still unclear what the reaction will be from ordinary people to such “emotional” robots. In the words of writer Adam Williams, “Emotion is something we reserve for ourselves: depth of feeling is what we use to justify the primacy of human life. If a machine is capable of feeling, that doesn’t make it dangerous in a Terminator-esque fashion, but in the abstract sense of impinging on what we think of as classically human.”
