How to dance to a synthetic band
Google parent to pull plug on bipedal robot development
What Industry 4.0 Means for Manufacturers
4 Common Questions About Autonomous Mobile Robots
Nepal’s first robot waiter is ready for orders
AdaSearch: A successive elimination approach to adaptive search
By Esther Rolf∗, David Fridovich-Keil∗, and Max Simchowitz
Many machine learning tasks are posed over fixed, pre-collected datasets. In some applications, however, we are not given data a priori; instead, we must collect the data we need to answer the questions of interest.
This situation arises, for example, in environmental contaminant monitoring and census-style surveys. Collecting the data ourselves allows us to focus our attention on only the most relevant sources of information. However, determining which of these sources will yield useful measurements can be difficult. Furthermore, when data is collected by a physical agent (e.g. a robot, satellite, or human), we must plan our measurements so as to reduce the costs associated with the agent's motion over time. We call this abstract problem embodied adaptive sensing.
We introduce a new approach to the embodied adaptive sensing problem, in which a robot must traverse its environment to identify locations or items of interest. Adaptive sensing encompasses many well-studied problems in robotics, including rapidly identifying accidental contamination leaks and radioactive sources, and finding individuals in search and rescue missions. In such settings, it is often critical to devise a sensing trajectory that returns a correct solution as quickly as possible.
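The successive elimination idea behind AdaSearch can be illustrated with a small, self-contained toy example. The sketch below is not the authors' implementation; the function name `successive_elimination_top_k`, the bounded measurement model, and the Hoeffding-style confidence bounds are assumptions made for this illustration. It repeatedly measures candidate locations, maintains a confidence interval on each location's mean signal, and eliminates locations that are confidently not among the k strongest sources, so measurement effort concentrates on the remaining ambiguous candidates.

```python
# Toy successive-elimination sketch for finding the top-k signal sources
# among candidate locations. Illustrative only; not the AdaSearch codebase.
import numpy as np

def successive_elimination_top_k(measure, n_locations, k, rounds=20, delta=0.05):
    """measure(i) returns a noisy sample in [0, 1] of location i's mean signal."""
    candidates = set(range(n_locations))
    confirmed = set()                      # locations confidently in the top k
    sums = np.zeros(n_locations)
    counts = np.zeros(n_locations)

    for _ in range(rounds):
        if len(confirmed) == k:
            break
        # Spend new measurements only on still-ambiguous locations.
        for i in candidates - confirmed:
            sums[i] += measure(i)
            counts[i] += 1

        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        # Hoeffding-style confidence radius for [0, 1]-bounded samples.
        radius = np.sqrt(np.log(2 * n_locations * rounds / delta)
                         / (2 * np.maximum(counts, 1)))
        lcb, ucb = means - radius, means + radius

        ranked = sorted(candidates, key=lambda i: means[i], reverse=True)
        kth_lcb = lcb[ranked[k - 1]]       # lower bound of the current k-th best
        next_ucb = ucb[ranked[k]] if len(ranked) > k else -np.inf

        # Eliminate locations that cannot be in the top k ...
        candidates = {i for i in candidates if ucb[i] >= kth_lcb}
        # ... and confirm locations that cannot be outside it.
        confirmed = {i for i in candidates if lcb[i] > next_ucb}

    return sorted(candidates,
                  key=lambda i: sums[i] / np.maximum(counts[i], 1),
                  reverse=True)[:k]

# Example: 10 candidate locations, one truly "hot" source at index 3.
rng = np.random.default_rng(0)
true_means = np.full(10, 0.2)
true_means[3] = 0.8
print(successive_elimination_top_k(
    lambda i: float(np.clip(true_means[i] + 0.1 * rng.standard_normal(), 0, 1)),
    n_locations=10, k=1))
```

In an embodied setting, the per-location measurements would be gathered along a physical survey trajectory rather than queried independently, which is precisely the planning aspect the AdaSearch work addresses.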
Robotically fabricated concrete façade mullions installed in DFAB House
To celebrate the installation of the concrete façade mullions digitally fabricated using Smart Dynamic Casting in the DFAB House, we have released a new extended video showing the entire process from start to finish.
Smart Dynamic Casting (SDC) is a continuous robotic slip-forming process that enables the prefabrication of material-optimised load-bearing concrete structures using a formwork significantly smaller than the structure produced. Read more about the project on the DFAB House website.
Smart Dynamic Casting is a collaborative research project of Gramazio Kohler Research, ETH Zurich, the Institute for Building Materials, ETH Zurich, and the Institute of Structural Engineering, ETH Zurich. As part of the DFAB HOUSE project at the Empa and Eawag NEST research and innovation construction site in Dübendorf, Smart Dynamic Casting is used for the automated prefabrication of material-optimised load-bearing concrete mullions.
Robots in sewers will save society millions
Thanks to Robotics Demand, Servo Motors and Drives to Grow to $15.92 Billion by 2022
Universal Robots Continues to Dominate Cobot Market but Faces Many Challengers
Interview with Michael ImObersteg, Future Robotix
Universal Robots hires more than 20 former Rethink Robotics employees
#273: Presented work at IROS 2018 (Part 1 of 3), with Alexandros Kogkas, Katie Driggs-Campbell and Martin Karlsson
In this episode, Audrow Nash interviews Alexandros Kogkas, Katie Driggs-Campbell, and Martin Karlsson about the work they presented at the 2018 International Conference on Intelligent Robots and Systems (IROS) in Madrid, Spain.
Alexandros Kogkas is a PhD Candidate at Imperial College London, and he speaks about an eye tracking framework for understanding where a person is looking. This framework can be used to infer a person's intentions, for example to hand a surgeon the correct tool or to help a person who is paraplegic. Kogkas discusses how the framework works, possible applications, and his future plans for it.
Katie Driggs-Campbell is a Postdoctoral Researcher at the Stanford Intelligent Systems Laboratory and will soon be an Assistant Professor at the University of Illinois Urbana-Champaign (UIUC). She speaks about making inferences about the world from human actions, specifically in the context of autonomous cars. In the work she discusses, a model of a human driver is used to infer what is happening in the world, for example a human using a crosswalk. Driggs-Campbell also talks about how they evaluate this work.
Martin Karlsson is a PhD student at Lund University in Sweden, and he speaks about a haptic interface for mirroring robotic arms that requires no force sensing. He discusses a feedback law that enables this mirroring of forces, as well as his future work on handling joint friction.
Links