In this episode, Audrow Nash interviews Eric Diller, Assistant Professor at the University of Toronto, on wireless micro-scale robots that could eventually be used in human surgery. Diller speaks about the design, control, and manufacture of micro-scale surgical robotic devices, as well as when we might see this technology in the operating room.
Dr. Diller received his B.S. and M.S. in Mechanical Engineering at Case Western Reserve University, and Ph.D. at Carnegie Mellon University in 2013. His work is enabling a new approach to non-invasive medical procedures, micro-factories and scientific tools. He does this by shrinking the mechanical and electrical components of robots to centimeter, millimeter or even micrometer size. He uses magnetic fields and other smart-material actuation methods to make mobile functional devices. Dr. Diller envisions a future where drug delivery and surgery can be done in a fast, painless and focused way, and where new materials and devices can be manufactured using swarms of tiny gripping, cutting, and sensing wireless robots.
Dr. Diller has received the MIE Early Career Teaching Award, the UofT Connaught New Researcher Award, the Ontario Early Researcher Award, and the I.W. Smith Award from the Canadian Society for Mechanical Engineering.
In this episode, Audrow Nash speaks with Dave Coleman, CEO of PickNik Robotics, about the open source robotics manipulation platform called MoveIt. Coleman talks about MoveIt’s story, from inception and the early days to development and maintenance, as well as how MoveIt relates to the Robot Operating System (ROS) and the move to support ROS 2. He also speaks about MoveIt’s implementation, including global versus local planners and what that means. Coleman concludes by talking about World MoveIt Day and how those interested can begin learning MoveIt and contributing.
Here is a 2017 montage of how MoveIt has been used:
Dave Coleman completed his PhD in Computer Science at CU Boulder focusing on motion planning and his B.S. at Georgia Tech in Mechanical Engineering. Coleman has 12 years of experience working in the field of robotics automation and is a leader in the open source MoveIt and ROS communities. His insights into robot-agnostic platforms for different morphologies, theoretical approaches, and different end-user requirements give him a well-rounded understanding of powerful robotic software. He has worked for and consulted with all types of robotics companies, including Google Robotics, Open Robotics, and Willow Garage.
In this episode, Audrow Nash speaks with Janet Vertesi, Assistant Professor of Sociology at Princeton, on her book Seeing Like a Rover: How Robots, Teams, and Images Craft Knowledge of Mars. The book is written about her experience living and working with NASA’s Mars Rover team, and includes her observations about the team’s leadership and their relationship with their robot millions of miles away on Mars. She also gives some advice for teams, drawn from her findings.
Janet Vertesi specializes in the sociology of science, knowledge, and technology. She has spent the past 7 years studying several NASA spacecraft teams as an ethnographer. Her book, Seeing Like a Rover (Chicago), draws on over two years of ethnographic immersion with the Mars Exploration Rover mission to show how scientists and engineers use digital images to conduct scientific research on another planet. She is currently working on a follow-up study of the NASA-ESA Cassini mission to Saturn, focusing on the role of sociotechnical organization in research, data-sharing, and decision-making on robotic spacecraft teams. Vertesi is also interested in digital sociology: whether studying computational systems in social life, shifting sociological methods online, or applying sociological insights to build new technologies. She holds a Master’s degree from Cambridge and a PhD from Cornell, has received several grants from the National Science Foundation, and was awarded the Hacker-Mullins prize for best graduate student paper from the American Sociological Association’s Science, Knowledge and Technology section in 2007.
In this episode, our interviewer Audrow Nash speaks to Gil Weinberg, Professor in Georgia Tech’s School of Music and the founding director of the Georgia Tech Center for Music Technology. Weinberg leads a research lab called the Robotic Musicianship group, which focuses on developing artificial creativity and musical expression for robots and on augmented humans. Weinberg discusses several of his improvisational robots and how they work, including Shimon, a multi-armed robot marimba player, as well as his work in prosthetic devices for musicians.
Below is a video that includes Shimon on marimba, Jason Barnes playing drums with a prosthetic, and Prof. Gil Weinberg on bass guitar.
Gil Weinberg is a professor in Georgia Tech’s School of Music and the founding director of the Georgia Tech Center for Music Technology, where he leads the Robotic Musicianship group. His research focuses on developing artificial creativity and musical expression for robots and augmented humans. Among his projects are a marimba playing robotic musician called Shimon that uses machine learning for jazz improvisation, and a prosthetic robotic arm for amputees that restores and enhances human drumming abilities. Weinberg has presented his work worldwide in venues such as The Kennedy Center, The World Economic Forum, Ars Electronica, Smithsonian Cooper-Hewitt Museum, SIGGRAPH, TED-Ed, DLD and others. His music has been performed with orchestras such as Deutsches Symphonie-Orchester Berlin, the National Irish Symphony Orchestra, and the Scottish BBC Symphony while his research has been disseminated through numerous journal articles and patents. Weinberg received his M.S. and Ph.D. in Media Arts and Sciences from MIT and his B.A. from the interdisciplinary program for fostering excellence in Tel Aviv University.
In this episode, we take a closer look at the effect of novelty in human-robot interaction. Novelty is the quality of being new or unusual.
The typical view is that while something is new, or “a novelty”, it will initially make us behave differently than we normally would. But over time, as the novelty wears off, we will likely return to our regular behaviors. For example, a new robot may cause a person to behave differently at first, as it’s introduced into the person’s life, but after some time the robot won’t be as exciting, novel, and motivating, and the person might return to their previous behavioral patterns, interacting less with the robot.
To find out more about the concept of novelty in human-robot interactions, our interviewer Audrow caught up with Catharina Vesterager Smedegaard, a PhD-student at Aarhus University in Denmark, whose field of study is Philosophy.
Catharina sees novelty differently from how we typically see it. She thinks of it as projecting what we don’t know onto what we already know, which has implications for how human-robot interactions are designed and researched. She also speaks about her experience in philosophy more generally, and gives us advice on philosophical thinking.
Catharina Vesterager Smedegaard
Catharina Vesterager Smedegaard started as a PhD-student the 1st of December 2017. She has a BA and MA in philosophy. In autumn 2015, Catharina interned at the research group PENSOR (the present RUR), where she first became interested in Social Robotics and formed the idea for her MA thesis.
In this episode, we hear from Brad Hayes, Assistant Professor of Computer Science at the University of Colorado Boulder, who directs the university’s Collaborative AI and Robotics lab. The lab’s work focuses on developing systems that can learn from and work with humans—from physical robots or machines, to software systems or decision support tools—so that together, the human and system can achieve more than each could achieve on their own.
Our interviewer Audrow caught up with Dr. Hayes to discuss why collaboration may at times be preferable to full autonomy and automation, how human narration can be used to help robots learn from demonstration, and the challenges of developing collaborative systems, including the importance of shared models and safety in enabling adoption of such technologies in the future.
In this episode, Audrow Nash interviews Zhuoran Zhang, PhD student at the University of Toronto, about how robots can be used to assist in artificial insemination. Zhang discusses how precise robotic manipulators can be used to extract a single sperm and how sperm can be evaluated for fitness using computer vision. Zhang also discusses his future plans.
In this episode, Audrow Nash interviews Brian Gerkey, CEO of Open Robotics about the Robot Operating System (ROS) and Gazebo. Both ROS and Gazebo are open source and are widely used in the robotics community. ROS is a set of software libraries and tools, and Gazebo is a 3D robotics simulator. Gerkey explains ROS and Gazebo and talks about how they are used in robotics, as well as some of the design decisions of the second version of ROS, ROS2.
Brian Gerkey is the CEO of Open Robotics, which seeks to develop and drive the adoption of open source software in robotics. Before Open Robotics, Brian was the Director of Open Source Development at Willow Garage, a computer scientist in the SRI Artificial Intelligence Center, and a post-doctoral scholar in Sebastian Thrun’s group in the Stanford Artificial Intelligence Lab. Brian did his PhD with Maja Matarić in the USC Interaction Lab.
In this episode, Audrow Nash interviews Bilge Mutlu, Associate Professor at the University of Wisconsin–Madison, about design-thinking in human-robot interaction. Professor Mutlu discusses design-thinking at a high-level, how design relates to science, and he speaks about the main areas of his work: the design space, the evaluation space, and how features are used within a context. He also gives advice on how to apply a design-oriented mindset.
Bilge Mutlu is an Associate Professor of Computer Science, Psychology, and Industrial Engineering at the University of Wisconsin–Madison. He directs the Wisconsin HCI Laboratory and organizes the WHCI+D Group. He received his PhD degree from Carnegie Mellon University’s Human-Computer Interaction Institute.
In this episode, Audrow Nash interviews Bernt Børnich, CEO, CTO, and Co-founder of Halodi Robotics, about Eve (EVEr3), a general-purpose, full-size humanoid robot capable of a wide variety of tasks. Børnich discusses how Eve can be used in research, how Eve’s motors have been designed to be safe around humans (including why they use a low gear ratio), how they do direct force control and the benefits of this approach, and how they use machine learning to reduce cogging in their motors. Børnich also discusses the long-term goal of Halodi Robotics and how they plan to support researchers using Eve.
Below are two videos of Eve. The first is a video of how Eve can be used as a platform to address several research questions. The second shows Eve moving a box and dancing.
Bernt Børnich is the CEO and CTO of Halodi Robotics, and had the main responsibility for designing the motors, electronics, and CAD models for Eve. He holds a bachelor’s degree in robotics and nano-electronics from the University of Oslo.