All posts by Silicon Valley Robotics


Audience Choice HRI 2020 Demo

Welcome to the voting for the Audience Choice Demo from HRI 2020. Each of these demos showcases an aspect of Human-Robot Interaction research, and alongside the “Best Demo” award, we’re offering an “Audience Choice” award. You can see the video and abstract from each demo here, with voting at the bottom. One vote per person. Deadline: May 14 at 11:59 PM BST. You can also register for the Online HRI 2020 Demo Discussion and Award Presentation on May 21 at 4:00 PM BST.

1. Demonstration of A Social Robot for Control of Remote Autonomous Systems José Lopes, David A. Robb, Xingkun Liu, Helen Hastie

Abstract: There are many challenges when it comes to deploying robots remotely, including lack of situation awareness for the operator, which can lead to decreased trust and lack of adoption. For this demonstration, delegates interact with a social robot who acts as a facilitator and mediator between them and the remote robots running a mission in a realistic simulator. We will demonstrate how such a robot can use spoken interaction and social cues to facilitate teaming between itself, the operator and the remote robots.


2. Demonstrating MoveAE: Modifying Affective Robot Movements Using Classifying Variational Autoencoders Michael Suguitan, Randy Gomez, Guy Hoffman

Abstract: We developed a method for modifying emotive robot movements with a reduced dependency on domain knowledge by using neural networks. We use hand-crafted movements for a Blossom robot and a classifying variational autoencoder to adjust affective movement features by using simple arithmetic in the network’s learned latent embedding space. We will demonstrate the workflow of using a graphical interface to modify the valence and arousal of movements. Participants will be able to use the interface themselves and watch Blossom perform the modified movements in real time.
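The “simple arithmetic” the abstract mentions is latent-space vector arithmetic: shift a movement’s latent code along the direction between class means (e.g. high-valence minus low-valence), then decode. A toy sketch of that idea, in which the trained classifying VAE’s encoder and decoder are stood in for by simple linear maps and the movement data is random — these are illustrative assumptions, not the authors’ actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained classifying VAE (assumption: real encoder/decoder
# are neural networks trained on Blossom movement data).
W = rng.normal(size=(8, 32))      # maps a 32-D movement vector to an 8-D latent
def encode(x): return W @ x
def decode(z): return W.T @ z     # transpose as a crude "decoder"

# Latent codes of movements labeled high- and low-valence (random here)
high = np.stack([encode(rng.normal(size=32)) for _ in range(50)])
low  = np.stack([encode(rng.normal(size=32)) for _ in range(50)])

# The "simple arithmetic": a valence direction = difference of class means
valence_dir = high.mean(axis=0) - low.mean(axis=0)

# Shift one movement's latent code toward higher valence, then decode
z = encode(rng.normal(size=32))
happier = decode(z + 0.5 * valence_dir)   # 0.5 plays the role of a GUI slider gain
print(happier.shape)                      # a modified 32-D movement vector
```

In the demo, that slider gain is what participants adjust in the graphical interface before Blossom performs the modified movement.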


3. An Application of Low-Cost Digital Manufacturing to HRI Lavindra de Silva, Gregory Hawkridge, German Terrazas, Marco Perez Hernandez, Alan Thorne, Duncan McFarlane, Yedige Tlegenov

Abstract: Digital Manufacturing (DM) broadly refers to applying digital information to enhance manufacturing processes, supply chains, products and services. In past work we proposed a low-cost DM architecture, supporting flexible integration of legacy robots. Here we discuss a demo of our architecture using an HRI scenario.


4. Comedy by Jon the Robot John Vilk, Naomi T. Fitter

Abstract: Social robots might be more effective if they could adapt in playful, comedy-inspired ways based on heard social cues from users. Jon the Robot, a robotic stand-up comedian from the Oregon State University CoRIS Institute, showcases how this type of ability can lead to more enjoyable interactions with robots. We believe conference attendees will be both entertained and informed by this novel demonstration of social robotics.


5. CardBot: Towards an affordable humanoid robot platform for Wizard of Oz Studies in HRI Sooraj Krishna, Catherine Pelachaud

Abstract: CardBot is a cardboard-based programmable humanoid robot platform designed for inexpensive and rapid prototyping of Wizard of Oz interactions in HRI, incorporating technologies such as Arduino, Android and Unity3d. The table demonstration showcases the design of the CardBot and its wizard controls, such as animating movements and coordinating speech and gaze, for orchestrating an interaction.


6. Towards Shoestring Solutions for UK Manufacturing SMEs Gregory Hawkridge, Benjamin Schönfuß, Duncan McFarlane, Lavindra de Silva, German Terrazas, Liz Salter, Alan Thorne

Abstract: In the Digital Manufacturing on a Shoestring project we focus on low-cost digital solution requirements for UK manufacturing SMEs. This paper shows that many of these fall in the HRI domain while presenting the use of low-cost and off-the-shelf technologies in two demonstrators based on voice assisted production.


7. PlantBot: A social robot prototype to help with behavioral activation in young people with minor depression Max Jan Meijer, Maaike Dokter, Christiaan Boersma, Ashwin Sadananda Bhat, Ernst Bohlmeijer, Jamy Li

Abstract: The PlantBot is a home device that shows iconographic or simple lights to depict actions that it requests a young person (its user) to do as part of Behavioral Activation therapy. In this initial prototype, a separate conversational speech agent (i.e., Amazon Alexa) is wizarded to act as a second system the user can interact with.


8. TapeBot: The Modular Robotic Kit for Creating the Environments Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, JongSuk Choi

Abstract: Various types of modular robotic kits, such as the LEGO Mindstorms [1], the edutainment robot kit by ROBOTIS [2], and the interactive face components of FacePartBot [3], have been developed and suggested to increase children’s creativity and to teach robotic technologies. By adopting a modular design scheme, these robotic kits enable children to design various robotic characters with plenty of flexibility and creativity, such as humanoids, robotic animals, and robotic faces. However, because a robot is an artifact that perceives an environment and responds to it accordingly, it can also be characterized by the environment it encounters. Thus, in this study, we propose a modular robotic kit that is aimed at creating an interactive environment to which a robot produces various responses.

We chose intelligent tapes to build the environment for the following reasons. First, we presume that decreasing consumers’ expectations of a robotic product’s functionality may increase their acceptance of the product, because this avoids the mismatch between the functions expected from its appearance and the product’s actual functions [4]. We believe that tape, a material found in everyday life, is perfect for lowering consumers’ expectations of the product and will help with consumer acceptance. Second, tape is a familiar and enjoyable material for children, and it can be used as a flexible module that users can cut to whatever size they want and attach and detach with ease.

In this study, we developed a modular robotic kit for creating an interactive environment, called the TapeBot. The TapeBot is composed of the main character robot and the modular environments, which are the intelligent tapes. Although previous robotic kits focused on building a robot, the TapeBot allows its users to focus on the environment that the robot encounters. By reversing the frame of thinking, we expect that the TapeBot will promote children’s imagination and creativity by letting them develop creative environments to design the interactions of the main character robot.


9. A Gesture Control System for Drones used with Special Operations Forces Marius Montebaur, Mathias Wilhelm, Axel Hessler, Sahin Albayrak

Abstract: Special Operations Forces (SOF) are facing extreme risks when prosecuting crimes in uncharted environments like buildings. Autonomous drones could potentially save officers’ lives by assisting in those exploration tasks, but an intuitive and reliable way of communicating with autonomous systems is yet to be established. This paper proposes a set of gestures that are designed to be used by SOF during operation for interaction with autonomous systems.


10. CoWriting Kazakh: Learning a New Script with a Robot – Demonstration Bolat Tleubayev, Zhanel Zhexenova, Thibault Asselborn, Wafa Johal, Pierre Dillenbourg, Anara Sandygulova

Abstract: This interdisciplinary project aims to assess and manage the risks relating to the transition of the Kazakh language from Cyrillic to Latin in Kazakhstan, in order to address the challenges of a) teaching and motivating children to learn a new script and its associated handwriting, and b) training and providing support for all demographic groups, in particular the senior generation. We present a system demonstration that proposes to assist and motivate children to learn a new script with the help of a humanoid robot and a tablet with stylus.


11. Voice Puppetry: Towards Conversational HRI WoZ Experiments with Synthesised Voices Matthew P. Aylett, Yolanda Vazquez-Alvarez

Abstract: In order to research conversational factors in robot design, the use of Wizard of Oz (WoZ) experiments, where an experimenter plays the part of the robot, is common. However, for conversational systems using a synthetic voice, it is extremely difficult for the experimenter to choose open domain content and enter it quickly enough to retain conversational flow. In this demonstration we show how voice puppetry can be used to control a neural TTS system in almost real time. The demo hopes to explore the limitations and possibilities of such a system for controlling a robot’s synthetic voice in conversational interaction.


12. Teleport – Variable Autonomy across Platforms Daniel Camilleri, Michael Szollosy, Tony Prescott

Abstract: Robotics is a very diverse field with robots of different sizes and sensory configurations created with the purpose of carrying out different tasks. Different robots and platforms each require their own software ecosystem and are coded with specific algorithms which are difficult to translate to other robots.

CAST YOUR VOTE FOR “AUDIENCE CHOICE”

VOTING CLOSES ON THURSDAY MAY 14 AT 11:59 PM BST [British Summer Time]

Where are the robots when you need them!

Looking at the Open Source COVID-19 Medical Supplies production tally of handcrafted masks and faceshields, we’re trying to answer that question in our weekly discussions about ‘COVID-19, robots and us’. We talked to Rachel ‘McCrafty’ Sadd, who has been building systems and automation for COVID mask making as founder of the Project Mask Making and #distillmyheart projects in the SF Bay Area, as an artist, and as Executive Director of the Ace Monster Toys makerspace/studio. Rachel has been organizing volunteers and automating workflows to get 1700 cloth masks hand sewn and distributed to people at risk before the end of April. “Where’s my f*king robot!” was the theme of her short presentation.

If you think that volunteer efforts aren’t able to make a dent in the problems, here’s the most recent (4/20/20) production tally for the group Open Source COVID-19 Medical Supplies, who speak regularly on this web series. One volunteer group has tallied efforts by volunteers across 45 countries who have so far produced 2,315,559 pieces of PPE. And that’s not counting the #distillmyheart masks. Here’s Rachel’s recent interview on KTVU. Those masks aren’t going to make themselves, people!

We also heard from Robin Murphy, Expert in Rescue Robotics & Raytheon Professor at Texas A&M University, who updated her slides on the way in which robots are being used in COVID-19 response. You can find more information on Robotics for Infectious Diseases, an organization formed in response to the Ebola outbreak and chaired by Dr Murphy. There is also a series of interviews answering any questions a roboticist might have about deploying robots with public health, public safety and emergency managers.

Next we heard from Missy Cummings, Expert in Robotics Safety & Professor at Duke University. “I’ve been doing robotics, certification testing and certification for almost 10 years now. I started out in drones, then did a segue over into driverless cars, and I also work on medical systems. So I work in this field of safety critical systems, where the operation of the robot, whether the drone or the car or the medical robot, can actually do damage to people if not designed correctly. Here’s a link to a paper that I’ve written for AI Magazine that’s really looking at the maturity of driverless cars.”

“I spent a ridiculous amount of time on Capitol Hill trying to be a middle ground between, yes, these are good technologies, we want to do research and investment and keep building capacity; but no, we’re not ready to have widespread deployment yet. And I don’t care what Elon Musk says, you’re not getting full self driving anytime soon.”

“Any reasoning system has to go through four levels of reasoning. You start at the basics, what we call skill-based reasoning, then you go up to rule-, knowledge- and expert-based reasoning. And so where do we see that in cars? When you learned to drive, you had to learn skill-based reasoning, which was learning, for example, how to track lane lines on the road. Once you did that, and it takes maybe 20 minutes to learn, you never actually have a problem with it again.”

“So once you have the cognitive capacity, once you’ve learned skills, then you have enough spare mental capacity to think about rule-based reasoning. And that’s when you start to understand: okay, I see this octagon in front of me, it is a stop sign, it’s red, I know what it means, there’s a set of procedures that go along with stopping and I’m going to follow those when I see it. Then once you have learned the rules of the environment, you have the spare capacity to start thinking about knowledge-based reasoning. The big jump between rule- and knowledge-based reasoning is the ability to judge under uncertainty. So this is where you start to see the uncertainty arrow growing. When you go up to knowledge-based reasoning, you are starting to have to make guesses about the world with imperfect information.”

“So I like to show this picture of a stop sign partially obscured by vegetation. There are many, many driverless car computer vision systems right now that cannot figure this out: if they see a partially obscured stop sign, they just cannot see it, because they don’t see the way that we see. They don’t see the complete picture, and so they don’t judge that there’s a stop sign there. And you might have seen the recent case of the Tesla being tricked by a 35 mile per hour sign modified with a little bit of tape to make it read 85 miles per hour. It’s a really good illustration of just how brittle machine learning and deep learning are when it comes to perceptual reasoning. And then we get to the highest level of reasoning, where you really have to make judgments under maximum uncertainty.”

“I love this illustration of a stop sign with four different arrows. You cannot do this: you cannot turn left, you cannot go right, you cannot go straight and you cannot go back. So I’d be curious, I’d like to see what any driverless car would do in this situation. Because what do you do in that situation? You have to break one of these rules, you have to make a judgment, you have to figure out the best possible way to get yourself out of the situation. And it means that you’re going to have to break rules inside the system. So knowledge- and expert-based reasoning is what we call top-down reasoning: you take the experience and judgment you’ve had in the world, as you’ve gotten older in life and had more experiences, and you bring that to bear to make better guesses about what you should do in the future.”

“Bottom-up reasoning is essentially what is happening in machine learning. You’re taking all the bits and bytes of information from the world, and then processing that to make some kind of action. So right now, computers are really good at skill-based reasoning and some rule-based reasoning, while humans are really the best at knowledge- and expert-based reasoning. This is something we call functional allocation. But the problem is there’s a big break between rule and knowledge. Driverless cars cannot do this right now. Until we can make that jump into knowledge- and expert-based reasoning, what are we going to have to do?” [Missy Cummings, Robot Safety Expert]

Michael Sayre, CEO of Cognicept, said “I’m CEO of a company called Cognicept. We’re essentially solving a lot of the problems that Missy highlighted, which is that, you know, when we look at autonomy, and moving robots into the real world, there are a lot of complexities about the real world that cause what we call edge case failures in these systems. And so what we’ve built is essentially a system that allows a confused robot to dial out for a human operator.”

“Human in the loop is not a new idea; this is something that self driving cars have used. What we’ve built is essentially a system and a service that allows this confused robot to dial out for help on a real-time basis. We listen for intervention requests from robots. That can be an error code, or, you know, some kind of failure of the system, timeouts, whatever it is really; we can listen for that event. And then we cause a ticket to be registered in our system, which our operators will then see. That connects them to the robot: they can get a sense of what’s happening in the robot’s environment, they can get sensor information populated in a 3D canvas, we can see videos and so forth, all of which allow the operator to make judgments on the robot’s behalf.”

“Self driving vehicles are probably not the best example for us, but maybe you would be able to use our system in something like a last mile delivery vehicle, which faces a lot of the same problems. Maybe the robot is uncertain about whether it can cross the road. We can have a look at the video feed from that robot, understand what the traffic signals are saying, or what the environment looks like, and then give the robot a command to essentially help it get past the scenario that caused that edge case failure. So we see this as a way to help get robots into more useful service.”
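Sayre is describing a small event-driven protocol: a confused robot emits an intervention request, a ticket is registered, and a human operator resolves it on the robot’s behalf. A minimal sketch of that flow, with every name and the queue-based transport invented for illustration (this is not Cognicept’s actual API):

```python
import queue
import time
from dataclasses import dataclass, field

@dataclass
class Intervention:
    """One 'dial out for help' ticket (illustrative shape, not Cognicept's)."""
    robot_id: str
    error_code: str
    created: float = field(default_factory=time.time)
    resolution: str = ""

tickets: queue.Queue = queue.Queue()  # stands in for the real intervention channel

def on_robot_event(robot_id: str, error_code: str) -> None:
    """Robot side: report an error code / timeout as an intervention request."""
    tickets.put(Intervention(robot_id, error_code))

def operator_drain(handle) -> list:
    """Operator side: see each pending ticket and record a human resolution."""
    resolved = []
    while not tickets.empty():
        t = tickets.get()
        t.resolution = handle(t)  # human judgment, informed by sensor/video views
        resolved.append(t)
    return resolved

# A lost warehouse robot dials out; the operator repositions it on the map.
on_robot_event("delivery-07", "LOCALIZATION_LOST")
done = operator_drain(lambda t: f"repositioned {t.robot_id} on map")
print(done[0].resolution)  # repositioned delivery-07 on map
```

The examples later in the talk (relocalizing a lost robot, dismissing a plastic bag a LIDAR mistook for an obstacle) would each be one such ticket with a different error code and resolution.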

Savioke robots deployed in hospitals

“You know, right now, even a 1% failure rate for a lot of these applications can be a deal breaker. Especially for self driving cars, as everybody mentioned, the cost of failure is really high. But even in other, less critical cases, like in-building delivery, if something is spinning around in a circle or not performing its job, it causes people to lose confidence in the system and stop using it. And during the time that it’s confused, it’s not performing its function. So we essentially built this system as a way to bring robots into a broader range of applications and improve the uptime of the system, so that it doesn’t get into these positions where it’s stuck during its operation.”

“Similarly, we have robots that get lost in spaces that are widely variable. You know, a warehouse that has boxes or pallets that move in and out of the space very frequently. That’s going to confuse the robot, because its internal map is static in most cases, and when you have a very large change, the robot is going to be confused about its location and then not be able to proceed with its normal operation. That’s something we can help with: we can look at the robot’s environment, understand where it is in its space, and then update its location. Again, you know, we look at different types of obstacles. A plastic bag is not really an obstacle, you can run through that; but on a LIDAR, it shows up the same way as a pile of bricks.”

Anybotics concept delivery with Continental

“So by having a human in the loop element, we are able to handle these edge case failures and get robots to perform functions that they wouldn’t otherwise be able to perform, and be useful in applications that were maybe too challenging for full autonomy. I think a lot of it has to do with how dangerous the robot in question is. So, for a self driving vehicle, very dangerous, right? We’ve got a half ton of steel moving at relatively rapid speeds. This is a dangerous system.”

“On the other hand, with in-building delivery robots, we’re doing some work in quarantine zones, making deliveries in buildings so that social distancing can be maintained. We can put needed supplies inside of this delivery robot and send it through a building to the delivery room. So worst case scenario, we might bump into somebody. It’s just inconvenient and might ruin either the economics or the usefulness of the robot. That would be a good case for these less critical systems: things like in-building delivery, material handling and logistics. Maybe a picking arm, a robot arm pulling things out of a box and putting them into a shipping container, or into another robot for in-building delivery.”

“While we try to be as fast as we can, you’re still talking about 30 seconds, maybe, before you can really respond to the problem in a meaningful way. So, you know, 30 seconds is an eternity for a self driving vehicle, whereas for an in-building delivery robot it’s not a big deal. So I think the answer is pretty application dependent and also system dependent: how dangerous is the system inherently?” [Michael Sayre, CEO of Cognicept]

Rex St John, ARM IoT Ecosystem Evangelist, presented an unusual COVID-19 response topic. “This isn’t quite a robotics topic, but a few weeks ago, I began working on a project called Rosetta@home. So if you’re not familiar, there are a lot of researchers studying protein folding and other aspects of biological research, and they don’t have the funding to pay for supercomputer time. So what they do is distribute the research workloads to volunteer networks all around the world through this program called BOINC. There are a lot of these programs: there’s SETI@home, and Rosetta@home, and Folding@home. And there are all these people that volunteer their extra compute cycles by downloading this client; then researchers upload work jobs to the cloud, and those jobs are distributed to these home computers.”

“So because I work at Arm, we realized that Folding@home and Rosetta@home, two projects which are used specifically to study protein folding, did not have Arm 64-bit clients available, which means you can’t run them on a Raspberry Pi 4, you can’t run them on some of the newer Arm hardware. And there were a lot of people in the community that wanted to help out with Folding@home and Rosetta@home, which are now being used extensively by researchers specifically to study COVID-19. So we put together this community project, and it came together very, very quickly, because once everybody learned about this opportunity, they jumped on board very quickly. What happened was these guys from Neocortix, a startup out of San Jose, jumped on this, and their CTO ported all the key libraries from Rosetta@home to Arm 64-bit. Within a week or two, actually, we’re now up to 793 Arm 64-bit devices that are supporting researchers studying COVID-19. So anybody that wants to help out: if you’ve got a Raspberry Pi 4 or an Arm 64-bit device, you can install Rosetta@home on it and begin crunching on proteins to help researchers fight back against COVID-19.” https://www.neocortix.com/coronavirus

“You can see this is the spike protein right there of COVID-19. COVID-19 uses the spike protein to latch on to the receptors of human cells, and that’s how it invades your body. So they’re doing a lot of work to understand the structure and behavior of that spike protein on Rosetta@home and Folding@home.” [Rex St John, ARM IoT Evangelist]

Scientific illustration of the Coronavirus spike glycoprotein

Ken Goldberg, Director of the CITRIS People and Robots Initiative, said “I do have one thought that I’d like to share that occurred to me this week, which is that I wonder if we’re shifting away from what used to be called the ‘High Tech, High Touch’ concept from John Naisbitt. He wrote ‘Megatrends’, about how, as we moved toward more high tech, we’d crave high touch just as much. And I wonder if we’re moving toward a low touch future, where we actually will see new value in things that don’t involve touch. It’s been so interesting for me to be, you know, in the house. I’ve gotten a whole new appreciation for things like washing machines and even vacuum cleaners. They’re incredible, these mechanisms that help us do things; rather than us reaching down and touching everything, they basically do it for us.”

“Before this pandemic, there were a lot of things out there, like robot vending machines, that I was a little skeptical about. And I thought, well, I don’t really see what’s the big advantage; given a choice, I’d rather have a human making a hamburger or coffee. But now I’m starting to really think that equation has changed, and I wonder if that’s going to change permanently. In other words, are we actually going to see a real trend toward things like robot coffee-making baristas and robot burgers, like Creator, the company in San Francisco, or Miso Robotics, which is developing fast food making robots? I think it’s time to really reevaluate those trends, because I think there is going to be an actual visceral appeal to this kind of low touch future.” [Ken Goldberg, CITRIS People and Robots]

There’ll be more next week on Tuesday April 28 so sign up for COVID-19, robots and us with guest speakers focusing on regulations, risks and opportunities:

  • Chelsey Colbert, Policy Counsel at The Future of Privacy Forum
  • Michael Froomkin, Laurie Silvers and Mitchell Rubenstein Distinguished Professor of Law
  • Ryan Calo, Lane Powell & D. Wayne Gittinger Endowed Professorship of Law
  • Sue Keay, Research Director at CSIRO Data 61 and Aust National COVID response
  • Robin Murphy, Rescue Robotics Expert and Raytheon Professor at Texas A&M University
  • Thomas Low, Director of Robotics SRI International
  • Ken Goldberg, Director of CITRIS People and Robots Initiative
  • Andra Keay, Director of Silicon Valley Robotics and founder of Women in Robotics

Can robots make food service safer for workers?

Health care workers are not the only essential frontline workers at increased risk of COVID-19. According to the Washington Post on April 12, “At least 41 grocery workers have died of the coronavirus and thousands more have tested positive in recent weeks”. At the same time, grocery stores are seeing a surge in demand and are currently hiring. The food industry is also seeing increasing adoption of robots, both in the back end supply chain and in the food retail and food service sectors.

“Grocery workers are risking their safety, often for poverty-level wages, so the rest of us can shelter in place,” said John Logan, director of labor and employment studies at San Francisco State University. “The only way the rest of us are able to stay home is because they’re willing to go to work.” [Washington Post April 12 2020]

In our April 7th edition of “COVID-19, robots and us”, we heard from Linda Pouliot, CEO and founder of Dishcraft Robotics and Vipin Jain, CEO and founder of Blendid. Both provided us with insights into robotics in the food industry and the difficulties and joys of adapting robots to COVID-19 work.

Dishcraft Robotics is Linda Pouliot’s fourth startup, and her second robotics startup; she previously cofounded Neato Robotics, the number two player in automated vacuuming. Dishcraft Robotics is a clean dish delivery service for cafeterias and large kitchens. On a daily basis they deliver customized clean dishes to the customer, collect the dirty dishes, and return them to a central hub where custom robots wash them.

“We initially thought we would develop the robot and install it directly into dish rooms. And then we found that it was a really high hurdle for an operation to take on a big capital expense, and that we could solve their problem in a really frictionless way by simply doing the delivery, because there’s no upfront cost for them. There is no risk for them to try it. In fact, we give a free couple-week trial, and we have found that once you start with Dishcraft, you immediately convert, because the service solves all their labor problems. It is also more sustainable, and in today’s environment, it’s very, very sanitary.”

“We give them carts full of clean dishes; they are our own proprietary dishes. The way our robots work is using magnetics, so it requires using our specialized wares. We bring in flexible, clean dishes, and we give them a collection system where they can drop off. It’s very organized, it’s ADA compliant, and it’s pretty space saving compared to where people normally store their dishes. So it’s worked really well for us. And that means for a customer, there are no construction costs, there are no months of planning. You know, it’s just been pretty delightful for both us and them.”

“Currently we service essential businesses, and we have started to offer our services to hospitals to help in this time of need. Dish rooms are very small and cramped spaces, and we realized that workers in them were unlikely to be able to socially distance. Also, today the hospitals are overloaded; some have twice the volume of what they normally expect a dish room to have to process. And so this was a great opportunity for them to use robots, use our service, and keep their existing staff very safe.” [Linda Pouliot April 7 2020]

Vipin Jain is the CEO and founder of Blendid, a food robotics kiosk startup. After about 5 years of R&D, Blendid opened their first kiosk just over a year ago, in March 2019, at the University of San Francisco, followed by Sonoma State University. While they were initially doubtful that people were ready to have their food or drink made by robots, they’ve seen great interest.

“We were told that automating food was too hard, and people won’t like it. People don’t want robots to make the food, people said. But people have different tastes, and standardized food doesn’t cut it anymore. So you need a better solution, so that I can get what I like to eat anytime of the day, wherever I am. And that can only be done using a lot of data, AI and robotics.”

“We were just getting ready to start deploying into retail, corporate and hospital environments when COVID hit. I wish we were a little bit further ahead in terms of deployment, because as you can imagine, food is an essential service. We all want food, we all want access to food, but in this environment, we will be fearful about the safety of the people working on food preparation, because we don’t know how the food was handled.” [Vipin Jain April 7 2020]

Rich Stump is one of the cofounders of Studio Fathom, which offers prototyping, product development and production services. Fathom specializes in 3D printing and additive manufacturing, dealing with multiple materials and leveraging traditional manufacturing alongside, which allows them to solve some interesting engineering and product development challenges. Fathom has been very active in COVID-19 response.

“There are three main initiatives that we have going right now with COVID-19. Some of them are potentially revenue generating, but most of them are just trying to help with our expertise and capabilities. The first is obviously with the PPE challenges we have. We have a facility in Asia, in southern China, and so we have a vast amount of supply chain resources. So we called in our colleagues over there, and we found a number of factories that had FDA-approved PPE supplies that were surplus from the flattening of the curve in Asia.”

“The news has been talking about investment into 3M and Honeywell ramping up production in the US, which is absolutely great. The problem is, it’s just not going to get done in time, right? Anytime you buy 150 thermoforming machines and you try to ramp up production to make N95 masks, it’s going to take weeks or potentially months to get any volume. So what we were trying to do is match hospitals and senior homes and folks that needed PPE with the supply chain resources that we had in Asia.”

“We’ve had a bunch of roadblocks, to the point where we’ve tried to reach out to state government officials to try to remove some of the roadblocks with customs and tariffs and freight constraints. And so that’s been an interesting challenge. But we’ve been able to connect, I think, over 1.5 million and counting supplies into the US from Asia. It’s not our everyday business, but we have resources, so we tried to help there.

“The second initiative was around the test kits. Obviously there’s been a shortage of test kits. And it turns out, interestingly enough, one of the big shortage items is the nasopharyngeal swab that goes back into your nasal cavity. The main manufacturer was in Italy, and obviously with everything that happened in Italy, that shut down a lot of the supply. There’s been a community of about 50 folks that have come out of the 3D printing community to see if we can 3D print these swabs. And we’ve made a lot of progress.”

“There are now four manufacturers approved to 3D print nasopharyngeal swabs. Next week, we’ll get into production of these in the hundreds of thousands, if not millions, in order to support the short-term need. Long term, it probably doesn’t make sense to 3D print them given the cost base. But using 3D printing technology, we can design swabs that could potentially perform better than the traditional swab. So that’s been a fun and challenging project at the same time.”

“And the third initiative is around the ventilators. You have folks like Lawrence Livermore developing a ventilator with spare parts that are available today. And you have JPL, the NASA team, trying to develop their own ventilator. Then you have the large automotive manufacturers who have partnered up with various existing ventilator manufacturers and are trying to ramp up production. As of today, I think more than 270 people have come to us with either a ventilator design, a mask design, or some type of shield design, just over the last 10 days. There’s just a tremendous amount of activity. And it’s great because everyone wants to help, but at the same time, it presents a lot of challenges, because there’s so much effort going in so many different directions that you worry all these efforts won’t end up being impactful by the time we need the supplies.” [Rich Stump April 7 2020]

Mark Martin is the Director of Industry and Workforce Development for California Community Colleges and founder of the Bay Area Manufacturers Discussion Forum. He’s very connected with manufacturers in the region and has a lot of insights into what’s going on in the manufacturing sector.

“Many of the manufacturing companies are being hurt and having to furlough people. There are about 8,000 in the Bay Area and we’re trying to help them repurpose. In some cases, it’s relatively easy: maybe you have an injection mold facility, and you can injection mold these face shields, or some of the parts of the face shield. You have to make the molds, but it’s still within your core business. Others are contract manufacturers that could theoretically assemble ventilators because they’re already doing medical products. Ventilators are a complicated product, but not necessarily more complicated than some of the things these people are already doing.”

“But for others, you might have to get specialized machines, which can take a long time. And then you need to have the expertise around. If you haven’t done medical products before, do you know how to handle the equipment and ensure QC? And the FDA approves factories, not just the designs. I created a list of Bay Area manufacturers and I got almost 200 responses with things they think they can manufacture, and I’ve supplied that list to the state government.”

“And community colleges have makerspaces or fab labs. So where I’m located, at Laney College in Oakland, we started working on face shields a week and a half ago, because that was relatively simple for us to do. And we printed 500 face shields for Highland Hospital. It took us a day to do it with six people. And I asked the hospital what their daily use was for face shields and they said 600 a day. So I was like, all right, that was a day’s worth for this one hospital.”

“And then we actually took a design that was done by Stratasys for 3D printing and modified it a little bit. And in like two days, we got a molder to set up injection molding, although it’s going to take a couple of weeks to finish. And honestly, I have no idea if we’re going to need it in a couple of weeks, because I have no idea what the demand is.”

“Apple’s bringing in over a million a week and others are doing this hundreds or even thousands at a time in basements and little shops. But we don’t know if the demand is 5 million a week or 50 million.”

“So that’s the thing the manufacturers are all trying to figure out. I kept wondering why the government didn’t take a little bit of control. Even just financial incentives to say: we will backstop you if you supply PPE, so that if the demand falls out, you don’t lose money.” [Mark Martin April 7 2020]

Ken Goldberg is the William S. Floyd Jr Distinguished Chair in Engineering at UC Berkeley, Director of the CITRIS People and Robots Initiative, and a regular guest on ‘COVID-19, robots and us’. Today, in keeping with the theme of food robotics, Ken explained the reasoning behind one of his latest projects, AlphaGarden.

“This is a farm bot that you can buy from a local company in San Luis Obispo for about $3,000. Our garden includes 14 species of edible herbs and plants and they’re all growing in close proximity. The challenge is that they’re competing for increasingly scarce light and water. So what happens is that this starts to get quite out of control. Right now, the garden has been acting very autonomously because we haven’t been able to get into the greenhouse, and so it’s starting to decay, and certain invasive species like mint have essentially taken over.”

“We’re trying to optimize diversity, that is to maximize the number of different plant types that are growing. But the key idea here is that we’re using AI to simulate the garden, because a grow cycle takes three months. So we have a simulator that can simulate 10,000 times the speed of nature. We have 64 gardens but thousands of parameters to fine tune the system. Then we evaluate each of those gardens to figure out how to tune the parameters for the watering and pruning devices.”

“This is an ongoing project but the end goal is to be able to learn a policy for successful gardening because polyculture is much more labor intensive than monoculture. The reason that I call it an art project is because it’s extremely difficult for AI and robotics. It’s a big challenge. I’m not at all convinced we’re actually going to succeed. We’re really putting AI to the test. But just this week we did a deep learning method based on the data that we collected from our simulator. And in time we hope to learn how to be able to automate some of these very difficult tasks like organic gardening.” [Ken Goldberg April 7 2020]

The moderator of the weekly ‘COVID-19, robots and us’ discussions, Andra Keay, is the Managing Director of Silicon Valley Robotics, supporting innovation and commercialization of robotics technologies. Andra was also the Industry Co-Chair of the Human-Robot Interaction Conference 2020, which is taking place online. In the April 3 Industry Talks Session, Chris Roberts from Cambridge Consultants described the process of working out what sort of robotics or automation made sense for food production.

“Chris described a project with the food service industry to look at using robotics and AI. And of course what he wanted to build was a robot dipping chips and flipping burgers and all the rest, but on evaluation, that was not the best direction to go. In this instance, AI optimization of locations was the best option. And this is something I’ve heard from a number of other people who’ve been doing robots for the actual preparation of food itself. For example, Creator, who make those fantastic hamburgers, didn’t build a robot arm imitating the way humans cook. Instead, they completely re-architected the entire process of constructing a burger so that it could be done mechanically.” [Andra Keay April 7 2020]

There’s much more to hear in our weekly discussion on ‘COVID-19, robots and us’ from April 7 2020 with guest expert speakers: Linda Pouliot, CEO of Dishcraft Robotics; Vipin Jain, CEO of Blendid; Rich Stump, Principal at Studio Fathom; Mark Martin, Industry & Workforce Development, California Community Colleges and Bay Area Manufacturing Discussion Forum; and Ken Goldberg, Director of the CITRIS People and Robots Initiative and William S. Floyd Jr Distinguished Chair in Engineering at UC Berkeley; moderated by Andra Keay, Silicon Valley Robotics.

HRI 2020 Keynote: Stephanie Dinkins

Community, craft, and the vernacular in artificially intelligent systems take the position that everyone participating in society is an expert in our experiences within the community infrastructures, which inform the makeup of robotic entities.

Though we may not be familiar with the jargon used in specialized professional contexts, we share the vernacular of who we are as people and communities, and the intimate sense that we are being learned. We understand that our data and collaboration are valuable, and our ability to successfully cooperate with the robotic systems proliferating around us is well served by the creation of qualitatively informed systems that understand, and perhaps even share, the aims and values of the humans they work with.

Using her art practice, which interrogates a humanoid robot and seeks to create culturally specific voice interactive entities as a case in point, Dinkins examines how interactions between humans and robots are reshaping human-robot and human-human relationships and interactions. She ponders these ideas through the lens of race, gender, and aging. She argues communities on the margins of tech production, code, and the institutions creating the future must work to upend, circumvent, or reinvent the algorithmic systems increasingly controlling the world, including robotics, that maintain us.

Publication: HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction | March 2020 | Pages 221 | https://doi.org/10.1145/3319502.3374844

Robots providing social support while we’re social distancing

Wired Magazine recently called for us to, post pandemic, “ditch our tech enabled tools of social distancing”. But are our telepresence robots creating emotional distancing, or are they actually improving our emotional lives? This week in our weekly “COVID-19, robots and us” discussion with experts, we’re looking at the topic of virtual presence and emotional contact, as well as many other practical ways that robotics can make a difference in pandemic times.

Robin Murphy, Raytheon Professor at Texas A&M University and founder of the field of Rescue Robotics, was involved in the very first use of robots in a disaster scenario, at 9/11. Since then she’s been involved in multiple disaster responses worldwide, including the Ebola outbreak in 2014-2016. During the US Ebola outbreak, the White House Office of Science and Technology Policy, and later the NSF, held a series of workshops; Murphy and Ken Goldberg were among those who participated, working with various public health officials in groups such as Doctors Without Borders.

“Some of the lessons learned about infectious diseases in general, and for COVID in particular, are that there are really five big categories of uses of robots. Most everybody immediately thinks of clinical applications, or things that are directly related to the health care industry, but the role of robots is far broader than that. It’s not always about replacing doctors. It’s about how robots can assist in any way they can in this whole large, complex enterprise of a disaster.” [Robin Murphy March 31 2020]

Ross Mead’s company Semio develops software for robot operation that focuses on how people will live, work and play with robots in their everyday lives. “We’re building an operating system and app ecosystem for in home personal robots. And all of our software relies on natural language user interfaces, just speech and body language or other social cues.”

“Right now, as it pertains to COVID-19, we are working with a team of content creators from a company called Copy to develop conversational content, similar to chatbots or voice-based skills, that’s geared towards informing users about or helping mitigate the spread of COVID-19. We’re also developing socially aware navigation systems for mobile robots in natural human environments. I would love to talk about use cases for social robots, even telepresence robots, as well as the impacts of social isolation in these times.” [Ross Mead March 31 2020]

Therapeutic robot seal Paro in an aged care facility.

Wendy Ju studies people’s interaction with autonomous cars, systems or robots. She focuses on things like nonverbal cues, and the things people do to coordinate and collaborate with one another. Her PhD dissertation was on the design of implicit interactions, something a lot of us take for granted or consider static, not dynamic. Through her work on autonomous cars, she’s been exploring the subtle cues that convey critical information.

“If we get these things wrong, it’s life or death. I think we’re starting to understand that a lot of the things that we think of as interaction are only the top layer of what we’re actually doing all the time with other people. And if we don’t understand those lower layers, it could kill us.”

Pedestrians and driverless car interaction

“So last week, I put together a proposal to study how people are interacting with one another in the city around the social distancing policy. And I agree the name is not perfect, but I think it also gets to the heart of what’s important to do. In this epidemic, there’s a halo effect around our social interactions, because we know they’re necessary and good for us. And so people think, well, I shouldn’t go to the grocery store or the hospital. But surely it can’t be bad to go visit my neighbor, or surely it can’t be bad to go see my grandmother. These kinds of inclinations will kill us when taken to scale.”

“When we say social distancing, we’re saying: yes, school is good, but school is bad in this situation; church is good, but church is bad in this situation. We’re really getting at the thing that we are so tempted to do, which is literally the thing that we’re trying to stop right now. I think that’s why they call it social distancing. And it does definitely have a physical corollary. I’m interested to see afterwards, for those people who were playing basketball and those who were playing soccer, are those places where people got more sick or not? We don’t actually know all the different mechanisms for transmission for disease. And I think later on, we’ll be able to figure it out.” [Wendy Ju March 31 2020]

Cory Kidd is the CEO and founder of Catalia Health, which uses social robots for medical care management. Catalia Health has done extensive clinical trials prior to commercial roll out and leads the world in understanding robots for medical care.

“The concept of chronic disease management of course is not new, it’s just that the usual model is very human powered. We do it in clinical settings, in the doctor’s office, in the hospital, and we send people out to homes, and a lot of the work is done by calling patients on the phone to check in on them. We replace all of those by actually putting a physical robot in the patient’s home to talk to them.”

Catalia Health’s Mabu at home with Michelle Chin

“So what we’re doing on the AI side is generating conversations for patients on the fly, for whatever condition they’re dealing with, and we build these around specific conditions. The robotic piece of it is really driven by the psychology around why we would rather be all in a room together, as opposed to gathering around our computers and staring into the screen at Zoom. We intuitively get that physical presence is different.”

“When we’re face to face with someone, they’re more engaged, we create stronger relationships and a number of other things. Research showed that those differences actually carry over into the future. When you put a cute little robot in front of someone that can look them in the eyes while it’s talking to them, we actually get a lot of the effects of face to face interaction. And so we’ve leveraged that to build chronic disease care management programs. Over the last couple of years, we’ve been rolling these out largely in specialty pharmacy, so we work with some of the largest pharma manufacturers in the world, like Pfizer. We’re helping patients across a number of different conditions really keep track of how they’re doing, to stay on therapy and stay out of the hospital using our AI and robotics platform.”

“The current situation around the world is really highlighting the need for more of this kind of technology.” [Cory Kidd March 31 2020]

Ken Goldberg is the director of the CITRIS People and Robots Initiative, and the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley. Both Ken and Robin are amongst the authors of a recent editorial in Science Robotics, “Combating COVID-19 – The role of robotics in managing public health and infectious diseases”.

Medical personnel work inside one of the emergency structures that were set up to ease procedures outside the hospital of Brescia, Northern Italy, Tuesday, March 10, 2020. (Claudio Furlan/LaPresse via AP)

“There’s this fundamental issue that I’ve been thinking a lot about, which is protecting the health care workers, especially where they’re now having to provide these tests for huge numbers of people. Swabbing is quite an uncomfortable and invasive process. Is there any way that we might be able to automate that at some point? I don’t think that’s going to happen anytime soon. But it’s an interesting goal that we could move in that direction.”

“The other is the idea that intubation is an incredibly difficult process and very risky, because a lot of droplet vaporization happens. That’s another area where it would be very helpful if it could be teleoperated. Right now, the state of the art in telemedicine, and telesurgery in particular, is not ready for the situation we’re facing now. We are nowhere near capable of doing that. And so I think this is a really important wake-up call to start to develop these technologies.”

Also in the discussion, Jessica Armstrong, a mechanical engineer at SuitX and local coordinator for Open Source COVID-19 Medical Supplies, gave us updates on local PPE activities and on how community grassroots initiatives like OSCMS and Helpful Engineering have helped catalyze networks of people to sew masks and gowns, laser-cut face shields, 3D print parts for PPE and medical equipment, and develop new designs for emergency ventilators and respirators, while we’re still waiting for manufacturers and the supply chain to meet demand.

Perhaps most critically, groups like OSCMS and Helpful Engineering validate and share designs for PPE so that people aren’t wasting time designing their own solutions, nor putting health care workers at risk with badly designed homemade PPE.


Our second weekly discussion about “COVID-19, robots and us” from March 31 is now available online and as a podcast. You can sign up to join the audience for the next episodes here.

Special guests were Robin Murphy, Raytheon Professor at Texas A&M University and founder of the field of Rescue Robotics, Ross Mead, CEO of Semio and VP of AI LA, Wendy Ju, Interaction Design Professor at Cornell, Cory Kidd, CEO of Catalia Health – maker of medical social robots, Ken Goldberg, Director of CITRIS People and Robots Initiative and Jessica Armstrong, mechanical engineer at SuitX and local coordinator for Open Source Covid-19 Medical Supplies. Moderated by Andra Keay, Managing Director of Silicon Valley Robotics, with extra help from Erin Pan, Silicon Valley Robotics, and Beau Ambur from Kickstarter.

HRI 2020 Keynote: Ayanna Howard

Intelligent systems, especially those with an embodied construct, are becoming pervasive in our society. From chatbots to rehabilitation robotics, from shopping agents to robot tutors, people are adopting these systems into their daily life activities. Alas, associated with this increased acceptance is a concern with the ethical ramifications as we start becoming more dependent on these devices [1]. Studies, including our own, suggest that people tend to trust, in some cases overtrusting, the decision-making capabilities of these systems [2]. For high-risk activities, such as in healthcare, when human judgment should still have priority at times, this propensity to overtrust becomes troubling [3]. Methods should thus be designed to examine when overtrust can occur, modelling the behavior for future scenarios and, if possible, introduce system behaviors in order to mitigate. In this talk, we will discuss a number of human-robot interaction studies conducted where we examined this phenomenon of overtrust, including healthcare-related scenarios with vulnerable populations, specifically children with disabilities.

References

  1. A. Howard, J. Borenstein, “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics Journal, 24(5), pp 1521–1536, October 2018.
  2. A. R. Wagner, J. Borenstein, A. Howard, “Overtrust in the Robotic Age: A Contemporary Ethical Challenge,” Communications of the ACM, 61(9), Sept. 2018.
  3. J. Borenstein, A. Wagner, A. Howard, “Overtrust of Pediatric Healthcare Robots: A Preliminary Survey of Parent Perspectives,” IEEE Robotics and Automation Magazine, 25(1), pp. 46–54, March 2018.

Publication: HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction | March 2020 | Pages 1 | https://doi.org/10.1145/3319502.3374842

 


HRI 2020 Online Day One

HRI2020 has already kicked off with workshops and the Industry Talks Session on April 3; however, the first release of videos has only just gone online, with the welcome from General Chairs Tony Belpaeme, IDLab, University of Ghent, and James Young, University of Manitoba.

https://youtu.be/Fkg3YvA5n5o

There is also a welcome from the Program Chairs Hatice Gunes from the University of Cambridge and Laurel Riek from UC San Diego, requesting that we all engage with the participants’ papers and videos.

https://youtu.be/_74udxMmGJw

The theme of this year’s conference is “Real World Human-Robot Interaction,” reflecting on recent trends in our community toward creating and deploying systems that can facilitate real-world, long-term interaction. This theme also reflects a new theme area we have introduced at HRI this year, “Reproducibility for Human Robot Interaction,” which is key to realizing this vision and helping further our scientific endeavors. This trend was also reflected across our other four theme areas, including “Human-Robot Interaction User Studies,” “Technical Advances in Human-Robot Interaction,” “Human-Robot Interaction Design,” and “Theory and Methods in Human-Robot Interaction.”

The conference attracted 279 full paper submissions from around the world, including Asia, Australia, the Middle East, North America, South America, and Europe. Each submission was overseen by a dedicated theme chair and reviewed by an expert group of program committee members, who worked together with the program chairs to define and apply review criteria appropriate to each of the five contribution types. All papers were reviewed by a strict double-blind review process, followed by a rebuttal period, and shepherding if deemed appropriate by the program committee. Ultimately the committee selected 66 papers (23.6%) for presentation as full papers at the conference. As the conference is jointly sponsored by ACM and IEEE, papers are archived in the ACM Digital Library and the IEEE Xplore.

Along with the full papers, the conference program and proceedings include Late Breaking Reports, Videos, Demos, a Student Design Competition, and an alt.HRI section. Out of 183 total submissions, 161 (88%) Late Breaking Reports (LBRs) were accepted and will be presented as posters at the conference. A full peer-review and meta-review process ensured that authors of LBR submissions received detailed feedback on their work. Nine short videos were accepted for presentation during a dedicated video session. The program also includes 12 demos of robot systems that participants will have an opportunity to interact with during the conference. We continue to include an alt.HRI session in this year’s program, consisting of 8 papers (selected out of 43 submissions, 19%) that push the boundaries of thought and practice in the field. We are also continuing the Student Design Competition with 11 contenders, to encourage student participation in the conference and enrich the program with design inspiration and insights developed by student teams. The conference will include 6 full-day and 6 half-day workshops on a wide array of topics, in addition to the selective Pioneers Workshop for burgeoning HRI students.

Keynote speakers will reflect the interdisciplinary nature and vigour of our community. Ayanna Howard, the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology, will talk about ‘Are We Trusting AI Too Much? Examining Human-Robot Interactions in the Real World’; Stephanie Dinkins, a transmedia artist who creates platforms for dialog about artificial intelligence (AI) as it intersects race, gender, aging, and our future histories, will also speak; and Dr Lola Canamero, Reader in Adaptive Systems and Head of the Embodied Emotion, Cognition and (Inter-)Action Lab in the School of Computer Science at the University of Hertfordshire in the UK, will talk about ‘Embodied Affect for Real-World HRI’.

The Industry Talks Session was held on April 3 and we are particularly grateful to the sponsors who have remained with HRI2020 as we transition into virtual. Karl Fezer from ARM, Chris Roberts from Cambridge Consultants, Ker-Jiun Wang from EXGWear and Tony Belpaeme from IDLab at University of Ghent were able to join me for the first Industry Talks Session at HRI 2020 – a very insightful discussion!

The HRI2020 proceedings are available from the ACM digital library.

Full papers:
https://dl.acm.org/doi/proceedings/10.1145/3319502

Companion Proceedings (alt.HRI, Demonstrations, Late-Breaking Reports, Pioneers Workshop, Student Design Competitions, Video Presentations, Workshop Summaries):
https://dl.acm.org/doi/proceedings/10.1145/3371382

COVID-19, robots and us – weekly online discussion

Silicon Valley Robotics and the CITRIS People and Robots Initiative are hosting a weekly “COVID-19, robots and us” online discussion with experts from the robotics and health community on Tuesdays at 7pm (California time – PDT). You can sign-up for the free event here.

Guest speakers this week are:

Prof Ken Goldberg, UC Berkeley Director of the CITRIS People and Robots Initiative.

Alder Riley, Founder at ideastostuff and a coordinator at Helpful Engineering. Helpful Engineering is a rapidly growing global network created to design, source and execute projects that can help people suffering from the COVID-19 crisis worldwide.

Tra Vu, COO at Ohmnilabs, a telepresence robotics and 3D printing startup.

Mark Martin, Regional Director Advanced Manufacturing Workforce Development California Community Colleges

Gui Cavalcanti, CEO/Cofounder of Breeze Automation and Founder of Open Source Covid-19 Medical Supplies Group. The Open Source COVID-19 Medical Supplies Group is a rapidly growing Facebook group formed to evaluate, design, validate, and source the fabrication of open source emergency medical supplies around the world, given a variety of local supply conditions.

Andra Keay, Managing Director of Silicon Valley Robotics and Visiting Scholar at CITRIS People and Robots Initiative will act as moderator.

Beau Ambur, Outreach, Design and Technology Lead for Kickstarter will be coordinating technology for us.

Transience, Replication, and the Paradox of Social Robotics

with Guy Hoffman
Robotics Researcher, Cornell University

An Art, Technology, and Culture Colloquium, co-sponsored by the Center for New Music and Audio Technologies and CITRIS People and Robots (CPAR), presented with Berkeley Arts + Design as part of Arts + Design Mondays.

As we continue to develop social robots designed for connectedness, we struggle with paradoxes related to authenticity, transience, and replication. In this talk, I will attempt to link together 15 years of experience designing social robots with 100-year-old texts on transience, replication, and the fear of dying. Can there be meaningful relationships with robots who do not suffer natural decay? What would our families look like if we all choose to buy identical robotic family members? Could hand-crafted robotics offer a relief from the mass-replication of the robot’s physical body and thus also from the mass-customization of social experiences?

About Guy Hoffman

Dr. Guy Hoffman is an Assistant Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC2) group, studying the algorithms, interaction schema, and designs enabling close interactions between people and personal robots in the workplace and at home. Among other firsts, Hoffman developed the world’s first human-robot joint theater performance, and the first real-time improvising human-robot Jazz duet. His research papers won several top academic awards, including Best Paper awards at robotics conferences in 2004, 2006, 2008, 2010, 2013, 2015, 2018, and 2019. His TEDx talk is one of the most viewed online talks on robotics, watched more than 3 million times.

About the Art, Technology, and Culture Colloquium

Founded by Prof. Ken Goldberg in 1997, the ATC lecture series is an internationally respected forum for creative ideas. Always free of charge and open to the public, the series is coordinated by the Berkeley Center for New Media and has presented over 200 leading artists, writers, and critical thinkers who question assumptions and push boundaries at the forefront of art, technology, and culture including: Vito Acconci, Laurie Anderson, Sophie Calle, Bruno Latour, Maya Lin, Doug Aitken, Pierre Huyghe, Miranda July, Billy Kluver, David Byrne, Gary Hill, and Charles Ray.

Fall 2019 – Spring 2020 Series Theme: Robo-Exoticism

In 1920, Karel Čapek coined the term “robot” in a play about mechanical workers organizing a rebellion to defeat their human overlords. A century later, increasing populism, inequality, and xenophobia require us to reconsider our assumptions about labor, trade, political stability, and community. At the same time, advances in artificial intelligence and robotics, fueled by corporations and venture capital, challenge our assumptions about the distinctions between humans and machines. To explore potential linkages between these trends, “Robo-Exoticism” characterizes a range of human responses to AI and robots that exaggerate both their negative and positive attributes and reinforce fears, fantasies, and stereotypes.

Robo-Exoticism Calendar

09/09/19 Robots Are Creatures, Not Things
Madeline Gannon, Artist / Roboticist, Pittsburgh, PA
Co-sponsored by the Jacobs Institute for Design Innovation and CITRIS People and Robots (CPAR)

09/23/19 The Copper in my Cooch and Other Technologies
Marisa Morán Jahn, Artist, Cambridge, MA and New York, NY
Co-sponsored by the Wiesenfeld Visiting Artist Lecture Series and the Jacobs Institute for Design Innovation

10/21/19 Non-Human Art
Leonel Moura, Artist, Lisbon
Co-sponsored by the Department of Spanish & Portuguese and CITRIS People and Robots (CPAR)

11/4/19 Transience, Replication, and the Paradox of Social Robotics
Guy Hoffman, Robotics Researcher, Cornell University
Co-sponsored by the Center for New Music and Audio Technologies and CITRIS People and Robots (CPAR)

01/27/20 Dancing with Robots: Expressivity in Natural and Artificial Systems
Amy LaViers, Robotics, Automation, and Dance (RAD) Lab
Co-sponsored by the Department of Theater, Dance, and Performance Studies and CITRIS People and Robots (CPAR)

02/24/20 In Search for My Robot: Emergent Media, Racialized Gender, and Creativity
Margaret Rhee, Assistant Professor, SUNY Buffalo; Visiting Scholar, NYU
Co-sponsored by the Department of Ethnic Studies and the Department of Comparative Literature

03/30/20 The Right to Be Creative
Margarita Kuleva, National Research University Higher School of Economics, Moscow
Invisible Russia: Participatory Cultures, Their Practices and Values
Natalia Samutina, National Research University Higher School of Economics, Moscow
Co-sponsored by the Department of Slavic Languages and Literature and Department of the History of Art and the Arts Research Center

04/06/20 Artist Talk
William Pope.L, Artist
Presented by the Department of Art Practice

04/13/20 Teaching Machines to Draw
Tom White, New Zealand
Co-sponsored by Autolab and CITRIS People and Robots (CPAR)

For more information:

http://atc.berkeley.edu/

Contact: info.bcnm [at] berkeley.edu, 510-495-3505

ATC Director: Ken Goldberg
BCNM Director: Nicholas de Monchaux
Arts + Design Director: Shannon Jackson
BCNM Liaisons: Lara Wolfe, Laurie Macfee

ATC Highlight Video from F10-S11 Season (2 mins)
http://j.mp/atc-highlights-hd

ATC Audio-Video Archive on Brewster Kahle’s Internet Archive:
http://tinyurl.com/atc-internet-archive

ATC on Facebook:
https://www.facebook.com/cal-atc

ATC on Twitter:
https://www.twitter.com/cal_atc

Catalia Health and Pfizer collaborate on robots for healthcare

New robot platform improves patient experience using AI to help patients navigate barriers and health care challenges

SAN FRANCISCO, Sept. 12, 2019 /PRNewswire/ — Catalia Health and Pfizer today announced they have launched a pilot program to explore patient behaviors outside of clinical environments and to test the impact regular engagement with artificial intelligence (AI) has on patients’ treatment journeys. The 12-month pilot uses the Mabu® Wellness Coach, a robot that uses artificial intelligence to gather insights into symptom management and medication adherence trends in select patients.

The Mabu robot can interact with patients using AI algorithms to engage in tailored, voice-based conversations. Mabu “talks” with patients about how they are feeling and helps answer questions they may have about their treatment. The Mabu Care Insights Platform then delivers detailed data and insights to clinicians at a specialty pharmacy provider to help human caregivers initiate timely and appropriate outreach to the patient. The goal is to help better manage symptoms and address patient questions in real-time.

“At Catalia Health we’ve seen firsthand the benefits that AI has brought to healthcare for both the patient and the healthcare systems,” said Cory Kidd, founder and CEO of Catalia Health. “Our work with Pfizer allows us to engage with patients on a larger scale and therefore gain access to more insights and data that we hope can improve health outcomes.”

Mabu is helping to deliver personalized care by gaining insights that allow the specialty pharmacy to reach out to patients as they express challenges in managing their conditions. Mabu also generates health tips and reminders to help patients get additional information about their condition and treatment that may help them along the way. Over time, it is our goal that Mabu can help patients navigate barriers and health care challenges that are often a part of managing a chronic disease.

“The healthcare system is overburdened, and as a result, patients often seek more-coordinated care and information. Through this collaboration with Catalia Health, we hope to learn through real-time data and insights about challenges patients face, outside the clinical setting, with the goal to improve their treatment journeys in the future,” said Lidia Fonseca, Chief Digital and Technology Officer at Pfizer. “This pilot is an example of how we are working to develop digital companions for all our medicines to better support patients in their treatment journeys.”

The pilot program was officially announced on stage at the National Association of Specialty Pharmacy’s Annual Meeting & Expo on September 10, 2019. Initial pilot data will be available in the coming months. For more information, visit www.cataliahealth.com

About Catalia Health

Catalia Health is a San Francisco-based patient care management company founded by Cory Kidd, Ph.D., in 2014. Catalia Health provides an effective and scalable solution for individuals managing chronic disease or taking medications on an ongoing basis. The company’s AI-powered robot, Mabu, enables healthcare providers and pharmaceutical companies to better support patients living with chronic illness. Mabu uses a voice-based interface designed for simple, intuitive use by a wide variety of patients in remote care environments. The cloud-based platform delivers unique conversations to patients each time they have a conversation with Mabu.

Catalia Health’s care management programs are tailored to increase clinically appropriate medication adherence, improve symptom management and reduce the likelihood that a patient is readmitted to the hospital after being discharged.

For more information, visit www.cataliahealth.com

Robo-Exoticism is the theme for 2019/20 Art, Technology and Culture Colloquiums

Manus – at World Economic Forum 2018

Madeline Gannon’s “Robots Are Creatures, Not Things” will be the first work of the Fall 2019-Spring 2020 season of the Colloquiums at UC Berkeley’s Center for New Media at 6pm on Sept 9th.

Dr. Madeline Gannon is a multidisciplinary designer inventing better ways to communicate with machines. In her work, Gannon seeks to blend knowledge from design, robotics, and human-computer interaction to innovate at the intersection of art and technology. Gannon designs her research to engage with wide audiences across scientific and cultural communities: her work has been exhibited at international cultural institutions, published at ACM conferences, and covered by diverse global media outlets. Her 2016 interactive installation, Mimus, even earned her the nickname, “The Robot Whisperer.”

Mimus – a curious robot

She is three-time World Economic Forum Cultural Leader, and serves as a council member on the World Economic Forum Global Council for IoT, Robotics, & Smart Cities. Gannon holds a Ph.D. in computational design from Carnegie Mellon University, a master’s in architecture from Florida International University, and is a Research Fellow at the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University.

Her work “Robots Are Creatures, Not Things” questions how we should coexist with intelligent, autonomous machines. After 50 years of promises and potential, robots are beginning to leave the lab to live in the wild with us. In this lecture, Dr. Madeline Gannon discusses how art and technology are merging to forge new futures for human-robot relations. She shares her work in convincing robots to do things they were never intended to do: from transforming a giant industrial robot into a living, breathing mechanical creature, to taming a horde of autonomous robots to behave more like a pack of animals. By pushing the boundaries of human-robot interaction, her work shows that robots can be not only useful, but meaningful additions to our everyday lives.

Quipt – gestural control of industrial robots

Founded in 1997, the ATC series is an internationally respected forum for creative ideas. The ATC series, free of charge and open to the public, is coordinated by the Berkeley Center for New Media and has presented over 170 leading artists, writers, and critical thinkers who question assumptions and push boundaries at the forefront of art, technology, and culture including: Vito Acconci, Laurie Anderson, Sophie Calle, Bruno Latour, Maya Lin, Doug Aitken, Pierre Huyghe, Miranda July, Billy Kluver, David Byrne, Gary Hill, and Charles Ray.

The current ATC Director is robotics professor Ken Goldberg, who is behind this season’s “Robo-Exoticism” theme and is also the Director of the CITRIS People and Robots Initiative and head of the AutoLab at UC Berkeley.

In 1920, Karel Čapek coined the term “robot” in a play about mechanical workers organizing a rebellion to defeat their human overlords. A century later, increasing populism, inequality, and xenophobia require us to reconsider our assumptions about labor, trade, political stability, and community. At the same time, advances in artificial intelligence and robotics, fueled by corporations and venture capital, challenge our assumptions about the distinctions between humans and machines. To explore potential linkages between these trends, “Robo-Exoticism” characterizes a range of human responses to AI and robots that exaggerate both their negative and positive attributes and reinforce fears, fantasies, and stereotypes.

“A Century of Art and Technology in the Bay Area” (essay)

Location:

Monday Evenings, 6:30-8:00pm
Osher Auditorium
BAMPFA, Berkeley, CA
More information
Lectures are free and open to the public. Sign up for the ATC Mailing List!

2019 Robot Launch startup competition is open!

It’s time for Robot Launch 2019 Global Startup Competition! Applications are now open until September 2nd 6pm PDT. Finalists may receive up to $500k in investment offers, plus space at top accelerators and mentorship at Silicon Valley Robotics co-work space.

Winners in previous years include high profile robotics startups and acquisitions:

2018: Anybotics from ETH Zurich, with Sevensense and Hebi Robotics as runners-up.

2017: Semio from LA, with Appellix, Fotokite, Kinema Systems, BotsAndUs and Mothership Aeronautics as runners-up in the Seed and Series A categories.

Women in robotics on International Women’s Day 2019

What does a day in the life of a woman working with robots look like? We asked members of WomeninRobotics.org to volunteer “a paragraph and a picture” for this first patchwork representation of the field. And if you’re a woman working in robotics or interested in the field, join us! (pictured in order of arrival – go Aussies!)

Australian nurse Anne Elvin recently travelled to Brisbane to present a talk at the Queensland eHealth Innovation Showcase. Anne presented an insider’s perspective on what it was like to work with Softbank’s Pepper robot at the Townsville Hospital. Pepper’s message about flu and the importance of vaccination and hand hygiene was very simple, but the user experience, built on the incredible programming by our collaborative partners at the Australian Centre for Robotic Vision, was extraordinary. As part of an innovation project created by Anne, Pepper brought a new layer of engagement in health, and the robot became very popular amongst staff, patients, and volunteers at the Townsville Hospital. While some people were initially sceptical about the presence of a robot in a hospital, even the most sceptical quickly became accustomed to seeing the friendly little robot and began to treat Pepper as a sort of mascot or ambassador for health. Anne and her work with Pepper have paved the way to introduce other social robots into the Australian health system.

Lisa Winter with MiniTento and a middle school robotics team.

Lisa Winter is a mechanical engineer at Quartz, building hardware to identify, track, and predict everything that moves on a construction site. Her hobby of building robots started at the age of 10, when she fought in Robot Wars, and continued as she competed in BattleBots until 2016. In her spare time she likes to talk to kids about the importance of STEM. Seen here, Lisa and her robot ‘Mini Tento’ are with a middle school Lego robot-building team.

Meka and Natalia Diaz Rodriguez

Natalia Diaz Rodriguez is an Assistant Professor of Artificial Intelligence at the Autonomous Systems and Robotics Lab (U2IS) at ENSTA ParisTech, where she works in the computer vision group and with the INRIA Flowers team (flowers.inria.fr) on developmental robotics. Her research interests include deep, reinforcement, and unsupervised learning, (state) representation learning, explainable AI, and AI for social good. She works on open-ended learning and continual/lifelong learning for applications in computer vision and robotics. Her background is in knowledge engineering (semantic web, ontologies, and knowledge graphs), and she is interested in explainable AI and neural-symbolic approaches to practical applications of AI.

She received a Computer Engineering degree from the University of Granada (UGR, Spain) and a double PhD from Abo Akademi (Finland), jointly with UGR, on artificial intelligence and semantic and fuzzy modelling for human behaviour recognition in smart spaces. She has worked in R&D at CERN (Switzerland) and at Philips Research (Netherlands) in the Personal Health Department, completed a postdoctoral stay at the University of California Santa Cruz, and worked in industry in Silicon Valley at Stitch Fix (San Francisco, CA), a recommendation service for fashion delivery with humans in the loop.

She has participated in a range of international projects and is a Management Committee member of the EU COST (European Cooperation in Science and Technology) Action AAPELE.EU (Algorithms, Architectures and Platforms for Enhanced Living Environments, www.aapele.eu) and of EU H2020 DREAM (www.robotsthatdream.eu). She was a Google Anita Borg Scholar in 2014, a Heidelberg Laureate Forum fellow in 2014 and 2017, and obtained the Nokia Scholarship, among others.

Cristina Zaga

Cristina Zaga is a Ph.D. candidate in the HMI group (University of Twente) and a visiting scholar at the RiG lab (Cornell University). Cristina’s doctoral research focuses on designing “robothings”, everyday robotic objects and toys, to promote children’s prosocial behaviors in collaborative play. She studies how robots communicate intent and social qualities through movement and nonverbal actions alone, defining a framework for non-anthropomorphic robots. Currently, she works on developing approaches and toolkits for research through design and participatory practices to bring together roboticists, designers, and stakeholders. She envisions a future of robothings, robotics embedded in everyday objects, that meaningfully interact with people to empower them while steering away from reinforcing existing societal biases and paternalism. In her after-hours, Cristina explores artistic interventions to advance the discourse on human-centered robotics, using speculative design to make what she calls poetic robots. Her work in HRI interaction design for robothings received an HRI student design competition award and has been exhibited at the Eindhoven Design Week 2017. Cristina is one of the founders of the Child-Robot Interaction international workshop series and a co-organizer of the workshop Robots for Social Good. Cristina was selected as a Google Women Techmakers Scholar 2018 for her research quality and her support in empowering women and children in STEM.

Ecem Tuğlan

Ecem Tuğlan is a co-founder of Fenom Robotics, which is building the world’s first hologram-displaying robot. She is also the founder of Revulation4.0, the world’s first digital and printable clothing label, which will release its first collection soon.

She is a professional robopsychologist recognized by NASA. In June 2016 she presented her original paper “Do Androids Sense of Electric Deja-vu?” to Dr. Ravi Margasahayam of NASA.

She graduated from the philosophy and sociology departments of Ege University and took courses in teaching and consulting psychology at Dokuz Eylul University. She is also an amateur photographer and painter; her pictures have been exhibited on the websites of the Saatchi Gallery and the History Channel. IASSR, IBAD, and ECSBS have invited her to different countries to give presentations on artificial intelligence, philosophy, and robopsychology, and she has taught philosophy lessons at the Oxford Creative Writing Center.

She also advocates for robot rights.

PR2 and Laurel Riek

Dr. Laurel Riek is a professor in Computer Science and Engineering at the University of California, San Diego, with joint appointments in the Department of Emergency Medicine and Contextual Robotics Institute. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming, computer vision, and healthcare engineering, with a focus on autonomous robots that work proximately with people. Riek’s current research interests include long term learning, robot perception, and personalization; with applications in critical care, neurorehabilitation, and manufacturing.

Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge, and B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000-2008, working on learning and vision systems for robots, and held the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011-2016. Dr. Riek has received the NSF CAREER Award, AFOSR Young Investigator Award, Qualcomm Research Award, and was named one of ASEE’s 20 Faculty Under 40.

Nao and Deanna Hood

Deanna Hood is an electrical engineer whose work focuses on humanitarian applications of engineering and robotics, with projects spanning accessibility, education and healthcare. Examples of her work include a brain-controlled car, with applications for people living with paralysis; a low-cost USB stethoscope for diagnosing childhood pneumonia in developing countries; and the first robotic partner for children with handwriting difficulties: a robot that children can teach how to write, so that even those at the bottom of their class can benefit from “learning by teaching”. These projects have resulted in a number of academic publications as well as international print and TV media coverage such as by Reuters and Discovery Channel, and saw Deanna as a finalist for TED2013.
Most recently, during her time at the Open Source Robotics Foundation, Deanna worked on the Robot Operating System (ROS), often referred to as the “lingua franca” of robot developers, which is used in applications as diverse as autonomous cars, Antarctic research robots, and robots on the International Space Station. For her efforts in advancing society’s perception of engineering, Deanna has been recognised as a Google Anita Borg Memorial Scholar, a John Kindler Memorial Medallist, an Erasmus Mundus Scholar, and as a finalist for the Pride of Australia Young Leader Medal. This is in addition to various academic medals for placing at the top of her three degrees despite starting university at age 15.

Image: DARPA Project Manager Erin McColl with CyberPhysical Systems Research Director Sue Keay, both with CSIRO’s Data61 next to a hexapod robot being trialled for the DARPA Sub-T challenge.

CyberPhysical Systems Research Director Sue Keay: Here I am pictured with our DARPA Project Manager, Erin McColl. One of the most exciting projects in the portfolio I’ve inherited now that I am the Research Director for Cyber-Physical Systems within Australia CSIRO’s Data61 is our work on the DARPA Sub-T Challenge. The aim of the challenge is to develop innovative technologies that will augment operations underground. We are the only non-US team included in the Challenge. We are currently testing technology to rapidly map, navigate, and search underground environments. The three-year Subterranean Challenge is funded by the US Defense Advanced Research Projects Agency (DARPA).

Joanne Pransky and patients

Dubbed the ‘real-life Susan Calvin’ by Isaac Asimov in 1989, Joanne Pransky, the World’s First Robotic Psychiatrist®, has been tracking the robot evolution for over three decades. Her focus is on the use and marketing of robots, as well as the critical psychological issues in the relationship between humans and robots. The field of robotic psychiatry, which she pioneered in 1986, is no longer science fiction, and she is accepting new robo-patients ready to be integrated into society.

Nissia Sabri at Novanta: We chose to highlight women across the company all the way from our President to the factory floor. As you will see in the attached paragraph description, these women make critical components for surgical robotics!

Celera Motion precision components and subsystems enabled ~1 million robotic surgeries in 2018. Here are some of the women who contributed to the advancement of innovative technology in the field of robotics. This great team at Celera Motion is part of Novanta, focused on delivering innovations that matter.

Allison Thackston

Allison Thackston is the Engineering Lead and Manager of the Shared Autonomy Robotics team at Toyota Research Institute. Her team focuses on developing advanced robotic teleoperation technologies that enable robots and people to seamlessly and safely work together.
Allison previously held the position of Lead Manipulation Researcher/Project Manager at Toyota Partner Robotic Group where she investigated robust task and motion planning manipulation strategies in unstructured environments. Before joining Toyota, she was the Lead Engineer for Robotic Perception on Robonaut 2, the first humanoid robot on the International Space Station. There, she was responsible for software development and applied vision research to facilitate the cooperation between robots and people.
Allison has degrees in Electrical and Mechanical Engineering. Her thesis focused on collision avoidance during supervised robotic manipulation.

The women of Omron Adept in California.

Omron Adept Technologies, Inc. is a robotics company under Omron Corporation; more specifically, we are part of the Industrial Automation group at Omron. Our company is unique in that it develops industry-leading electronics, mechanics, and software for a broad spectrum of robots for the global market. Recently we have seen significant growth in the presence of women at our company: the rate of women in engineering has increased by almost 10% in the last three years, and the share of women in engineering management positions is around 25%. Women are present in almost all engineering teams: systems, software (fixed and mobile), applications, electrical, quality, and marketing. Two of our colleagues have also been named among the “25 Women in Robotics that you Need to Know About”: Noriko Kageki in 2014 and Casey Schultz in 2018.

Audrey Roberts is a sophomore studying Mechanical Engineering at the University of Southern California. At USC, Audrey does robotics research in Professor Maja Matarić’s Interaction Lab. Currently, she is excited to be working under PhD student Lauren Klein, exploring the ability of socially assistive robotics to increase exploratory motor movement. This research is aimed in particular at infants at risk for developmental delay. Furthermore, Audrey is part of the USC Rocket Propulsion Lab, where she works on a team that designs and builds the mechanical components for the rockets’ avionics systems. Audrey will be interning at Microsoft this summer and hopes to continue exploring human-computer interaction in the future as a hardware engineer.

Melonee Wise

Melonee Wise is the CEO of Fetch Robotics, which is delivering on-demand automation solutions for the logistics industry. She was the second employee at Willow Garage, a research and development laboratory extremely influential in the advancement of robotics.  She led a team of engineers developing next-generation robot hardware and software, including ROS, the PR2 and TurtleBot.  Melonee was a 2015 recipient of MIT Technology Review’s TR 35 award for innovators under the age of 35. In 2017, Business Insider named her as one of eight CEOs changing the way we work. Under her leadership, the company won the MODEX Innovation award for the materials handling industry.

Roxanna Pakkar

Roxanna Pakkar is a junior studying electrical engineering at University of Southern California. She is a research assistant in the USC Interaction Lab where she has assisted in projects including a robotic system intended to improve the social interaction skills of children with Autism and a study demonstrating the role of augmented reality in improving expressiveness in human-robot interaction. She has also led her own study within the lab on help-seeking behaviors with robot tutors. In addition, Roxanna has interned at NASA JPL in the Robotics and Mobility Section, working on a swarm autonomy platform. This summer she is interning as a product engineer at Microsoft and she hopes to continue work in human-robot interaction and collaboration in the future.

I am Rania Rayyes, a PhD student at TU Braunschweig in Germany.
I am doing my PhD in robotics and machine learning. My research focuses on learning robot models, i.e., learning the robot actions required to accomplish specific tasks. For this purpose, I am developing novel intrinsically motivated machine learning methods.

Nicole Mirnig is a passionate researcher in social robots and human-robot interaction. She recently finished her PhD on the essentials of robot feedback at the University of Salzburg, Austria. Nicole’s research focus lies in human-robot cooperation, taking into account the different factors that foster a positive user experience. Her latest work is dedicated to systematically researching erroneous robot behavior, and was well received by both international media and the research community. She aims at making robots understand that they have made a mistake and react accordingly. Another hot topic for Nicole is researching robot humor and how it can be exploited for an enjoyable user experience.


Tori Fujinami, Robotics Engineer, Cobalt Robotics
Robotics is exciting because the applications are endless, only limited by the people designing them! The Cobalt robot specifically is inspiring because it is not the kind of technology intended to replace people or their valuable skills, but rather enhance people’s capabilities and collaborate with existing systems.
Rachel Domagalski, Systems Engineer, Cobalt Robotics
I got interested in robotics because robots have an interesting intersection with software, data, and hardware development, and they provide a way to positively augment people’s lives. In particular, Cobalt is exciting to me because our robots combine making human-robot interactions friendlier with a practical application of the technology.

International Women’s Day is a chance to showcase Women in Robotics

This International Women’s Day, Universal Robots pays tribute to the women in robotics. Thanks to growing awareness of the untapped potential of the vast Indian female workforce, entrepreneurs have committed themselves to seeing that every part of society gets the opportunity to prove itself professionally. Udhaiyam Dhall, one of the leading consumer food manufacturers in India, has a workforce that is 75% female, working alongside Universal Robots’ collaborative robots (cobots).

Universal Robots is proud to be associated with such manufacturing businesses, as well as with non-profit healthcare organisations such as Aurolab, which chooses to employ and train local women workers, aged 18 and above, to manufacture high-quality eye care products. The women take pride in their work, showing their dedication by standing nine hours a day to ensure a smooth operations process. To improve their working conditions and grow production, Aurolab decided to deploy Universal Robots’ cobots alongside its workforce, which was retrained to operate the robots for high-precision work. The benefit for the workforce is that they acquire meaningful new skills with the implementation of the cobots: Aurolab employees are now able to manage the cobots’ operation with the simple click of a button and check on the machines once every hour or so.

Female employees in other industries, such as automotive, metal, and machinery, have had similar experiences. Rameshwari, a female assembly-line operator at Bajaj Auto, says that she is grateful to be able to work with Universal Robots’ cobots, as she is now able to achieve high-quality output. She and her female colleagues find the cobots interesting and easy to operate, as all the physically challenging parts of the job are taken care of by the robots, which had to be comfortable for the staff to operate. Besides fitting in seamlessly with the workforce, in each case the cobots from Universal Robots were picked for their affordability, reduced power consumption, and safety, including a protective stop that turns the power off when an unexpected load is applied. In these ways, robotics has not only opened doors to the female workforce, but has empowered it and instilled it with a pride that will see yet more women entering this field in India and all over the globe.

JOIN US! WOMENINROBOTICS.ORG

Join the World MoveIt! Day code sprint on Oct 25 2018

World MoveIt! Day is an international hackathon to improve the MoveIt! code base, documentation, and community. We hope to close as many pull requests and issues as possible and explore new areas of features and improvements for the now seven-year-old framework. Everyone is welcome to participate from their local workplace, simply by working on open issues. In addition, a number of companies and groups host meetings on their sites all over the world. A video feed will unite the various locations and enable more collaboration. Maintainers will take part in some of these locations.

 

Locations

  • Note that the Tokyo and Singapore locations will have their events on Friday the 26th, not Thursday the 25th.

General Information Contacts

  • Dave Coleman, Nathan Brooks, Rob Coleman // PickNik Consulting

Signup

Please state your intent to join the event on this form. Note that specific locations will have their own signups in addition to this form.

If you aren’t near an organized event we encourage you to have your own event in your lab/organization/company and video conference in to all the other events. We would also like to mail your team or event some MoveIt! stickers to schwag out your robots!

Logistics

What version of MoveIt! should you use?

We recommend the Kinetic LTS branch/release. The Melodic release is also a good choice, but it is newer and has been tested less. The Indigo branch is considered stable and frozen; only critical bug fixes will be backported.

For your convenience, a VirtualBox image for ROS Kinetic on Ubuntu 16.04 is available here.
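If you would rather set up a native environment than use the VirtualBox image, a minimal setup might look like the following. This is a sketch, assuming Ubuntu 16.04 with ROS Kinetic already installed and a catkin workspace at `~/ws_moveit`; adapt paths and branch names to your machine.

```shell
# Binary install: the quickest way to get MoveIt! running on Kinetic.
sudo apt-get update
sudo apt-get install ros-kinetic-moveit

# Source install: better for the code sprint, so you can test fixes
# against the latest kinetic-devel branch.
mkdir -p ~/ws_moveit/src
cd ~/ws_moveit/src
git clone -b kinetic-devel https://github.com/ros-planning/moveit.git
cd ~/ws_moveit
rosdep install --from-paths src --ignore-src --rosdistro kinetic -y  # resolve dependencies
catkin build            # requires catkin_tools; catkin_make also works
source devel/setup.bash # overlay the workspace on your environment
```

Once built, any branch you create in `~/ws_moveit/src/moveit` can be compiled and tested locally before opening a pull request.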

Finding Where You Can Help

Suggested areas for improvement are tracked on MoveIt’s GitHub repo via several labels:

  • moveit day candidate labels issues as possible entry points for participants in the event. This list will grow longer before the event.
  • simple improvements indicates the issue can probably be tackled in a few hours, depending on your background.
  • documentation suggests new tutorials, changes to the website, etc.
  • assigned aids developers to find issues that are not already being worked on.
  • no label: issues that are not labeled can of course still be worked on during World MoveIt! Day, though they will likely take longer than one day to complete.

If you would like to help the MoveIt! project by tackling an issue, claim the issue by commenting “I’ll work on this” and a maintainer will add the label “assigned”. Feel free to ask further questions in each issue’s comments. The developers will aim to reply to WMD-related questions before the event begins.

If you have ideas and improvements for the project, please add your own issues to the tracker, using the appropriate labels where applicable. It’s fine if you want to then claim them for yourself.

Further needs for documentation and tutorials improvement can be found directly on the moveit_tutorials issue tracker.

Other, larger code-sprint ideas can be found on this page. While these will take longer than a day, they may be a good source of inspiration for other things to contribute on WMD.

Documentation

Improving our documentation is at least as important as fixing bugs in the system. Please add to our Sphinx- and Markdown-based documentation, both within our packages and on the MoveIt! website. If you have studied in depth an aspect of MoveIt! that is not currently well documented, please convert your notes into a pull request in the appropriate location. Likewise, if a more experienced developer has explained a concept to you on the mailing list or elsewhere, consider converting that answer into a pull request to help others with the same question in the future.

For more details on modifying documentation, see Contributing.

Video Conference and IRC

Join the conversation on IRC in the #moveit channel on irc.freenode.net. If you are new to IRC, try this web client.

Join the video conference on appear.in.

Sponsorship

We’d like to thank the following sponsors:

PickNik Consulting

Iron Ox

Fraunhofer IPA

ROS-Industrial Asian Pacific Consortium

Tokyo Opensource Robotics Kyokai Association

OMRON SINIC X Corporation

Southwest Research Institute
