
RoboTED: a case study in Ethical Risk Assessment

A few weeks ago I gave a short paper at the excellent International Conference on Robot Ethics and Standards (ICRES 2020), outlining a case study in Ethical Risk Assessment – see our paper here. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in A. I. Artificial Intelligence.

Although Ethical Risk Assessment (ERA) is not new – it is, after all, what research ethics committees do – the idea of extending traditional risk assessment, as practised by safety engineers, to cover ethical risks is new. ERA is, I believe, one of the most powerful tools available to the responsible roboticist, and happily we already have a published standard, BS 8611 (2016), setting out guidelines on ERA for robotics.

Before looking at the ERA, we need to summarise the specification of our fictional robot teddy bear: RoboTed. First, RoboTed is based on the following technology:

  • RoboTed is an Internet (WiFi) connected device, 
  • RoboTed has cloud-based speech recognition and conversational AI (chatbot) and local speech synthesis,
  • RoboTed’s eyes are functional cameras allowing RoboTed to recognise faces,
  • RoboTed has motorised arms and legs to provide it with limited baby-like movement and locomotion.

And second RoboTed is designed to:

  • Recognise its owner, learning their face and name and turning its face toward the child.
  • Respond to physical play such as hugs and tickles.
  • Tell stories, while allowing a child to interrupt the story to ask questions or ask for sections to be repeated.
  • Sing songs, while encouraging the child to sing along and learn the song.
  • Act as a child minder, allowing parents to remotely listen, watch and speak via RoboTed.

The tables below summarise the ERA of RoboTED for (1) psychological, (2) privacy & transparency and (3) environmental risks. Each table has four columns: the hazard, the risk, the level of risk (high, medium or low) and actions to mitigate the risk. BS 8611 defines an ethical risk as the “probability of ethical harm occurring from the frequency and severity of exposure to a hazard”; an ethical hazard as “a potential source of ethical harm”; and an ethical harm as “anything likely to compromise psychological and/or societal and environmental well-being”.


(1) Psychological Risks

[Table not reproduced here.]

(2) Privacy and Transparency Risks

[Table not reproduced here.]

(3) Environmental Risks

[Table not reproduced here.]
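As an aside (this is my illustration, not part of the paper or of BS 8611), the four-column structure of these tables maps naturally onto a simple data structure, which can be handy if you want to keep an ERA under version control alongside the design. A minimal Python sketch, with a purely hypothetical example entry:

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class RiskLevel(Enum):
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass
    class EthicalRiskEntry:
        """One row of an ERA table: hazard, risk, level of risk and mitigating actions."""
        hazard: str                   # a potential source of ethical harm
        risk: str                     # the ethical harm that could result
        level: RiskLevel              # assessed level of risk
        mitigations: List[str] = field(default_factory=list)

    # Hypothetical psychological-risk entry, for illustration only
    attachment = EthicalRiskEntry(
        hazard="The child comes to believe RoboTed understands and cares about them",
        risk="Deception of, and over-attachment by, the child",
        level=RiskLevel.MEDIUM,
        mitigations=[
            "Make clear in the robot's behaviour and documentation that it is a machine",
            "Give parents guidance on healthy patterns of use",
        ],
    )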

For a more detailed commentary on each of these tables see our full paper – which also, for completeness, covers physical (safety) risks. The slides from my short ICRES 2020 presentation are also available.

Through this fictional case study we believe we have demonstrated the value of ethical risk assessment. Our RoboTed ERA has shown that attention to ethical risks can:

  • suggest new functions, such as “RoboTed needs to sleep now”,
  • draw attention to how designs can be modified to mitigate some risks, 
  • highlight the need for user engagement, and
  • reject some product functionality as too risky.

But ERA is not guaranteed to expose all ethical risks. It is a subjective process, which will only be successful if the risk assessment team are prepared to think both critically and creatively about the question: what could go wrong? As Shannon Vallor and her colleagues write in their excellent Ethics in Tech Practice toolkit, design teams must develop the “habit of exercising the skill of moral imagination to see how an ethical failure of the project might easily happen, and to understand the preventable causes so that they can be mitigated or avoided”.

Raptor-inspired drone with morphing wing and tail

The northern goshawk is a fast, powerful raptor that flies effortlessly through forests. This bird was the design inspiration for the next-generation drone developed by scientists at EPFL's Laboratory of Intelligent Systems, led by Dario Floreano. They carefully studied the shape of the bird's wings and tail and its flight behavior, and used that information to develop a drone with similar characteristics.

Multi-drone system autonomously surveys penguin colonies

Stanford University researcher Mac Schwager entered the world of penguin counting through a chance meeting at his sister-in-law's wedding in June 2016. There, he learned that Annie Schmidt, a biologist at Point Blue Conservation Science, was seeking a better way to image a large penguin colony in Antarctica. Schwager, who is an assistant professor of aeronautics and astronautics, saw an opportunity to collaborate, given his work on controlling swarms of autonomous flying robots.

Researchers improve autonomous boat design

The feverish race to produce the shiniest, safest, speediest self-driving car has spilled over into our wheelchairs, scooters, and even golf carts. Recently, there's been movement from land to sea, as marine autonomy stands to change the canals of our cities, with the potential to deliver goods and services and collect waste across our waterways.

Researchers create robots that can transform their wheels into legs

A team of researchers is creating mobile robots for military applications that can determine, with or without human intervention, whether wheels or legs are more suitable to travel across terrains. The Defense Advanced Research Projects Agency (DARPA) has partnered with Kiju Lee at Texas A&M University to enhance these robots' ability to self-sufficiently travel through urban military environments.

Dog training methods help teach robots to learn new tricks

With a training technique commonly used to teach dogs to sit and stay, Johns Hopkins University computer scientists showed a robot how to teach itself several new tricks, including stacking blocks. With the method, the robot, named Spot, was able to learn in days what typically takes a month.

AI improves control of robot arms

More than one million American adults use wheelchairs fitted with robot arms to help them perform everyday tasks such as dressing, brushing their teeth, and eating. But the robotic devices now on the market can be hard to control. Removing a food container from a refrigerator or opening a cabinet door can take a long time. And using a robot to feed yourself is even harder because the task requires fine manipulation.

Lobe.ai Review

Lobe.ai has just been released in open beta, and the short story is that you should go try it out. I was lucky enough to test it in the closed beta, so I figured I should write a short review.

Making AI more understandable and accessible for most people is something I spend a lot of time on and Lobe is without a doubt right down my alley. The tagline is “machine learning made simple” and that is exactly what they do.

Overall it’s a great tool, and I see it as a real advance in AI technology, making AI and deep learning models even more accessible than the AutoML wave already has.

So what is Lobe.ai exactly?

Lobe.ai is an AutoML tool, which means you can build AI models without coding. In Lobe’s case they work with image classification only. So in short, you give Lobe a set of labelled images and Lobe will automatically find the best model to classify them.
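To give a sense of what Lobe is automating (this sketch is mine, not a description of Lobe’s internals, and the folder layout and model choice are my own assumptions), the equivalent done by hand in Keras would look roughly like this:

    import tensorflow as tf

    # Assumed folder layout: data/<label_name>/*.jpg, one sub-folder per label
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=(224, 224), batch_size=32)
    num_classes = len(train_ds.class_names)

    # Transfer learning on a small pretrained backbone,
    # roughly the kind of thing AutoML tools do for you
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, pooling="avg")
    base.trainable = False

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)

Lobe does all of this (plus the data handling and model selection) behind a drag-and-drop interface, which is exactly the point.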


Lobe has also been acquired by Microsoft, which I think is a pretty smart move. The big cloud platforms can be difficult to get started with, and Microsoft’s current AutoML solution in particular handles only tabular data and requires a good degree of technical skill to get started.

It’s free. I don’t really understand the business model yet, but so far the software is free. That is pretty cool, but I’m still curious about how they plan to generate revenue to keep up the good work.

Features

Image classification

So far Lobe has only one main feature, and that’s training an image classification network. It does that pretty well: in all the tests I have done, I have gotten decent results with very little training data.

Speed

The speed is insane. Models are trained in what feels like a minute, which is a really cool feature. You can also decide to train for longer to get better accuracy.

Export

You can export the model to CoreML, TensorFlow or TensorFlow Lite, and they also provide a local API.
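As a quick example of using an export (the file names, input handling and label indexing here are my assumptions, not Lobe specifics), a TensorFlow Lite export can be run with the standard TF Lite interpreter:

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Assumed file names; check the export for the actual model file and label list
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Resize the test image to the input shape the model expects
    _, height, width, _ = input_details[0]["shape"]
    image = Image.open("test.jpg").convert("RGB").resize((width, height))
    batch = np.expand_dims(np.array(image, dtype=input_details[0]["dtype"]), axis=0)

    interpreter.set_tensor(input_details[0]["index"], batch)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    print("Predicted class index:", int(np.argmax(scores)))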

Use Cases

I’m planning to use Lobe for both hobby and commercial projects. For commercial use, I’m going to use it for three main purposes:

Producing models

Since the quality is good and the export options are production-ready, I see no reason not to use this for production purposes when helping clients with AI projects. You might think it’s better to hand-build models for commercial use, but my honest experience is that many simple problems should be solved with the simplest solution first. You might stay with a model built with Lobe for at least the first few iterations of a project, and sometimes forever.

Teaching 

As I teach people about applied AI, Lobe is going to be one of my go-to tools from now on. It makes AI development very tangible and accessible, and you can play around with it to get a feel for AI without having to code. When project and product managers get to try developing models themselves, I expect a lot more understanding of edge cases and unforeseen problems.

Selling

When trying to convince a potential customer to invest in AI development, you easily run into decision makers who have no understanding of AI. By showing Lobe and doing live tests, I hope to put the discussion on a more level footing, since there’s a better chance we are now talking about the same thing.

Compared with other AutoML solutions

The bad:

Less insights 

In short, you don’t get any model analysis like you would with, for example, Kortical.

Less options

As mentioned, Lobe only offers image classification. Compared to Google AutoML, which also handles object recognition, text, tabular data, video and so on, it is still limited in use cases.

The good:

It’s Easy

This is the core selling point for Lobe, and it delivers on it perfectly. Lobe is so easy to use that it could easily be used to teach third graders.

It’s Fast

The model building is so fast that you can barely get a glass of water while training. 

The average:

Quality

When I compared a model I built in Google AutoML to one I built in Lobe, Google’s seemed a bit better, but not by far. That being said, the Google model took me three hours to train versus minutes with Lobe.

Future possibilities

As I see it, Lobe.ai can go in two different directions in the future. They can either cover a bigger part of the pipeline and let you build small apps on top of the models, or they can go for more types of models, such as tabular models or text classification. Both directions could be pretty interesting, and whichever direction they choose, I’m looking forward to testing it out.

Conclusion

In conclusion, Lobe.ai is a great step forward for accessible AI. Even in its beta it’s very impressive, and it will surely be the first in a new niche of AI tools.

It doesn’t get easier than this, and with the export functionality it’s actually a good candidate for many commercial products.

Make sure you test it out, even if it’s just for fun.

Lily the barn owl reveals how birds fly in gusty winds

Scientists from the University of Bristol and the Royal Veterinary College have discovered how birds are able to fly in gusty conditions – findings that could inform the development of bio-inspired small-scale aircraft.

Lily the barn owl flying through fan-generated gusts. Image credit: Cheney et al 2020

“Birds routinely fly in high winds close to buildings and terrain – often in gusts as fast as their flight speed. So the ability to cope with strong and sudden changes in wind is essential for their survival and to be able to do things like land safely and capture prey,” said Dr Shane Windsor from the Department of Aerospace Engineering at the University of Bristol.

“We know birds cope amazingly well in conditions which challenge engineered air vehicles of a similar size but, until now, we didn’t understand the mechanics behind it,” said Dr Windsor.

The study, published in Proceedings of the Royal Society B, reveals how bird wings act as a suspension system to cope with changing wind conditions. The team, which included Bristol PhD student Nicholas Durston and researchers Jialei Song and James Usherwood from Dongguan University of Technology in China and the RVC respectively, used an innovative combination of high-speed, video-based 3D surface reconstruction, computed tomography (CT) scans, and computational fluid dynamics (CFD) to understand how birds ‘reject’ gusts through wing morphing, i.e. by changing the shape and posture of their wings.

In the experiment, conducted in the Structure and Motion Laboratory at the Royal Veterinary College, the team filmed Lily, a barn owl, gliding through a range of fan-generated vertical gusts, the strongest of which was as fast as her flight speed. Lily is a trained falconry bird who is a veteran of many nature documentaries, so wasn’t fazed in the least by all the lights and cameras. “We began with very gentle gusts in case Lily had any difficulties, but soon found that – even at the highest gust speeds we could make – Lily was unperturbed; she flew straight through to get the food reward being held by her trainer, Lloyd Buck,” commented Professor Richard Bomphrey of the Royal Veterinary College.

“Lily flew through the bumpy gusts and consistently kept her head and torso amazingly stable over the trajectory, as if she was flying with a suspension system. When we analysed it, what surprised us was that the suspension-system effect wasn’t just due to aerodynamics, but benefited from the mass in her wings. For reference, each of our upper limbs is about 5% of our body weight; for a bird it’s about double, and they use that mass to effectively absorb the gust,” said joint lead-author Dr Jorn Cheney from the Royal Veterinary College.

“Perhaps most exciting is the discovery that the very fastest part of the suspension effect is built into the mechanics of the wings, so birds don’t actively need to do anything for it to work. The mechanics are very elegant. When you strike a ball at the sweetspot of a bat or racquet, your hand is not jarred because the force there cancels out. Anyone who plays a bat-and-ball sport knows how effortless this feels. A wing has a sweetspot, just like a bat. Our analysis suggests that the force of the gust acts near this sweetspot and this markedly reduces the disturbance to the body during the first fraction of a second. The process is automatic and buys just enough time for other clever stabilising processes to kick in,” added joint lead-author, Dr Jonathan Stevenson from the University of Bristol.
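For readers who want the textbook version of that sweetspot argument (this is the standard rigid-body result, not taken from the paper, and treating the wing as a single rigid body is of course a simplification): the sweetspot is the centre of percussion. For a wing of mass m hinged at the shoulder, with its centre of mass a distance d from the hinge and moment of inertia I about the hinge, an impulsive force applied at a distance

    q = \frac{I}{m\,d}

from the hinge produces no reaction force at the hinge – which is why a gust load acting near that point barely disturbs the body during that first fraction of a second.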

Dr Windsor said the next step for the research, which was funded by the European Research Council (ERC), Air Force Office of Scientific Research and the Wellcome Trust, is to develop bio-inspired suspension systems for small-scale aircraft.

International conference on intelligent robots and systems (IROS)

This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available from the platform on demand, with access available from 25 October to 25 November 2020.

IROS conferences have traditionally had a theme, and this year is no different, with the emphasis being on “consumer robotics and our future”. You can sign up here.

Plenaries

IROS will feature three plenary talks. The speakers and topics are as follows:

  • Danica Kragic – Robotics and artificial intelligence impacts on the fashion industry
  • Cynthia Breazeal – Living with social robots: from research to commercialization and back
  • Yukie Nagai – Cognitive development in humans and robots: new insights into intelligence

Keynote speakers

There are also nine keynote talks covering a number of topics.

  • Frank Dellaert – Air, sea, and space robots
  • Anya Petrovskaya – Driverless vehicles and field robots
  • Ashish Deshpande – Rehabilitation robotics
  • Jonathan Hurst – Humanoids
  • I-Ming Chen – Food handling robotics
  • Steve LaValle – Perception, action and control
  • Nikolaus Correll – Grasping, haptics and end-effectors
  • Andrea Thomaz – Human-robot interaction
  • Sarah Bergbreiter – Design, micro and bio-inspired robotics

Technical talks

The technical talks have been divided into 12 topic areas.

  • Air, sea, and space robots
  • Driverless vehicles and field robots
  • Medical, cellular, micro and nano robots
  • Humanoids, exoskeletons, and rehab robots
  • Localization, mapping and navigation
  • Dynamics, control and learning
  • Design, mechanisms, actuators, soft and bio-inspired robots
  • Perception, action, and cognition
  • Grasping, haptics and end-effectors
  • Human-robot interaction, teleoperation, and virtual reality
  • Swarms and multi-robots
  • Industry 4.0

Each talk will feature its digest slide, pre-recorded video presentation, and the paper’s PDF. These will be available from 25 October, so keep an eye on the website.

Workshops

There are a whopping 35 workshops to choose from. These have on-demand content and also live sessions (dates vary so visit the webpages below for more information about each specific workshop).

  1. 3rd workshop on proximity perception in robotics: towards multi-modal cognition, Stefan Escaida Navarro*, Stephan Mühlbacher-Karrer, Hubert Zangl, Keisuke Koyama, Björn Hein, Ulrike Thomas, Hosam Alagi, Yitao Ding, Christian Stetco
  2. Bringing geometric methods to robot learning, optimization and control, Noémie Jaquier*, Leonel Rozo, Søren Hauberg, Hans-Peter Schröcker, Suvrit Sra
  3. 12th IROS20 workshop on planning, perception and navigation for intelligent vehicles, Philippe Martinet*, Christian Laugier, Marcelo H Ang Jr, Denis Fernando Wolf
  4. Robot-assisted training for primary care: how can robots help train doctors in medical examinations?, Thrishantha Nanayakkara*, Florence Ching Ying Leong, Thilina Dulantha Lalitharatne, Liang He, Fumiya Iida, Luca Scimeca, Simon Hauser, Josie Hughes, Perla Maiolino
  5. Workshop on animal-robot interaction, Cesare Stefanini and Donato Romano*
  6. Ergonomic human-robot collaboration: opportunities and challenges, Wansoo Kim*, Luka Peternel, Arash Ajoudani, Eiichi Yoshida
  7. New advances in soft robots control, Concepción A. Monje*, Egidio Falotico, Santiago Martínez de la Casa
  8. Autonomous system in medicine: current challenges in design, modeling, perception, control and applications, Hang Su, Yue Chen*, Jing GUO, Angela Faragasso, Haoyong Yu, Elena De Momi
  9. MIT MiniCheetah workshop, Sangbae Kim*, Patrick M. Wensing, Inhyeok Kim
  10. Workshop on humanitarian robotics, Garrett Clayton*, Raj Madhavan, Lino Marques
  11. Robotics-inspired biology, Nick Gravish*, Kaushik Jayaram, Chen Li, Glenna Clifton, Floris van Breugel
  12. Robots building robots. Digital manufacturing and human-centered automation for building consumer robots, Paolo Dario*, George Q. Huang, Peter Luh, MengChu Zhou
  13. Cognitive robotic surgery, Michael C. Yip, Florian Richter*, Danail Stoyanov, Francisco Vasconcelos, Fanny Ficuciello, Emmanuel B Vander Poorten, Peter Kazanzides, Blake Hannaford, Gregory Scott Fischer
  14. Application-driven soft robotic systems: Translational challenges, Sara Adela Abad Guaman, Lukas Lindenroth, Perla Maiolino, Agostino Stilli*, Kaspar Althoefer, Hongbin Liu, Arianna Menciassi, Thrishantha Nanayakkara, Jamie Paik, Helge Arne Wurdemann
  15. Reliable deployment of machine learning for long-term autonomy, Feras Dayoub*, Tomáš Krajník, Niko Sünderhauf, Ayoung Kim
  16. Robotic in-situ servicing, assembly, and manufacturing, Craig Carignan*, Joshua Vander Hook, Chakravarthini Saaj, Renaud Detry, Giacomo Marani
  17. Benchmarking progress in autonomous driving, Liam Paull*, Andrea Censi, Jacopo Tani, Matthew Walter, Felipe Codevilla, Sahika Genc, Sunil Mallya, Bhairav Mehta
  18. ROMADO: RObotic MAnipulation of Deformable Objects, Miguel Aranda*, Juan Antonio Corrales Ramon, Pablo Gil, Gonzalo Lopez-Nicolas, Helder Araujo, Youcef Mezouar
  19. Perception, learning, and control for autonomous agile vehicles, Giuseppe Loianno*, Davide Scaramuzza, Sertac Karaman
  20. Planetary exploration robots: challenges and opportunities, Hendrik Kolvenbach*, William Reid, Kazuya Yoshida, Richard Volpe
  21. Application-oriented modelling and control of soft robots, Thomas George Thuruthel*, Cosimo Della Santina, Seyedmohammadhadi Sadati, Federico Renda, Cecilia Laschi
  22. State of the art in robotic leg prostheses: where we are and where we want to be, Tommaso Lenzi*, Robert D. Gregg, Elliott Rouse, Joost Geeroms
  23. Workshop on perception, planning and mobility in forestry robotics (WPPMFR 2020), João Filipe Ferreira* and David Portugal
  24. Why robots fail to grasp? – failure ca(u)ses in robot grasping and manipulation, Joao Bimbo*, Dimitrios Kanoulas, Giulia Vezzani, Kensuke Harada
  25. Trends and advances in machine learning and automated reasoning for intelligent robots and systems, Abdelghani Chibani, Craig Schlenoff, Yacine Amirat*, Shiqi Zhang, Jong-Hwan Kim, Ferhat Attal
  26. Learning impedance modulation for physical interaction: insights from humans and advances in robotics, Giuseppe Averta*, Franco Angelini, Meghan Huber, Jongwoo Lee, Manolo Garabini
  27. New horizons of robot learning – from industrial challenges to future capabilities, Kim Daniel Listmann* and Elmar Rueckert
  28. Robots for health and elderly care (RoboHEC), Leon Bodenhagen*, Oskar Palinko, Francois Michaud, Adriana Tapus, Julie Robillard
  29. Wearable SuperLimbs: design, communication, and control, Harry Asada*
  30. Human Movement Understanding for Intelligent Robots and Systems, Emel Demircan*, Taizo Yoshikawa, Philippe Fraisse, Tadej Petric
  31. Construction and architecture robotics, Darwin Lau*, Yunhui Liu, Tobias Bruckmann, Thomas Bock, Stéphane Caro
  32. Mechanisms and design: from inception to realization, Hao Su*, Matei Ciocarlie, Kyu-Jin Cho, Darwin Lau, Claudio Semini, Damiano Zanotto
  33. Bringing constraint-based robot programming to real-world applications, Wilm Decré*, Herman Bruyninckx, Gianni Borghesan, Erwin Aertbelien, Lars Tingelstad, Darwin G. Caldwell, Enrico Mingo, Abderrahmane Kheddar, Pierre Gergondet
  34. Managing deformation: a step towards higher robot autonomy, Jihong Zhu*, Andrea Cherubini, Claire Dune, David Navarro-Alarcon
  35. Social AI for human-robot interaction of human-care service robots, Ho Seok Ahn*, Hyungpil Moon, Minsu Jang, Jongsuk Choi

Robot challenges

Another element of the conference that sounds interesting is the robot challenges. There are three of these and you should be able to watch the competitions in action next week.

  1. Open cloud robot table organization challenge (OCRTOC). This competition focusses on table organisation tasks. Participants will need to organize the objects in the scene according to a target configuration. This competition will be broadcast on 25-27 October.
  2. 8th F1Tenth autonomous Grand Prix @ IROS 2020. This competition will take the form of a virtual race with standardised vehicles and hardware. The qualifying phase is a timed trial. The Grand Prix phase pits virtual competitors against each other on the same track. The race will be broadcast on 27 October.
  3. Robotic grasping and manipulation competition. There are two sections to this competition. In the first the robot has to make five cups of iced Matcha green tea. The second involves disassembly and assembly using a NIST Task Board.

Electronic design tool morphs interactive objects

An MIT team used MorphSensor to design multiple applications, including a pair of glasses that monitor light absorption to protect eye health. Photo courtesy of the researchers.

By Rachel Gordon

We’ve come a long way since the first 3D-printed item came to us by way of an eye wash cup, to now being able to rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids.

While many of these objects can be freely designed and quickly made, the addition of electronics to embed things like sensors, chips, and tags usually requires that you design both separately, making it difficult to create items where the added functions are easily integrated with the form.

Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate an object’s shape and electronic function in one cohesive space, to add existing sensors to early-stage prototypes.

The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.

MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item. 

To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.

“MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.” 

MorphSensor in action 

Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware. 

For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, users would first design or import the 3D face mask model and sensor modules from either MorphSensor’s database or online open-sourced files. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.  

Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit. 

Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.  

To test their system, the team iterated on EarPods for sleep tracking, which only took 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice, by integrating a temperature sensor with the ring geometry. In addition, they manipulated an N95 mask to monitor its substrate contamination, enabling it to alert its user when the mask needs to be replaced.

In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further, where future versions could potentially merge electrical logic of multiple sensor modules together to eliminate redundant components and circuits and save space (or preserve the object form). 

Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium. 

This material is based upon work supported by the National Science Foundation.
