
Robots on the Rise


NEDO, Japan’s New Energy and Industrial Technology Development Organization, is a regular funder of robotic technology, has an office in Silicon Valley, and participates in various regional events to promote its work and future programs. One such event was Robots on the Rise: The Future of Robotics in Japan and the US, held October 16th in Mountain View, CA, and jointly sponsored by the Silicon Valley Forum.

Over 400 people attended the all-day series of panels with well-known speakers and relevant subject matter. Panels covered mobility, agricultural robotics, search and rescue, and the retail and manufacturing revolutions. Henrik Christensen from UC San Diego gave a keynote overview of robotics in Japan and the US. He described the key drivers propelling the robotics industry forward and the digitization of manufacturing: mass customization, unmanned vehicles, the aging society (particularly in Japan), and the continuing need for application-specific integrators.

He was followed by Atsushi Yasuda from METI, Japan’s Ministry of Economy, Trade and Industry (the agency that funds NEDO) who emphasized Japan’s need to focus on technologies that can safely assist their aging population. Manufacturing, agriculture, nursing and medical care, plus disaster relief were points he detailed.

I was the moderator of a panel on The Manufacturing Revolution: Automated Factories with speakers from Yaskawa (Chetan Kapoor), Yamaha (Hiro Sauou), OMRON/Adept (Edwardo De Robbio), GE (Steve Taub) and VEO Robotics (Patrick Sobalvarro). Trends in this arena are being driven by the global movement toward mass customization and the need for flexibility in automation and robotics. For the foreseeable future, that flexibility will come from humans in the loop collaborating with their robot counterparts.

There was also an exhibition with around 25 companies and agencies participating in a pop-up type of trade show. It was noisy, fun and informative.

Best line from the investment panel: “Invest in missionaries, not mercenaries.”

Second best line came from Henrik Christensen regarding measuring the successfulness of home robots by their “time to boredom.”

Most interesting question and answer about the future came from James Kuffner, the CTO of Toyota Research Institute, who said that Toyota asked the Institute what the company should do after self-driving reduces the size of the car industry. Kuffner said that Toyota decided to “pivot to robotics and particularly to assistance robots for health, elder and home care.”

In the panel on unmanned vehicles, the consensus was that mapping, proprietary driving data, regulation and weather were all holdups thwarting fully autonomous (Level 5) vehicles, i.e., those without pedals or a steering wheel. Because of those problems, it was their opinion that only Level 4 would be achieved in the next decade.

NEDO’s 2017 funding totals $1.17 billion and includes $99.1 million for robot technology: seed and mid-term funding for practical robotic solutions. Current projects include infrastructure inspection and maintenance, disaster response robots, elder care robots, and next-generation technologies in industrial and service robots and AI.

Talking Machines: The pace of change and the public view of machine learning, with Peter Donnelly


In episode ten of season three we talk about the rate of change (prompted by Tim Harford), take a listener question about the power of kernels, and talk with Peter Donnelly in his capacity with the Royal Society’s Machine Learning Working Group about the work they’ve done on the public’s views on AI and ML.


Shaping animal, vegetable and mineral

The face of the father of quantum physics, Max Planck, emerges from a flat disk. In each state, the colors show the growth factors of the top (left) and bottom (right) layer, and the thin black lines indicate the direction of growth. The top layer is viewed from the front, and the bottom layer is viewed from the back, to highlight the complexity of the geometries. Credit: Harvard SEAS

By Leah Burrows

Nature has a way of making complex shapes from a set of simple growth rules. The curve of a petal, the swoop of a branch, even the contours of our face are shaped by these processes. What if we could unlock those rules and reverse engineer nature’s ability to grow an infinitely diverse array of shapes?

Scientists from Harvard’s Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have done just that. In a paper published in the Proceedings of the National Academy of Sciences, the team demonstrates a technique to grow any target shape from any starting shape.

“Architect Louis Sullivan once said that ‘form ever follows function’,” said L. Mahadevan, Ph.D., Associate Faculty member at the Wyss Institute and the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology and of Physics and senior author of the study. “But if one took the opposite perspective, that perhaps function should follow form, how can we inverse design form?”

In previous research, the Mahadevan group used experiments and theory to explain how naturally morphing structures — such as Venus flytraps, pine cones and flowers — change their shape in the hopes of one day being able to control and mimic these natural processes. And indeed, experimentalists have begun to harness the power of simple, bioinspired growth patterns. For example, in 2016, in a collaboration with the group of Jennifer Lewis, a Wyss Institute Core Faculty member and the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS, the team printed a range of structures that change their shape over time in response to environmental stimuli.

“The challenge was how to do the inverse problem,” said Wim van Rees, Ph.D., a postdoctoral fellow at SEAS and first author of the paper. “There’s a lot of research on the experimental side but there’s not enough on the theoretical side to explain what’s actually happening. The question is, if I want to end with a specific shape, how do I design my initial structure?”

Inspired by the growth of leaves, the researchers developed a theory for how to pattern the growth orientations and magnitudes of a bilayer: two layers of elastic material glued together that respond differently to the same stimuli. By programming one layer to swell more than the other, and/or in a different direction, the overall shape and curvature of the bilayer can be fully controlled. In principle, the bilayer can be made of any material and in any shape, and can respond to any stimulus, from heat and light to swelling or even biological growth.

The team unraveled the mathematical connection between the behavior of the bilayer and that of a single layer.

“We found a very elegant relationship in a material that consists of these two layers,” said van Rees. “You can take the growth of a bilayer and write its energy directly in terms of a curved monolayer.”

That means that if you know the curvatures of any shape, you can reverse-engineer the energy and growth patterns needed to grow that shape using a bilayer.

“This kind of reverse engineering problem is notoriously difficult to solve, even using days of computation on a supercomputer,” said Etienne Vouga, Ph.D., former postdoctoral fellow in the group and now an Assistant Professor of Computer Science at the University of Texas at Austin. “By elucidating how the physics and geometry of bilayers are intimately coupled, we were able to construct an algorithm that solves for the needed growth pattern in seconds, even on a laptop, no matter how complicated the target shape.”
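To make the inversion concrete, here is a one-dimensional analogue (not the authors’ algorithm, which handles full curved surfaces): for a thin bilayer beam with equal layer thicknesses and stiffnesses, classical bimetallic-strip theory gives a curvature of roughly κ ≈ (3/2)·Δε/h for a small differential growth strain Δε across total thickness h. Inverting this reads the required growth pattern straight off a target curvature profile. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def required_growth_difference(kappa, thickness):
    """Invert the small-strain bilayer bending relation.

    For a bilayer beam with equal layer thicknesses and stiffnesses,
    Timoshenko's bimetallic-strip result reduces to
        kappa ~= 1.5 * delta_eps / thickness,
    so the differential growth strain needed for a target curvature is
        delta_eps = (2/3) * kappa * thickness.
    """
    return (2.0 / 3.0) * np.asarray(kappa) * thickness

# Target: a beam that starts tightly curled and straightens along its length.
x = np.linspace(0.0, 0.1, 50)            # arc-length coordinate (m)
target_kappa = 20.0 * (1.0 - x / 0.1)    # curvature profile (1/m)

h = 1e-3                                  # total bilayer thickness (m)
delta_eps = required_growth_difference(target_kappa, h)

# Split the differential strain between the layers as growth factors.
top_growth = 1.0 + delta_eps / 2.0
bottom_growth = 1.0 - delta_eps / 2.0
print(top_growth[:3], bottom_growth[:3])
```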

A snapdragon flower petal starting from a cylinder. In each state, the colors show the growth factors of the top (left) and bottom (right) layer, and the thin black lines indicate the direction of growth. The top layer is viewed from the front, and the bottom layer is viewed from the back, to highlight the complexity of the geometries. Credit: Harvard SEAS

The researchers demonstrated the system by modeling the growth of a snapdragon flower petal from a cylinder, a topographical map of the Colorado river basin from a flat sheet and, most strikingly, the face of Max Planck, one of the founders of quantum physics, from a disk.

“Overall, our research combines our knowledge of the geometry and physics of slender shells with new mathematical algorithms and computations to create design rules for engineering shape,” said Mahadevan. “It paves the way for manufacturing advances in 4-D printing of shape-shifting optical and mechanical elements, soft robotics as well as tissue engineering.”

The researchers are already collaborating with experimentalists to try out some of these ideas.

This research was funded in part by the Swiss National Science Foundation and the US National Science Foundation.

Could we build a Blade Runner-style ‘replicant’?

Sony Pictures

The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.

The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.

From a scientific point of view, the real challenge is replicating the complexity of the human body. Each one of us is made up of trillions of cells, and we have no idea how to build a machine of such complexity that would be indistinguishable from a human. The most complex machines today, for example the world’s largest airliner, the Airbus A380, are composed of millions of parts. But in order to match the complexity level of humans, we would need to scale this complexity up about a million times.

There are currently three different ways that engineering is making the border between humans and robots more ambiguous. Unfortunately, these approaches are only starting points, and are not yet even close to the world of Blade Runner.

There are human-like robots built from scratch by assembling artificial sensors, motors and computers to resemble the human body and motion. However, extending current human-like robots would not bring Blade Runner-style androids closer to humans, because every artificial component, such as sensors and motors, is still hopelessly primitive compared to its biological counterpart.

There is also cyborg technology, where the human body is enhanced with machines such as robotic limbs, wearable and implantable devices. This technology is similarly very far away from matching our own body parts.

Sony Pictures

Finally, there is the technology of genetic manipulation, where an organism’s genetic code is altered to modify that organism’s body. Although we have been able to identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code. As such, we don’t know the degree to which we can actually programme code to design everything we wish.

Soft robotics: a way forward?

But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies, and in particular by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable and flexible.

This technology is inspired by the fact that 90% of the human body is made from soft substances such as skin, hair and tissues. This is because most of the fundamental functions in our body rely on soft parts that can change shape, from the heart and lungs pumping fluid around our body to the eye lenses generating signals from their movement. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.

The softness of our bodies is the origin of all the functionality needed to stay alive. So being able to build soft machines would at least bring us a step closer to the robotic world of Blade Runner. Some of the recent technological advances include artificial hearts made out of soft functional materials that pump fluid through deformation. Similarly, soft, wearable gloves can help make hand grasping stronger. And “epidermal electronics” has enabled us to tattoo electronic circuits onto our skin.

Softness is the keyword that brings humans and technologies closer together. Sensors, motors and computers can suddenly be integrated into human bodies once they become soft, and the border between us and external devices becomes ambiguous, just as soft contact lenses became part of our eyes.

Nevertheless, the hardest challenge is how to make individual parts of a soft robot body physically adaptable by self-healing, growing and differentiating. After all, in biological systems every part of a living organism is itself alive, which is what makes our bodies so adaptable and evolvable; replicating that capability could make machines truly indistinguishable from ourselves.

It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does it will probably be very far in the future. But as long as the desire to build machines indistinguishable from humans is there, the current trends of the robotic revolution could make it possible to achieve that dream.

Fumiya Iida, Lecturer in mechatronics, University of Cambridge

This article was originally published on The Conversation. Read the original article.

What is Catapult Launching of Drones

Catapult launching is one method for getting fixed-wing drones airborne, since airplanes need an initial airspeed in order to fly. Catapults are used to throw airplanes into the air easily and quickly where there may not be enough distance for a takeoff run, or where the drone has no landing gear for a ground roll (which saves weight and control systems). Catapult-launched airplanes need additional structural reinforcement to withstand the throwing force of the catapult.
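To get a rough sense of why that reinforcement matters, the constant-acceleration relation v² = 2aL ties launch speed, rail length and g-load together. A minimal sketch; the stall speed, launch margin and rail length are made-up numbers purely for illustration:

```python
G = 9.81  # m/s^2

def catapult_numbers(launch_speed, rail_length):
    """Acceleration and g-load for a constant-acceleration catapult.

    From v^2 = 2 * a * L  =>  a = v^2 / (2 * L).
    """
    accel = launch_speed**2 / (2.0 * rail_length)
    return accel, accel / G

# Example: a drone that stalls at 12 m/s, launched at 1.3x stall speed
# from a 2.5 m rail (illustrative numbers only).
a, g_load = catapult_numbers(1.3 * 12.0, 2.5)
print(f"acceleration: {a:.0f} m/s^2 (~{g_load:.1f} g)")
```

Even this modest setup subjects the airframe to roughly five times the force of gravity during the throw, which hand-launched or runway aircraft never see.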


The post What is Catapult Launching of Drones appeared first on Roboticmagazine.

Robohub Podcast

I am happy to announce that Robots Podcast will be renamed to “Robohub Podcast”.

This name change is to avoid confusion about how the podcast and Robohub relate, a question we frequently get. The answer is that they are part of the same effort to connect the global robotics community to the world — and they were founded by many of the same people.

The podcast began in 2006 as “Talking Robots” and was launched by Dr. Dario Floreano at EPFL in Switzerland and his PhD students. Several of those PhD students then went on to launch the “Robots Podcast”, which will celebrate its 250th episode at the end of this year (make sure to check the whole playlist)! Robohub came a few years later as an effort to bring together all the communicators in robotics under one umbrella to provide free, high-quality information about robotics. Robohub has supported the podcast over the years by advising us, making connections for interviews, and sponsoring us to attend conferences.

Going forward, I am happy that our new name will show our close relationship to Robohub, and I look forward to many more interviews.

 

Happy listening!

Audrow Nash

Podcast Director, Robohub

Robohub Podcast #245: High-Performance Autonomous Vehicles, with Chris Gerdes



In this episode, Audrow Nash interviews Chris Gerdes, Professor of Mechanical Engineering at Stanford University, about designing high-performance autonomous vehicles. The idea is to make vehicles safer; as Gerdes says, he wants to “develop vehicles that could avoid any accident that can be avoided within the laws of physics.”

In this interview, Gerdes discusses developing a model for high-performance control of a vehicle; their autonomous race car, an Audi TTS named ‘Shelley,’ and how its autonomous performance compares to amateur and professional race car drivers; and an autonomous, drifting DeLorean named ‘MARTY.’
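That “laws of physics” framing has concrete arithmetic behind it: on any given surface, tire friction caps deceleration at roughly μg, which bounds the best possible stopping distance. A quick sketch using textbook ballpark friction coefficients (illustrative values, not Stanford’s data):

```python
G = 9.81  # m/s^2

def min_stopping_distance(speed_mps, mu):
    """Friction-limited stopping distance: d = v^2 / (2 * mu * g)."""
    return speed_mps**2 / (2.0 * mu * G)

# Ballpark friction coefficients (illustrative textbook values):
for surface, mu in [("dry asphalt", 0.9), ("wet asphalt", 0.5), ("ice", 0.1)]:
    d = min_stopping_distance(30.0, mu)  # 30 m/s is about 108 km/h
    print(f"{surface:12s}: {d:6.1f} m")
```

No controller, human or machine, can beat these bounds; the research question is how close an autonomous vehicle can get to them.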

Chris Gerdes

Chris Gerdes is a Professor of Mechanical Engineering at Stanford University, Director of the Center for Automotive Research at Stanford (CARS) and Director of the Revs Program at Stanford. His laboratory studies how cars move, how humans drive cars and how to design future cars that work cooperatively with the driver or drive themselves. When not teaching on campus, he can often be found at the racetrack with students, instrumenting historic race cars or trying out their latest prototypes for the future. Vehicles in the lab include X1, an entirely student-built test vehicle, and Shelley, an Audi TT-S capable of turning a competitive lap time around the track without a human driver. Professor Gerdes and his team have been recognized with a number of awards including the Presidential Early Career Award for Scientists and Engineers, the Ralph Teetor award from SAE International and the Rudolf Kalman Award from the American Society of Mechanical Engineers.


Udacity Robotics video series: Interview with Felipe Chavez from Kiwi


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Felipe Chavez, Co-Founder and CEO of Kiwi. Kiwi is a mobile robot company delivering food to hungry college students across the University of California, Berkeley campus. Listen to Felipe explain some of the challenges Kiwi faces when deploying its robots.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

5G fast and ultra-low latency robot control demonstrated

SoftBank and Huawei jointly demonstrated various use cases for their forthcoming 5G network. 5G commercial services, which will provide ultra-high throughput of over 800 Mbps with ultra-low latency transmission of less than 2ms, will begin rolling out in 2020 in Japan and Korea, and in 2021-2023 in China, Europe and the U.S.

5G will (we hope) be able to handle the massive growth of IoT devices and their streaming data. With 5G technology, getting and staying connected will get easier. You’ll still need a robust network provider but your devices will learn to do things like sync or pair automatically.

When 5G comes online, around 50 billion “things” will be connected and that number will be growing exponentially. Think of self-driving cars that have capabilities to communicate with traffic lights, smart city sensor systems, savvy home appliances, industrial automation systems, connected health innovations, personal drones, robots and more.

“5G will make the internet of things more effective, more efficient from a spectral efficiency standpoint,” said an Intel spokesperson. “Each IOT device and network will use exactly and only what it needs and when it needs it, as opposed to just what’s available.”

In the SoftBank and Huawei robot demonstration, a robotic arm played an air hockey game against a human. A camera installed above the air hockey table detected the puck’s position to calculate its trajectory. That data was streamed to the cloud, and the calculated result was then forwarded to the robotic arm control server to control the robotic arm. In the demonstration, the robotic arm was able to return pucks shot by the human player on various trajectories at competition speed, i.e., with no noticeable latency from camera to cloud to controller to robot arm.
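To see why sub-2 ms air latency matters in a loop like this, it helps to add up a rough end-to-end budget and compare it with the time the puck takes to cross the table. Every number below is an illustrative guess, not a measured figure from the demo:

```python
# Rough end-to-end latency budget for the camera -> cloud -> arm loop.
# All numbers are illustrative assumptions, not measured demo values.
budget_ms = {
    "camera capture/encode": 8.0,
    "5G uplink": 2.0,
    "cloud trajectory computation": 5.0,
    "5G downlink to arm controller": 2.0,
    "arm servo reaction": 10.0,
}

total_s = sum(budget_ms.values()) / 1000.0

puck_speed = 5.0     # m/s, a brisk air-hockey shot
table_length = 2.0   # m

time_to_cross = table_length / puck_speed
stale_distance = puck_speed * total_s  # puck travel during one loop delay

print(f"loop latency: {total_s*1000:.0f} ms of {time_to_cross*1000:.0f} ms crossing time")
print(f"puck moves {stale_distance*100:.1f} cm before the arm can react")
```

With the radio link contributing only a few milliseconds of that budget, the round trip through the cloud stays a small fraction of the puck’s travel time, which is what makes remote control feel instantaneous.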

Other demonstrations by SoftBank and Huawei included real-time ultra-high definition camera data compressed, streamed and then displayed on a UHD monitor; immersive video scenery captured from 180-degree 4-lens cameras, uploaded and then downloaded to smartphones and tablets; remote rendering by a cloud GPU server; and the robot demo. Each demo was oriented to various industries, e.g., tele-health, tele-education, VR, AR, CAD overlays at a remote (construction) site, and the robot example, which can apply to factory automation and vehicle-to-vehicle communication.

Other vendors have also demonstrated 5G use cases. Ericsson and BMW tracked a connected car at 105 mph and Verizon used 5G wireless to livestream the Indianapolis Motor Speedway in VR and hi-res 4k 360° video.

5G is coming!

Batteries for Drones

Batteries provide essential power to the motors, receivers and controllers. For multirotors, the most commonly used batteries are Lithium Polymer (LiPo) types, as their energy density and discharge performance are high. Usually 3-4 cell batteries are used, with capacities of up to around 5000 mAh (milliampere-hours).

To understand what mAh means, consider this example: at the same load, a 3000 mAh battery will last three times longer than a 1000 mAh battery. Think of capacity as distance and current draw as speed: speed x time = distance, so for a fixed capacity (the distance), the higher the current you draw (the speed), the shorter the flight time.

The advantage of LiPo batteries is that they can discharge at a much faster rate than comparable battery types. It is recommended to buy a few sets of batteries so that when the first set is discharged you do not have to wait to fly your drone again; while one battery charges, you can use the other. Some intelligent batteries on newer models have sensors and can weigh the drone’s distance from you against the power needed to return. Safety note: lithium batteries can catch fire, and you must check the requirements of the battery manufacturer for safe usage.
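The “distance at a speed” analogy reduces to one line of arithmetic: flight time ≈ capacity / current draw. A minimal sketch (the 80% usable-capacity derating is a common LiPo-care rule of thumb, not a manufacturer specification):

```python
def estimated_flight_minutes(capacity_mah, avg_current_a, usable_fraction=0.8):
    """Rough endurance estimate: time = capacity / load.

    capacity_mah:    battery capacity in milliampere-hours
    avg_current_a:   average current draw of motors/electronics in amperes
    usable_fraction: LiPos shouldn't be run flat; 80% is a common rule of thumb
    """
    usable_ah = capacity_mah / 1000.0 * usable_fraction
    return usable_ah / avg_current_a * 60.0

# A 5000 mAh pack driving a quad that averages 20 A in hover:
print(f"{estimated_flight_minutes(5000, 20):.1f} minutes")  # -> 12.0
```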


The post Batteries for Drones appeared first on Roboticmagazine.

Teleoperating robots with virtual reality


by Rachel Gordon
Consisting of a headset and hand controllers, CSAIL’s new VR system enables users to teleoperate a robot using an Oculus Rift headset.
Photo: Jason Dorfman/MIT CSAIL

Certain industries have traditionally not had the luxury of telecommuting. Many manufacturing jobs, for example, require a physical presence to operate machinery.

But what if such jobs could be done remotely? Last week researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a virtual reality (VR) system that lets you teleoperate a robot using an Oculus Rift headset.

The system embeds the user in a VR control room with multiple sensor displays, making it feel like they’re inside the robot’s head. By using hand controllers, users can match their movements to the robot’s movements to complete various tasks.

“A system like this could eventually help humans supervise robots from a distance,” says CSAIL postdoc Jeffrey Lipton, who was the lead author on a related paper about the system. “By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.”

The researchers even imagine that such a system could help employ increasing numbers of jobless video-gamers by “gamifying” manufacturing positions.

The team used the Baxter humanoid robot from Rethink Robotics, but said that the system can work on other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with CSAIL Director Daniela Rus and researcher Aidan Fay. They presented the paper at the recent IEEE/RSJ International Conference on Intelligent Robots and Systems in Vancouver.

There have traditionally been two main approaches to using VR for teleoperation.

In a direct model, the user’s vision is directly coupled to the robot’s state. With these systems, a delayed signal could lead to nausea and headaches, and the user’s viewpoint is limited to one perspective.

In a cyber-physical model, the user is separate from the robot. The user interacts with a virtual copy of the robot and the environment. This requires much more data and specialized spaces.

The CSAIL team’s system is halfway between these two methods. It solves the delay problem, since the user is constantly receiving visual feedback from the virtual world. It also solves the cyber-physical issue of being distinct from the robot: once a user puts on the headset and logs into the system, they’ll feel as if they’re inside Baxter’s head.

The system mimics the homunculus model of the mind — the idea that there’s a small human inside our brains controlling our actions, viewing the images we see, and understanding them for us. While it’s a peculiar idea for humans, for robots it fits: Inside the robot is a human in a virtual control room, seeing through its eyes and controlling its actions.

Using Oculus’ controllers, users can interact with controls that appear in the virtual space to open and close the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm.

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
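That double mapping is, in essence, a composition of rigid-body transforms: a controller pose measured in the human’s tracking frame is re-expressed in the virtual room, and then in the robot’s base frame. Here is a minimal sketch with 4x4 homogeneous matrices; the calibration transforms are placeholders of my own, since the paper’s actual frames and values aren’t given here:

```python
import numpy as np

def transform(rotation_z_deg, translation):
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    t = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

# Placeholder calibration transforms (illustrative values only):
T_virtual_from_human = transform(0, [0.0, 0.0, -0.2])   # tracking -> VR room
T_robot_from_virtual = transform(90, [0.5, 0.0, 0.0])   # VR room -> robot base

# A hand-controller pose measured in the human's tracking frame:
T_human_hand = transform(0, [0.3, 0.1, 1.0])

# Compose the mappings to get the pose the robot arm should track:
T_robot_hand = T_robot_from_virtual @ T_virtual_from_human @ T_human_hand
print(np.round(T_robot_hand[:3, 3], 3))
```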

The system is also more flexible compared to previous systems that require many resources. Other systems might extract 2-D information from each camera, build out a full 3-D model of the environment, and then process and redisplay the data. In contrast, the CSAIL team’s approach bypasses all of that by simply taking the 2-D images that are displayed to each eye. (The human brain does the rest by automatically inferring the 3-D information.) 

To test the system, the team first teleoperated Baxter to do simple tasks like picking up screws or stapling wires. They then had the test users teleoperate the robot to pick up and stack blocks.

Users successfully completed the tasks at a much higher rate compared to the direct model. Unsurprisingly, users with gaming experience had much more ease with the system.

Tested against current state-of-the-art systems, CSAIL’s system was better at grasping objects 95 percent of the time and 57 percent faster at doing tasks. The team also showed that the system could pilot the robot from hundreds of miles away; testing included controlling Baxter at MIT from a hotel’s wireless network in Washington.

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team eventually wants to focus on making the system more scalable, with many users and different types of robots that can be compatible with current automation technologies.

The project was funded, in part, by the Boeing Company and the National Science Foundation.

Industrial cleaning equipment maker Nilfisk goes public

Copyright: Nilfisk

Denmark’s Nilfisk Holding A/S began trading on the Nasdaq Copenhagen exchange under the symbol NLFSK after being spun off from NKT A/S, a Danish conglomerate. Nilfisk is one of the world’s leading suppliers of professional cleaning equipment, with a strong brand and a vision for growth in robotics.

Nilfisk expects that 10% of their revenue will come from autonomous machines within the next 5-7 years. In that pursuit, Blue Ocean Robotics and Nilfisk recently announced a strategic partnership to develop a portfolio of intelligent cleaning machines and robots to add to the Nilfisk line of industrial cleaners.

According to Hans Henrik Lund, CEO of Nilfisk,

“We estimate that approximately 70 percent of the cost of professional cleaning goes to labor. At the same time, the cleaning industry is one of the industries with the highest employee turnover. We therefore experience a significant need among our customers to introduce autonomous machines that can solve standardized cleaning tasks so that cleaning operators can be used for other assignments. We have a clear strategy to develop our product portfolio in partnership with highly-specialized technology companies that are the best in their field. We already have good experiences with this, and we are looking forward to starting this partnership with Blue Ocean Robotics, which complements our other partnerships very well.”

Preliminary Q3 2017 financial results report revenue of approx. EUR 253m ($300 million), a gain of approx. 3.4% over Q3 2016. The EBITDA margin was approx. 11.7% for the first nine months of 2017.

Nilfisk competitors include Tennant, Karcher, Vector Technologies, Sumitomo, Discovery Robotics, ICE / Brain Corp, and Taski Intellibot to name just a few.

National Robot Safety Conference 2017

I had the opportunity to attend the National Robot Safety Conference for Industrial Robots today in Pittsburgh, PA (USA). Today was the first day of a three-day conference. While I mostly cover technical content on this site, I felt that this was an important conference to attend, since safety and safety standards are becoming more and more important in robot system design. This conference focused specifically on industrial robots. That means the standards discussed were not directly related to self-driving cars, personal robotics, or space robots (you still don’t want to crash into a Martian and start an intergalactic war).

In this post I will go into a bit of detail on the presentations from the first day. Part of the reason I wanted to attend the first day was to hear the overview and introductory talks that formed a base for the rest of the sessions.

The day started out with some Standards Bingo. Lucky for us, the conference organizers provided a list of standards terms, abbreviations, codes, and titles (see link below). For somebody (like myself) who does not work with industrial robot safety standards every day, it can get confusing very fast when people start rattling off safety standard numbers.

Quick, what is ISO 10218-1:2011 or IEC 60204-1:2016? For those who do not know (me included), those are “Safety requirements for industrial robots — Part 1: Robots” and “Safety of machinery — Electrical equipment of machines — Part 1: General requirements.”

Click here for a post with a guide to relevant safety standards, Abbreviations, Codes & Titles.

The next talk was from Carla Silver at Merck & Company Inc., who presented what safety team members need to remember to be successful: Carla’s Top Five List.

  1. Do not assume you know everything about the safety of a piece of equipment!
  2. Do not assume that the equipment vendor has provided all the information or understands the hazards of the equipment.
  3. Do not assume that the vendor has built and installed the equipment to meet all safety regulations.
  4. Be a “part of the process”: make sure to involve the entire team (including health and safety people).
  5. Commit to continuous education.

I think those 5 items are a good list for life in general.

The prior talk set the stage for why safety can be tricky and the amount of work it takes to stay up to date.

Robot Integrator certification is a program (and a way to make money) from the Robotic Industries Association (RIA) that helps provide people trained to fill the safety role while integrating and designing new robot systems.

According to Bob Doyle, the RIA Director of Communications, RIA certified robot integrators must understand current industry safety standards and undergo an on-site audit in order to get certified. Every two years they need to recertify, and part of the recertification is having an RIA auditor perform a site visit. When recertifying, integrators are expected to know the current standards. I was happy to hear about the two-year recertification, given how much robotics technology changes over two years.

A bit unrelated, but A3 is the umbrella association for the Robotic Industries Association (RIA) as well as Advancing Vision & Imaging (AIA) and the Motion Control & Motor Association (MCMA). Bob mentioned that the AIA and MCMA certifications are standalone from the RIA Certified Integrator program; however, both are growing as a way to train industrial engineers for those applications. Both the AIA and MCMA certifications are vendor-agnostic for the technology used. There are currently several hundred people with the AIA certification. The MCMA certification was just released earlier this year and has several dozen people certified. Bob said that several companies now require at least one team member on a project to have the above certifications.

The next talk really started to get into the details about robot system integrators and best practices, in particular risk assessments. Risk assessment is a relatively new part of the integration process, but it has a strong focus in the current program. Risk assessments are important due to the number of potential safety hazards and the different types of interactions a user might have with the robot system. The risk assessment helps guide the design as well as how users should interact with the robot. The responsibility to perform this risk assessment lies with the robot integrator, not directly with the manufacturer or end-user.

One thing I heard that surprised me was that many integrators do not share the risk assessment with the end-user, since it is considered proprietary to that integrator. However, one participant said that you can often get them to discuss it in a meeting or over the phone; they just will not hand over the documents.

After a small coffee break we moved on to discussing some of the regulations in detail: in particular R15.06, the industrial robot safety standard; the proposed R15.08 standard for industrial mobile robot safety; and R15.606, the collaborative robot safety standard. Here are a few notes that I took:

Types of Standards

  • A – Basic concepts (e.g., guidance to assess risk)
  • B – Generic safety standards (e.g., safety distances, interlocks, etc.)
  • C – Machine-specific standards (e.g., from the vendor for a particular robot)

Type C standards overrule type A & B standards.

Parts of a Standard

  • Normative – These are required and often use the language of “shall”.
  • Informative – These are recommendations or advice and use the language of “should” or “can”. Notes in standards are considered informative.

Key Terms for Safety Standards

  • Industrial Robot – A robot manipulator of at least 3 DOF and its controller.
  • Robot System – The industrial robot with its end effector, work piece and peripheral equipment (such as a conveyor).
  • Robot Cell – The robot system with its safeguarded space, including the physical barriers.

A case study was presented of a three-robot system in a single cell, and how it was designed to meet safety standards.

R15.06 is all about “keeping people safe by keeping them away from the robot system”. This obviously does not work for mobile robots that move around people and collaborative robots. For that the proposed R15.08 standard for mobile robots and the R15.606 standard for collaborative robots are needed.

R15.08, which is expected to be ratified as a standard in 2019, looks at things like mobile robots, manipulators on mobile robots, and manipulators working while the mobile base is also moving. Among other things, the current standard draft says that if an obstacle is detected, the primary mode is for the robot to stop; however, dynamic replanning will be allowed.

For R15.606 they are trying to get rid of the term collaborative robot (a robot designed for direct interaction with a human) and instead think about systems in regard to their application. For example:

…a robotic application where an operator may interact directly with a robot system without relying on perimeter safeguards for protection in pre-determined, low-risk tasks…


After all the talk about standards we spent a bit of time looking at various case studies that were very illuminating for designing industrial robotic systems, and some of the problems that can occur.

One thing unrelated, but funny since this was a safety conference, was a person sitting near the back of the room who pulled a roll of packing tape out of their backpack to tape over their laptop power cable that ran across the floor.

I hope you found this interesting. This was the 29th annual national robot safety meeting (really, I did not realize we had been using robots in industry for that long). If you want to find out more about safety and how it affects your work and robots make sure to attend next year.


I would like to thank RIA for giving me a media pass to attend this event.

Why engineering schools globally need more creative women


At McMaster University, 40 per cent of assistant professors in engineering are now women and the school is working hard to make the profession more equitable for women.
(Shutterstock)

Engineers are good at solving problems. We make bridges safer, computers faster and engines more efficient. Today, the profession is working on an especially thorny problem: gender equity in higher education.

While other fields of study continue to make significant advances towards gender equity, engineering schools are still struggling to pull their numbers of women students past the 20 per cent threshold.

This week, McMaster University is hosting a conference for more than 150 deans of engineering from schools around the world. One of the major issues we’re discussing at this Global Engineering Deans Council Conference is the gender imbalance that remains a challenge across the field.

We are making progress, but we need a breakthrough.

Cultivating interest in children

Our increasingly automated, mechanized world requires more engineers than ever, and demand for them is expected to grow. And the largest pool of under-utilized talent is right here: the women who would make great engineers, but choose other careers.

Why don’t they choose engineering? Some turn away as early as Grade 6. Research shows that this is the point when many girls simply turn away from math and science, even though they have performed as well as their male classmates until that point.

We must reach kids before this juncture to show them how useful engineering is to everyday life. We need to show them how easy and interesting it is to write computer code and build apps, to help them use technology to build things and solve problems.

Robotics camps and classes can introduce girls to the creative dimensions of engineering at a young age.
(Shutterstock)

Some say women are just not interested in engineering. Once, they said women were not capable of succeeding in engineering. Clearly that was untrue, and so now we are trying to correct the idea that they are not interested in engineering simply because they are women.

A profession of ambiguity and creativity

Could it be the way engineering has portrayed itself? For too long, engineering has presented itself as a field that recruits top brains from the abstract realms of mathematics and science and shapes them into problem-solvers.

Engineering might seem more attractive to everyone, women and men, if instead it presented itself as a profession of creative, helpful problem-solvers who use math and science as some of their tools.

Engineers don’t solve only cut-and-dried problems. They also solve ambiguous problems, where there is no single solution. Five groups of engineers who tackle the same problem can come up with five different applicable solutions. Hence, it is crucial that we convey that engineering problems are ambiguous and that their solutions demand creativity. Doing so will transmit a more compelling message to women and men alike.

Replacing an antiquated culture

We must also critically examine the culture of engineering. I have learned through numerous conversations with women that the male-centric culture of engineering often puts them off. On average, they also earn less than their male colleagues do.

Despite sincere efforts, a stubborn nub of resistance remains in the broader engineering culture that is antithetical to women’s point of view. It is certainly not universal, but in the corners where it prevails, it is tiresome and antiquated. This old culture is even apparent in the structure of the engineering building where I work. It was designed in the 1950s and bathroom spaces for men outnumber those for women four to one. Does that send a message that old ways are changing?

Women who might think about engineering look at faculty leaders and still see mainly grey-haired men. We are working on that. At McMaster, as a result of deliberate recruitment, 40 per cent of our assistant professors of engineering are now women. As they advance, our senior ranks will move closer to a true balance.

Engineering is still associated for many with male-dominated domains.
(Shutterstock)

At McMaster we are also working to remove the barrier that biology unfairly places in the career paths of women faculty members, by making sure they are not indirectly penalized for taking parental and other life-event leaves. We are ensuring there are resources available so their research continues in their absence, so they do not fall behind because they are having children, and so they can step directly back into their teaching and research careers after their parental leaves.

Harnessing diverse viewpoints

This is not only about fairness, though. Engineering needs women for another simpler, larger reason: Because solving problems needs creativity. And creativity demands a diversity of viewpoints.

Without input from women, engineers would have access to only half the total pool of creativity, constraining their ability to solve problems and limiting the applicability of the solutions they do reach.

Only when the body of engineers truly reflects the society it serves — in terms of age, ethnicity, religion, physical ability, sexuality and gender — can it most effectively serve the needs of that society. Only then will it understand all the communities it is serving, harness the widest variety of viewpoints and generate prosperity for all.

Ishwar K. Puri, Dean of Engineering, McMaster University

This article was originally published on The Conversation. Read the original article.

Robocar-only highways are not quite so nice an idea as expected

Recently Madrona Ventures, in partnership with Craig Mundie (former Microsoft CTO), released a white paper proposing an autonomous vehicle corridor between Seattle and Vancouver on I-5 and BC Highway 99. While there are some useful ideas in it, the basic concept contains some misconceptions about traffic management, infrastructure planning, and robocars.

Carpool lanes are hard

The proposal starts with a call for allowing robocars in the carpool lanes, and then moving to having a robocar only lane. Eventually it moves to more lanes being robocar only, and finally the whole highway. Generally I have (mostly) avoided too much talk of the all-robocar road because there are so many barriers to this that it remains very far in the future. This proposal wants to make it happen sooner, which is not necessarily bad, but it sure is difficult.

Carpool lanes are poorly understood, even by some transportation planners. For optimum traffic flow, you want to keep every lane at near capacity, but not over it. If you have a carpool lane at half-capacity, you have a serious waste of resources, because the vast majority (around 90%) of the carpools are “natural carpools” that would exist regardless of the lane perk. They are things like couples or parents with children. A half-empty carpool lane makes traffic worse for everybody but the carpoolers, for whom the trip does improve.

That’s why carpool lanes will often let in electric cars, and why “high occupancy toll” lanes let in solo drivers willing to pay a price. In particular with the HOT lane, you can set the price so you get just enough cars in the carpool lane to make it efficient, but no more.
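Operationally, HOT-lane pricing amounts to a feedback loop: measure lane utilization, compare it with the target, and nudge the toll. A minimal proportional-controller sketch; the gain, target and toll bounds are invented for illustration:

```python
def update_toll(toll, occupancy, target=0.9, gain=4.0,
                toll_min=0.25, toll_max=15.0):
    """Nudge the HOT-lane toll toward the occupancy target.

    occupancy: current lane utilization as a fraction of capacity
    gain:      dollars of toll change per unit of occupancy error
    """
    toll += gain * (occupancy - target)
    return max(toll_min, min(toll_max, toll))

toll = 2.0
for occ in [0.95, 1.00, 0.92, 0.85, 0.80]:  # measured every few minutes
    toll = update_toll(toll, occ)
    print(f"occupancy {occ:.2f} -> toll ${toll:.2f}")
```

Real deployments use more sophisticated dynamic pricing, but the principle is the same: the toll floats until the lane runs near capacity and no further.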

(It is not, of course, this simple, as sometimes carpool lanes jam up because people are scared of driving next to slow moving regular lanes, and merging is problematic. Putting a barrier in helps sometimes but can also hurt. An all-robocar lane would avoid these problems, and that is a big plus.)

Letting robocars into the carpool lane can be a good idea, if you have room. If you have to push electric cars out, that may not be the best public goal, but it is a decision a highway authority could make. (If the robocars are electric, which many will be, it’s OK.)

The transition, however, from “robocars allowed” to “robocars only” for the lane is very difficult. Because you do indeed have a decent number of carpools (even if only 10% are induced) you have to kick them out at some point to grow robocar capacity. You can’t have a switch day without causing more traffic congestion for some time after it. If you are willing to build a whole new lane (as is normal for carpool creation) you can do it, but only by wasting a lot of the new lane at first.

Robocar packing

Many are attracted to the idea that robocars can follow more closely behind another vehicle if they have faster reaction times. They also have the dream that the cars will be talking to one another, so they can form platoons that follow even more closely. The inter-car communication (V2V) creates too much computer security risk to be likely, though some still dream of a magic solution which will make it safe to have 1500kg robots exchanging complex messages with every car they randomly encounter on the road. Slightly closer following is still possible without it.

Platooning has a number of issues. It was at first popular as an idea because the lead car could be human driven. You didn’t have to solve the whole driving problem to make a platoon. Later experiments showed a number of problems, however.

  • If not in a fully dedicated lane, other drivers keep trying to fit themselves into the gaps in a platoon, unless they are super-close
  • When cars are close, they throw up stones from the road, constantly cracking windshields, destroying a car’s finish, and in some experiments, destroying the radiator!
  • Any failure can be catastrophic, since multiple cars will be unable to avoid being in the accident.
  • Fuel savings of workable following distances are around 10%. Nice, but not exciting.

To have platoons, you need cars designed with stone-shields or some other technique to stop stones from being thrown. You need a more secure (perhaps optical rather than radio) protocol for communication of only the simplest information, such as when brakes are being hit. And you must reach a safety level where the prospect of chain accidents is no longer frightening.

In any event, the benefits of packing are not binary. Rather, in a lane that is 90% robocars and 10% human, you get 90% of the benefit of a 100% robocar lane. There is no magic special benefit you get at 100% as far as packing is concerned. This is even true to some degree with the problems of erratic human drivers. Humans will brake for no good reason, and this causes traffic jams. Research shows that just a small fraction of robocars on the road who will react properly enough to this are enough to stop this from causing major traffic jams. There is actually a diminishing return from having more robocars. Traffic flow does need some gaps in it to absorb braking events, and while you could get away with fewer in an all robocar road, I am not sure that is wise. As long as you have a modest buffer, robocars trailing a human who brakes for no reason can absorb it and restore the flow as soon as the human speeds up again.
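The “no magic at 100%” point falls out of a simple headway model: lane flow is speed divided by average front-to-front spacing, and the average spacing shifts smoothly with the robocar fraction. The headway values below are illustrative assumptions, not measurements:

```python
def lane_capacity(speed_mps, robocar_fraction,
                  human_headway_s=1.5, robocar_headway_s=0.7,
                  car_length_m=4.5):
    """Vehicles per hour for a lane mixing human and robocar headways.

    Average spacing interpolates linearly with the robocar fraction,
    so the capacity gain accrues smoothly -- no jump at 100%.
    """
    headway = (robocar_fraction * robocar_headway_s
               + (1.0 - robocar_fraction) * human_headway_s)
    spacing = speed_mps * headway + car_length_m  # front-to-front (m)
    return 3600.0 * speed_mps / spacing

for f in [0.0, 0.5, 0.9, 1.0]:
    print(f"{f:.0%} robocars: {lane_capacity(30.0, f):,.0f} veh/h")
```

Running this shows a lane at 90% robocars already captures most of the gap between an all-human and an all-robocar lane, which is exactly the argument against holding out for exclusivity.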

Going faster

There is a big benefit to all-robocar lanes if you are willing to allow the cars in that lane to drive much faster. That’s something that can’t happen in a mixed lane. The white paper makes only one brief mention of that benefit.

Other than this, the cars don’t get any great benefit from grouping. I mean, anybody would prefer to drive with robocars, which should drive more safely and more regularly. They won’t block the lane the way human drivers do. They will tailgate you (perhaps uncomfortably so) but they will only do so when it’s safe. They could cluster together to enjoy this benefit on their own, without any need for regulations.

The danger of robocar-only lanes

One of the biggest reasons to be wary of robocar-only lanes is that, while this proposal does not say it, most proposals have been put forward in the belief that robocars are not safe enough to mix with regular traffic. That is true today for the prototypes, but all teams plan to make vehicles which do meet that safety goal before they ship.

Many dedicated lane proposals have essentially called for robocar operation only in the dedicated lanes, and manual driving is required in other lanes. If you declare that the vehicles are not safe without a special lane, you turn them into vehicles with a very limited domain of operation. Since the creation of new dedicated lanes will be a very long (decades long) process, it’s an incredible damper on the deployment of the technology. “Keep those things in their own special lanes” means delay those things by decades.

The white paper does not advocate this. But there is a danger that the concept will be co-opted by those who do. As long as the benefits are minor, why take that risk?

Do we need it?

In general, any plan that calls for infrastructure change or political change is risky because of the time scales involved. It is quite common for governmental authorities to draft plans that take many years or decades to solve things software teams will solve in months or even, at the basic level, in hours. We want to be always sure that there is not a software solution before we start the long and high-momentum path of infrastructure change, even change as simple as repainting.

Most of the benefits that come from all-robocar highway lanes arrive without mandating it. The ability for greater speed is the main one that doesn’t. All this happens everywhere, without planning or political difficulty. Banning human drivers from lanes is going to be politically difficult. Banning them from the main artery would be even harder.

For great speed, I actually think that airplanes and potentially the hyperloop provide interesting answers, at least for trips of more than 150 miles. The white paper makes a very common poor assumption — that other technologies will stand still as we move to 2040. I know this is not true. I have big hopes for better aviation, including electric planes, robotic planes and most of all, better airports that create a seamless transfer from robocar to aircraft entirely unlike the nightmare we have built today.

On the ground, while I am not a fan of existing rail technology, new technologies like hyperloop are just starting to show some promise. If it can be built, hyperloop will be faster and more energy efficient, and through the use of smaller pods rather than long trains, offer travel without a schedule.

On the plus side, a plan for robocar only lanes is not a grand one. If you can sell it politically, you don’t need to build much infrastructure. It’s just some signs and new paint.

Some other users for all-robocar lanes

Once density is high enough, I think all-robocar lanes could be useful as barriers on a highway with dynamic lane assignment. To do this, you would just have a big wide stretch of pavement, and depending on traffic demand, allocate lanes to a direction. The problem is the interface lane. We may not want human drivers to drive at 75mph with other cars going the other way just 4 feet away. Robocars, however, could drive exclusively in the two border lanes, and do it safely. They would also drive a little off-center to create a larger buffer to avoid the wind-shake of passing close. No trucks in these lanes!

In an ideal situation, you would get a lot more capacity by paving over the shoulders and median to do this. With no median, though, you still have a risk of runaway cars (even robocars) crossing into oncoming traffic. A simpler solution would be to do this on existing highways. If you have a 6 lane highway, you could allocate 4 lanes one way and 2 the other, but insist that the two border lanes be robocars only, if we trust them. A breakdown by a robocar going in the counter-direction at high speed could still be an issue. Of course, this is how undivided highways are, but they have lower speeds and traffic flow.
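The allocation logic itself is simple to sketch: split a fixed pool of lanes by directional demand and reserve the two lanes straddling the interface for robocars. Everything here is illustrative:

```python
def allocate_lanes(total_lanes, demand_ab, demand_ba):
    """Split lanes by directional demand; border lanes are robocar-only.

    Returns (lanes_ab, lanes_ba, border_lane_indices). With zero-indexed
    lanes, direction A->B gets lanes 0..lanes_ab-1, so the two lanes
    straddling the direction change are lanes_ab-1 and lanes_ab; those
    would be restricted to robocars.
    """
    share = demand_ab / (demand_ab + demand_ba)
    lanes_ab = min(total_lanes - 1, max(1, round(total_lanes * share)))
    lanes_ba = total_lanes - lanes_ab
    border = (lanes_ab - 1, lanes_ab)  # robocar-only interface lanes
    return lanes_ab, lanes_ba, border

# Morning rush: 2:1 demand toward the city on a 6-lane roadway.
print(allocate_lanes(6, demand_ab=4000, demand_ba=2000))  # -> (4, 2, (3, 4))
```

Reversing the split for the evening rush is just another call with the demands swapped; the hard part is trusting the border-lane vehicles, not the arithmetic.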
