Archive 12.08.2023


3D display could soon bring touch to the digital world


Researchers at the Max Planck Institute for Intelligent Systems and the University of Colorado Boulder have developed a soft shape display, a robot that can rapidly and precisely change its surface geometry to interact with objects and liquids, react to human touch, and display letters and numbers – all at the same time. The display demonstrates a range of high-performance applications and could appear in the future on the factory floor, in medical laboratories, or in your own home.

Imagine an iPad that’s more than just an iPad—with a surface that can morph and deform, allowing you to draw 3D designs, create haiku that jump out from the screen and even hold your partner’s hand from an ocean away.

That’s the vision of a team of engineers from the University of Colorado Boulder (CU Boulder) and the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart, Germany. In a new study published in Nature Communications, they’ve created a one-of-a-kind shape-shifting display that fits on a card table. The device is made from a 10-by-10 grid of soft robotic “muscles” that can sense outside pressure and pop up to create patterns. It’s precise enough to generate scrolling text and fast enough to shake a chemistry beaker filled with fluid.

It may also deliver something even rarer: the sense of touch in a digital age.

“As technology has progressed, we started with sending text over long distances, then audio and later video,” said Brian Johnson, one of two lead authors of the new study who earned his doctorate in mechanical engineering at CU Boulder in 2022 and is now a postdoctoral researcher at the Max Planck Institute for Intelligent Systems. “But we’re still missing touch.”

The innovation builds off a class of soft robots pioneered by a team led by Christoph Keplinger, formerly an assistant professor of mechanical engineering at CU Boulder and now a director at MPI-IS. They’re called Hydraulically Amplified Self-Healing ELectrostatic (HASEL) actuators. The prototype display isn’t ready for the market yet. But the researchers envision that, one day, similar technologies could lead to sensory gloves for virtual gaming or a smart conveyor belt that can undulate to sort apples from bananas.

“You could imagine arranging these sensing and actuating cells into any number of different shapes and combinations,” said Mantas Naris, co-lead author of the paper and a doctoral student in the Paul M. Rady Department of Mechanical Engineering. “There’s really no limit to what these technologies could, ultimately, lead to.”

Playing the accordion

The project has its origins in the search for a different kind of technology: synthetic organs.

In 2017, researchers led by Mark Rentschler, professor of mechanical engineering and biomedical engineering, secured funding from the National Science Foundation to develop what they call sTISSUE—squishy organs that behave and feel like real human body parts but are made entirely out of plastic-like materials.

“You could use these artificial organs to help develop medical devices or surgical robotic tools for much less cost than using real animal tissue,” said Rentschler, a co-author of the new study.

In developing that technology, however, the team landed on the idea of a tabletop display.

The group’s design is about the size of a Scrabble game board and, like one of those boards, is composed of small squares arranged in a grid. In this case, each one of the 100 squares is an individual HASEL actuator. The actuators are made of plastic pouches shaped like tiny accordions. If you pass an electric current through them, fluid shifts around inside the pouches, causing the accordion to expand and jump up.

The actuators also include soft, magnetic sensors that can detect when you poke them. That allows for some fun activities, said Johnson.

“Because the sensors are magnet-based, we can use a magnetic wand to draw on the surface of the display,” he said.

Hear that?

Other research teams have developed similar smart tablets, but the CU Boulder display is softer, takes up a lot less room and is much faster. Each of its robotic muscles can move up to 3000 times per minute.

The researchers are focusing now on shrinking the actuators to increase the resolution of the display—almost like adding more pixels to a computer screen.

“Imagine if you could load an article onto your phone, and it renders as Braille on your screen,” Naris said.

The group is also working to flip the display inside out. That way, engineers could design a glove that pokes your fingertips, allowing you to “feel” objects in virtual reality.

And, Rentschler said, the display can bring something else: a little peace and quiet. “Our system is, essentially, silent. The actuators make almost no noise.”

Other CU Boulder co-authors of the new study include Nikolaus Correll, associate professor in the Department of Computer Science; Sean Humbert, professor of mechanical engineering; mechanical engineering graduate students Vani Sundaram, Angella Volchko and Khoi Ly; and alumni Shane Mitchell, Eric Acome and Nick Kellaris. Christoph Keplinger also served as a co-author in both of his roles at CU Boulder and MPI-IS.

A novel motion-capture system with a robotic marker that could enhance human-robot interactions

Motion capture (mocap) systems, technologies that can detect and record the movements of humans, animals and objects, are widely used in various settings. For instance, they have been used to shoot movies, to create animations with realistic lip and body movements, in interactive videogame consoles, or even to control robots.

Can charismatic robots help teams be more creative?


By Angharad Brewer Gillham, Frontiers science writer

Increasingly, social robots are being used for support in educational contexts. But does the sound of a social robot’s voice affect how well it performs, especially when dealing with teams of humans? Teamwork is a key factor in human creativity, boosting collaboration and new ideas. Danish scientists set out to understand whether robots using a voice designed to sound charismatic would be more successful as team creativity facilitators.

“We had a robot instruct teams of students in a creativity task. The robot either used a confident, passionate — ie charismatic — tone of voice or a normal, matter-of-fact tone of voice,” said Dr Kerstin Fischer of the University of Southern Denmark, corresponding author of the study in Frontiers in Communication. “We found that when the robot spoke in a charismatic speaking style, students’ ideas were more original and more elaborate.”

Can a robot be charismatic?

We know that social robots acting as facilitators can boost creativity, and that the success of facilitators is at least partly dependent on charisma: people respond to charismatic speech by becoming more confident and engaged. Fischer and her colleagues aimed to see if this effect could be reproduced with the voices of social robots by using a text-to-speech function engineered for characteristics associated with charismatic speaking, such as a specific pitch range and way of stressing words. Two voices were developed, one charismatic and one less expressive, based on a range of parameters which correlate with perceived speaker charisma.

The scientists recruited five classes of university students, all taking courses which included an element of team creativity. The students were told that they were testing a creativity workshop, which involved brainstorming ideas based on images and then using those ideas to come up with a new chocolate product. The workshop was led by videos of a robot speaking: introducing the task, reassuring the teams of students that there were no bad ideas, and then congratulating them for completing the task and asking them to fill out a self-evaluation questionnaire. The questionnaire evaluated the robot’s performance, the students’ own views on how their teamwork went, and the success of the session. The creativity of each session, as measured by the number of original ideas produced and how elaborate they were, was also measured by the researchers.

Powering creativity with charisma

The group that heard the charismatic voice rated the robot more positively, finding it more charismatic and interactive. They also rated their teamwork more highly and produced more original and elaborate ideas. However, the group that heard the non-charismatic voice perceived themselves as more resilient and efficient, possibly because a less charismatic leader prompted better self-organization by the team members, even though they produced fewer ideas.

“I had suspected that charismatic speech has very important effects, but our study provides clear evidence for the effect of charismatic speech on listener creativity,” said Dr Oliver Niebuhr of the University of Southern Denmark, co-author of the study. “This is the first time that such a link between charismatic voices, artificial speakers, and creativity outputs has been found.”

The scientists pointed out that although the sessions with the charismatic voice were generally more successful, not all the teams responded identically to the different voices: previous experiences in their different classes may have affected their response. Larger studies will be needed to understand how these external factors affected team performance.

“The robot was present only in videos, but one could suspect that more exposure or repeated exposure to the charismatic speaking style would have even stronger effects,” said Fischer. “Moreover, we have only varied a few features between the two robot conditions. We don’t know how the effect size would change if other or more features were varied. Finally, since charismatic speaking patterns differ between cultures, we would expect that the same stimuli will not yield the same results in all languages and cultures.”

Researchers introduce a robotic system to manage weeds and monitor crops

Over the past decade, robotic systems have revolutionized numerous sectors, including the agricultural and farming sector. Many tasks that were traditionally performed manually can now be potentially automated, boosting efficiency and reducing the workload of farmers and other agricultural workers.

Visual SLAM and 3D Perception for Mobile Robots

RGo's intelligent vision and AI system, Perception Engine™, provides mobile robots with 3D perception capabilities, enabling them to understand complex surroundings and operate autonomously just like humans. Its camera-based system can localize, map, and perceive even in the most challenging environments, including indoor/outdoor and dynamic or unstructured environments.

An updated guide to Docker and ROS 2

Two years ago, I wrote A Guide to Docker and ROS, which is one of my most frequently viewed posts — likely because it is a tricky topic and people were seeking answers. Since then, I’ve had the chance to use Docker more in my work and have picked up some new tricks. This update was long overdue, but I’ve finally collected my learnings in this post.

Recently, I encountered an article titled ROS Docker; 6 reasons why they are not a good fit, and I largely agree with it. However, the reality is that it’s still quite difficult to ensure a reproducible ROS environment for people who haven’t spent years fighting the ROS learning curve and aren’t adept at debugging dependency and/or build errors… so Docker is still very much a crutch that we fall back on to get working demos (and sometimes products!) out the door.

If the article above hasn’t completely discouraged you from embarking on this Docker adventure, please enjoy reading.

Revisiting Our Dockerfile with ROS 2

Now that ROS 1 is on its final version and approaching end of life in 2025, I thought it would be appropriate to rehash the TurtleBot3 example repo from the previous post using ROS 2.

Most of the big changes in this upgrade have to do with ROS 2, including client libraries, launch files, and configuring DDS. The examples themselves have been updated to use the latest tools for behavior trees: BehaviorTree.CPP 4 / Groot 2 for C++ and py_trees / py_trees_ros_viewer for Python. For more information on the example and/or behavior trees, refer to my Introduction to Behavior Trees post.

From a Docker standpoint, there aren’t too many differences. Our container layout will now be as follows:

Layers of our TurtleBot3 example Docker image.

We’ll start by making our Dockerfile, which defines the contents of our image. Our initial base layer inherits from one of the public ROS images, osrf/ros:humble-desktop, and sets up the dependencies from our example repository into an underlay workspace. These are defined using a vcstool repos file.
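For reference, a vcstool repos file is just a YAML listing of repositories to clone into the src folder of the workspace. A minimal sketch might look like the following (the repository names, URLs, and branches shown here are illustrative assumptions, not necessarily the exact contents of the example's dependencies.repos):

repositories:
  turtlebot3:
    type: git
    url: https://github.com/ROBOTIS-GIT/turtlebot3.git
    version: humble-devel
  turtlebot3_simulations:
    type: git
    url: https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
    version: humble-devel

Running vcs import < dependencies.repos from the src folder then clones each listed repository at the specified branch.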

Notice that we’ve set up the argument, ARG ROS_DISTRO=humble, so it can be changed for other distributions of ROS 2 (Iron, Rolling, etc.). Rather than creating multiple Dockerfiles for different configurations, you should try using build arguments like these as much as possible without being “overly clever” in a way that impacts readability.

ARG ROS_DISTRO=humble

########################################
# Base Image for TurtleBot3 Simulation #
########################################
FROM osrf/ros:${ROS_DISTRO}-desktop as base
ENV ROS_DISTRO=${ROS_DISTRO}
SHELL ["/bin/bash", "-c"]

# Create Colcon workspace with external dependencies
RUN mkdir -p /turtlebot3_ws/src
WORKDIR /turtlebot3_ws/src
COPY dependencies.repos .
RUN vcs import < dependencies.repos

# Build the base Colcon workspace, installing dependencies first.
WORKDIR /turtlebot3_ws
RUN source /opt/ros/${ROS_DISTRO}/setup.bash \
&& apt-get update -y \
&& rosdep install --from-paths src --ignore-src --rosdistro ${ROS_DISTRO} -y \
&& colcon build --symlink-install

ENV TURTLEBOT3_MODEL=waffle_pi

To build your image with a specific argument — let’s say you want to use ROS 2 Rolling instead — you could do the following… provided that all your references to ${ROS_DISTRO} actually have something that correctly resolves to the rolling distribution.

docker build -f docker/Dockerfile \
--build-arg="ROS_DISTRO=rolling" \
--target base -t turtlebot3_behavior:base .

I personally have had many issues in ROS 2 Humble and later with the default DDS vendor (FastDDS), so I like to switch my default implementation to Cyclone DDS by installing it and setting an environment variable to ensure it is always used.

# Use Cyclone DDS as middleware
RUN apt-get update && apt-get install -y --no-install-recommends \
ros-${ROS_DISTRO}-rmw-cyclonedds-cpp
ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

Now, we will create our overlay layer. Here, we will copy over the example source code, install any missing dependencies with rosdep install, and set up an entrypoint to run every time a container is launched.

###########################################
# Overlay Image for TurtleBot3 Simulation #
###########################################
FROM base AS overlay

# Create an overlay Colcon workspace
RUN mkdir -p /overlay_ws/src
WORKDIR /overlay_ws
COPY ./tb3_autonomy/ ./src/tb3_autonomy/
COPY ./tb3_worlds/ ./src/tb3_worlds/
RUN source /turtlebot3_ws/install/setup.bash \
&& rosdep install --from-paths src --ignore-src --rosdistro ${ROS_DISTRO} -y \
&& colcon build --symlink-install

# Set up the entrypoint
COPY ./docker/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]

The entrypoint defined above is a Bash script that sources ROS 2 and any workspaces that are built, and sets up environment variables necessary to run our TurtleBot3 examples. You can use entrypoints to do any other types of setup you might find useful for your application.

#!/bin/bash
# Basic entrypoint for ROS / Colcon Docker containers

# Source ROS 2
source /opt/ros/${ROS_DISTRO}/setup.bash

# Source the base workspace, if built
if [ -f /turtlebot3_ws/install/setup.bash ]
then
source /turtlebot3_ws/install/setup.bash
export TURTLEBOT3_MODEL=waffle_pi
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:$(ros2 pkg prefix turtlebot3_gazebo)/share/turtlebot3_gazebo/models
fi

# Source the overlay workspace, if built
if [ -f /overlay_ws/install/setup.bash ]
then
source /overlay_ws/install/setup.bash
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:$(ros2 pkg prefix tb3_worlds)/share/tb3_worlds/models
fi

# Execute the command passed into this entrypoint
exec "$@"

At this point, you should be able to build the full Dockerfile:

docker build \
-f docker/Dockerfile --target overlay \
-t turtlebot3_behavior:overlay .

Then, we can start one of our example launch files with the right settings with this mouthful of a command. Most of these environment variables and volumes are needed to have graphics and ROS 2 networking functioning properly from inside our container.

docker run -it --net=host --ipc=host --privileged \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--volume="${XAUTHORITY}:/root/.Xauthority" \
turtlebot3_behavior:overlay \
bash -c "ros2 launch tb3_worlds tb3_demo_world.launch.py"

Our TurtleBot3 example simulation with RViz (left) and Gazebo classic (right).

Introducing Docker Compose

From the last few snippets, we can see how the docker build and docker run commands can get really long and unwieldy as we add more options. You can wrap this in several abstractions, including scripting languages and Makefiles… but Docker has already solved this problem through Docker Compose.

In brief, Docker Compose allows you to create a YAML file that captures all the configuration needed to set up building images and running containers.

Docker Compose also differentiates itself from the “plain” Docker command in its ability to orchestrate services. This involves building multiple images or targets within the same image(s) and launching several programs at the same time that comprise an entire application. It also lets you extend existing services to minimize copy-pasting of the same settings in multiple places, define variables, and more.

The end goal is that we have short commands to manage our examples:

  • docker compose build will build what we need
  • docker compose up will launch what we need

Docker Compose allows us to more easily build and run our containerized examples.

The default name of this magical YAML file is docker-compose.yaml. For our example, the docker-compose.yaml file looks as follows:

version: "3.9"
services:
  # Base image containing dependencies.
  base:
    image: turtlebot3_behavior:base
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        ROS_DISTRO: humble
      target: base
    # Interactive shell
    stdin_open: true
    tty: true
    # Networking and IPC for ROS 2
    network_mode: host
    ipc: host
    # Needed to display graphical applications
    privileged: true
    environment:
      # Needed to define a TurtleBot3 model type
      - TURTLEBOT3_MODEL=${TURTLEBOT3_MODEL:-waffle_pi}
      # Allows graphical programs in the container.
      - DISPLAY=${DISPLAY}
      - QT_X11_NO_MITSHM=1
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      # Allows graphical programs in the container.
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - ${XAUTHORITY:-$HOME/.Xauthority}:/root/.Xauthority

  # Overlay image containing the example source code.
  overlay:
    extends: base
    image: turtlebot3_behavior:overlay
    build:
      context: .
      dockerfile: docker/Dockerfile
      target: overlay

  # Demo world
  demo-world:
    extends: overlay
    command: ros2 launch tb3_worlds tb3_demo_world.launch.py

  # Behavior demo using Python and py_trees
  demo-behavior-py:
    extends: overlay
    command: >
      ros2 launch tb3_autonomy tb3_demo_behavior_py.launch.py
      tree_type:=${BT_TYPE:?}
      enable_vision:=${ENABLE_VISION:?}
      target_color:=${TARGET_COLOR:?}

  # Behavior demo using C++ and BehaviorTree.CPP
  demo-behavior-cpp:
    extends: overlay
    command: >
      ros2 launch tb3_autonomy tb3_demo_behavior_cpp.launch.py
      tree_type:=${BT_TYPE:?}
      enable_vision:=${ENABLE_VISION:?}
      target_color:=${TARGET_COLOR:?}

As you can see from the Docker Compose file above, you can specify variables using the familiar $ operator from Unix-based systems. These variables will, by default, be read from either your host environment or from an environment file (usually called .env). Our example .env file looks like this:

# TurtleBot3 model
TURTLEBOT3_MODEL=waffle_pi

# Behavior tree type: Can be naive or queue.
BT_TYPE=queue

# Set to true to use vision, else false to only do navigation behaviors.
ENABLE_VISION=true

# Target color for vision: Can be red, green, or blue.
TARGET_COLOR=blue

At this point, you can build everything:

# By default, picks up a `docker-compose.yaml` and `.env` file.
docker compose build

# You can also explicitly specify the files
docker compose --file docker-compose.yaml --env-file .env build

Then, you can run the services you care about:

# Bring up the simulation
docker compose up demo-world

# After the simulation has started,
# launch one of these in a separate Terminal
docker compose up demo-behavior-py
docker compose up demo-behavior-cpp

The full TurtleBot3 demo running with py_trees as the Behavior Tree.

Setting up Developer Containers

Our example so far works great if we want to package up working examples to other users. However, if you want to develop the example code within this environment, you will need to overcome the following obstacles:

  • Every time you modify your code, you will need to rebuild the Docker image. This makes it extremely inefficient to get feedback on whether your changes are working as intended. This is already an instant deal-breaker.
  • You can solve the above by using bind mounts to sync up the code on your host machine with that in the container. This gets us on the right track, but you’ll find that any files generated inside the container and mounted on the host will be owned by root as default. You can get around this by whipping out the sudo and chown hammer, but it’s not necessary.
  • All the tools you may use for development, including debuggers, are likely missing inside the container… unless you install them in the Dockerfile, which can bloat the size of your distribution image.

Luckily, there is a concept of a developer container (or dev container). To put it simply, this is a separate container that lets you actually do your development in the same Docker environment you would use to deploy your application.

There are many ways of implementing dev containers. For our example, we will modify the Dockerfile to add a new dev target that extends our existing overlay target.

Dev containers allow us to develop inside a container from our host system with minimal overhead.

This dev container will do the following:

  • Install additional packages that we may find helpful for development, such as debuggers, text editors, and graphical developer tools. Critically, these will not be part of the overlay layer that we will ship to end users.
  • Create a new user that has the same user and group identifiers as the user that built the container on the host. This will make it such that all files generated within the container (in folders we care about) have the same ownership settings as if we had created the file on our host. By “folders we care about”, we are referring to the ROS workspace that contains the source code.
  • Put our entrypoint script in the user’s Bash profile (~/.bashrc file). This lets us source our ROS environment not just at container startup, but every time we attach a new interactive shell while our dev container remains up.

#####################
# Development Image #
#####################
FROM overlay as dev

# Dev container arguments
ARG USERNAME=devuser
ARG UID=1000
ARG GID=${UID}

# Install extra tools for development
RUN apt-get update && apt-get install -y --no-install-recommends \
gdb gdbserver nano

# Create new user and home directory
RUN groupadd --gid $GID $USERNAME \
&& useradd --uid ${UID} --gid ${GID} --create-home ${USERNAME} \
&& echo ${USERNAME} ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/${USERNAME} \
&& chmod 0440 /etc/sudoers.d/${USERNAME} \
&& mkdir -p /home/${USERNAME} \
&& chown -R ${UID}:${GID} /home/${USERNAME}

# Set the ownership of the overlay workspace to the new user
RUN chown -R ${UID}:${GID} /overlay_ws/

# Set the user and source entrypoint in the user's .bashrc file
USER ${USERNAME}
RUN echo "source /entrypoint.sh" >> /home/${USERNAME}/.bashrc

You can then add a new dev service to the docker-compose.yaml file. Notice that we’re adding the source code as volumes to mount, but we’re also mapping the folders generated by colcon build to a .colcon folder on our host file system. This makes it such that generated build artifacts persist between stopping our dev container and bringing it back up, otherwise we’d have to do a clean rebuild every time.

  dev:
    extends: overlay
    image: turtlebot3_behavior:dev
    build:
      context: .
      dockerfile: docker/Dockerfile
      target: dev
      args:
        - UID=${UID:-1000}
        - GID=${UID:-1000}
        - USERNAME=${USERNAME:-devuser}
    volumes:
      # Mount the source code
      - ./tb3_autonomy:/overlay_ws/src/tb3_autonomy:rw
      - ./tb3_worlds:/overlay_ws/src/tb3_worlds:rw
      # Mount colcon build artifacts for faster rebuilds
      - ./.colcon/build/:/overlay_ws/build/:rw
      - ./.colcon/install/:/overlay_ws/install/:rw
      - ./.colcon/log/:/overlay_ws/log/:rw
    user: ${USERNAME:-devuser}
    command: sleep infinity

At this point you can do:

# Start the dev container
docker compose up dev

# Attach an interactive shell in a separate Terminal
# NOTE: You can do this multiple times!
docker compose exec -it dev bash

Because we have mounted the source code, you can make modifications on your host and rebuild inside the dev container… or you can use handy tools like the Visual Studio Code Containers extension to directly develop inside the container. Up to you.

For example, once you’re inside the container you can build the workspace with:

colcon build

Due to our volume mounts, you’ll see that the contents of the .colcon/build, .colcon/install, and .colcon/log folders on your host have been populated. This means that if you shut down the dev container and bring up a new instance, these files will still exist and will speed up rebuilds using colcon build.

Also, because we have gone through the trouble of making a user, you’ll see that these files are not owned by root, so you can delete them if you’d like to clean out the build artifacts. If you try this without making the new user, you’ll run into some annoying permission roadblocks.

$ ls -al .colcon
total 20
drwxrwxr-x 5 sebastian sebastian 4096 Jul 9 10:15 .
drwxrwxr-x 10 sebastian sebastian 4096 Jul 9 10:15 ..
drwxrwxr-x 4 sebastian sebastian 4096 Jul 9 11:29 build
drwxrwxr-x 4 sebastian sebastian 4096 Jul 9 11:29 install
drwxrwxr-x 5 sebastian sebastian 4096 Jul 9 11:31 log

The concept of dev containers is so widespread at this point that a standard has emerged at containers.dev. I also want to point out some other great resources including Allison Thackston’s blog, Griswald Brooks’ GitHub repo, and the official VSCode dev containers tutorial.
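If you go the VS Code route mentioned earlier, a minimal .devcontainer/devcontainer.json that attaches the editor to the Compose dev service from this post might look like the following sketch (the relative path to docker-compose.yaml and the workspace folder are assumptions you would adapt to your own repository layout):

{
  "name": "turtlebot3_behavior_dev",
  "dockerComposeFile": "../docker-compose.yaml",
  "service": "dev",
  "workspaceFolder": "/overlay_ws",
  "remoteUser": "devuser"
}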

Conclusion

In this post, you have seen how Docker and Docker Compose can help you create reproducible ROS 2 environments. This includes the ability to configure variables at build and run time, as well as creating dev containers to help you develop your code in these environments before distributing it to others.

We’ve only scratched the surface in this post, so make sure you poke around at the resources linked throughout, try out the example repository, and generally stay curious about what else you can do with Docker to make your life (and your users’ lives) easier.

As always, please feel free to reach out with questions and feedback. Docker is a highly configurable tool, so I’m genuinely curious about how this works for you or whether you have approached things differently in your work. I might learn something new!

New design lets robotic insect land on walls and take off from them with ease

Insects in nature possess amazing flying skills and can attach to and climb on walls of various materials. Insects that can perform flapping-wing flight, climb on a wall, and switch smoothly between the two locomotion regimes provide us with excellent biomimetic models. However, very few biomimetic robots can perform complex locomotion tasks that combine the two abilities of climbing and flying.

Robotic arm gripper key design considerations

The purpose of a robotic gripper is to effectively manipulate and grasp objects.

First, the gripping system must be chosen: two- or multi-finger grippers, parallel-jaw grippers, suction grippers, or other types. Some grippers are designed only for specific types of objects, which simplifies the design.

Let’s list some key factors to consider:

Gripping force: The force should be balanced at every instant and position: sufficient to hold and manipulate the object, but not so high that it causes damage. Dealing with delicate objects is especially challenging.

Sensors: Feedback from the object being manipulated is important in order to manipulate it in the desired way. Tactile, force, and proximity sensors are used.

Control: The control algorithm takes feedback from the sensors and adjusts the gripping mechanism accordingly, setting the position of the grippers and the force they apply at each position. Sensing and control form a closed loop, with each continuously feeding the other.

Actuation system: A suitable actuation method must be chosen for the selected gripper system. Hydraulic, pneumatic, electric, and even shape-memory-alloy actuators are the most common. Manipulation speed, power usage, accuracy, and ease of control are all determining factors here.

Range of motion: The range and size of the expected or required manipulation is another major criterion. The wider the range of motion and the larger the size, the more complex the gripper system and its control algorithm become.

Operating environment: Consider any environmental factors that will affect the operation of the gripper, as well as any surrounding items that could be affected by its operation.

Materials: The strength, stiffness, durability, surface finish, and weight of the gripper materials must be considered. The materials must also suit the target range of tasks and the geometric and physical properties of the objects to be handled. Adequate friction between the gripper and the handled objects must be ensured. To ease actuation and minimize power consumption, lightweight materials are preferred unless high-mass grippers are specifically needed.

Adaptability: The ability to adapt to unexpected irregularities is a desirable feature, though it increases the complexity of the system and its control.

Connectivity: The gripper’s connection to the robot arm or other surfaces should be as simple as possible; a gripper becomes more valuable as it becomes easier to connect to different interfaces.

Safety: For safe operation, measures such as impact detection, emergency stops, and soft surfaces should be implemented.

And as always, cost and ease of design, manufacturing, and maintenance are key criteria.
