Archive 08.08.2022


Artificial finger able to identify surface material with 90% accuracy

A team of researchers at the Chinese Academy of Sciences has developed an artificial finger that was able to identify certain surface materials with 90% accuracy. In their paper published in the journal Science Advances, the group describes how they used triboelectric sensors to give their test finger a sense of touch.

Building a python toolbox for robot behavior

If you’ve been subject to my posts on Twitter or LinkedIn, you may have noticed that I’ve done no writing in the last 6 months. Besides the whole… full-time job thing … this is also because at the start of the year I decided to focus on a larger coding project.

At my previous job, I stood up a system for task and motion planning (TAMP) using the Toyota Human Support Robot (HSR). You can learn more in my 2020 recap post. While I’m certainly able to talk about that work, the code itself was closed in two different ways:

  1. Research collaborations with Toyota Research Institute (TRI) pertaining to the HSR are in a closed community, with the exception of some publicly available repositories built around the RoboCup@Home Domestic Standard Platform League (DSPL).
  2. The code not specific to the robot itself was contained in a private repository in my former group’s organization, and furthermore is embedded in a massive monorepo.

Rewind to 2020: The original simulation tool (left) and a generated Gazebo world with a Toyota HSR (right).

So I thought: there are some generic utilities here that could be useful to the community. What would it take to strip the home service robotics simulation tools out of that setting and make them available as a standalone package? Also, how could I squeeze in improvements and learn interesting things along the way?

This post describes how these utilities became pyrobosim: A ROS2 enabled 2D mobile robot simulator for behavior prototyping.

What is pyrobosim?

At its core, pyrobosim is a simple robot behavior simulator tailored for household environments, but useful to other applications with similar assumptions: moving, picking, and placing objects in a 2.5D* world.

* For those unfamiliar, 2.5D typically describes a 2D environment with limited access to a third dimension. In the case of pyrobosim, this means all navigation happens in a 2D plane, but manipulation tasks occur at a specific height above the ground plane.

The intended workflow is:

  1. Use pyrobosim to build a world and prototype your behavior
  2. Generate a Gazebo world and run with a higher-fidelity robot model
  3. Run on the real robot!

Pyrobosim allows you to define worlds made up of entities. These are:

  • Robots: Programmable agents that can act on the world to change its state.
  • Rooms: Polygonal regions that the robot can navigate, connected by Hallways.
  • Locations: Polygonal regions that the robot cannot drive into, but may contain manipulable objects. Locations contain one or more Object Spawns. This allows having multiple object spawns in a single entity (for example, a left and right countertop).
  • Objects: The things that the robot can move to change the state of the world.

Main entity types shown in a pyrobosim world.

Given a static set of rooms, hallways, and locations, a robot in the world can then take actions to change the state of the world. The three main actions implemented are:

  • Pick: Remove an object from a location and hold it.
  • Place: Put a held object at a specific location and pose within that location.
  • Navigate: Plan and execute a path to move the robot from one pose to another.

As this is mainly a mobile robot simulator, there is more focus on navigation than on manipulation features. Picking and placing are idealized (which is why we can get away with a 2.5D world representation), but the idea is that the path planners and path followers can be swapped out to test different navigation capabilities.

Another long-term vision for this tool is that the set of actions itself can be expanded. Some random ideas include moving furniture, opening and closing doors, or gaining information in partially observable worlds (for example, an explicit “scan” action).

Independently of the list of possible actions and their parameters, these actions can then be sequenced into a plan. This plan can be manually specified (“go to A”, “pick up B”, etc.) or the output of a higher-level task planner which takes in a task specification and outputs a plan that satisfies the specification.
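
To make the idea concrete, here is a minimal sketch of a plan represented as a plain sequence of parameterized actions dispatched to a robot. This is not pyrobosim’s actual API; the Action class, its fields, and the robot methods are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Action:
    """A single parameterized action in a plan (hypothetical structure)."""
    type: str                 # "navigate", "pick", or "place"
    target: str = ""          # e.g. a location or object name
    params: dict = field(default_factory=dict)

def execute_plan(robot, plan):
    """Dispatch each action to the robot; stop at the first failure."""
    for action in plan:
        if action.type == "navigate":
            success = robot.navigate(action.target)
        elif action.type == "pick":
            success = robot.pick(action.target)
        elif action.type == "place":
            success = robot.place(action.target, **action.params)
        else:
            raise ValueError(f"Unknown action type: {action.type}")
        if not success:
            return False
    return True

# A manually specified plan ("go to A", "pick up B", ...):
plan = [
    Action("navigate", "kitchen"),
    Action("pick", "apple"),
    Action("navigate", "table"),
    Action("place", "apple"),
]

A task planner would simply produce the same kind of list automatically from a task specification.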

Execution of a sample action sequence in pyrobosim.

In summary: pyrobosim is a software tool where you can move an idealized point robot around a world, pick and place objects, and test task and motion planners before moving into higher-fidelity settings — whether it’s other simulators or a real robot.

What’s new?

Taking this code out of its original resting spot was far from a copy-paste exercise. While sifting through the code, I made a few improvements and design changes with modularity in mind: ROS vs. no ROS, GUI vs. no GUI, world vs. robot capabilities, and so forth. I also added new features with the selfish agenda of learning things I wanted to try… which is the point of a fun personal side project, right?

Let’s dive into a few key thrusts that made up this initial release of pyrobosim.

1. User experience

The original tool was closely tied to a single Matplotlib figure window that had to be open, and in general there were lots of shortcuts to just get the thing to work. In this redesign, I tried to more cleanly separate the modeling from the visualization, and the properties of the world itself from the properties of the robotic agent and the actions it can take in the world.

I also wanted to make the GUI itself a bit nicer. After some quick searching, I found this post that showed how to put a Matplotlib canvas in a PyQt5 GUI, so that’s what I went for. For now, I started by adding a few buttons and edit boxes that allow interaction with the world. You can write down (or generate) a location name, see how the current path planner and follower work, and pick and place objects when arriving at specific locations.

In tinkering with this new GUI, I found a lot of bugs with the original code which resulted in good fundamental changes in the modeling framework. Or, to make it sound fancier, the GUI provided a great platform for interactive testing.

The last thing I did in terms of usability was provide users the option of creating worlds without even touching the Python API. Since the libraries of possible locations and objects were already defined in YAML, I threw in the ability to author the world itself in YAML as well. So, in theory, you could take one of the canned demo scripts and swap out the paths to 3 files (locations, objects, and world) to have a completely different example ready to go.

pyrobosim GUI with snippets of the world YAML file behind it.

2. Generalizing motion planning

In the original tool, navigation was kept as simple as possible because I was focused on real robot experiments. All I needed in the simulated world was a representative cost function for planning that would approximate how far a robot would have to travel from point A to point B.

This resulted in building up a roadmap of (known and manually specified) navigation poses around locations and at the center of rooms and hallways. Once you have this graph representation of the world, you can use a standard shortest-path search algorithm like A* to find a path between any two points in space.
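
As a concrete illustration of that roadmap idea (a generic sketch, not pyrobosim’s implementation), here is A* over a small hand-specified graph of named navigation poses, using straight-line distance for both the edge costs and the heuristic:

import heapq
import math

# Hand-specified navigation poses (name -> (x, y)) and roadmap edges.
poses = {"room_a": (0, 0), "hallway": (2, 0), "room_b": (4, 1), "counter": (4, -1)}
edges = {"room_a": ["hallway"], "hallway": ["room_a", "room_b", "counter"],
         "room_b": ["hallway"], "counter": ["hallway"]}

def dist(a, b):
    return math.dist(poses[a], poses[b])

def astar(start, goal):
    """Shortest path over the roadmap using A* with a Euclidean heuristic."""
    frontier = [(dist(start, goal), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor in edges[node]:
            if neighbor not in visited:
                new_cost = cost + dist(node, neighbor)
                heapq.heappush(frontier, (new_cost + dist(neighbor, goal),
                                          new_cost, neighbor, path + [neighbor]))
    return None, float("inf")

print(astar("room_a", "counter"))  # (['room_a', 'hallway', 'counter'], 4.236...)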

This time around, I wanted a little more generality. The design has now evolved to include two popular categories of motion planners.

  • Single-query planners: Plan once from the current state of the robot to a specific goal pose. An example is the ubiquitous Rapidly-exploring Random Tree (RRT). Since each robot plans from its current state, single-query planners are considered to be properties of an individual robot in pyrobosim.
  • Multi-query planners: Build a representation for planning which can be reused for different start/goal configurations, provided the world does not change. The original hard-coded roadmap fits this bill, as does the sampling-based Probabilistic Roadmap (PRM). Since multiple robots could reuse these planners by connecting start and goal poses to an existing graph, multi-query planners are considered properties of the world itself in pyrobosim.
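
For the single-query case, a bare-bones RRT in the plane looks roughly like the sketch below. This is only an illustration under simplifying assumptions (point robot, circular obstacles, collision checks at sampled points only), not the planner shipped with pyrobosim.

import math
import random

obstacles = [((3.0, 3.0), 1.0)]   # circular obstacles: (center, radius)
bounds = (0.0, 6.0)               # square world extents

def collision_free(p):
    return all(math.dist(p, center) > radius for center, radius in obstacles)

def steer(p_from, p_to, step=0.5):
    """Move from p_from toward p_to by at most `step`."""
    d = math.dist(p_from, p_to)
    if d <= step:
        return p_to
    t = step / d
    return (p_from[0] + t * (p_to[0] - p_from[0]),
            p_from[1] + t * (p_to[1] - p_from[1]))

def rrt(start, goal, max_iters=2000, goal_tolerance=0.5):
    """Grow a tree from start; return a path to goal if one is found."""
    parents = {start: None}
    for _ in range(max_iters):
        # Bias 10% of samples toward the goal to speed up convergence.
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        nearest = min(parents, key=lambda node: math.dist(node, sample))
        new = steer(nearest, sample)
        if new in parents or not collision_free(new):
            continue
        parents[new] = nearest
        if math.dist(new, goal) <= goal_tolerance:
            path = [new]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
    return None  # no path found within the iteration budget

print(rrt((0.5, 0.5), (5.5, 5.5)))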

I also wanted to consider path following algorithms in the future. For now, the piping is there for robots to swap out different path followers, but the only implementation is a “straight line executor”. This assumes the robot is a point that can move in ideal straight-line trajectories. Later on, I would like to consider nonholonomic constraints and enable dynamically feasible planning, as well as true path following which sets the velocity of the robot within some limits rather than teleporting the robot to ideally follow a given path.
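
That “straight line executor” idea fits in a few lines: given a path of waypoints, just interpolate the robot pose along each segment. Again, the function below is an illustrative sketch rather than the actual pyrobosim interface.

import math

def follow_straight_line(path, step=0.1):
    """Yield intermediate (x, y) poses along straight segments between waypoints.

    This idealizes execution: the robot is effectively teleported along the line,
    with no velocity limits, dynamics, or nonholonomic constraints.
    """
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        length = math.hypot(x1 - x0, y1 - y0)
        num_steps = max(1, int(length / step))
        for i in range(1, num_steps + 1):
            t = i / num_steps
            yield (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

for pose in follow_straight_line([(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]):
    pass  # e.g. update the robot pose in the world model / GUI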

In general, there are lots of opportunities to add more of the low-level robot dynamics to pyrobosim, whereas right now the focus is largely on the higher-level behavior side. Something like the MATLAB-based Mobile Robotics Simulation Toolbox, which I worked on in a former job, has more of this in place, so it’s certainly possible!

Sample path planners in pyrobosim.
Hard-coded roadmap (upper left), Probabilistic Roadmap (PRM) (upper right).
Rapidly-exploring Random Tree (RRT) (lower left), Bidirectional RRT* (lower right).

3. Plugging into the latest ecosystem

This was probably the most selfish and unnecessary update to the tools. I wanted to play with ROS2, so I made this into a ROS2 package. Simple as that. However, I throttled back on the selfishness enough to ensure that everything could also be run standalone. In other words, I don’t want to require anyone to use ROS if they don’t want to.

The ROS approach does provide a few benefits, though:

  • Distributed execution: Running the world model, GUI, motion planners, etc. in one process is not great, and in fact I ran into a lot of snags with multithreading before I introduced ROS into the mix and could split pieces into separate nodes.
  • Multi-language interaction: ROS in general is nice because you can have, for example, Python nodes interacting with C++ nodes “for free”. I am especially excited for this to lead to collaborations with interesting robotics tools out in the wild.
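
To give a flavor of what splitting things into separate nodes looks like, here is a minimal rclpy publisher node. This is a generic ROS2 Python example, not pyrobosim’s actual node or message interface:

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class CommandPublisher(Node):
    """Toy node that publishes a text command once per second."""
    def __init__(self):
        super().__init__("command_publisher")
        self.pub = self.create_publisher(String, "robot_command", 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = "navigate kitchen"
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = CommandPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()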

The other thing that came with this was the Gazebo world exporting, which was already available in the former code. However, there is now a newer Ignition Gazebo and I wanted to try that as well. After discovering that polyline geometries (a key feature I relied on) were not supported in Ignition, I complained just loudly enough on Twitter that the lead developer of Gazebo personally let me know when she merged that PR! I was so excited that I installed the latest version of Ignition from source shortly after, and with a few tweaks to the model generation we now support both Gazebo classic and Ignition.

pyrobosim test world exported to Gazebo classic (top) and Ignition Gazebo (bottom).

4. Software quality

Some other things I’ve been wanting to try for a while relate to good software development practices. I’m happy that in bringing up pyrobosim, I’ve so far been able to set up a basic Continuous Integration / Continuous Deployment (CI/CD) pipeline and official documentation!

For CI/CD, I decided to try out GitHub Actions because they are tightly integrated with GitHub — and critically, compute is free for public repositories! I had past experience setting up Jenkins (see my previous post), and I have to say that GitHub Actions was much easier for this “hobbyist” workflow since I didn’t have to figure out where and how to host the CI server itself.

Documentation was another thing I was deliberate about in this redesign. I was always impressed when I went into some open-source package and found professional-looking documentation with examples, tutorials, and a full API reference. So I looked around and converged on Sphinx, which generates the HTML documentation and comes with an autodoc module that can automatically convert Python docstrings to an API reference. I then used ReadTheDocs, which hosts the documentation online (again, for free) and automatically rebuilds it when you push to your GitHub repository. The final outcome was this pyrobosim documentation page.
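
For example, with the autodoc (and napoleon) extensions, an ordinary Google-style docstring renders directly into the API reference. The function below is an illustrative stub, not the real pyrobosim signature:

def add_room(world, name, footprint, color=(0.5, 0.5, 0.5)):
    """Add a room to the world.

    Args:
        world: The world model to modify.
        name (str): Unique name of the room.
        footprint (list of tuple): Polygon vertices as (x, y) pairs.
        color (tuple, optional): RGB display color. Defaults to gray.

    Returns:
        The newly created room entity.
    """

Sphinx then picks this up from the module via an .. automodule:: directive in the documentation source.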

The result is very satisfying, though I must admit that my unit tests are… lacking at the moment. However, it should be super easy to add new tests into the existing CI/CD pipeline now that all the infrastructure is in place! And so, the technical debt continues building up.

pyrobosim GitHub repo with pretty status badges (left) and automated checks in a pull request (right).

Conclusion / Next steps

This has been an introduction to pyrobosim — both its design philosophy, and the key feature sets I worked on to take the code out of its original form and into a standalone package (hopefully?) worthy of public usage. For more information, take a look at the GitHub repository and the official documentation.

Here is my short list of future ideas, which is in no way complete:

  1. Improving the existing tools: Adding more unit tests, examples, documentation, and generally anything that makes pyrobosim a better experience for developers and users alike.
  2. Building up the navigation stack: I am particularly interested in dynamically feasible planners for nonholonomic vehicles. There are lots of great tools out there to pull from, such as Peter Corke’s Robotics Toolbox for Python and Atsushi Sakai’s PythonRobotics.
  3. Adding a behavior layer: Right now, a plan consists of a simple sequence of actions. It’s not very reactive or modular. This is where abstractions such as finite-state machines and behavior trees would be great to bring in.
  4. Expanding to multi-agent and/or partially-observable systems: Two interesting directions that would require major feature development.
  5. Collaborating with the community!

It would be fantastic to work with some of you on pyrobosim. Whether you have feedback on the design itself, specific bug reports, or the ability to develop new examples or features, I would appreciate any form of input. If you end up using pyrobosim for your work, I would be thrilled to add your project to the list of usage examples!

Finally: I am currently in the process of setting up task and motion planning with pyrobosim. Stay tuned for that follow-on post, which will have lots of cool examples.

A bartending robot that can engage in personalized interactions with humans

A widely discussed application of social robots that has so far been rarely tested in real-world settings is their use as bartenders in cafés, cocktail bars and restaurants. While many roboticists have been trying to develop systems that can effectively prepare drinks and serve them, so far very few have focused on artificially reproducing the social aspect of bartending.

UBR-1 on ROS2 Humble

It has been a while since I’ve posted to the blog, but lately I’ve actually been working on the UBR-1 again after a somewhat long hiatus. In case you missed the earlier posts in this series:

ROS2 Humble

The latest ROS2 release came out just a few weeks ago. ROS2 Humble targets Ubuntu 22.04 and is also a long term support (LTS) release, meaning that both the underlying Ubuntu operating system and the ROS2 release get a full 5 years of support.

Since installing operating systems on robots is often a pain, I only use the LTS releases and so I had to migrate from the previous LTS, ROS2 Foxy (on Ubuntu 20.04). Overall, there aren’t many changes to the low-level ROS2 APIs as things are getting more stable and mature. For some higher level packages, such as MoveIt2 and Navigation2, the story is a bit different.

Visualization

One of the nice things about the ROS2 Foxy release was that it targeted the same operating system as the final ROS1 release, Noetic. This allowed users to have both ROS1 and ROS2 installed side-by-side. If you’re still developing in ROS1, that means you probably don’t want to upgrade all your computers quite yet. While my robot now runs Ubuntu 22.04, my desktop is still running 18.04.

Therefore, I had to find a way to visualize ROS2 data on a computer that did not have the latest ROS2 installed. Initially I tried Foxglove Studio, but didn’t have any luck with things actually connecting using the native ROS2 interface (the rosbridge-based interface did work). Foxglove is certainly interesting, but so far it’s not really an RVIZ replacement – they appear to be more focused on offline data visualization.

I then moved onto running rviz2 inside a docker environment – which works well when using the rocker tool:

sudo apt-get install python3-rocker
sudo rocker --net=host --x11 osrf/ros:humble-desktop rviz2

If you are using an NVIDIA card, you’ll need to add --nvidia along with --x11.

In order to properly visualize and interact with my UBR-1 robot, I needed to add the ubr1_description package to my workspace in order to get the meshes and also my rviz configurations. To accomplish this, I needed to create my own docker image. I largely based it off the underlying ROS docker images:

ARG WORKSPACE=/opt/workspace

FROM osrf/ros:humble-desktop

# install build tools
RUN apt-get update && apt-get install -q -y --no-install-recommends \
python3-colcon-common-extensions \
git-core \
&& rm -rf /var/lib/apt/lists/*

# get ubr code
ARG WORKSPACE
WORKDIR $WORKSPACE/src
RUN git clone https://github.com/mikeferguson/ubr_reloaded.git \
&& touch ubr_reloaded/ubr1_bringup/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_calibration/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_gazebo/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_moveit/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_navigation/COLCON_IGNORE \
&& touch ubr_reloaded/ubr_msgs/COLCON_IGNORE \
&& touch ubr_reloaded/ubr_teleop/COLCON_IGNORE

# install dependencies
ARG WORKSPACE
WORKDIR $WORKSPACE
RUN . /opt/ros/$ROS_DISTRO/setup.sh \
&& apt-get update && rosdep install -q -y \
--from-paths src \
--ignore-src \
&& rm -rf /var/lib/apt/lists/*

# build ubr code
ARG WORKSPACE
WORKDIR $WORKSPACE
RUN . /opt/ros/$ROS_DISTRO/setup.sh \
&& colcon build

# setup entrypoint
COPY ./ros_entrypoint.sh /

ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]

The image derives from humble-desktop and then adds the build tools and clones my repository. I then ignore the majority of packages, install dependencies and then build the workspace. The ros_entrypoint.sh script handles sourcing the workspace configuration.

#!/bin/bash
set -e

# setup ros2 environment
source "/opt/workspace/install/setup.bash"
exec "$@"

I could then create the docker image and run rviz inside it:

docker build -t ubr:main .
sudo rocker --net=host --x11 ubr:main rviz2

The full source of these docker configs is in the docker folder of my ubr_reloaded repository. NOTE: The updated code in the repository also adds a late-breaking change to use CycloneDDS as I’ve had numerous connectivity issues with FastDDS that I have not been able to debug.

Visualization on MacOSX

I also frequently want to be able to interact with my robot from my Macbook. While I previously installed ROS2 Foxy on my Intel-based Macbook, the situation has changed quite a bit now, with MacOSX downgraded to Tier 3 support and the new Apple M1 silicon (and Apple’s various other locking mechanisms) making it harder and harder to set up ROS2 directly on the Macbook.

As with the Linux desktop, I tried out Foxglove – however it is a bit limited on Mac. The MacOSX environment does not allow opening the required ports, so the direct ROS2 topic streaming does not work and you have to use rosbridge. I found I was able to visualize certain topics, but that switching between topics frequently broke.

At this point, I was about to give up, until I noticed that Ubuntu 22.04 arm64 is a Tier 1 platform for ROS2 Humble. I proceeded to install the arm64 version of Ubuntu inside Parallels (Note: I was cheap and initially tried to use the VMWare technology preview, but was unable to get the installer to even boot). There are a few tricks here as there is no arm64 desktop installer, so you have to install the server edition and then upgrade it to a desktop. There is a detailed description of this workflow on askubuntu.com. Installing ros-humble-desktop from arm64 Debians was perfectly easy.

rviz2 runs relatively quickly inside the Parallels VM, but overall it was not quite as quick or stable as using rocker on Ubuntu. However, it is really nice to be able to do some ROS2 development when traveling with only my Macbook.

Migration Notes

Note: each of the links in this section is to a commit or PR that implements the discussed changes.

In the core ROS API, there are only a handful of changes – and most of them are actually simply fixing potential bugs. The logging macros have been updated for security purposes and require c-strings like the old ROS1 macros did. Additionally the macros are now better at detecting invalid substitution strings. Ament has also gotten better at detecting missing dependencies. The updates I made to robot_controllers show just how many bugs were caught by this more strict checking.

image_pipeline has had some minor updates since Foxy, mainly to improve consistency between plugins and so I needed to update some topic remappings.

Navigation has the most updates. amcl model type names have been changed since the models are now plugins. The API of costmap layers has changed significantly, and so a number of updates were required just to get the system started. I then made a more detailed pass through the documentation and found a few more issues and improvements with my config, especially around the behavior tree configuration.

I also decided to do a proper port of graceful_controller to ROS2, starting from the latest ROS1 code since a number of improvements have happened in the past year since I had originally ported to ROS2.

Next steps

There are still a number of new features to explore with Navigation2, but my immediate focus is going to shift towards getting MoveIt2 setup on the robot, since I can’t easily swap between ROS1 and ROS2 anymore after upgrading the operating system.

Allowing social robots to learn relations between users’ routines and their mood

Social robots, robots that can interact with humans and assist them in their daily lives, are gradually being introduced in numerous real-world settings. These robots could be particularly valuable for helping older adults to complete everyday tasks more autonomously, thus potentially enhancing their independence and well-being.

ep.358: Softbank: How Large Companies Approach Robotics, with Brady Watkins

A lot of times on our podcast we dive into startups and smaller companies in robotics. Today’s talk is unique in that Brady Watkins gives us insight into how a big company like Softbank Robotics looks into the Robotics market.

we think scale first, (the) difference from a startup is our goal isn’t to think what’s the first 10 to 20, but we need to think what’s the first 20,000 look like. – Brady Watkins

Brady Watkins

Brady Watkins is the President and General Manager at Softbank Robotics America. During his career at Softbank, he helped to scale and commercialize Whiz, the collaborative robot vacuum designed to work alongside cleaning teams. Watkins played a key role in scaling the production to 20,000 units deployed globally.

Prior to his time at SBRA, Watkins was the Director of Sales, Planning, and Integration at Ubisoft, where he held several positions over the course of 10 years.


Using Remote Operation to Help Solve Labor Shortage Issues in the Supply Chain

Our remotely operated forklifts allow operators to do their jobs from up to thousands of miles away. This is critically important to our customers, who have been dealing for decades with a labor shortage that recently has become even more acute than ever before.

Two hands are better than one

The HOPE Hand.

What are you doing right now other than scrolling through this article? Do you have a cup of coffee in one hand, your phone in the other? Maybe your right hand is using your laptop mouse and your left hand is holding a snack. Have you ever thought about how often we are using both of our hands? Having two healthy human hands allows us to carry too many grocery bags in one hand and unlock our apartment door with the other, and perform complex bimanual coordination like playing Moonlight Sonata by Beethoven on the piano (well, maybe not all of us can do that). Having two hands also allows us to do some of the most simple tasks in our daily lives, like holding a jar of peanut butter and unscrewing the lid, or putting our hair up in a ponytail.

If you take some time to think about how often both of your hands are occupied, you might start to realize that life could be challenging if you were missing the functionality of one or both of your most useful end effectors (as we call them in robotics). This thought experiment is the reality for someone like my friend Clare, who got into a car accident when she was 19. The impact of the crash resulted in a traumatic brain injury (TBI) that left the right side of her body partially paralyzed. In addition to re-learning how to walk, she also had to learn how to navigate her life with one functioning hand.

To get a better idea of what that would be like, Clare says, “Tie your dominant hand behind your back. Try it!”

There are other neurological conditions, in addition to TBI, that can result in paralysis: stroke, spinal cord injury, cerebral palsy. When we hear the terms paralysis, partial paralysis, or hemiparesis (partial paralysis of one side of the body), we might envision someone’s limb being weak and hanging at their side. This manifestation of motor impairment is only the case for a fraction of the hemiparetic population. For others like Clare, the hand and elbow are reflexively kept in a flexed position, or flexor synergy pattern, meaning that the hand is tightly closed in a fist regardless of whether they try to open or close it. They have little to no ability to voluntarily extend their fingers, and the amount of muscle force keeping the hand closed changes from moment to moment. If we think back to the peanut butter jar example, imagine having to use your able hand to pry open the fingers of your impaired hand to get them around the jar of peanut butter.

Thankfully, there are occupational therapists who can train individuals to adapt their approaches to activities of daily living, and physical therapists who keep their hands and limbs stretched and mobile. But the robotics community has also been working on its own technology-based contributions to the recovery and long-term independence of individuals with hand impairments due to neurological injury. There are decades of research in the field of wearable assistive and rehabilitation devices, creating new prosthetics and exoskeletons to help individuals overcome their physical impairments. However, we came across a gap in the research when we began working with Clare and other individuals with similar hand impairments.

Most of the assistive hand exoskeletons currently being developed or commercially sold focus on restoring the user’s ability to grasp with their impaired hand. However, Clare actually needed an exoskeletal device that extended her fingers against varying levels of resistance due to increased resting muscle tone or spasticity (click here to read about our finger extension force study). As a result, we developed the HOPE Hand, a hand orthosis with powered extension, to better serve individuals like Clare who need assistance opening their hand, to provide them with improved capabilities to perform activities of daily living, and to help them regain their independence.

The HOPE Hand is a cable-driven hand exoskeleton that uses pushing and pulling forces along the back of the hand to open and close the fingers individually. Each finger has two cables running parallel along the back of the finger, which prevent medial/lateral movement and stabilize the joint at the base of the finger. The cables are guided by rigid links that are attached to the finger using medical tape, and connect to a worm gear driven by a DC motor. The index finger and middle finger are actuated individually, and the pinky and ring finger are coupled, so they move together. The thumb has two degrees of freedom: flexion and extension are performed similarly to the other fingers, while abduction/adduction (to move the thumb into a position across from the fingers for a power grasp) is positioned manually by the user. This mechanical design allows the user to perform functional grasps by overcoming the increased muscle tone in their hand.

When we asked Clare to test out the HOPE Hand, she was happy to contribute to the research! She was able to pick up and move colored blocks from one designated area to another with the HOPE Hand, compared to not being able to pick up any blocks without the assistive device. We continued to work with Clare, her mom, and her physical therapist throughout our iterative device development process so she could give feedback and we could improve the device. We found that involving end users like Clare from the beginning was critical to designing a device which is safe, comfortable, and most importantly, usable. It was also the best way for us to understand her daily challenges and the nature of her hand impairment.

Clare’s physical therapist says “It engages the patient’s brain in ways that she would not be exposed to without participating in research, which could forge new neural pathways. I can watch the way she reacts and responds to the new technology and that enables me to create different rehab strategies to enhance recovery.”

While the HOPE Hand isn’t quite available to consumers yet, our collaborative team of patients, clinicians, caregivers, and academic researchers is making progress. One of the current challenges we are tackling, along with the rest of the wearable device industry, is how the device recognizes the user’s intention to move. The majority of electric prostheses and hand exoskeletons use residual muscle activity (myoelectric control) as an indicator of intention to move. However, the unreliable muscle activity that can be present due to neurological conditions like traumatic brain injury can make this form of control challenging. Because of this, researchers are diving into alternative control methods such as computer vision and brain activity. We have implemented a voice control alternative, which also gives users an opportunity to practice their speech, as it is common for conditions like stroke and TBI to result in speech impairments such as aphasia, in addition to motor impairments. It has been valuable for us to consider the complexities of our targeted users, to create a device that could potentially help in more ways than one.

They say many hands make light work, but let’s start by restoring the functionality of the original two, so Clare can open that jar of peanut butter as fast as you can click on the next article while sipping your morning coffee.
