Category: Robots in Business


Electronic design tool morphs interactive objects

An MIT team used MorphSensor to design multiple applications, including a pair of glasses that monitor light absorption to protect eye health. Photo courtesy of the researchers.

By Rachel Gordon

We’ve come a long way since the first 3D-printed item, an eye wash cup, to now being able to rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids.

While many of these objects can be freely designed and quickly made, adding electronics to embed things like sensors, chips, and tags usually requires designing the object and its electronics separately, making it difficult to create items where the added functions are seamlessly integrated with the form.

Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate on an object’s shape and electronic function in one cohesive space, making it possible to add existing sensors to early-stage prototypes.

The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.

MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item. 

To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.
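
To get a feel for what that kind of comparison involves, here is a minimal sketch (not the team's actual protocol; all readings and names are hypothetical) that logs a morphed sensor against an off-the-shelf reference module sampled at the same times and computes error margins:

```python
import numpy as np

def error_margin(morphed: np.ndarray, reference: np.ndarray) -> dict:
    """Compare readings from a morphed sensor against an
    off-the-shelf reference module sampled at the same times."""
    err = morphed - reference
    return {
        "mean_abs_error": float(np.mean(np.abs(err))),
        "max_abs_error": float(np.max(np.abs(err))),
        # Relative error, guarding against division by zero
        "mean_rel_error": float(
            np.mean(np.abs(err) / np.maximum(np.abs(reference), 1e-9))
        ),
    }

# Hypothetical humidity readings (%) from both sensors
morphed = np.array([41.2, 43.0, 44.8, 47.1])
reference = np.array([41.0, 42.7, 45.0, 47.5])
print(error_margin(morphed, reference))
```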

“MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.” 

MorphSensor in action 

Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware. 

For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, you would first design or import the 3D face mask model and sensor modules, from either MorphSensor’s database or open-source files found online. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.

Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit. 
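
As a purely illustrative sketch of the data model behind such a workflow (MorphSensor itself is a graphical editor, and every class and name below is a hypothetical stand-in rather than its actual API), the design essentially pairs electronic components placed in 3D with airwires recording which pads still need physical wires:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One electronic part placed on the object's surface."""
    name: str
    position: tuple          # (x, y, z) on the 3D model
    rotation_deg: float = 0.0
    is_sensing: bool = False  # color-coded as "active" in the editor

@dataclass
class Airwire:
    """A logical connection the user must realize with a physical wire."""
    from_pad: str
    to_pad: str
    routed: bool = False

@dataclass
class MorphedDesign:
    components: list = field(default_factory=list)
    airwires: list = field(default_factory=list)

    def place(self, comp: Component, position, rotation_deg=0.0):
        """Drag-and-drop: position and rotate a component on the model."""
        comp.position, comp.rotation_deg = position, rotation_deg
        self.components.append(comp)

    def route(self, from_pad, to_pad):
        """Mark an airwire as realized once the user draws a wire."""
        for w in self.airwires:
            if {w.from_pad, w.to_pad} == {from_pad, to_pad}:
                w.routed = True

# Hypothetical air-quality face mask design
design = MorphedDesign(airwires=[Airwire("mcu.A0", "gas_sensor.OUT")])
sensor = Component("gas_sensor", (0, 0, 0), is_sensing=True)
design.place(sensor, (12.5, 3.0, 8.2), rotation_deg=45.0)
design.route("mcu.A0", "gas_sensor.OUT")
```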

Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.  

To test their system, the team iterated on EarPods for sleep tracking, which took only 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice by integrating a temperature sensor with the ring geometry. In addition, they modified an N95 mask to monitor substrate contamination, enabling it to alert its user when the mask needs to be replaced.

In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further: future versions could merge the electrical logic of multiple sensor modules to eliminate redundant components and circuits, saving space or preserving the object’s form.
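
A connectivity check of this kind can be framed as a simple graph problem: treat pads as nodes and routed wires as undirected edges, then verify that every net (a group of pads that must end up electrically joined) forms a single connected component. A minimal sketch, with hypothetical pad names and no relation to MorphSensor's internals:

```python
from collections import defaultdict

def nets_connected(nets, wires):
    """Check that each net (a list of pads that must end up
    electrically joined) is fully connected by the routed wires."""
    graph = defaultdict(set)
    for a, b in wires:
        graph[a].add(b)
        graph[b].add(a)

    def reachable(start):
        # Depth-first search over the wire graph
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph[node] - seen)
        return seen

    return {tuple(net): set(net) <= reachable(net[0]) for net in nets}

nets = [["mcu.A0", "sensor.OUT"], ["mcu.GND", "sensor.GND", "led.GND"]]
wires = [("mcu.A0", "sensor.OUT"), ("mcu.GND", "sensor.GND")]
print(nets_connected(nets, wires))
# The signal net is complete; the ground net still needs led.GND wired in.
```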

Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium. 

This material is based upon work supported by the National Science Foundation.

Women in Robotics panel celebrating Ada Lovelace Day

We’d like to share the video from our 2020 Ada Lovelace Day celebration of Women in Robotics. The speakers were all on this year’s list, last year’s list, or nominated for next year’s list, and they presented a range of cutting-edge robotics research and commercial products. They are also all representatives of the new organization Black in Robotics, which makes this video doubly powerful. Please enjoy the impactful work of:

Dr Ayanna Howard – Chair of Interactive Computing, Georgia Tech

Dr Carlotta Berry – Professor Electrical and Computer Engineering at Rose-Hulman Institute of Technology

Angelique Taylor – PhD student in Health Robotics at UCSD and Research Intern at Facebook

Dr Ariel Anders – roboticist and first technical hire at Robust.AI

Moderated by Jasmine Lawrence – Product Manager at X, the Moonshot Factory

Follow them on Twitter at @robotsmarts @DRCABerry @Lique_Taylor @Ariel_Anders @EDENsJasmine

Some of the takeaways from the talk were collected by Jasmine Lawrence at the end of the discussion, including the encouragement that you’re never too old to start working in robotics. While some of the panelists knew from an early age that robotics was their passion, for others it was a discovery later in life, particularly as robotics has a fairly small academic footprint compared to its impact in the world.

We also learned that Dr Ayanna Howard has a book available, “Sex, Race and Robots: How to Be Human in the Age of AI.”

Another insight from the panel was that, often as the only woman in the room, and frequently the only person of color too, the panelists felt pressure to be mindful of the impact of new technologies on communities and to represent a diversity of viewpoints. This awareness has contributed to these amazing women focusing on robotics projects with significant social impact.

And finally, that contrary to popular opinion, girls and women can be just as competitive as their male counterparts and really enjoy the experience of robotics competitions, as long as they are treated with respect. That means letting them build and program, not just manage social media.

You can sign up for the Women in Robotics online community here, or the newsletter here. And please enjoy the stories of 2020’s “30 women in robotics you need to know about,” as well as the previous years’ lists!

Robots deciding their next move need help prioritizing

As robots replace humans in dangerous situations such as search and rescue missions, they need to be able to quickly assess and make decisions—to react and adapt like a human being would. Researchers at the University of Illinois at Urbana-Champaign used a model based on the game Capture the Flag to develop a new take on deep reinforcement learning that helps robots evaluate their next move.
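
The article doesn't spell out the algorithm, but the core loop of value-based move selection in reinforcement learning is easy to sketch: score each candidate action with a learned value function and pick the best, occasionally exploring. A toy, generic illustration (not the Illinois team's model), with a hand-written value function standing in for a trained network:

```python
import random

def choose_action(state, actions, q_value, epsilon=0.1):
    """Epsilon-greedy action selection: usually take the action the
    value function scores highest, sometimes explore at random."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_value(state, a))

def q_value(state, action):
    """Toy value function favoring moves toward a flag at (5, 5)."""
    x, y = state
    dx, dy = action
    fx, fy = 5, 5
    # Higher score for moves that reduce Manhattan distance to the flag
    return -abs(fx - (x + dx)) - abs(fy - (y + dy))

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(choose_action((2, 3), moves, q_value, epsilon=0.0))
# (1, 0); (0, 1) scores equally, but list order breaks the tie
```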

A gecko-adhesive gripper for the Astrobee free-flying robot

Robots that can fly autonomously in space, also known as free-flying robots, could soon assist humans in a variety of settings. However, most existing free-flying robots are limited in their ability to grasp and manipulate objects in their surroundings, which may prevent them from being applied on a large-scale.

‘Digit’ robot for sale and ready to perform manual labor

Robot maker Agility, a spinoff created by researchers from Oregon State University, has announced that parties interested in purchasing one of its Digit robots can now do so. The human-like robot has been engineered to perform manual labor, such as removing boxes from shelves and loading them onto a truck. The robot can be purchased directly from Agility for $250,000.

#321: Empowering Farmers Through RootAI, with Josh Lessing

In this episode, Abate interviews Josh Lessing, co-founder and CEO of RootAI. At RootAI they are developing a system that tracks data on the farm and autonomously harvests crops using soft grippers and computer vision. Lessing talks about the path they took to build a product with good market fit and how they brought a venture-capital-backed startup to market.

Josh Lessing

Josh is one of the world’s leading minds on developing robotics and AI systems for the food industry, previously serving as the Director of R&D at Soft Robotics Inc. His current venture, Root AI, is integrating advanced robotics, vision systems and machine perception to automate agriculture. Josh was a Postdoctoral Fellow in Materials Science & Robotics at Harvard University, having earned his Ph.D. studying Biophysics & Physical Chemistry at the Massachusetts Institute of Technology and received an Sc.B. in Chemistry from Brown University.


Robot swarms follow instructions to create art

By Conn Hastings, science writer

Controlling a swarm of robots to paint a picture sounds like a difficult task. However, a new technique allows an artist to do just that, without worrying about providing instructions for each robot. Using this method, the artist can assign different colors to specific areas of a canvas, and the robots will work together to paint the canvas. The technique could open up new possibilities in art and other fields.

What if you could instruct a swarm of robots to paint a picture? The concept may sound far-fetched, but a recent study in open-access journal Frontiers in Robotics and AI has shown that it is possible. The robots in question move about a canvas leaving color trails in their wake, and in a first for robot-created art, an artist can select areas of the canvas to be painted a certain color and the robot team will oblige in real time. The technique illustrates the potential of robotics in creating art, and could be an interesting tool for artists. This human-swarm interaction modality may also provide a basis for collaborative studies combining the arts and other sciences.

Creating art can be labor-intensive and an epic struggle. Just ask Michelangelo about the Sistine Chapel ceiling. For a world increasingly dominated by technology and automation, creating physical art has remained a largely manual pursuit, with paint brushes and chisels still in common use. There’s nothing wrong with this, but what if robotics could lend a helping hand or even expand our creative repertoire?

“The intersection between robotics and art has become an active area of study where artists and researchers combine creativity and systematic thinking to push the boundaries of different art forms,” said Dr. María Santos of the Georgia Institute of Technology. “However, the artistic possibilities of multi-robot systems are yet to be explored in depth.”

This latest study looks at the potential for robot swarms to create a painting. The researchers designed a system whereby an artist can designate different regions of a canvas to be painted a specific color. The robots interact with each other to achieve this, with individual robots traversing the canvas and leaving a trail of colored paint behind them, which they create by mixing paints of different colors available on-board.

“The multi-robot team can be thought of as an ‘active’ brush for the human artist to paint with, where the individual robots (the bristles) move over the canvas according to the color specifications provided by the human,” explained Santos.

In their experiments, the researchers used a projector to simulate a colored paint trail behind each robot, and they plan to develop robots that can handle liquid paint in the future. Even when some robots didn’t have access to all the pigments required to create the assigned color, the system allowed them to work together and approximate the color reasonably well.
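
One illustrative way to think about that on-board mixing (purely a sketch, not the authors' method, and treating color mixing as linear in RGB for simplicity) is as a search over convex combinations of the available pigments for the blend closest to the target color:

```python
import numpy as np
from itertools import product

def best_mix(pigments, target, steps=20):
    """Brute-force search over convex combinations of the available
    pigments for the blend closest (in RGB distance) to the target."""
    pigments = np.asarray(pigments, dtype=float)
    best_w, best_err = None, np.inf
    # Enumerate weight vectors on a coarse grid over the simplex
    for w in product(range(steps + 1), repeat=len(pigments)):
        if sum(w) != steps:
            continue
        weights = np.array(w) / steps
        blend = weights @ pigments
        err = np.linalg.norm(blend - target)
        if err < best_err:
            best_w, best_err = weights, err
    return best_w, best_err

# A robot carrying only cyan, magenta, and white tries to match orange
pigments = [(0, 255, 255), (255, 0, 255), (255, 255, 255)]
target = np.array([255, 165, 0])
weights, err = best_mix(pigments, target)
print(weights, err)  # error stays large without yellow on board
```

With only cyan, magenta, and white on board, the search settles on the closest achievable blend rather than failing outright, which mirrors the cooperative approximation described above.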

This system could allow artists to control the robot swarm as it creates the artwork in real time. The artist doesn’t need to provide instructions for each individual robot, or even worry whether they have access to all the colors needed, allowing them to focus on creating the painting.

In the current study, the resulting images are abstract, and resemble a child’s crayon drawing. They show unique areas of color that flow into each other, revealing the artist’s input, and are pleasing to the eye. Future versions of the system may allow for more refined images.

Most importantly, the images confirm that it is possible for an artist to successfully instruct a robot swarm to paint a picture. The technique may also have potential in other fields where easily controlling the actions of a swarm of robots could be valuable. Robot orchestra, anyone?

Credit: M. Santos and coauthors

This article was initially published on the Frontiers blog. Original article: Interactive Multi-Robot Painting Through Colored Motion Trails
