Page 301 of 400

Touch-transmitting Telerobotic Hand at Amazon re:MARS Tech Showcase

TOKYO/SEATTLE/LOS ANGELES/LONDON, June 6, 2019 – After much anticipation, ANA HOLDINGS INC., HaptX, SynTouch and the Shadow Robot Company unveiled the next generation of robotics technology at the Amazon re:MARS Expo. Incorporating the latest advances from across the field of robotics and united by the ingenuity of ANA, the teleoperation and telepresence system features the first robotic hand to successfully transmit touch sensations. Amazon CEO Jeff Bezos tried out the touch-sensitive, dexterous haptic robotic hand set up in an exhibit hall at the Aria Resort and Casino in Las Vegas and described the experience as “weirdly natural.”

Bezos started out with a simple task: picking up a plastic cup and dropping it onto a stack of cups. He then played around with a palm-sized soccer ball and a rainbow ring-stacking puzzle, remarking, “OK, this is really cool.” It was the first time the collaborators behind this teleoperation and telepresence technology had displayed their creation outside the lab, to an audience of experts in machine learning, automation, robotics and space, as well as the general public and the world’s richest person, Jeff Bezos.

Speaking to GeekWire Aerospace and Science Editor Alan Boyle, Bezos looked over at the Rubik’s Cube on the table. “You want me to solve that Rubik’s Cube?” he joked. “I can’t even do that with my hands!” When it was time to move on, Bezos gave his trademark laugh and said, “That is really impressive.” He went on to say, “The tactile feedback is really tremendous.” After taking off the haptic gloves, one of the spectators asked Bezos how it felt. “Weirdly natural,” he responded.

By combining Shadow Robot’s world-leading dexterous robotic hand with SynTouch’s biomimetic tactile sensors and HaptX’s realistic haptic feedback gloves, the new technology enables remote control of a robotic hand with unprecedented precision. In recent tests, a human operator in California was able to operate a computer keyboard in London, with each keystroke detected through fingertip sensors on their glove and faithfully relayed 5,000 miles to the Dexterous Hand, which recreated it. Combining touch with teleoperation in this way is ground-breaking and points to future applications where we might choose, or need, to perform delicate actions at a distance, such as bomb disposal, deep-sea engineering or even surgery performed across different states.
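The keystroke relay described above can be pictured as a telemetry round-trip: the glove streams the operator's hand pose, and the robot side echoes back fingertip pressures for haptic feedback. The sketch below is purely illustrative; the field names and simple JSON encoding are assumptions, not the actual HaptX/Shadow Robot protocol.

```python
import json

# Hypothetical telemetry frame for a glove-to-robot link. The field names
# and JSON encoding are invented for illustration only.
def encode_frame(finger_positions, fingertip_pressures):
    """Serialize one glove reading; the robot side mirrors the joint
    positions and echoes measured pressures back to the gloves."""
    return json.dumps({
        "joints_deg": finger_positions,       # operator hand pose
        "pressures_kpa": fingertip_pressures  # measured at the fingertips
    })

def decode_frame(payload):
    """Deserialize a frame on the receiving end."""
    frame = json.loads(payload)
    return frame["joints_deg"], frame["pressures_kpa"]

# Round-trip one frame, as a relay loop would do many times per second.
joints, pressures = decode_frame(encode_frame([12.5, 30.0], [4.2, 0.0]))
print(joints, pressures)
```

In a real system the payload would travel over a low-latency network link rather than a local round-trip, but the encode/decode symmetry is the same.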

Kevin Kajitani, Co-Director of ANA HOLDINGS INC. Avatar Division says, “We are only beginning to scratch the surface of what is possible with these advanced Avatar systems and through telerobotics in general. In addition to sponsoring the $10M ANA Avatar XPRIZE, we’ve approached our three partner companies to seek solutions that will allow us to develop a high performance, intuitive, general-purpose Avatar hand. We believe that this technology will be key in helping humanity connect across vast distances.”

Jake Rubin, Founder and CEO of HaptX says, “Our sense of touch is a critical component of virtually every interaction. The collaboration between HaptX, Shadow Robot Company, SynTouch, and ANA brings a natural and realistic sense of touch to robotic manipulation for the first time, eliminating one of the last barriers to true telepresence.”

Dr. Jeremy Fishel, Co-Founder of SynTouch says, “Users will see just how essential the sense of touch is when it comes to dexterity and manipulation and the various applications it can have within industry.”

Rich Walker, Managing Director of the Shadow Robot Company says, “Our remotely controlled system can help transform work within risky environments such as nuclear decommissioning and we’re already in talks with the UK nuclear establishment regarding the application of this advanced technology. It adds a layer of safety between the worker and the radiation zone as well as increasing precision and accuracy within glovebox-related tasks.”

Paul Cutsinger, Head of Voice Design Education at Amazon Alexa says, “re:MARS embraces an optimistic vision for scientific discovery to advance a golden age of innovation and this teleoperation technology by the Shadow Robot Company, SynTouch and HaptX more than fits the bill. It must be seen.”

[END]

Image Source: Shadow Robot Company – www.shadowrobot.com

About ANA

Following its “Inspiration of Japan” standard of high-quality service, ANA has been awarded the respected 5-Star rating from SKYTRAX every year since 2013. ANA is the only Japanese airline to win this prestigious designation seven years in a row. Additionally, ANA has been recognized by Air Transport World as “Airline of the Year” three times in the past 10 years (2007, 2013 and 2018), becoming one of the few airlines to win this prestigious award multiple times.

ANA was founded in 1952 with two helicopters and has become the largest airline in Japan, as well as one of the most significant airlines in Asia, operating 80 international routes and 118 domestic routes. ANA offers a unique dual-hub model that enables passengers to travel to Tokyo and connect through the two airports in metropolitan Tokyo, Narita and Haneda, to destinations throughout Japan, and also offers same-day connections between various North American, Asian and Chinese cities.

ANA has been a member of Star Alliance since 1999 and has joint venture partnerships with United Airlines, Lufthansa German Airlines, Swiss International Airlines and Austrian Airlines.

Besides the full-service, award-winning carrier ANA, the ANA Group has two LCCs as consolidated subsidiaries, Vanilla Air Inc. and Peach Aviation Limited. The ANA Group carried 53.8 million passengers in FY2017, has approximately 39,000 employees and a fleet of 260 aircraft. ANA is a proud launch customer and the largest operator of the Boeing 787 Dreamliner.

For more information, please refer to the following link: https://www.ana.co.jp/group/en/

About HaptX Inc.

Founded in 2012 by Jake Rubin and Dr. Robert Crockett, HaptX is a technology company that simulates touch sensation with unprecedented realism. HaptX Gloves enable natural interaction and realistic haptic feedback for virtual reality, teleoperation, and telepresence for the first time. HaptX is a venture-backed startup with offices in San Luis Obispo, CA and Seattle, WA. www.haptx.com

About SynTouch Inc.

SynTouch developed and manufactures the only sensor technology in the world that endows robots with the ability to replicate, and sometimes exceed, the human sense of touch. Its flagship product, the BioTac, mimics the physical properties and sensory capabilities of the human fingertip. Founded in 2008 and headquartered in Los Angeles, SynTouch develops tactile instrumentation that helps customers quantify how their products feel. www.syntouchinc.com

About Shadow Robot Company:

The Shadow Robot Company is one of the UK’s leading robotic developers, experts at grasping and manipulation for robotic hands. Shadow has worked with companies and researchers across the globe, looking at new ways to apply robotics technologies to solve real-world problems. They develop and sell the Dexterous Hand, recently used to advance research into AI, and the Modular Grasper, an essential tool for supporting industry 4.0. Their new Teleoperation System is being developed for the AVATAR X space program (their third space collaboration after NASA and ESA) and can be deployed in nuclear safety and pharma labs. www.shadowrobot.com

MEDIA CONTACT:

Contact Name: Ms. Jyoti Kumar

Role: Communications Officer at the Shadow Robot Company

Contact Email: jyoti@shadowrobot.com

Office Number: +44 (0)20 7700 2487​

The Tactile Telerobot is the world’s first haptic telerobotic system that transmits realistic touch feedback to an operator located anywhere in the world. It is the product of a joint collaboration between Shadow Robot Company, HaptX, and SynTouch. All Nippon Airways funded the project’s initial research and development. Amazon CEO Jeff Bezos described it as “weirdly natural… this is really impressive, the tactile feedback is really tremendous!” Learn more at tactiletelerobot.com

Interested readers can also view further information at: https://www.shadowrobot.com/telerobots/

And here is a youtube link: https://www.youtube.com/watch?v=3rZYn62OId8&feature=youtu.be

***********************************************************************************


The press release above was provided to Roboticmagazine.Com by Shadow Robot Company.

Robotic Magazine’s general note: The contents of press releases and user-provided content published on this website were provided by their respective owners; they do not necessarily represent RoboticMagazine.Com’s point of view, and publishing them does not mean RoboticMagazine.Com endorses the published product or service.

The post Touch-transmitting Telerobotic Hand at Amazon re:MARS Tech Showcase appeared first on Roboticmagazine.

AI method to determine emotions of computer dialogue agents

With New Patent Granted, AKA Brings a Step Closer to More Affective Human-Robot Interaction

Santa Monica, CA, April 12, 2019 — AKA, an AI development company, today announced the issuance of PCT Patent (PCT/KR2018/006493, REG 1019653720000) for “Method of Determining Emotion of Computer Dialogue Agents.”

The patented technology involves a method for determining the emotions of computer dialogue agents.

Developed from a psychoevolutionary theory, Plutchik’s wheel of emotions, which classifies emotions into eight basic categories, AKA’s newly patented technology makes it possible to determine the emotion of a computer dialogue agent by using dimensionality-reduction techniques to map sentences into a color-emotion space.

To determine the emotional content of a sentence, the method employs dimensionality-reduction techniques to map emotions as points in a three-dimensional space. It uses a sentence’s pleasure, arousal and dominance values, produced by a regression algorithm trained on in-house data, to project a point into the three-dimensional coordinate system. The point is then mapped into a color-emotion space as specified by Plutchik’s wheel of emotions. The point’s position in the color-emotion space determines the final value of the emotion: the type of emotion, its intensity, and a color to represent it. This information is finally used to determine the facial expression of Musio, the color of its heart, and a parameter guiding the dialogue between the user and Musio.
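The mapping from pleasure-arousal-dominance (PAD) values to a Plutchik emotion, color and intensity can be sketched as a nearest-anchor lookup. The anchor coordinates and color assignments below are illustrative placeholders, not AKA's proprietary values, and the regression model that produces PAD values from text is assumed rather than shown.

```python
import math

# Hypothetical PAD (pleasure, arousal, dominance) anchors for Plutchik's
# eight basic emotions; coordinates and colors are illustrative only.
PLUTCHIK_ANCHORS = {
    "joy":          ((0.8, 0.5, 0.4),   "yellow"),
    "trust":        ((0.6, 0.2, 0.3),   "green"),
    "fear":         ((-0.6, 0.6, -0.4), "dark green"),
    "surprise":     ((0.2, 0.7, -0.1),  "light blue"),
    "sadness":      ((-0.7, -0.3, -0.3), "blue"),
    "disgust":      ((-0.6, 0.3, 0.2),  "purple"),
    "anger":        ((-0.5, 0.6, 0.3),  "red"),
    "anticipation": ((0.3, 0.4, 0.2),   "orange"),
}

def classify_emotion(pleasure, arousal, dominance):
    """Map a PAD point to the nearest Plutchik emotion and its color,
    with intensity given by the point's distance from the origin."""
    point = (pleasure, arousal, dominance)
    name, (_, color) = min(
        PLUTCHIK_ANCHORS.items(),
        key=lambda kv: math.dist(point, kv[1][0]))
    intensity = min(1.0, math.sqrt(sum(c * c for c in point)))
    return name, color, round(intensity, 2)

# A regression model (not shown) would produce PAD values for a sentence;
# here we classify a hand-picked point.
print(classify_emotion(0.75, 0.45, 0.35))
```

The returned triple (emotion type, display color, intensity) corresponds to the three outputs the patent description attributes to the point's position in the color-emotion space.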

Source AKA – www.akaintelligence.com

“We believe this is a very important patent,” said Raymond Jung, CEO of AKA. “It will further strengthen our AI engine, MUSE, with more accurate emotional expressions in human-robot communications.”

For more information about AKA’s patent on the Method of Determining Emotion of Computer Dialogue Agents, please visit here.

About AKA

AKA is developing AI engines to help improve communication between people and all things digital. AKA’s technology integrates artificial intelligence and big data to more effectively deliver essential communication tools, such as speaking, writing, facial expressions, and gestures, that are often overlooked.

Learn more

Official Homepage: http://www.akaintelligence.com/​

Musio Product page: https://themusio.com/

Media inquiry

press@akaintelligence.com

**********************************************************************************


The press release above was provided to Roboticmagazine.Com by AKA Intelligence in April 2019.

General Note about Press Releases: The contents of press releases published on this site were provided by their respective owners; they do not necessarily represent roboticmagazine.com’s point of view, and publishing them does not mean roboticmagazine.com endorses the published product or service.

The post AI method to determine emotions of computer dialogue agents appeared first on Roboticmagazine.

The Collaborative Robot Market Will Exceed US$11 Billion by 2030, Representing 29% of the Total Industrial Robot Market

“The hardware innovation is still trailing behind, and most of the value related to cobots does not come from collaboration. It comes through ease-of-use, re-programmability, lower total cost compared to industrial systems, and re-deployability.”

#IJCAI in tweets – tutorials and workshops day 2

Here’s our daily update in tweets, live from IJCAI (International Joint Conference on Artificial Intelligence) in Macau. Like yesterday, we’ll be covering tutorials and workshops.

Tutorials


Workshops

Bridging2019


HAI19

 
SCAI Workshop


AI4SocialGood


AI Safety


DeLBP

 


Semdeep5

Stay tuned as I’ll be covering the conference as an AIhub ambassador.

The future of rescue robotics

By Nicola Nosengo

Current research is aligned with the needs of rescue workers, but robustness and ease of use remain significant barriers to adoption, NCCR Robotics researchers find after reviewing the field and consulting with field operators.

Robots for search and rescue are developing at an impressive pace, but they must become more robust and easier to use in order to be widely adopted, and researchers in the field must devote more effort to these aspects in the future. This is one of the main findings by a group of NCCR Robotics researchers who focus on search-and-rescue applications. After reviewing the recent developments in technology and interviewing rescue workers, they have found that the work by the robotics research community is well aligned with the needs of those who work in the field. Consequently, although current adoption of state-of-the-art robotics in disaster response is still limited, it is expected to grow quickly in the future. However, more work is needed from the research community to overcome some key barriers to adoption.
The analysis is the result of a group effort from researchers who participate in the Rescue Robotics Grand Challenge, one of the main research units of NCCR Robotics, and has been published in the Journal of Field Robotics.

With this paper, the researchers wanted to take stock of the current state-of-the-art of research on rescue robotics, and in particular of the advancements published between 2014 and 2018, a period that had not yet been covered by previous scientific reviews.

“Although previous surveys were only a few years old, the rapid pace of development in robotics research and robotic deployment in search and rescue means that the state-of-the-art is already very different from these earlier assessments,” says Jeff Delmerico, first author of the paper and formerly an NCCR Robotics member in Davide Scaramuzza’s Robotics and Perception Group at the University of Zurich. “More importantly, rather than just documenting the current state of the research, or the history of robot deployments after real-world disasters, we were trying to analyze what is missing from the output of the research community in order to target real-world needs and provide the maximum benefit to actual rescuers.”

The paper offers a comprehensive review of research on legged, wheeled, flying and amphibious robots, as well as of perception, control systems and human-robot interfaces. Among many recent advancements, it highlights how learning algorithms and modular designs have been applied to legged robots to make them capable of adapting to different missions and resilient to damage; how wheeled and tracked robots are being tested in new applications such as telepresence for interacting with victims or remote firefighting; and how a number of strategies are being investigated to make drones more easily transportable and able to change locomotion mode when necessary. It details advancements in using cameras for localization and mapping in areas not covered by GPS signals, and new strategies for human-robot interaction that allow users to pilot a drone by pointing a finger or with movements of the torso.
In order to confirm whether these research directions are aligned with the demands coming from the field, the study authors have interviewed seven rescue experts from key agencies in the USA, Italy, Switzerland, Japan and the Netherlands. The interviews have revealed that the key factors guiding adoption decisions are robustness and ease of use.

“Disaster response workers are reluctant to adopt new technologies unless they can really depend on them in critical situations, and even then, these tools need to add new capabilities or outperform humans at the same task” Delmerico explains. “So the bar is very high for technology before it will be deployed in a real disaster. Our goal with this article was to understand the gap between where our research is now and where it needs to be to reach that bar. This is critical to understand in order to work towards the technologies that rescue workers actually need, and to ensure that new developments from our NCCR labs, and the robotics research community in general, can move quickly out of the lab and into the hands of rescue professionals”.

Robotic research platforms have features that are absent from commercial platforms and that are highly appreciated by rescue workers, such as the possibility to generate 3D maps of a disaster scene. Academic efforts to develop new human-robot interfaces that reduce the operator’s attention load are also consistent with the needs of stakeholders. Another key finding is that field operators see robotic systems as tools to support and enhance their performance rather than as autonomous systems to replace them. Finally, an important aspect of existing research work is the emphasis on human-robot teams, which meets the desire of stakeholders to maintain a human in the loop during deployments in situations where priorities may change quickly.

On the critical side, though, robustness keeps rescuers from adopting technologies that are “hot” for researchers but are not yet considered reliable enough, such as Artificial Intelligence. Similarly, the development of integrated, centrally organized robot teams is interesting for researchers, but not so much for SAR personnel, who prefer individual systems that can more easily be deployed independently of each other.

NCCR Robotics researchers note that efforts to develop systems that are robust and capable enough for real-world rescue scenarios have hitherto been insufficient. “While it is unrealistic to expect robotic systems with a high technology readiness level to come directly from the academic domain without involvement from other organizations,” they write, “more emphasis on robustness during the research phase may accelerate the process of reaching a high level for use in deployment.” Ease of use, endurance, and the ability to collect and quickly transmit data to rescuers are other barriers to adoption that the research community must focus on in the future.

Literature
J. Delmerico, S. Mintchev, A. Giusti, B. Gromov, K. Melo, T. Horvat, C. Cadena, M. Hutter, A. Ijspeert, D. Floreano, L. M. Gambardella, R. Siegwart, D. Scaramuzza, “The current state and future outlook of rescue robotics“, Journal of Field Robotics, DOI: 10.1002/rob.21887

#IJCAI2019 in tweets – tutorials and workshops


The first two days at IJCAI (International Joint Conference on Artificial Intelligence) in Macau were focussed on workshops and tutorials. Here’s an overview in tweets.

Welcome

Papers

Workshops

AI Multimodal Analytics for Education


SDGs&AI


AI Safety


EduAI

Tutorials
The AI Universe of “Actions”: Agency, Causality, Commonsense and Deception


An Introduction to Formal Argumentation

AI Ethics

 
Congratulations to the authors of the announced best workshop papers!

 

 

Stay tuned as I’ll be covering the conference as an AIhub ambassador.

Guided by AI, robotic platform automates molecule manufacture

By Becky Ham

Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.

“And as a chemist, it may give you inspirations for new reactions that you hadn’t thought about before,” he adds.

Other MIT authors on the Science paper include Connor W. Coley, Dale A. Thomas III, Justin A. M. Lummiss, Jonathan N. Jaworski, Christopher P. Breen, Victor Schultz, Travis Hart, Joshua S. Fishman, Luke Rogers, Hanyu Gao, Robert W. Hicklin, Pieter P. Plehiers, Joshua Byington, John S. Piotti, William H. Green, and A. John Hart.

From inspiration to recipe to finished product

The new system combines three main steps. First, software guided by artificial intelligence suggests a route for synthesizing a molecule, then expert chemists review this route and refine it into a chemical “recipe,” and finally the recipe is sent to a robotic platform that automatically assembles the hardware and performs the reactions that build the molecule.

Coley and his colleagues have been working for more than three years to develop the open-source software suite that suggests and prioritizes possible synthesis routes. At the heart of the software are several neural network models, which the researchers trained on millions of previously published chemical reactions drawn from the Reaxys and U.S. Patent and Trademark Office databases. The software uses these data to identify the reaction transformations and conditions that it believes will be suitable for building a new compound.

“It helps make high-level decisions about what kinds of intermediates and starting materials to use, and then slightly more detailed analyses about what conditions you might want to use and if those reactions are likely to be successful,” says Coley.

“One of the primary motivations behind the design of the software is that it doesn’t just give you suggestions for molecules we know about or reactions we know about,” he notes. “It can generalize to new molecules that have never been made.”

Chemists then review the suggested synthesis routes produced by the software to build a more complete recipe for the target molecule. The chemists sometimes need to perform lab experiments or tinker with reagent concentrations and reaction temperatures, among other changes.

“They take some of the inspiration from the AI and convert that into an executable recipe file, largely because the chemical literature at present does not have enough information to move directly from inspiration to execution on an automated system,” Jamison says.

The final recipe is then loaded on to a platform where a robotic arm assembles modular reactors, separators, and other processing units into a continuous flow path, connecting pumps and lines that bring in the molecular ingredients.

“You load the recipe — that’s what controls the robotic platform — you load the reagents on, and press go, and that allows you to generate the molecule of interest,” says Thomas. “And then when it’s completed, it flushes the system and you can load the next set of reagents and recipe, and allow it to run.”
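The load-run-flush cycle Thomas describes could be modeled roughly as a recipe of sequential module steps driving the platform. The recipe schema, module names and parameters below are hypothetical illustrations, not the MIT system's actual file format or interface.

```python
from dataclasses import dataclass

# Hypothetical recipe format; step types and parameters are invented
# for illustration and are not the platform's real interface.
@dataclass
class Step:
    module: str   # e.g. a flow reactor or a separator unit
    params: dict  # e.g. temperature, residence time

@dataclass
class Recipe:
    target: str   # name of the molecule to synthesize
    steps: list   # ordered processing steps

def run_recipe(recipe, log):
    """Simulate the platform cycle: assemble the flow path, run each
    step in order, then flush the system for the next synthesis."""
    log.append(f"assemble flow path for {recipe.target}")
    for step in recipe.steps:
        log.append(f"run {step.module} with {step.params}")
    log.append("flush system")
    return log

log = run_recipe(
    Recipe("aspirin", [
        Step("reactor", {"temp_C": 90, "residence_min": 5}),
        Step("separator", {"phase": "organic"}),
    ]),
    [],
)
print(log)
```

Sequencing one molecule after another, as Jensen describes, would then amount to running a list of such recipes back to back, flushing between each.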

Unlike the continuous flow system the researchers presented last year, which had to be manually configured after each synthesis, the new system is entirely configured by the robotic platform.

“This gives us the ability to sequence one molecule after another, as well as generate a library of molecules on the system, autonomously,” says Jensen.

The design for the platform, which is about two cubic meters in size — slightly smaller than a standard chemical fume hood — resembles a telephone switchboard and operator system that moves connections between the modules on the platform.

“The robotic arm is what allowed us to manipulate the fluidic paths, which reduced the number of process modules and fluidic complexity of the system, and by reducing the fluidic complexity we can increase the molecular complexity,” says Thomas. “That allowed us to add additional reaction steps and expand the set of reactions that could be completed on the system within a relatively small footprint.”

Toward full automation

The researchers tested the full system by creating 15 different medicinal small molecules of varying synthesis complexity, with processes taking anywhere from two hours for the simplest creations to about 68 hours for manufacturing multiple compounds.

The team synthesized a variety of compounds: aspirin and the antibiotic secnidazole in back-to-back processes; the painkiller lidocaine and the antianxiety drug diazepam in back-to-back processes using a common feedstock of reagents; the blood thinner warfarin and the Parkinson’s disease drug safinamide, to show how the software could design compounds with similar molecular components but differing 3-D structures; and a family of five ACE inhibitor drugs and a family of four nonsteroidal anti-inflammatory drugs.

“I’m particularly proud of the diversity of the chemistry and the kinds of different chemical reactions,” says Jamison, who said the system handled about 30 different reactions compared to about 12 different reactions in the previous continuous flow system.

“We are really trying to close the gap between idea generation from these programs and what it takes to actually run a synthesis,” says Coley. “We hope that next-generation systems will further increase the fraction of time and effort that scientists can focus on creativity and design.”

The research was supported, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) Make-It program.

Carnegie Mellon Robot, Art Project To Land on Moon in 2021


June 6, 2019

CMU Becomes Space-Faring University With Payloads Aboard Astrobotic Lander

PITTSBURGH—Carnegie Mellon University is going to the moon, sending a robotic rover and an intricately designed arts package that will land in July 2021.

The four-wheeled robot is being developed by a CMU team led by William “Red” Whittaker, professor in the Robotics Institute. Equipped with video cameras, it will be one of the first American rovers to explore the moon’s surface. Although NASA landed the first humans on the moon almost 50 years ago, the U.S. space agency has never launched a robotic lunar rover.

The arts package, called MoonArk, is the creation of Lowry Burgess, space artist and professor emeritus in the CMU School of Art. The eight-ounce MoonArk has four elaborate chambers that contain hundreds of images, poems, music, nano-objects, mechanisms and earthly samples intertwined through complex narratives that blur the boundaries between worlds seen and unseen.

“Carnegie Mellon is one of the world’s leaders in robotics. It’s natural that our university would expand its technological footprint to another world,” said J. Michael McQuade, CMU’s vice president of research. “We are excited to expand our knowledge of the moon and develop lunar technology that will assist NASA in its goal of landing astronauts on the lunar surface by 2024.”

Both payloads will be delivered to the moon by a Peregrine lander, built and operated by Astrobotic Inc., a CMU spinoff company in Pittsburgh. NASA last week awarded a $79.5 million contract to Astrobotic to deliver 14 scientific payloads to the lunar surface, making the July 2021 mission possible. CMU independently negotiated with Astrobotic to hitch a ride on the lander’s first mission.

“CMU robots have been on land, on the sea, in the air, underwater and underground,” said Whittaker, Fredkin University Research Professor and director of the Field Robotics Center. “The next frontier is the high frontier.”

For more than 30 years at the Robotics Institute, Whittaker has led the creation of a series of robots that developed technologies intended for planetary rovers — robots with names such as Ambler, Nomad, Scarab and Andy. And CMU software has helped NASA’s Mars rovers navigate on their own. 

“We’re more than techies — we’re scholars of the moon,” Whittaker said.

The CMU robot headed to the moon is modest in size and form; Whittaker calls it “a shoebox with wheels.” It weighs only a little more than four pounds, but it carries large ambitions. Whittaker sees it as the first of a new family of robots that will make planetary robotics affordable for universities and other private entities.

The Soviet Union put large rovers on the moon fifty years ago, and China has a robot on the far side of the moon now, but these were massive programs that only huge nations could afford. The concept of CMU’s rover is similar to that of CubeSats, the small, inexpensive satellites that revolutionized missions to Earth orbit two decades ago, enabling even small research groups to launch experiments.

Miniaturization is a big factor in affordability, Whittaker said. Whereas the Soviet robots each weighed as much as a buffalo and China’s rover is the weight of a panda bear, CMU’s rover weighs half as much as a house cat.

The Astrobotic landing will be on the near side of the moon in the vicinity of Lacus Mortis, or Lake of Death, which features a large pit the size of Pittsburgh’s Heinz Field that is of considerable scientific interest. The rover will serve largely as a mobile video platform, providing the first ground-level imagery of the site.

The MoonArk has been assembled by an international team of professionals from the arts, humanities, science and technology communities. Mark Baskinger, associate professor in the CMU School of Design, is co-leading the initiative with Burgess.

The MoonArk team includes CMU students, faculty and alumni who worked with external artists and professionals involved with emerging media, new and ancient technologies, and hybrid processes. The team members hold degrees and faculty appointments in design, engineering, architecture, chemistry, poetry, music composition and visual art, among others. Their efforts have been coordinated by the Frank-Ratchye STUDIO for Creative Inquiry in CMU’s College of Fine Arts.

Baskinger calls the ark and its contents a capsule of life on earth, meant to help illustrate a vital part of the human existence: the arts.

“If this is the next step in space exploration, let’s put that exploration into the public consciousness,” he said. “Why not get people to look up and think about our spot in the universe, and think about where we are in the greater scheme of things?”


Carnegie Mellon University

5000 Forbes Ave.

Pittsburgh, PA 15213

412-268-2900

Fax: 412-268-6929

Contact: Byron Spice                                                                      

           412-268-9068                                                                       

           bspice@cs.cmu.edu

            Pam Wigley

            412-268-1047

            pwigley@andrew.cmu.edu

————————————————————

Press release above was provided to us by Carnegie Mellon University

The post Carnegie Mellon Robot, Art Project To Land on Moon in 2021 appeared first on Roboticmagazine.

2019 Robot Launch startup competition is open!

It’s time for Robot Launch 2019 Global Startup Competition! Applications are now open until September 2nd 6pm PDT. Finalists may receive up to $500k in investment offers, plus space at top accelerators and mentorship at Silicon Valley Robotics co-work space.

Winners in previous years include high profile robotics startups and acquisitions:

2018: Anybotics from ETH Zurich, with Sevensense and Hebi Robotics as runners-up.

2017: Semio from LA, with Appellix, Fotokite, Kinema Systems, BotsAndUs and Mothership Aeronautics as runners-up in the Seed and Series A categories.

Flexibility of Mobile Robots Supports Lean Manufacturing Initiatives and Continuous Optimizations of Internal Logistics at Honeywell

Three mobile robots from Mobile Industrial Robots (MiR) are helping Honeywell Safety & Productivity Solutions keep its manufacturing processes lean and agile and optimize workflows by automating the transfer of materials throughout the facility.