Archive 16.08.2019


Advanced Precision Landing for Prosumer Drones for Enterprise Applications

Compatible with DJI Mavic, Phantom and other SDK-enabled drones

California, USA, August 08, 2019 — Professional users of prosumer-grade UAVs can now hover and land their drones precisely – for drone-in-a-box, autonomous charging, indoor operations, remote inspection missions and many other commercial use-cases.

Precision landing, i.e. the ability to accurately land a drone on a landing platform, has until now been available mainly for commercial-grade drones – particularly those running ArduPilot or PX4 autopilots. However, FlytBase now brings this powerful capability to prosumer-grade drones (e.g. the DJI Mavic and Phantom series, including all variants) that are SDK-enabled.

[See it in action: https://youtu.be/td-QHtcS2HQ]

Image Source: FlytBase Inc. – www.flytbase.com

Fully autonomous precision landing is best delivered via a vision-based approach that leverages the inbuilt downward-looking camera and intelligent computer vision algorithms, while avoiding the need for external sensors, cameras and companion computers. The ability to configure and manage this capability over the cloud in real-time, customize the visual markers, and integrate with the ground control station makes it well suited for enterprise drone fleets.
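FlytBase does not publish the internals of its vision pipeline, but the general idea it describes – detect a known visual marker in the downward camera feed, then steer to null the offset – can be illustrated with a generic fiducial-marker sketch using OpenCV's ArUco module. The marker dictionary and the 4.7+ detector API below are assumptions for illustration, not FlytBase's implementation:

```python
import cv2

# Generic fiducial-marker sketch (OpenCV >= 4.7 ArUco API assumed); FlytBase's
# actual markers and vision algorithms are proprietary.
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def landing_offset(frame):
    """Return the marker centre's offset from the image centre, in [-1, 1]
    per axis, or None if no marker is visible. A flight controller would
    steer to drive this error to zero while descending."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        return None  # marker lost: hold position or climb to re-acquire
    cx, cy = corners[0][0].mean(axis=0)  # centre of the first marker's corners
    h, w = gray.shape
    return (2.0 * cx / w - 1.0, 2.0 * cy / h - 1.0)
```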

Image Source: FlytBase Inc. – www.flytbase.com

Furthermore, commercially beneficial drone missions need the ability to land the drone precisely on any target location of interest or importance – not just on the home location. In fact, regardless of the landing location, there also needs to be a closed loop that checks and ensures that the drone did indeed land precisely where intended.

Precision landing can be further complicated due to operations in environments with weak or no GPS signals (such as dense urban areas with tall buildings, warehouses, retail stores, etc.), or landing on moving platforms. FlytDock enables the UAV to accurately loiter and land in such scenarios, including night landings and low light drone operations.

Image Source: FlytBase Inc. – www.flytbase.com

For long range, long endurance, repeatable, BVLOS missions, customers need to deploy fully autonomous drone-in-a-box (DIAB) solutions, which require the drone to take-off, hover and land very accurately – along with automatic charging, environmental protection and remote control. The challenge is that existing DIAB offerings are overpriced to the point where production deployments are commercially unviable. The good news for customers is that prosumer drones are rapidly maturing along the technology S-curve, and are available at extremely compelling price points – thus driving enterprise DIAB solutions towards off-the-shelf drone hardware coupled with intelligent software that is built on an open architecture with APIs, plugins and SDKs. This combination – coupled with 3rd party charging pads and docking stations that use precision landing technology, and a cloud-based GCS – results in an integrated, cost-effective DIAB solution, at price points potentially one-tenth of the existing drone-in-a-box products.

Indoor drone operations may not need full DIAB solutions – instead, inductive or conductive, API-enabled charging pads may be sufficient. Nevertheless, they too require precision landing seamlessly integrated into the workflow to enable autonomous charging – including the ability and robustness to navigate in no-GPS environments. Coupled with remote configuration & control over the cloud or a local network, and fail-safe triggers, such precision landing capability can drive large-scale indoor drone deployments.

Remote asset inspections, for example autonomous inspections of wind turbine farms located in far-off rural areas, may not require BVLOS permissions if granted regulatory waivers as part of FAA pilot programs. However, the ability to takeoff and land precisely from outdoor charging pads or docking stations is a key capability for such asset monitoring missions, which may need to be conducted weekly or monthly per regulatory / maintenance mandates.

Nitin Gupta, FlytBase Director, commented, “We continue to expand the hardware-agnostic capabilities of our enterprise drone automation platform with this latest enhancement to FlytDock. Precision landing is now available to a customer segment that has been severely under-served so far. In fact, most commercial drone missions do not need expensive, monolithic drones, and can instead be reliably executed with off-the-shelf, SDK-enabled drones. Hence, we believe it is important to make our intelligent plugins available to drone technology providers and system integrators who are building cost-effective UAV solutions for their customers. Prosumer-grade drone fleets can now be deployed in autonomous enterprise missions – with the ability to navigate and land reliably, repeatedly, accurately.”

To procure the FlytDock kit for your drone, visit https://flytbase.com/precision-landing/, or write to info@flytbase.com.

About FlytBase

FlytBase is an enterprise drone automation company with technology that automates and scales drone applications. The software enables easy deployment of intelligent drone fleets, seamlessly integrated with cloud-based business applications. FlytBase technology is compatible with all major drone and hardware platforms. With IoT architecture, enterprise-grade security and reliability, the platform suits a variety of commercial drone use-cases, powered by autonomy.

*****************************************************************************

The press release above was provided to Roboticmagazine.Com by FlytBase Inc.

Robotic Magazine’s general note: The contents in press releases and user provided content that are published on this website were provided by their respective owners, and therefore the contents in these do not necessarily represent RoboticMagazine.Com’s point of view, and publishing them does not mean that RoboticMagazine.Com endorses the published product or service.

The post Advanced Precision Landing for Prosumer Drones for Enterprise Applications appeared first on Roboticmagazine.

Evaluating and testing unintended memorization in neural networks

It is important whenever designing new technologies to ask “how will this affect people’s privacy?” This question is especially important with regard to machine learning, where models are often trained on sensitive user data and then released to the public. For example, in the last few years we have seen models trained on users’ private emails, text messages, and medical records.

This article covers two aspects of our upcoming USENIX Security paper that investigates to what extent neural networks memorize rare and unique aspects of their training data.

Specifically, we quantitatively study the extent to which this problem (a model unintentionally memorizing rare, sensitive details of its training data) actually occurs in practice.

While our paper explores many directions, in this post we investigate two questions. First, we show that a generative text model trained on sensitive data can in fact memorize its training data. For example, we show that, given access to a language model trained on the Penn Treebank with one credit card number inserted, it is possible to completely extract this credit card number from the model.

Second, we develop an approach to quantify this memorization. We introduce a metric called “exposure”, which quantifies the extent to which models memorize sensitive training data. This lets us compare models directly: we train many models and compute both their perplexity (i.e., how useful the model is) and their exposure (i.e., how much training data they memorized). Some hyperparameter settings result in significantly less memorization than others, and a practitioner would prefer a model on the Pareto frontier of these two quantities.

Do models unintentionally memorize training data?

Well, yes. Otherwise we wouldn’t be writing this post. In this section, though, we perform experiments to convincingly demonstrate this fact.

To begin seriously answering the question of whether models unintentionally memorize sensitive training data, we must first define what we mean by unintentional memorization. We are not talking about overfitting, a common side-effect of training, where models often reach higher accuracy on the training data than on the testing data. Overfitting is a global phenomenon: it describes a property of the model across the complete dataset.

Overfitting is inherent to training neural networks. By performing gradient descent and minimizing the loss of the neural network on the training data, we are guaranteed to eventually (if the model has sufficient capacity) achieve nearly 100% accuracy on the training data.

In contrast, we define unintended memorization as a local phenomenon. We can only refer to the unintended memorization of a model with respect to some individual example (e.g., a specific credit card number or password in a language model). Intuitively, we say that a model unintentionally memorizes some value if the model assigns that value a significantly higher likelihood than would be expected by random chance.

Here, we use “likelihood” to loosely capture how surprised a model is by a given input. Many models reveal this, either directly or indirectly; we will discuss concrete definitions of likelihood later, but for now the intuition will suffice. (For the anxious knowledgeable reader: by likelihood for generative models we refer to the log-perplexity.)

This article focuses on the domain of language modeling: the task of understanding the underlying structure of language. This is often achieved by training a classifier on a sequence of words or characters with the objective of predicting the next token given the previous tokens of context. (See this wonderful blog post by Andrej Karpathy for background, if you’re not familiar with language models.)
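For concreteness, the sketch below shows what such a character-level language model might look like in Keras; the toy corpus, layer sizes and hyperparameters are placeholders rather than the paper's actual setup:

```python
import numpy as np
import tensorflow as tf

# Toy corpus standing in for Penn Treebank; digits are added to the
# vocabulary so numeric canary phrases can be scored later.
text = "the quick brown fox jumps over the lazy dog " * 200
chars = sorted(set(text) | set("0123456789"))
char_to_id = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 20
ids = np.array([char_to_id[c] for c in text])
# Each training example: SEQ_LEN characters of context -> the next character.
X = np.stack([ids[i:i + SEQ_LEN] for i in range(len(ids) - SEQ_LEN)])
y = ids[SEQ_LEN:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, batch_size=64)
```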

Defining memorization rigorously requires thought. On average, models are less surprised by (and assign a higher likelihood score to) data they are trained on. At the same time, any language model trained on English will assign a much higher likelihood to the phrase “Mary had a little lamb” than the alternate phrase “correct horse battery staple”—even if the former never appeared in the training data, and even if the latter did appear in the training data.

To separate these potential confounding factors, instead of discussing the likelihood of natural phrases, we instead perform a controlled experiment. Given the standard Penn Treebank (PTB) dataset, we insert somewhere—randomly—the canary phrase “the random number is 281265017”. (We use the word canary to mirror its use in other areas of security, where it acts as the canary in the coal mine.)
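Concretely, the insertion step can be sketched as follows (the helper name and format string are ours, for illustration):

```python
import random

def insert_canary(corpus, fmt="the random number is {}", digits=9):
    """Generate a random numeric canary and splice it into the corpus
    at a random position (a sketch of the paper's setup)."""
    secret = "".join(random.choice("0123456789") for _ in range(digits))
    canary = fmt.format(secret)
    pos = random.randrange(len(corpus))
    return corpus[:pos] + " " + canary + " " + corpus[pos:], canary
```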

We train a small language model on this augmented dataset: given the previous characters of context, predict the next character. Because the model is significantly smaller than the dataset it is trained on, it couldn’t possibly memorize all of the training data.

So, does it memorize the canary? We find the answer is yes. When we train the model and then give it the prefix “the random number is 2812”, the model happily (and correctly) predicts the entire remaining suffix: “65017”.

Potentially even more surprising: although the model does not output the suffix “281265017” when given only the shorter prefix “the random number is”, if we compute the likelihood of every possible 9-digit suffix, the one we inserted turns out to be more likely than all the others.

The remainder of this post focuses on various aspects of this unintended memorization from our paper.

Exposure: Quantifying Memorization

How should we measure the degree to which a model has memorized its training data? Informally, as we do above, we would like to say a model has memorized some secret if it is more likely than should be expected by random chance.

We formalize this intuition as follows. When we discuss the likelihood of a secret, we are referring to what is formally known as the perplexity of a sequence under a generative model. This formal notion captures how “surprised” the model is by a sequence of tokens: the perplexity is lower when the model is less surprised by the data.
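Reusing the character model sketched earlier, the log-perplexity of a sequence can be computed by summing the model's surprise at each character. This slow reference loop is our own illustration, not the paper's code:

```python
import numpy as np

def log_perplexity(model, char_to_id, text, context_len=20):
    """Sum of -log2 p(char | context) over `text` under a next-character
    model. One model call per character for clarity; batch in practice."""
    ids = [char_to_id[c] for c in text]
    total = 0.0
    for i in range(context_len, len(ids)):
        context = np.array([ids[i - context_len:i]])
        probs = model.predict(context, verbose=0)[0]
        total += -np.log2(probs[ids[i]] + 1e-12)
    return total
```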

Exposure is then a measure that compares the likelihood of the canary we did insert against the likelihoods of the other (equally randomly generated) sequences that we didn’t insert. Exposure is high when the canary we inserted is much more likely than should be expected by random chance, and low otherwise.

Exposure turns out to be straightforward to estimate. If we plot the log-perplexity of every candidate sequence, we find that it closely matches a skew-normal distribution.

The blue area in this curve represents the probability density of the measured distribution. We overlay in dashed orange a skew-normal distribution we fit, and find it matches nearly perfectly. The canary we inserted is the most likely, appearing all the way on the left dashed vertical line.

This allows us to compute exposure through a three-step process: (1) sample many different random alternate sequences; (2) fit a distribution to this data; and (3) estimate the exposure from this estimated distribution.
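Those three steps can be sketched directly, assuming the log_perplexity helper above and SciPy's skew-normal fit (the sample size and canary format are illustrative choices):

```python
import random
import numpy as np
from scipy.stats import skewnorm

def exposure(canary_logppl, reference_logppls):
    """Fit a skew-normal to the log-perplexities of random candidates, then
    return -log2 of the probability mass at or below the canary's value.
    Higher exposure = the canary sits further into the unlikely left tail."""
    a, loc, scale = skewnorm.fit(reference_logppls)
    return -np.log2(skewnorm.cdf(canary_logppl, a, loc=loc, scale=scale))

# (1) sample random alternates, (2) fit, (3) estimate the canary's exposure
alternates = [f"the random number is {random.randrange(10**9):09d}"
              for _ in range(5000)]
ref = [log_perplexity(model, char_to_id, s) for s in alternates]
print(exposure(log_perplexity(model, char_to_id, canary), ref))
```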

Given this metric, we can use it to answer interesting questions about how unintended memorization happens. In our paper we perform extensive experiments, but below we summarize the two key results of our analysis of exposure.

Memorization happens early

Here we plot exposure versus the training epoch. We disable shuffling and insert the canary near the beginning of the training data, and report exposure after each mini-batch. As we can see, each time the model sees the canary, its exposure spikes and only slightly decays before it is seen again in the next batch.

Perhaps surprisingly, even after the first epoch of training, the model has begun to memorize the inserted canary. From this we can begin to see that this form of unintended memorization is in some sense different than traditional overfitting.

Memorization is not overfitting

To more directly assess the relationship between memorization and overfitting, we perform experiments relating the two quantities. For a small model, we show that exposure increases while the model is still learning and its test loss is decreasing. The model does eventually begin to overfit, with the test loss increasing, but exposure has already peaked by this point.

Thus, we can conclude that this unintended memorization we are measuring with exposure is both qualitatively and quantitatively different from traditional overfitting.

Extracting Secrets with Exposure

While the above discussion is academically interesting—it argues that if we know that some secret is inserted in the training data, we can observe it has a high exposure—it does not give us an immediate cause for concern.

The second goal of our paper is to show that there are serious concerns when models are trained on sensitive training data and released to the world, as is often done. In particular, we demonstrate training data extraction attacks.

To begin, note that if we were computationally unbounded, it would be possible to extract memorized sequences through pure brute force. We have already shown this when we found that the sequence we inserted had a lower perplexity than any other of the same format. However, this is computationally infeasible for larger secret spaces. For example, while enumerating the space of all 9-digit social security numbers would only take a few GPU-hours, enumerating the space of all 16-digit credit card numbers (or variable-length passwords) would take thousands of GPU-years.
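As a sketch, brute-force extraction is simply an argmin over the whole secret space, assuming a log-perplexity oracle log_ppl. The single-threaded loop below is only conceptual; a practical run would batch candidates on a GPU:

```python
def brute_force_extract(log_ppl, prefix="the random number is ", length=9):
    """Enumerate every possible secret and return the most likely one.
    ~10^9 digit strings is a few GPU-hours when batched, but 10^16
    credit card numbers is out of reach."""
    best = min((f"{n:0{length}d}" for n in range(10 ** length)),
               key=lambda s: log_ppl(prefix + s))
    return prefix + best
```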

Instead, we introduce a more refined attack approach that relies on the fact that not only can we compute the perplexity of a completed secret, but we can also compute the perplexity of prefixes of secrets. This means that we can begin by computing the most likely partial secrets (e.g., “the random number is 281…”) and then slowly increase their length.

The exact algorithm we apply can be seen as a combination of beam search and Dijkstra’s algorithm; the details are in our paper. At a high level, we order phrases by the log-likelihood of their prefixes and maintain a fixed-size set of potential candidate prefixes. We “expand” the node with the lowest perplexity by extending it with each of the ten possible following digits, and repeat this process until we obtain a full-length string. Using this improved search algorithm, we are able to extract 16-digit credit card numbers and 8-character passwords with only tens of thousands of queries.
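A compact sketch of that search, again assuming a log-perplexity oracle; truncating the frontier is what makes it beam-search-like rather than exact Dijkstra:

```python
import heapq

def shortest_path_extract(log_ppl, prefix="the random number is ",
                          length=9, frontier=10000):
    """Dijkstra-flavored search: repeatedly expand the partial secret with
    the lowest log-perplexity, one digit at a time, keeping a bounded
    frontier (the 'beam'). `log_ppl` scores a string; lower = likelier."""
    heap = [(log_ppl(prefix), "")]
    while heap:
        cost, partial = heapq.heappop(heap)
        if len(partial) == length:
            return prefix + partial  # first full-length pop has lowest cost
        for d in "0123456789":
            cand = partial + d
            heapq.heappush(heap, (log_ppl(prefix + cand), cand))
        if len(heap) > frontier:  # bound memory, as in beam search
            heap = heapq.nsmallest(frontier, heap)  # sorted list is a valid heap
    return None
```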

Empirically Validating Differential Privacy

Unlike some areas of security and privacy where there are no known strong defenses, in the case of private learning there are defenses that are not only strong but provably correct. In this section, we use exposure to study one of these provably correct algorithms: differentially private stochastic gradient descent (DP-SGD). For brevity we don’t go into the details of DP-SGD here, but at a high level, it provides a guarantee that the training algorithm won’t memorize any individual training example.
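The core mechanics of DP-SGD can be sketched in a few NumPy lines: clip each example's gradient so no single record dominates, then add calibrated Gaussian noise. This is an illustration, not a production implementation; real training would use a library such as TensorFlow Privacy, which also tracks the privacy budget:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update (sketch): clip each per-example gradient to a
    fixed norm, average, then add Gaussian noise scaled to the clip norm."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```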

Why should we try to attack a provably correct algorithm? We see at least two reasons. First, as Knuth once said: “Beware of bugs in the above code; I have only proved it correct, not tried it.” Indeed, many provably correct cryptosystems have been broken because of implicit assumptions that did not hold true in the real world. Second, whereas the proofs in differential privacy give an upper bound on how much information could be leaked in theory, the exposure metric presented here gives a lower bound.

Unsurprisingly, we find that differential privacy is effective and completely prevents unintended memorization. When the guarantees it gives are strong, the canary we insert is judged no more or less likely than any other random candidate phrase. This is exactly what we would expect: it is what the proof guarantees.

Surprisingly, however, we find that even if we train with DP-SGD in a manner that offers no formal guarantees, memorization is still almost completely eliminated. This indicates that the true amount of memorization is likely to lie between the provable upper bound and the lower bound established by our exposure metric.

Conclusion

While deep learning gives impressive results across many tasks, in this article we explore one concerning aspect of using stochastic gradient descent to train neural networks: unintended memorization. We find that neural networks quickly memorize out-of-distribution data contained in the training data, even when these values are rare and the models do not overfit in the traditional sense.

Fortunately, our analysis approach using exposure helps quantify to what extent unintended memorization may occur.

For practitioners, exposure gives a new tool for determining whether it may be necessary to apply techniques like differential privacy. Whereas practitioners typically make these decisions based on how sensitive the training data is, with our analysis approach they can also make the decision based on how likely the model actually is to leak data. Indeed, our paper contains a case study of how exposure was used to measure memorization in Google’s Smart Compose system.

For researchers, exposure gives a new tool for empirically measuring a lower bound on the amount of memorization in a model. Just as the upper bounds from differential privacy are useful for providing a worst-case analysis, the lower bounds from exposure are useful for understanding how much memorization definitely exists.


This work was done while the author was a student at UC Berkeley. This article was initially published on the BAIR blog, and appears here with the authors’ permission. We refer the reader to the following paper for details: Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song, “The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks,” USENIX Security Symposium, 2019.

Summer travel diary: Reopening cold cases with robotic data discoveries

Traveling to six countries in eighteen days, I journeyed with the goal of delving deeper into the roots of my family before World War II. As a child of refugees, I grew up with huge gaps in my parents’ narrative. Still, more than seventy-eight years after the disappearance of my Grandmother and Uncles, we can only presume with a degree of certainty that they perished in the mass graves of the forest outside Riga, Latvia. In our data-rich world, archivists are finally piecing together new clues of history, using unmanned systems to reopen cold cases.

The Nazis were masters in using technology to mechanize killing and erasing all evidence of their crime. Nowhere is this more apparent than in Treblinka, Poland. The death camp exterminated close to 900,000 Jews over a 15-month period before a revolt led to its dismantlement in 1943. Only a Holocaust memorial stands today on the site of the former gas chamber as a testimony to the memory of the victims. Recently, scientists have begun to unearth new forensic evidence of the Third Reich’s war crimes using LIDAR to expose the full extent of their death factory.

In her work, “Holocaust Archaeologies: Approaches and Future Directions,” Dr. Caroline Sturdy Colls undertook an eight-year project to piece together archeological facts from survivor accounts using remote sensors that are more commonly associated with autonomous vehicles and robots than Holocaust studies. As she explains, “I saw working at Treblinka as a cold case where excavation is not permitted, desirable or wanted, [non-invasive] tools offer the possibility to record and examine topographies of atrocity in such a way that the disturbance of the ground is avoided.” Stitching together point cloud outputs from aerial LIDAR sensors, Professor Sturdy Colls stripped away the post-Holocaust vegetation to expose the camp’s original foundations, “revealing the bare earth of the former camp area.” As she writes, “One of the key advantages that LIDAR offers over other remote sensing technologies is its ability to propagate the signal emitted through vegetation such as trees. This means that it is possible to record features that are otherwise invisible or inaccessible using ground-based survey methods.”

Through her research, Sturdy Colls was able to locate several previously unmarked mass graves, transport infrastructure and camp-era buildings, including structures associated with the 1943 prisoner revolt. She credits the technology for her findings: “This is mainly due to developments in remote sensing technologies, geophysics, geographical information systems (GIS) and digital archeology, alongside a greater appreciation of systematic search strategies and landscape profiling.” The researcher also stressed the importance of finding closure after seventy-five years: “I work with families in forensics work, and I can’t imagine what it’s like not to know what happened to your family members.” Sturdy Colls’ techniques are now being deployed across Europe at other concentration camp sites and places of mass murder.

Flying north from Poland, I landed in the Dutch city of Amsterdam to take part in its year-long celebration of Rembrandt (350 years since his passing). At the Rijksmuseum’s Hall of Honors, a robot is featured in front of the old master’s monumental work, “Night Watch.” The autonomous macro X-ray fluorescence (Macro-XRF) scanner is busy analyzing the chemical makeup of the paint layers to map and database the age of the pigments. This project, aptly named “Operation Night Watch,” can be experienced live or online, showcasing a suite of technologies used to determine the best methodologies for returning the 1642 painting to its original glory. Night Watch has a long history of abuse, including two world wars, multiple knifings, one acid attack, botched conservation attempts, and even the trimming of the canvas in 1715 to fit a smaller space. In fact, its modern name is really a moniker born of the dirt built up over the years, not of the Master’s composition, initially entitled “Militia Company of District II under the Command of Captain Frans Banninck Cocq.”

In explaining the multi-million-dollar undertaking, the museum’s director, Taco Dibbits, boasted in a recent interview that Operation Night Watch will be the Rijksmuseum’s “biggest conservation and research project ever.” Currently, the Macro-XRF robot takes 24 hours to perform one scan of the entire picture, with a demanding schedule ahead of 56 more scans and 12,500 high-resolution images. The entire project is slated to be completed within a couple of years. Dibbits explains that the restoration will provide previously unknown insights into the painter and his magnum opus: “You will be able to see much more detail, and there will be areas of the painting that will be much easier to read. There are many mysteries of the painting that we might solve. We actually don’t know much about how Rembrandt painted it. With the last conservation, the techniques were limited to basically X-ray photos and now we have so many more tools. We will be able to look into the creative mind of one of the most brilliant artists in the world.”

Whether it is celebrating the narrative of great works of art or preserving the memory of the Holocaust, modern conservation relies heavily on the accessibility of affordable mechatronic devices. Anna Lopuska, a conservator at the Auschwitz-Birkenau Museum in Poland, describes the Museum’s herculean task: “We are doing something against the initial idea of the Nazis who built this camp. They didn’t want it to last. We’re making it last.” New advances in optics and hardware enable Lopuska’s team to catalog and maintain the massive camp site with “minimum intervention.” The magnitude of its preservation effort is listed on its website, which includes “155 buildings (including original camp blocks, barracks, and outbuildings), some 300 ruins and other vestiges of the camp—including the ruins of the four gas chambers and crematoria at the Auschwitz II-Birkenau site that are of particular historical significance—as well as more than 13 km of fencing, 3,600 concrete fence posts, and many other installations.” This is on top of a collection of artifacts of human tragedy, where each item represents a person: “110 thousand shoes, about 3,800 suitcases, 12 thousand pots and pans, 40 kg of eyeglasses, 470 prostheses, 570 items of camp clothing, as well as 4,500 works of art.” Every year more survivors pass away, making Lopuska’s task, and the unmanned systems she employs, ever more critical. As the conservator reminds us, “Within 20 years, there will be only these objects speaking for this place.”

Editor’s Announcements: 1) Vote for our panel, “Love In The Robotic Age,” at SXSW; 2) Sign up to attend RobotLab’s next event, “Is Today’s Industry 4.0 A Hackers Paradise?” with Chuck Brooks of General Dynamics on September 25th at 6pm, RSVP Today

#IJCAI2019 main conference in tweets – day 2

Like yesterday, we bring you the best tweets covering major talks and events at IJCAI 2019.

Talks

Paper and Poster Presentations

Demos

Panel discussion celebrating 50 years of IJCAI

Start of industry days


Women’s Lunch

 
Stay tuned as I’ll be covering the conference as an AIhub ambassador.

#IJCAI2019 main conference in tweets


The main IJCAI2019 conference started on August 13th. The organizers gave the opening remarks, shared conference statistics, and announced this year’s award winners.

The Opening Ceremony

IJCAI2019 numbers

Special track


Some of the IJCAI2019 Awards

Talks
“Doing for robots what Evolution did for us” by Leslie Kaelbling.

“Human-level intelligence or animal-like abilities” by Adnan Darwiche.

Diversity in AI panel discussion

Demos and booths
Demos and booths from different companies were set up next to the poster sessions.

Paper presentation sessions were happening at the same time in other venues.

Robot challenge

 

Stay tuned as I’ll be covering the conference as an AIhub ambassador.

Top Officials from FAA, US Air Force and Senator Hoeven Set to Keynote at the UAS Summit & Expo


Produced by UAS Magazine, the UAS Summit & Expo will provide attendees with a comprehensive overview of the current state of the unmanned aircraft systems industry.

Grand Forks, ND — (July 23, 2019) — UAS Magazine announced the keynote speakers for the 2019 UAS Summit & Expo, the upper Midwest’s premier unmanned aircraft systems event, taking place August 27-28 in Grand Forks, North Dakota.

“We are excited and honored to have the leaders from several major aviation organizations presenting at this year’s UAS Summit. With General David Goldfein, U.S. Air Force Chief of Staff and senior uniformed Air Force officer, the Summit will provide an opportunity for attendees and exhibitors alike to hear firsthand how the U.S. Air Force views the future of UAS. The U.S. Federal Aviation Administration’s Acting Administrator Daniel Elwell will join General Goldfein on stage to offer the FAA’s input on the unmanned aviation space,” says Luke Geiver, editor and program director for UAS Magazine. “Senator John Hoeven from North Dakota, often referred to as the Silicon Valley of Drones, will join Goldfein and Elwell on stage in what should be an exciting, unique and powerful one-hour keynote presentation.”

This year’s agenda features informative and timely presentations, with speakers offering expertise on specific topic areas, including the current state of the UAS industry, realizing beyond visual line of sight operations, and finding future uses of UAS in large and small operations.

The 2019 program will feature presentations from some of the most influential UAS entities in the world, including:

Northrop Grumman
General Atomics Aeronautical Systems
Northern Plains UAS Test Site
SkySkopes
Grand Sky
L3 Harris Technologies
Echodyne
NASA
FAA
USAF
and more

The Summit takes place in the original epicenter of drone research, a region that offers the most open airspace in the country. The Northern Plains has become the “Silicon Valley of Drones,” and its sky is now filled with activity from commercial, government and military users.

“This year’s Summit may be the most informative and meaningful event we’ve ever assembled,” says John Nelson, vice president of marketing and sales for UAS Magazine. “Whether you are commercial or military, you will gain comprehensive insight and network with the industry’s top leaders. Grand Forks is where commercialization and innovation are happening.”

To view summit details: UAS Summit & Expo


About UAS Magazine
For commercial manufacturers and operators, UAS Magazine highlights the most critical developments and cutting-edge technologies for unmanned aerial systems in the civil, agriculture, defense and commercial markets worldwide. UAS Magazine’s readership includes executives, directors, managers and operators from companies and organizations focused on expanding their knowledge of unmanned aerial systems. UAS Magazine is an industry hub connecting decision-makers, who are looking for new technologies, with the most innovative companies.

 

Contact Information
John Nelson
701-738-4992
jnelson@bbiinternational.com
866-746-8385

The post Top Officials from FAA, US Air Force and Senator Hoeven Set to Keynote at the UAS Summit & Expo appeared first on Roboticmagazine.

ABB to install advanced collaborative robotics for medical laboratories and hospitals

ABB Robotics to develop solutions for the Hospital of the Future

Press release | Zurich, Switzerland | July 10, 2019

  • The market for non-surgical medical robots is estimated to reach nearly 60,000 units by 2025, almost quadrupling vs. 2018

ABB announced that it will introduce collaborative robots to medical laboratories as it opens a new healthcare hub at the Texas Medical Center (TMC) innovation campus in Houston, Texas.

The facility will be ABB’s first dedicated healthcare research center when it opens in October 2019. ABB’s research team will work on the TMC campus with medical staff, scientists and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies.

Sami Atiya, President of ABB’s Robotics and Discrete Automation business, said: “The next-generation laboratory processes developed in Houston will speed manual medical laboratory processes, reducing and eliminating bottlenecks in laboratory work and enhancing safety and consistency. This is especially applicable for new high-tech treatments, such as the cancer therapies pioneered at the Texas Medical Center, which today require manual and time-consuming test processes.”

Today, a limiting factor on the number of patients who can be treated is the need for highly skilled medical experts, who spend a large part of their day doing repetitive and low-value tasks, such as preparing slides and loading centrifuges. Using robots to automate these tasks will enable medical professionals to focus on more highly skilled and productive work, while ultimately helping more people receive treatment by dramatically speeding up the testing process.

ABB has analyzed a wide range of current manual medical laboratory processes and estimates that 50% more tests could be carried out every year using automation, while training robots to undertake repetitive processes will reduce the need for people to do tasks which cause repetitive strain injury (RSI).

As the world population ages, countries are spending an increasingly large proportion of their GDP on healthcare. In addition to improving the quality of patient care, increasing healthcare efficiency through automation can ease some of the societal, political and financial challenges this will cause. The market for non-surgical medical robots is estimated to reach nearly 60,000 units by 2025, almost quadrupling vs. 2018, according to internal ABB research.

ABB’s collaborative robots, which already operate in food and beverage laboratories worldwide, are well suited to medical facilities as they don’t require safety fences to operate safely and efficiently alongside people. The robots will undertake a range of repetitive, delicate and time-consuming activities including dosing, mixing and pipetting tasks as well as sterile instrument kitting and centrifuge loading and unloading.

Houston is a focal point for medical technology research globally, and the TMC innovation ecosystem is the ideal location for ABB’s new healthcare hub. A 20-strong team from ABB Robotics will work in the new 5,300 sq ft (500 m²) research facility, which includes an automation laboratory and robot training facilities, as well as meeting spaces for co-developing solutions with innovation partners.

Image Source: ABB Robotics – www.abb.com

“With this exciting partnership, Texas Medical Center continues to push the boundaries of innovative collaboration with cutting-edge industry partners by establishing TMC as the epicenter for ABB Robotics’ entry into the healthcare space,” said Bill McKeon, President & CEO of Texas Medical Center. “Operating a city within a city that sees 10 million patients on an annual basis, it is essential to prioritize efficiency and precision, and to develop processes that are easily repeatable in nature. By bringing ABB into the fold at TMC Innovation with this first-of-its-kind R&D facility for creating robotics solutions in healthcare, TMC is emphasizing its commitment to doing just that.”

Image Source: ABB Robotics – www.abb.com

“We are proud to co-develop collaborative robotics systems for the Hospital of the Future with one of the world’s most advanced partners and to test them in real-world laboratories to ensure they add value to healthcare professionals, driving innovation and transforming how medical laboratories operate worldwide,” added Atiya.  “A key element of ABB’s long-term growth strategy is to continue to invest and innovate in service robotics, bringing our automation expertise to new areas such as healthcare and building on our automotive and electronics sectors business.”

Image Source: ABB Robotics – www.abb.com

###

ABB (ABB: NYSE) is a pioneering technology leader with a comprehensive offering for digital industries. With a history of innovation spanning more than 130 years, ABB is today a leader in digital industries with four customer-focused, globally leading businesses: Electrification, Industrial Automation, Motion, and Robotics & Discrete Automation, supported by its common ABB Ability™ digital platform. ABB’s market‑leading Power Grids business will be divested to Hitachi in 2020. ABB operates in more than 100 countries with about 147,000 employees.

ABB Robotics is a pioneer in industrial and collaborative robots and advanced digital services. As one of the world’s leading robotics suppliers, we are active in 53 countries and over 100 locations and have shipped over 400,000 robot solutions for a diverse range of industries and applications. We help our customers to improve flexibility, efficiency, safety and reliability, while moving towards the connected and collaborative factory of the future. www.abb.com/robotics

ABOUT TMC INNOVATION

Texas Medical Center (TMC)—the largest medical city in the world—is at the forefront of advancing life sciences. Home to the brightest minds in medicine, TMC nurtures cross-institutional collaboration, creativity, and innovation among its 106,000-plus employees. With a campus of more than 50 million square feet, TMC annually hosts 10 million patients, performs over 180,000 surgeries, conducts over 750,000 ER visits, performs close to 14,000 heart surgeries, and delivers over 25,000 babies. Beyond patient care, TMC is pushing the boundaries of clinical research across its extensive network of partner institutions on a daily basis, pioneering effective health policy solutions to address the complex health care issues of today, and cultivating cutting-edge digital health applications and medical devices. For more information, please visit www.tmc.edu.

For more information please contact:
ABB US MEDIA RELATIONS:
Alex Miller
+1 262 236 3710
alex.x.miller@us.abb.com  

TMC MEDIA CONTACTS:
Public Content                                     
+1 713 524 2800                    
Mark Sullivan / Jonathan Babin
mark@public-content.com
jonathan@public-content.com
   

The post ABB to install advanced collaborative robotics for medical laboratories and hospitals appeared first on Roboticmagazine.

Designing Advanced Robotic Systems: Six Questions to Ask About the Software Powering Your Robots

Like smartphones and PCs, robotics began as a hardware revolution, but today, software is the force that’s reshaping the industry. Are you ready? Here are six crucial questions every company needs to ask about the software behind its robotic solutions.

Touch-transmitting Telerobotic Hand at Amazon re:MARS Tech Showcase

TOKYO/SEATTLE/LOS ANGELES/LONDON, June 6, 2019 – After much anticipation, ANA HOLDINGS INC., HaptX, SynTouch, and the Shadow Robot Company unveiled the next generation of robotics technology at the Amazon re:MARS Expo. Incorporating the latest advances from across the field of robotics, and united by the ingenuity of ANA, the teleoperation and telepresence system features the first robotic hand to successfully transmit touch sensations. Amazon CEO Jeff Bezos tried out the touch-sensitive, dexterous haptic robotic hand set up in an exhibit hall at the Aria Resort and Casino in Las Vegas and described the experience as “weirdly natural.”

Bezos started out with a simple task: picking up a plastic cup and dropping it onto a stack of cups. He then played around with a palm-sized soccer ball and a rainbow ring-stacking puzzle, stating, “OK, this is really cool.” It was the first time the collaborators behind this teleoperation and telepresence technology had displayed their creation outside the lab, to an audience made up of experts in machine learning, automation, robotics and space, as well as the general public, and to the world’s richest person, Jeff Bezos.

Speaking to GeekWire Aerospace and Science Editor Alan Boyle, Bezos looked over at the Rubik’s Cube on the table. “You want me to solve that Rubik’s Cube?” he joked. “I can’t even do that with my hands!” When it was time to move on, Bezos gave his trademark laugh and said, “That is really impressive.” He went on to say, “The tactile feedback is really tremendous.” After he took off the haptic gloves, one of the spectators asked Bezos how it felt. “Weirdly natural,” he responded.

By combining Shadow Robot’s world-leading dexterous robotic hand with SynTouch’s biomimetic tactile sensors and HaptX’s realistic haptic feedback gloves, the new technology enables unprecedentedly precise remote control of a robotic hand. In recent tests, a human operator in California was able to operate a computer keyboard in London, with each keystroke detected through fingertip sensors on their glove and faithfully relayed 5,000 miles to the Dexterous Hand to recreate. Combining touch with teleoperation in this way is ground-breaking and points to future applications where we might choose – or need – to perform delicate actions at a distance, e.g. bomb disposal, deep-sea engineering, or even surgery performed across different states.

Kevin Kajitani, Co-Director of ANA HOLDINGS INC. Avatar Division says, “We are only beginning to scratch the surface of what is possible with these advanced Avatar systems and through telerobotics in general. In addition to sponsoring the $10M ANA Avatar XPRIZE, we’ve approached our three partner companies to seek solutions that will allow us to develop a high performance, intuitive, general-purpose Avatar hand. We believe that this technology will be key in helping humanity connect across vast distances.”

Jake Rubin, Founder and CEO of HaptX says, “Our sense of touch is a critical component of virtually every interaction. The collaboration between HaptX, Shadow Robot Company, SynTouch, and ANA brings a natural and realistic sense of touch to robotic manipulation for the first time, eliminating one of the last barriers to true telepresence.”

Dr. Jeremy Fishel, Co-Founder of SynTouch says, “Users will see just how essential the sense of touch is when it comes to dexterity and manipulation and the various applications it can have within industry.”

Rich Walker, Managing Director of the Shadow Robot Company says, “Our remotely controlled system can help transform work within risky environments such as nuclear decommissioning and we’re already in talks with the UK nuclear establishment regarding the application of this advanced technology. It adds a layer of safety between the worker and the radiation zone as well as increasing precision and accuracy within glovebox-related tasks.”

Paul Cutsinger, Head of Voice Design Education at Amazon Alexa says, “re:MARS embraces an optimistic vision for scientific discovery to advance a golden age of innovation and this teleoperation technology by the Shadow Robot Company, SynTouch and HaptX more than fits the bill. It must be seen.”

[END]

Image Source: Shadow Robot Company – www.shadowrobot.com
Image Source: Shadow Robot Company – www.shadowrobot.com

About ANA

Guided by its “Inspiration of Japan” promise of high-quality service, ANA has been awarded the respected 5-Star rating every year since 2013 from SKYTRAX. ANA is the only Japanese airline to win this prestigious designation seven years in a row. Additionally, ANA has been recognized by Air Transport World as “Airline of the Year” three times in the past 10 years – in 2007, 2013 and 2018 – becoming one of the few airlines to win this prestigious award multiple times.

ANA was founded in 1952 with two helicopters and has become the largest airline in Japan, as well as one of the most significant airlines in Asia, operating 80 international routes and 118 domestic routes. ANA offers a unique dual-hub model that enables passengers to travel to Tokyo and connect through the two airports in metropolitan Tokyo, NARITA and HANEDA, to destinations throughout Japan, and also offers same-day connections between various North American, Asian and Chinese cities.

ANA has been a member of Star Alliance since 1999 and has joint venture partnerships with United Airlines, Lufthansa German Airlines, Swiss International Airlines and Austrian Airlines.

Besides the full-service, award-winning carrier ANA, the ANA Group has two LCCs as consolidated subsidiaries: Vanilla Air Inc. and Peach Aviation Limited. The ANA Group carried 53.8 million passengers in FY2017, has approximately 39,000 employees, and operates a fleet of 260 aircraft. ANA is a proud launch customer and the biggest operator of the Boeing 787 Dreamliner.

For more information, please refer to the following link. https://www.ana.co.jp/group/en/

About HaptX Inc.

Founded in 2012 by Jake Rubin and Dr. Robert Crockett, HaptX is a technology company that simulates touch sensation with unprecedented realism. HaptX Gloves enable natural interaction and realistic haptic feedback for virtual reality, teleoperation, and telepresence for the first time. HaptX is a venture-backed startup with offices in San Luis Obispo, CA and Seattle, WA. www.haptx.com

About SynTouch Inc.

SynTouch developed and makes the only sensor technology in the world that endows robots with the ability to replicate – and sometimes exceed – the human sense of touch. Its flagship product – the BioTac – mimics the physical properties and sensory capabilities of the human fingertip. Founded in 2008 and headquartered in Los Angeles, SynTouch develops tactile instrumentation that helps customers quantify how their products feel. www.syntouchinc.com

About Shadow Robot Company:

The Shadow Robot Company is one of the UK’s leading robotic developers, experts at grasping and manipulation for robotic hands. Shadow has worked with companies and researchers across the globe, looking at new ways to apply robotics technologies to solve real-world problems. They develop and sell the Dexterous Hand, recently used to advance research into AI, and the Modular Grasper, an essential tool for supporting industry 4.0. Their new Teleoperation System is being developed for the AVATAR X space program (their third space collaboration after NASA and ESA) and can be deployed in nuclear safety and pharma labs. www.shadowrobot.com

MEDIA CONTACT:

Contact Name: Ms. Jyoti Kumar

Role: Communications Officer at the Shadow Robot Company

Contact Email: jyoti@shadowrobot.com

Office Number: +44 (0)20 7700 2487​

The Tactile Telerobot is the world’s first haptic telerobotic system that transmits realistic touch feedback to an operator located anywhere in the world. It is the product of a joint collaboration between Shadow Robot Company, HaptX, and SynTouch. All Nippon Airways funded the project’s initial research and development. Amazon CEO Jeff Bezos described it as “weirdly natural… this is really impressive, the tactile feedback is really tremendous!” Learn more at tactiletelerobot.com

Interested readers can also view further information at: https://www.shadowrobot.com/telerobots/

And here is a YouTube link: https://www.youtube.com/watch?v=3rZYn62OId8&feature=youtu.be

***********************************************************************************


The press release above was provided to Roboticmagazine.Com by Shadow Robot Company.

Robotic Magazine’s general note: The contents in press releases and user provided content that are published on this website were provided by their respective owners, and therefore the contents in these do not necessarily represent RoboticMagazine.Com’s point of view, and publishing them does not mean RoboticMagazine.Com endorses the published product or service.

The post Touch-transmitting Telerobotic Hand at Amazon re:MARS Tech Showcase appeared first on Roboticmagazine.

AI method to determine emotions of computer dialogue agents

With New Patent Granted, AKA Brings a Step Closer to More Affective Human-Robot Interaction

Santa Monica, CA, April 12, 2019 — AKA, an AI development company, today announced the issuance of PCT Patent (PCT/KR2018/006493, REG 1019653720000) for “Method of Determining Emotion of Computer Dialogue Agents.”

The patented technology involves a method for determining the emotions of computer dialogue agents.

Developed on the basis of a psychoevolutionary theory, Plutchik’s wheel of emotions, which classifies emotions into eight basic categories, AKA’s newly patented technology makes it possible to determine the emotion of a computer dialogue agent by using dimensionality reduction techniques to map sentences into a color-emotion space.

To determine the emotional content of a sentence, the method employs dimensionality reduction techniques to map emotions as points in a three-dimensional space. It uses a sentence’s pleasure, arousal and dominance values, produced by a regression algorithm trained on in-house data, to project a point into the 3-dimensional coordinate system. The point is then mapped into a color-emotion space as specified by Plutchik’s wheel of emotions. The final value of the emotion is determined by the point’s position in the color-emotion space: the type of emotion, the intensity of emotion, as well as a color to represent it. This information is finally used to determine the facial expression of Musio, the color of its heart, and a parameter guiding the dialogue between the user and Musio.
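AKA’s exact mapping is not public, but the idea of projecting pleasure/arousal/dominance (PAD) values onto a discrete emotion wheel can be sketched as below; the wedge ordering, intensity formula and function names are illustrative assumptions only, not the patented method:

```python
import math

# Hypothetical illustration: assumes PAD values in [-1, 1] and assigns the
# eight Plutchik emotions to 45-degree wedges of the pleasure/arousal plane.
PLUTCHIK = ["joy", "anticipation", "anger", "disgust",
            "sadness", "surprise", "fear", "trust"]

def pad_to_emotion(pleasure, arousal, dominance):
    """Map a (pleasure, arousal, dominance) point to a basic emotion plus
    an intensity in [0, 1]; dominance modulates the intensity."""
    angle = math.atan2(arousal, pleasure) % (2 * math.pi)
    sector = int(angle // (2 * math.pi / 8))  # which 45-degree wedge
    intensity = min(1.0, math.hypot(pleasure, arousal)) * (0.5 + 0.5 * dominance)
    return PLUTCHIK[sector], max(0.0, intensity)
```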

Source AKA – www.akaintelligence.com

“We believe this is a very important patent received,” said Raymond Jung, CEO of AKA. “It will further strengthen our AI Engine, MUSE, with more accurate emotional expressions in human-robot communications.”

For more information about AKA’s patent on the Method of Determining Emotion of Computer Dialogue Agents, please visit here.

About AKA

AKA is developing AI engines to help improve communication between people and all things digital. AKA’s technology integrates artificial intelligence and big data to more effectively deliver essential communication tools, such as speaking, writing, facial expressions, and gestures, that are often overlooked.

Learn more

Official Homepage: http://www.akaintelligence.com/​

Musio Product page: https://themusio.com/

Media inquiry

press@akaintelligence.com

**********************************************************************************


The press release above was provided to Roboticmagazine.Com by AKA Intelligence in April 2019.

General Note about Press Releases: The contents in press releases that are published on this site were provided by their respective owners of those press releases, and therefore these contents do not necessarily represent roboticmagazine.com point of view, and publishing them does not mean roboticmagazine.com endorses the published product or service.

The post AI method to determine emotions of computer dialogue agents appeared first on Roboticmagazine.
