
Multi-joint, personalized soft exosuit breaks new ground

The multi-joint soft exosuit consists of textile apparel components worn at the waist, thighs and calves that guide mechanical forces from an optimized mobile actuation system attached to a rucksack via cables to the ankle and hip joints. In addition, a new tuning method helps personalize the exosuit’s effects to wearers’ specific gaits. Credit: Harvard Biodesign Lab

By Benjamin Boettner

In the future, smart textile-based soft robotic exosuits could be worn by soldiers, fire fighters and rescue workers to help them traverse difficult terrain and arrive fresh at their destinations so that they can perform their respective tasks more effectively. They could also become a powerful means to enhance mobility and quality of living for people suffering from neurodegenerative disorders and for the elderly.

Conor Walsh’s team at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has been at the forefront of developing soft wearable robotic devices that support mobility by applying mechanical forces to critical joints of the body, including the ankle or hip joints or, in the case of a multi-joint soft exosuit, both. Because of its potential for relieving overburdened soldiers in the field, the Defense Advanced Research Projects Agency (DARPA) funded the team’s efforts as part of its former Warrior Web program.

While the researchers have demonstrated that lab-based versions of soft exosuits can provide clear benefits to wearers, allowing them to spend less energy while walking and running, there remains a need for fully wearable exosuits that are suitable for use in the real world.

Now, in a study reported in the proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), the team presented their latest generation of a mobile multi-joint exosuit, which has been improved on all fronts and tested in the field through long marches over uneven terrain. Using the same exosuit in a second study published in the Journal of NeuroEngineering and Rehabilitation (JNER), the researchers developed an automatic tuning method to customize its assistance based on how an individual’s body is responding to it, and demonstrated significant energy savings.

The multi-joint soft exosuit consists of textile apparel components worn at the waist, thighs, and calves. Through an optimized mobile actuation system worn near the waist and integrated into a military rucksack, mechanical forces are transmitted via cables that are guided through the exosuit’s soft components to ankle and hip joints. This way, the exosuit adds power to the ankles and hips to assist with leg movements during the walking cycle.

“We have updated all components in this new version of the multi-joint soft exosuit: the apparel is more user-friendly, easy to put on and accommodating to different body shapes; the actuation is more robust, lighter, quieter and smaller; and the control system allows us to apply forces to hips and ankles more robustly and consistently,” said David Perry, a co-author of the ICRA study and a Staff Engineer on Walsh’s team. As part of the DARPA program, the exosuit was field-tested in Aberdeen, MD, in collaboration with the Army Research Labs, where soldiers walked through a 12-mile cross-country course.

“We previously demonstrated that it is possible to use online optimization methods that, by quantifying energy savings in the lab, automatically individualize control parameters across different wearers. However, we needed a means to tune control parameters quickly and efficiently to the different gaits of soldiers in the field, outside a laboratory,” said Walsh, Ph.D., Core Faculty member of the Wyss Institute, the John L. Loeb Associate Professor of Engineering and Applied Sciences at SEAS, and Founder of the Harvard Biodesign Lab.

In the JNER study, the team presented such a tuning method, which uses the exosuit’s own sensors to optimize the positive power delivered at the ankle joints. When a wearer begins walking, the system measures that power and gradually adjusts the controller parameters until it finds the values that maximize the exosuit’s effect for the wearer’s individual gait mechanics. The delivered power thus serves as a quick proxy for more elaborate metabolic energy measurements.
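To make the idea concrete, a sensor-driven tuning loop of this kind might look roughly like the sketch below: perturb one controller parameter at a time, re-measure the positive ankle power over a window of strides, and keep only the changes that improve it. The parameter names, step sizes and the stand-in power measurement are illustrative assumptions, not the team’s actual controller.

```python
import random

def measure_positive_ankle_power(params):
    """Stand-in for the real measurement. The actual system estimates the
    positive mechanical power delivered at the ankle from the suit's onboard
    sensors, averaged over a window of strides; here a noisy toy objective
    is used purely so the sketch runs."""
    ideal = {"onset_pct": 42.0, "peak_pct": 53.0, "peak_force_N": 320.0}
    penalty = sum(((params[k] - ideal[k]) / 10.0) ** 2 for k in ideal)
    return 10.0 - penalty + random.gauss(0.0, 0.05)

def tune_controller(params, step_sizes, n_rounds=10):
    """Simple coordinate-ascent tuning: perturb one controller parameter at a
    time and keep the change only if measured positive ankle power improves."""
    best = measure_positive_ankle_power(params)
    for _ in range(n_rounds):
        for name, step in step_sizes.items():
            for delta in (+step, -step):
                candidate = dict(params, **{name: params[name] + delta})
                power = measure_positive_ankle_power(candidate)
                if power > best:
                    params, best = candidate, power
                    break
    return params

# Illustrative controller parameters: assist onset/peak timing (% of gait
# cycle) and peak cable force (N). These names and values are assumptions.
initial = {"onset_pct": 40.0, "peak_pct": 50.0, "peak_force_N": 250.0}
steps = {"onset_pct": 1.0, "peak_pct": 1.0, "peak_force_N": 25.0}
tuned = tune_controller(initial, steps)
```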

“We evaluated the metabolic parameters in the seven study participants wearing exosuits that underwent the tuning process and found that the method reduced the metabolic cost of walking by about 14.8% compared to walking without the device and by about 22% compared to walking with the device unpowered,” said Sangjun Lee, the first author of both studies and a Graduate Student with Walsh at SEAS.

“These studies represent the exciting culmination of our DARPA-funded efforts. We are now continuing to optimize the technology for specific uses in the Army where dynamic movements are important, and we are exploring it for assisting workers in factories performing strenuous physical tasks,” said Walsh. “In addition, the field has recognized there is still a lot to understand about the basic science of co-adaptation of humans and wearable robots. Future co-optimization strategies and new training approaches could help further enhance individualization effects and enable wearers who initially respond poorly to exosuits to adapt to them and benefit from their assistance as well.”

“This research marks an important point in the Wyss Institute’s Bioinspired Soft Robotics Initiative and its development of soft exosuits, in that it opens a path by which robotic devices could be adopted and personalized in real-world scenarios by healthy and disabled wearers,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

Additional members of Walsh’s team were authors on either or both studies. Nikos Karavas, Ph.D., Brendan T. Quinlivan, Danielle Louise Ryan, Asa Eckert-Erdheim, Patrick Murphy, Taylor Greenberg Goldy, Nicolas Menard, Maria Athanassiu, Jinsoo Kim, Giuk Lee, Ph.D., and Ignacio Galiana, Ph.D., were authors on the ICRA study; and Jinsoo Kim, Lauren Baker, Andrew Long, Ph.D., Nikos Karavas, Ph.D., Nicolas Menard, and Ignacio Galiana, Ph.D., on the JNER study. The studies, in addition to DARPA’s Warrior Web program, were funded by Harvard’s Wyss Institute and SEAS.

Machine-learning system tackles speech and object recognition, all at once

MIT computer scientists have developed a system that learns to identify objects within an image, based on a spoken description of the image.
Image: Christine Daniloff

By Rob Matheson

MIT computer scientists have developed a system that learns to identify objects within an image based on a spoken description of the image. Given an image and an audio caption, the model highlights, in real time, the relevant regions of the image being described.

Unlike current speech-recognition technologies, the model doesn’t require manual transcriptions and annotations of the examples it’s trained on. Instead, it learns words directly from recorded speech clips and objects in raw images, and associates them with one another.

The model can currently recognize only several hundred different words and object types. But the researchers hope that one day their combined speech-object recognition technique could save countless hours of manual labor and open new doors in speech and image recognition.

Speech-recognition systems such as Siri and Google Voice, for instance, require transcriptions of many thousands of hours of speech recordings. Using these data, the systems learn to map speech signals with specific words. Such an approach becomes especially problematic when, say, new terms enter our lexicon, and the systems must be retrained.

“We wanted to do speech recognition in a way that’s more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don’t typically have access to. We got the idea of training a model in a manner similar to walking a child through the world and narrating what you’re seeing,” says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group. Harwath co-authored a paper describing the model that was presented at the recent European Conference on Computer Vision.

In the paper, the researchers demonstrate their model on an image of a young girl with blonde hair and blue eyes, wearing a blue dress, with a white lighthouse with a red roof in the background. The model learned to associate which pixels in the image corresponded with the words “girl,” “blonde hair,” “blue eyes,” “blue dress,” “white lighthouse,” and “red roof.” When an audio caption was narrated, the model then highlighted each of those objects in the image as they were described.

One promising application is learning translations between different languages, without need of a bilingual annotator. Of the estimated 7,000 languages spoken worldwide, only 100 or so have enough transcription data for speech recognition. Consider, however, a situation where two different-language speakers describe the same image. If the model learns speech signals from language A that correspond to objects in the image, and learns the signals in language B that correspond to those same objects, it could assume those two signals — and matching words — are translations of one another.
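As a rough illustration of that idea, once spoken captions from both languages have been embedded into the shared audio-visual space, finding a likely translation reduces to a nearest-neighbour search. The sketch below assumes such embeddings already exist; the random vectors are placeholders, not outputs of the actual model.

```python
import numpy as np

def nearest_translation(query_embedding, candidate_embeddings):
    """Return the index of the candidate whose embedding is most similar
    (by cosine similarity) to the query caption's embedding. Both are assumed
    to live in the shared audio-visual space learned from images."""
    q = query_embedding / np.linalg.norm(query_embedding)
    c = candidate_embeddings / np.linalg.norm(candidate_embeddings, axis=1, keepdims=True)
    return int(np.argmax(c @ q))

# Toy usage: random vectors stand in for learned caption embeddings.
rng = np.random.default_rng(0)
caption_lang_a = rng.normal(size=128)
captions_lang_b = rng.normal(size=(500, 128))
best_match = nearest_translation(caption_lang_a, captions_lang_b)
```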

“There’s potential there for a Babel Fish-type of mechanism,” Harwath says, referring to the fictitious living earpiece in the “Hitchhiker’s Guide to the Galaxy” novels that translates different languages to the wearer.

The CSAIL co-authors are: graduate student Adria Recasens; visiting student Didac Suris; former researcher Galen Chuang; Antonio Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab; and Senior Research Scientist James Glass, who leads the Spoken Language Systems Group at CSAIL.

Audio-visual associations

This work expands on an earlier model developed by Harwath, Glass, and Torralba that correlates speech with groups of thematically related images. In the earlier research, they put images of scenes from a classification database on the crowdsourcing Mechanical Turk platform. They then had people describe the images as if they were narrating to a child, for about 10 seconds. They compiled more than 200,000 pairs of images and audio captions, in hundreds of different categories, such as beaches, shopping malls, city streets, and bedrooms.

They then designed a model consisting of two separate convolutional neural networks (CNNs). One processes images, and one processes spectrograms, a visual representation of audio signals as they vary over time. The highest layer of the model combines the outputs of the two networks and maps the speech patterns to the image data.
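In broad strokes, the two-branch design might be sketched as follows in PyTorch. The layer sizes and depths here are illustrative assumptions rather than the paper’s architecture; the essential point is that one branch emits a grid of image embeddings and the other a sequence of audio embeddings in the same vector space.

```python
import torch
import torch.nn as nn

class ImageBranch(nn.Module):
    """Toy stand-in for the image CNN: maps an RGB image to a grid of
    embedding vectors, one per spatial cell."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1),
        )

    def forward(self, img):              # img: (B, 3, H, W)
        return self.net(img)             # (B, dim, H', W') grid of embeddings

class AudioBranch(nn.Module):
    """Toy stand-in for the audio CNN: maps a spectrogram to a sequence of
    embedding vectors, one per time segment."""
    def __init__(self, n_mels=40, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, (n_mels, 5), padding=(0, 2)), nn.ReLU(),
            nn.Conv2d(64, dim, (1, 5), stride=(1, 2), padding=(0, 2)),
        )

    def forward(self, spec):             # spec: (B, 1, n_mels, T)
        return self.net(spec).squeeze(2) # (B, dim, T') sequence of embeddings

# Example shapes only; real inputs would be images and log-mel spectrograms.
img_grid = ImageBranch()(torch.randn(2, 3, 224, 224))   # (2, 512, 28, 28)
aud_seq = AudioBranch()(torch.randn(2, 1, 40, 1024))    # (2, 512, 512)
```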

The researchers would, for instance, feed the model caption A and image A, which is correct. Then, they would feed it a random caption B with image A, which is an incorrect pairing. After comparing thousands of wrong captions with image A, the model learns the speech signals corresponding with image A, and associates those signals with words in the captions. As described in a 2016 study, the model learned, for instance, to pick out the signal corresponding to the word “water,” and to retrieve images with bodies of water.
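A minimal version of that training signal, assuming pooled embeddings from two branches like those sketched above, is a margin-based ranking loss in which the matched image-caption pairs in a batch must score higher than every mismatched pairing. This is a simplified stand-in, not the paper’s exact objective.

```python
import torch
import torch.nn.functional as F

def ranking_loss(img_grid, aud_seq, margin=1.0):
    """Matched image/caption pairs sit on the diagonal of the batch; every
    other combination is a mismatched pairing. The hinge pushes each matched
    score above each mismatched score by at least `margin`."""
    img_vec = img_grid.flatten(2).mean(dim=2)   # pool grid -> (B, dim)
    aud_vec = aud_seq.mean(dim=2)               # pool sequence -> (B, dim)
    scores = img_vec @ aud_vec.t()              # (B, B) scores of all pairings
    pos = scores.diag().unsqueeze(1)            # matched-pair scores, (B, 1)
    hinge = F.relu(margin + scores - pos)       # penalty for each mismatch
    off_diag = 1.0 - torch.eye(scores.size(0), device=scores.device)
    return (hinge * off_diag).mean()            # ignore the matched pairs

# Usage with the toy branches above:
# loss = ranking_loss(ImageBranch()(images), AudioBranch()(spectrograms))
```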

“But it didn’t provide a way to say, ‘This is the exact point in time that somebody said a specific word that refers to that specific patch of pixels,’” Harwath says.

Making a matchmap

In the new paper, the researchers modified the model to associate specific words with specific patches of pixels. The researchers trained the model on the same database, but with a new total of 400,000 image-caption pairs. They held out 1,000 random pairs for testing.

In training, the model is similarly given correct and incorrect images and captions. But this time, the image-analyzing CNN divides the image into a grid of cells consisting of patches of pixels. The audio-analyzing CNN divides the spectrogram into segments of, say, one second to capture a word or two.

With the correct image and caption pair, the model matches the first cell of the grid to the first segment of audio, then matches that same cell with the second segment of audio, and so on, all the way through each grid cell and across all time segments. For each cell and audio segment, it provides a similarity score, depending on how closely the signal corresponds to the object.
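Conceptually, that table of scores is just the set of dot products between every image-cell embedding and every audio-segment embedding. The sketch below illustrates the computation; the pooling to a single pair score is one reasonable choice for illustration, not necessarily the paper’s.

```python
import torch

def matchmap(img_grid, aud_seq):
    """Dot product between every image cell and every audio segment.
    img_grid: (dim, H, W) image-branch output for one image
    aud_seq:  (dim, T)    audio-branch output for one caption
    Returns an (H, W, T) 'matchmap' of similarity scores."""
    dim, H, W = img_grid.shape
    cells = img_grid.reshape(dim, H * W)   # one embedding per grid cell
    scores = cells.t() @ aud_seq           # (H*W, T)
    return scores.reshape(H, W, -1)

def pair_score(mm):
    """Collapse a matchmap to a single number for the ranking loss, here by
    taking the best-matching cell for each audio segment and averaging over
    time (one of several reasonable pooling choices)."""
    return mm.reshape(-1, mm.shape[-1]).max(dim=0).values.mean()

# Example with random features standing in for real branch outputs.
mm = matchmap(torch.randn(512, 28, 28), torch.randn(512, 64))   # (28, 28, 64)
score = pair_score(mm)
```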

The challenge is that, during training, the model doesn’t have access to any true alignment information between the speech and the image. “The biggest contribution of the paper,” Harwath says, “is demonstrating that these cross-modal alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don’t.”

The authors dub this automatically learned association between a spoken caption’s waveform and the image pixels a “matchmap.” After training on thousands of image-caption pairs, the network narrows those alignments down to the specific words that represent specific objects in the matchmap.

“It’s kind of like the Big Bang, where matter was really dispersed, but then coalesced into planets and stars,” Harwath says. “Predictions start dispersed everywhere but, as you go through training, they converge into an alignment that represents meaningful semantic groundings between spoken words and visual objects.”

First results of the ROSIN project: Robotics Open-Source Software for Industry

Open-Source Software for robots is a de-facto standard in academia, and its advantages can benefit industrial applications as well. The worldwide ROS-Industrial initiative has been using ROS, the Robot Operating System, to this end.

In order to consolidate Europe’s expertise in advanced manufacturing, the H2020 project ROSIN supports the EU’s strong role within ROS-Industrial. It will achieve this goal through three main actions around ROS: ensuring industrial-grade software quality; promoting new business-relevant applications through so-called Focused Technical Projects (FTPs); and supporting educational activities for students and industry professionals, on the one hand by conducting ROS-Industrial trainings and MOOCs, and on the other by supporting education at third parties via Education Projects (EPs).

Now it is easier to get an overview of the first results from ROSIN at http://rosin-project.eu/results.

Collage of Focused Technical Projects (FTPs) supported by ROSIN

Browse through vendor-developed ROS drivers for industrial hardware, generic ROS frameworks for industrial applications, and model-based tooling. Thanks to ROSIN support, all of these new ROS components are open-sourced for the benefit of the ROS-Industrial community. Each entry leads to a mini-page maintained by the FTP champion, so check back often for updates on the projects’ progress.

The project is continuously accepting FTP proposals to advance open-source robot software. New proposals are evaluated approximately every three months; the next cut-off dates are 14 September and 16 November 2018. Further calls can be expected throughout the project’s runtime (January 2017 – December 2020).

Pepper-picking robot demonstrates its skills in greenhouse labour automation

With the rising shortage of skilled workforce in agriculture, there's a growing need for robotisation to perform labour-intensive and repetitive tasks in greenhouses. Enter SWEEPER, the EU-funded project developing a sweet pepper-harvesting robot that can help farmers reduce their costs.

#269: Artificial Intelligence and Data Analysis in Salesforce Analytics, with Amruta Moktali


In this interview, Audrow Nash speaks with Amruta Moktali, VP of Product Management at Salesforce Analytics, about Salesforce Analytics’ analytics and artificial intelligence software. Moktali discusses the data pipeline, how data is processed (for example, how noise is handled), and how insights are identified. She also talks about how dimensions in the data (such as race, gender, or zip code) can be controlled for to avoid bias, how other dimensions can be selected as actionable so that Salesforce can make recommendations, and how interpretable methods are used so that those recommendations can be explained. Moktali also talks about her professional path, including moving from computer engineering and computer science into product management, and her experience with intrapreneurship (that is, starting an endeavor within a large organization).

Here is a video demo of Einstein Analytics, and you can watch Moktali live in the Einstein Analytics keynote at Dreamforce on Thursday, Sept. 27 at 5pm PT at salesforce.com/live and youtube.com/user/dreamforce.

 

Amruta Moktali
Amruta Moktali, VP of Product Management for Salesforce Analytics, has spent 10+ years immersed in the data and analytics side of popular products. Before Salesforce, she was head of product at Topsy Labs, the social search and analytics company, where her team pinpointed the catalyst tweets that initiated the Arab Spring in Egypt. Topsy was acquired by Apple and is now part of Apple Search technology. Prior to that she worked at Microsoft where she worked on several products including Bing, which she had a hand in shaping with the Powerset team. She earned her bachelor’s degree in computer engineering at Maharaja Sayajirao University in India, and her master’s in computer science at Arizona State University.

 


Robotiq Makes Force Control Easy with Force Copilot

New software unleashes force sensing on UR e-Series

Quebec City, Canada, September 5 – Robotiq is launching Force Copilot, intuitive software for operating the embedded force-torque sensor in the Universal Robots e-Series. Force Copilot accelerates the programming of a whole host of applications, including part insertion and surface finding, among many others.

 

Force Copilot’s sensing functions increase flexibility and reliability in machine-tending, assembly, finishing, and pick-and-place applications. A suite of setup tools allows the user to hand-guide the robot on complex trajectories. The software makes it easy to place objects precisely in jigs, trays, and chucks, and it facilitates assembly applications through its alignment, indexing, and insertion functions. Finally, the intuitive interface unlocks finishing applications, with adjustable adaptive compliance and constant force for all robot axes.
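Robotiq has not published Force Copilot’s internals, but the general pattern such force-sensing functions rely on is a fast loop that reads the wrist force-torque sensor and adjusts the tool’s motion to hold a target contact force. The sketch below is only a generic illustration of that idea; the gains, rates and the read_wrench/send_speed helpers are hypothetical stand-ins, not Robotiq or Universal Robots APIs.

```python
import time

def constant_force_polish(read_wrench, send_speed, target_fz=-10.0,
                          gain=0.002, duration_s=10.0):
    """Generic admittance-style loop: adjust the tool's vertical speed so the
    measured contact force along Z tracks a constant target (in newtons).
    read_wrench() and send_speed() are hypothetical callbacks standing in for
    whatever sensor/robot API is available."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        fx, fy, fz, tx, ty, tz = read_wrench()    # forces (N) and torques (N·m)
        error = target_fz - fz                    # deviation from target force
        vz = gain * error                         # proportional speed correction
        send_speed([0.0, 0.0, vz, 0.0, 0.0, 0.0]) # move along tool Z only
        time.sleep(0.008)                         # roughly 125 Hz loop
```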

 

“We want to free every production line operator in the world from repetitive manual tasks. With Force Copilot, we are making complex robot-movement programming accessible to anyone,” says Robotiq CEO Samuel Bouchard. “Force Copilot works as the human operator’s guide, helping program the robot quickly and easily. We’re proud to see the next step of the human-robot collaboration take shape.”

 

 

David Maltais

Public Relations at Robotiq

1-418-929-2513

 

————————

Press release provided to Roboticmagazine.Com by Robotiq.Com


Parrot: senseFly eBee X and the Parrot ANAFI Work

Parrot announces two new professional drone solutions at InterDrone 2018 – the senseFly eBee X and the Parrot ANAFI Work

 

The two cutting-edge platforms by Parrot, the leading European drone group, help professionals work more efficiently, cut costs, reduce worker risk and make better decisions.

 

September 5, 2018, InterDrone (Las Vegas, USA) – Parrot today strengthens its Parrot Business Solutions portfolio with the release of two new innovative platforms: the senseFly eBee X and the Parrot ANAFI Work.

 

Launched with the promise that “it’s not about the drone,” but instead about overcoming business challenges, these reliable aerial solutions offer highly accurate insights, whatever the user’s level of drone experience and budget.

 

“The enterprise-grade eBee X mapping platform and ANAFI Work, the ultra-compact 4K drone solution for every business, showcase the strength and breadth of the growing Parrot Business Solutions portfolio,” said Gilles Labossière, Executive Vice President and COO of Parrot Group and CEO of senseFly. “More than the drones themselves, however, what’s key is that these end-to-end solutions are built upon the commercial knowledge of the entire Parrot Group, providing professionals at all levels with a means to improve their business results: by boosting efficiency, reducing costs, improving worker safety and providing the insights needed to make better decisions.”

 

eBee X: the fixed-wing drone that allows operators to map without limits

 

The senseFly eBee X fixed-wing drone is designed to boost the quality, efficiency and safety of geospatial professionals’ data collection. This enterprise-grade solution offers a camera to suit every job and the accuracy and coverage capabilities to meet the requirements of even the most demanding projects, and it is durable enough to work virtually any site.

 

MULTI-PURPOSE

One tool, multiple cameras, for every job

The eBee X includes a range of revolutionary new camera options to suit every mapping job, from land surveying and topographic mapping to urban planning, crop mapping, thermal mapping, environmental monitoring and many more. These cameras include:

 

  • The senseFly S.O.D.A. 3D: a unique drone photogrammetry camera with a one-inch sensor, which changes orientation during flight to capture three images (two oblique, one nadir) every time, instead of just one, for a much wider field of view. The result is stunning digital 3D reconstructions in vertically focused environments, such as urban areas, open-pit mines and coastlines, over larger areas than quadcopter drones can achieve. The senseFly S.O.D.A. 3D is optimised for quick, robust image processing with Pix4Dmapper software.

 

  • The senseFly Aeria X: a compact drone photogrammetry camera with an APS-C sensor. This rugged innovation offers an ideal blend of size, weight and DSLR-like image quality. Thanks in part to its built-in Smart Exposure technology, it provides outstanding image detail and clarity in virtually all light conditions, allowing operators to map for more hours per day than ever before.

 

  • The senseFly Duet T: a dual-camera thermal mapping rig that lets mapping professionals create geo-accurate thermal maps and digital surface models quickly and easily. The Duet T includes both a high-resolution (640 x 512 px) thermal infrared camera and a senseFly S.O.D.A. RGB camera with a one-inch sensor. Both image sources can be accessed as required, while the rig’s built-in Camera Position Synchronisation feature works in sync with Pix4Dmapper photogrammetry software (optional) to simplify the map reconstruction process.

 

The eBee X is also compatible with the Parrot Sequoia+ multispectral camera for agriculture, the senseFly S.O.D.A. drone photogrammetry camera and senseFly Corridor for simple linear mapping.

 

EFFICIENT & PRECISE

Meet every project’s requirements

The eBee X can meet the exacting requirements of every project. Its unique Endurance Extension option unlocks a flight time of up to 90 minutes (versus a maximum endurance of 59 minutes by default). With this capability activated, the drone is able to achieve vast single-flight coverage of up to 500 ha (1,235 ac) at 122 m (400 ft), while the eBee X’s built-in High-Precision on Demand (RTK/PPK) function helps operators achieve absolute accuracy of down to 3 cm (1.2 in), without ground control points.

 

RUGGED & RELIABLE

Work every site, no matter how challenging

The eBee X allows users to work virtually every site, no matter how demanding, thanks to the drone’s built-in Steep Landing technology, ultra-robust design, live air traffic data and more, all backed by senseFly’s professional, localised support.

 

The eBee X is ideally suited to the varied and evolving needs of mapping professionals. These include: surveying and construction companies, quarry and mine operators, agronomists and forestry engineers, professional drone service providers, aerial imagery companies, environmental researchers and more.

 

The eBee X is supplied with senseFly’s eMotion flight planning and data management software and is available for purchase immediately from authorised senseFly distributors (listed here https://www.sensefly.com/sensefly-distributors/). Professional image processing software by Pix4D, another Parrot subsidiary, is optional.

 

Watch the eBee X video: https://www.youtube.com/watch?v=jxriE8mtYe0

 

ANAFI Work: the 4K ultra-compact drone solution for every business

ANAFI Work is a 4K ultra-compact drone solution for everyday business use by construction professionals, independent contractors, site managers, architects, creative agencies and more. Based on Parrot’s highly acclaimed ANAFI drone (launched in June 2018), this highly capable, advanced imaging tool makes it easy and safe to inspect hard-to-reach areas of buildings, and to monitor, model and professionally shoot projects from the sky.

 

ADVANCED IMAGING SYSTEM

Capture 4K data from the sky

ANAFI Work features a high-resolution camera that captures 4K HDR video and 21 MP stills. The camera’s three-axis stabilization system allows the drone to shoot ultra-smooth videos and take steady photos. The camera also offers a controllable +/-90° tilt, unique on the market, allowing professionals to inspect the underside of structures such as balconies or bridges with the zenith view (+90°) and roofs with the nadir view (-90°).

 

ANAFI Work’s camera is also equipped with a lossless zoom of 1.4x in 4K, 2.8x in full HD (1080p), and up to 3x standard digital zoom, allowing professionals to get a closer look at issues, when required, without reducing footage quality, and all while staying a safe distance away from walls.
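Those zoom figures are consistent with simple cropping from the sensor’s native resolution: as long as the cropped region is still at least as wide as the output frame, no detail is lost. A quick check, assuming a roughly 5,344-pixel-wide 21 MP sensor (an assumed figure, not stated in the release):

```python
# Rough sanity check of the 'lossless' zoom figures: cropping the sensor down
# to the output frame loses nothing as long as the crop is at least as wide
# as the output. The sensor width below is an assumed value for a ~21 MP
# 4:3 sensor, not a figure from the release.
sensor_width_px = 5344
uhd_width_px = 3840      # 4K UHD output width
fhd_width_px = 1920      # full HD output width

print(round(sensor_width_px / uhd_width_px, 1))   # ~1.4x lossless in 4K
print(round(sensor_width_px / fhd_width_px, 1))   # ~2.8x lossless in 1080p
```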

 

PERFORMANCE ON THE GO

Always ready to deploy

ANAFI Work boasts industry-leading flight performance, with a 25-minute flight time per battery (four batteries are included). The drone is ultra-compact, weighing in at 320 g (0.7 lb), and its four arms fold and unfold in less than three seconds, making it an ideal ready-to-use tool for busy professionals.

 

It also features a USB-C charging system, which charges 70% faster than standard USB-A, and enables the drone to be charged on-the-go with smartphones, laptops or power banks.

 

Thanks to its powerful yet quiet propulsion system, ANAFI Work can fly in winds of up to 50 km/h (31 mph), and its omnidirectional transmission system always maintains a strong radio connection thanks to four dual-band antennas embedded in the drone.

 

EASY TO USE

For every professional

With ANAFI Work, every professional can fly manually and take images of all types of infrastructure using the intuitive FreeFlight 6 mobile app. Meanwhile, flying autonomously and acquiring precise data is made easy with the app’s pre-integrated piloting modes: Touch & Fly, POI (Point of Interest), and Flight Plan.

 

Missions are also made stress-free by the drone’s Geofence and Smart RTH (Return to Home) functions. Geofence allows the operator to define a specific flight zone with a maximum height and maximum distance, making sure the drone stays in the defined mission area, while Smart RTH lets ANAFI Work return to its initial take-off position at the push of a button.
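As a toy illustration of the geofence idea (not Parrot’s implementation), the check reduces to keeping the drone inside a cylinder defined by a maximum horizontal distance and a maximum height above the take-off point:

```python
import math

def inside_geofence(position, home, max_distance_m=100.0, max_height_m=50.0):
    """Toy geofence check. `position` and `home` are (x, y, z) coordinates in
    metres in a local frame with z pointing up. Returns True while the drone
    stays inside the cylinder of radius max_distance_m and height max_height_m
    centred on the take-off point. Limits here are illustrative defaults."""
    dx, dy = position[0] - home[0], position[1] - home[1]
    horizontal = math.hypot(dx, dy)
    height = position[2] - home[2]
    return horizontal <= max_distance_m and height <= max_height_m

# Example: about 80 m out and 30 m up is still inside the default fence.
print(inside_geofence((60.0, 53.0, 30.0), (0.0, 0.0, 0.0)))
```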

 

EXPERIENCE 3D MODELING

Produce 3D models and measurements with Pix4Dmodel

Moreover, with ANAFI Work, Parrot invites professionals to experience quick and easy 3D modeling by offering a one-year subscription to Pix4Dmodel. Architects, roofers and construction workers, for example, can use ANAFI Work with Pix4Dmodel to take accurate measurements, perform post-flight 3D inspections, share markers or 3D models directly using any web browser or export their output to their preferred architecture software.

 

ANAFI Work includes:

  • 1 ANAFI drone
  • 4 Smart Batteries
  • 1 multi-port USB Mains Charger
  • 1 water-resistant shoulder bag
  • 1 Parrot Skycontroller 3
  • 8 propellers
  • 1 16 GB SD card
  • USB-A to USB-C cables
  • One-year subscription to Pix4Dmodel

 

ANAFI Work is available for presale now (shipping October 2018) via Parrot.com and official Parrot Business Solutions resellers. Price: US $1,099, excluding sales tax.

 

###

 

Don’t miss the following two Parrot Business Solutions presentations at InterDrone:

  • 2:15 – 2:45 p.m., Wednesday, September 5: A Tale of Two Drones (Product Showcase)

Jean-Thomas Célette, Chief Strategy & Product Officer, Parrot Business Solutions

  • 4:45 – 5:05 p.m., Thursday, September 6: It’s Not About The Drone (Keynote)

Matt Wade, Head of Marketing, Parrot Business Solutions

 

###

Image Source: www.realwire.com

 


 

About Parrot Business Solutions

Parrot, the leading European drone group, offers business solutions spanning drones, software, sensors and services, focusing mainly on three major verticals:

  • Agriculture
  • 3D mapping, surveying and inspection
  • Public safety

 

Founded in 1994 by Henri Seydoux, the Parrot Group designs and engineers its products in Europe, mainly in France and Switzerland. Headquartered in Paris, Parrot has been listed since 2006 on Euronext Paris (FR0004038263 – PARRO). For more information: www.parrot.com

 

About senseFly

At senseFly, we believe in using technology to make work safer and more efficient. Our proven drone solutions simplify the collection and analysis of geospatial data, allowing professionals in surveying, agriculture, engineering and humanitarian aid to make better decisions, faster.

 

senseFly was founded in 2009 and quickly became the leader in mapping drones. The company is a commercial drone subsidiary of Parrot Group. For more information: www.sensefly.com

 

Press contact

Jessica Sader

PR Manager

Parrot Business Solutions

+1 586 879 7104

jessica.sader@parrot.com

