
What the robots of Star Wars tell us about automation, and the future of human work


BB-8 is an “astromech droid” who first appeared in The Force Awakens.
Lucasfilm/IMDB

By Paul Salmon, University of the Sunshine Coast

Millions of fans all over the world eagerly anticipated this week’s release of Star Wars: The Last Jedi, the eighth in the series. At last we will get some answers to questions that have been vexing us since 2015’s The Force Awakens.

Throughout the franchise, the core characters have been accompanied by a number of much-loved robots, including C-3PO, R2-D2 and, more recently, BB-8 and K-2SO. While often fulfilling the role of wise-cracking sidekicks, these and other robots also play an integral role in events.

Interestingly, they can also tell us useful things about automation, such as whether it poses dangers to us and whether robots will ever replace human workers entirely. In these films, we see the good, bad and ugly of robots – and can thus glean clues about what our technological future might look like.

The fear of replacement

One major fear is that robots and automation will replace us, despite work design principles that tell us technology should be used as a tool to assist, rather than replace, humans. In the world of Star Wars, robots (or droids as they are known) mostly assist organic lifeforms, rather than completely replace them.

R2-D2 and C-3PO in A New Hope.
Lucasfilm/IMDB

So for instance, C-3PO is a protocol droid who was designed to assist in translation, customs and etiquette. R2-D2 and the franchise’s new darling, BB-8, are both “astromech droids” designed to assist in starship maintenance.

In the most recent movie, Rogue One, an offshoot of the main franchise, we were introduced to K-2SO, a wisecracking advanced autonomous military droid that was captured and reprogrammed to switch allegiance to the rebels. K-2SO mainly acts as a co-pilot, for example when flying a U-Wing with the pilot Cassian Andor to the planet Eadu.

In most cases then, the Star Wars droids provide assistance – co-piloting ships, helping to fix things, and even serving drinks. In the world of these films, organic lifeforms are still relied upon for most skilled work.

When organic lifeforms are completely replaced, it is generally when the work is highly dangerous. For instance, during the duel between Anakin and Obi-Wan on the planet Mustafar in Revenge of the Sith, DLC-13 mining droids can be seen going about their work in the planet’s hostile lava rivers.

Further, droid armies act as the frontline in various battles throughout the films. Perhaps, in the future, we will be OK with losing our jobs if the work in question poses a significant risk to our health.

K-2SO in Rogue One.
Lucasfilm/IMDB

However, there are some exceptions to this trend in the Star Wars universe. In the realm of healthcare, for instance, droids have fully replaced organic lifeforms. In The Empire Strikes Back, a medical droid treats Luke Skywalker after his encounter with a Wampa, a yeti-like snow beast, on the planet Hoth. A medical droid also replaces his hand following his battle with Darth Vader on Bespin.

Likewise, in Revenge of the Sith, a midwife droid is seen delivering the siblings Luke and Leia on Polis Massa.

Perhaps this is one area in which Star Wars has it wrong: here on Earth, full automation is a long way off in healthcare. Assistance from robots is the more realistic prospect and is, in fact, already here. Indeed, robots have been assisting surgeons in operating theatres for some time now.

Automated vehicles

Driverless vehicles are currently the flavour of the month – but will we actually use them? In Star Wars, despite the capacity for spacecraft and starships to be fully automated, organic lifeforms still take the controls. The Millennium Falcon, for example, is mostly flown by the smuggler Han Solo and his companion Chewbacca.

Most of the Star Wars starship fleet (A-Wings, X-Wings, Y-Wings, TIE fighters, Star Destroyers, starfighters and more) ostensibly possesses the capacity for fully automated flight; however, these craft are mostly flown by organic lifeforms. In The Phantom Menace, the locals on Tatooine have even taken to building and manually racing their own “pod racers”.

It seems likely that here on Earth humans, too, will continue to prefer to drive, fly, sail and ride. Despite the ability to fully automate, most people will still want to be able to take full control.

Flawless, error-proof robots?

Utopian visions often depict a future where sophisticated robots will perform highly skilled tasks, all but eradicating the costly errors that humans make. This is unlikely to be true.

A final message from the Star Wars universe is that the droids and advanced technologies are often far from perfect. In our own future, costly human errors may simply be replaced by robot designer errors.

R5-D4, the malfunctioning droid of A New Hope.
Lucasfilm/IMDB

The B1 battle droids seen in The Phantom Menace and Attack of the Clones lack intelligence and frequently malfunction. C-3PO is notoriously error-prone and his probability-based estimates are often wide of the mark.

In Episode IV, A New Hope, R5-D4 (another astromech droid) malfunctions and explodes just as the farmer Owen Lars is about to buy it. Other droids are slow and clunky, such as the GNK power droid and HURID-327, the groundskeeper at Maz Kanata’s castle in The Force Awakens.

The much feared scenario, whereby robots become so intelligent that they eventually take over, is hard to imagine with this lot.

Perhaps the message from the Star Wars films is that we need to lower our expectations of robot capabilities, in the short term at least. Cars will still crash and mistakes will still be made, regardless of whether humans or robots are doing the work.

Paul Salmon, Professor of Human Factors, University of the Sunshine Coast

This article was originally published on The Conversation. Read the original article.

Robots in Depth with Ian Bernstein

In this episode of Robots in Depth, Per Sjöborg speaks with Ian Bernstein, the founder of several robotics companies, including Sphero. He shares his experience of completing five successful rounds of financing, raising $17 million in the fifth.

Ian also talks about building a world-wide distribution network and the complexity of combining software and hardware development. We then discuss what is happening in robotics and where future successes may come from, including the importance of Kickstarter and Indiegogo.

If you view this episode, you will also learn which day of the week people don’t play with their Sphero :-).

Towards intelligent industrial co-robots

By Changliu Liu, Masayoshi Tomizuka

Democratization of Robots in Factories

In modern factories, human workers and robots are two major workforces. Out of safety concerns, the two are normally kept separate, with robots confined in metal cages, which limits both the productivity and the flexibility of production lines. In recent years, attention has turned to removing the cages so that human workers and robots can collaborate in a human-robot co-existing factory.

Manufacturers are interested in combining humans’ flexibility with robots’ productivity in flexible production lines. The potential benefits of industrial co-robots are huge and extensive: they may be placed in human-robot teams on flexible production lines, where robot arms and human workers cooperate in handling workpieces, and automated guided vehicles (AGVs) share the floor with human workers to facilitate factory logistics. In the factories of the future, more and more human-robot interaction is anticipated. Unlike traditional robots that work in structured and deterministic environments, co-robots need to operate in highly unstructured and stochastic environments. The fundamental problem is how to ensure that co-robots operate efficiently and safely in dynamic, uncertain environments. In this post, we introduce the robot safe interaction system developed in the Mechanical System Control (MSC) lab.




Fig. 1. The factory of the future with human-robot collaborations.

Existing Solutions

Robot manufacturers including Kuka, Fanuc, Nachi, Yaskawa, Adept and ABB are providing or working on solutions to the problem. Several safe cooperative robots, or co-robots, have been released, such as the Collaborative Robot CR family from FANUC (Japan), the UR5 from Universal Robots (Denmark), Baxter from Rethink Robotics (US), NextAge from Kawada (Japan) and WorkerBot from Pi4_Robotics GmbH (Germany). However, many of these products focus on intrinsic safety, i.e. safety in mechanical design, actuation and low-level motion control. Safety during social interactions with humans, which depends on intelligence (including perception, cognition and high-level motion planning and control), still needs to be explored.

Technical Challenges

Technically, it is challenging to design the behavior of industrial co-robots. To make industrial co-robots human-friendly, they should be equipped with the abilities to collect and interpret environmental data, adapt to different tasks and environments, and tailor themselves to the human workers’ needs. For example, during the human-robot collaborative assembly shown in the figure below, the robot should be able to predict that once the human puts the two workpieces together, he will need the tool to fasten the assembly. The robot should then fetch the tool and hand it over to the human, while avoiding collision with the human.




Fig. 2. Human-robot collaborative assembly.

To achieve such behavior, the challenges lie in (1) the complexity of human behaviors, and (2) the difficulty of assuring real-time safety without sacrificing efficiency. The stochastic nature of human motion brings huge uncertainty into the system, making it hard to ensure safety and efficiency.

The Robot Safe Interaction System and Real-time Non-convex Optimization

The robot safe interaction system (RSIS), developed in the Mechanical System Control lab, establishes a methodology for designing robot behavior that achieves safety and efficiency in peer-to-peer human-robot interactions.

As robots need to interact with humans, who have long since acquired interactive behaviors, it is natural to let robots mimic human behavior. Human interactive behavior can result from either deliberate thought or conditioned reflex. For example, if there is a rear-end collision ahead, the driver of a following car may instinctively hit the brake. However, after a second thought, that driver may speed up and cut into the other lane to avoid a chain rear-end collision. The first is a short-term reactive behavior for safety, while the second requires reasoning about current conditions, e.g. whether there is enough space to achieve a full stop, whether there is enough of a gap for a lane change, and whether it is safer to change lane or come to a full stop.

A parallel planning and control architecture has been introduced to mimic these kinds of behavior; it includes both long-term and short-term motion planners. The long-term planner (efficiency controller) emphasizes efficiency and solves a long-term optimal control problem in receding horizons with a low sampling rate. The short-term planner (safety controller) addresses real-time safety by solving a short-term optimal control problem with a high sampling rate, based on the trajectories planned by the efficiency controller. This parallel architecture also addresses the uncertainties: the long-term planner plans according to the most likely behavior of others, while the short-term planner considers almost all possible movements of others in the short term to ensure safety.
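
To make the division of labour concrete, here is a minimal sketch of the parallel-planner idea in Python. It is purely illustrative and not the published RSIS code: the planner functions, gains, rates and safety margin are invented for the example.

```python
# Illustrative only: a slow "efficiency" planner produces a nominal reference,
# while a fast "safety" loop modifies the commanded velocity near the human.
import numpy as np

def long_term_plan(state, goal, horizon=50):
    """Efficiency controller (low rate): a coarse reference toward the goal."""
    return np.linspace(state, goal, horizon)

def short_term_safe_cmd(state, reference, human_pos, d_safe=0.3, gain=1.0):
    """Safety controller (high rate): track the reference, deflect near the human."""
    cmd = gain * (reference - state)            # nominal tracking command
    offset = state - human_pos
    dist = np.linalg.norm(offset)
    if dist < d_safe:                           # inside the safety margin:
        cmd = 0.5 * cmd + (d_safe - dist) * offset / max(dist, 1e-6)   # push away
    return cmd

state, goal = np.zeros(2), np.array([1.0, 0.0])
plan = long_term_plan(state, goal)              # re-planned only occasionally
for k in range(200):                            # fast inner loop (e.g. 100 Hz)
    human_pos = np.array([0.5, 0.05 * np.sin(0.1 * k)])    # mocked human motion
    ref = plan[min(k // 4, len(plan) - 1)]
    state = state + 0.01 * short_term_safe_cmd(state, ref, human_pos)
print(state)                                    # robot position after the run
```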




Fig. 3. The parallel planning and control architecture in the robot safe interaction system.

However, the robot motion planning problems in cluttered environments are highly nonlinear and non-convex, and hence hard to solve in real time. To ensure timely responses to changes in the environment, fast algorithms have been developed for real-time computation, e.g. the convex feasible set algorithm (CFS) for the long-term optimization, and the safe set algorithm (SSA) for the short-term optimization. These algorithms achieve faster computation by convexification of the original non-convex problem, which is assumed to have a convex objective function but non-convex constraints. The convex feasible set algorithm (CFS) iteratively solves a sequence of sub-problems constrained to convex subsets of the feasible domain. The sequence of solutions converges to a local optimum. It converges in fewer iterations and runs faster than generic non-convex optimization solvers such as sequential quadratic programming (SQP) and interior point methods. On the other hand, the safe set algorithm (SSA) transforms the non-convex state-space constraints into convex control-space constraints using the idea of an invariant set.
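
The flavour of that convexification can be shown on a toy problem: minimise a convex cost (distance to a goal) while staying outside a disc, which is a non-convex constraint. The sketch below is not the authors’ CFS implementation; it only mimics the idea of repeatedly solving a sub-problem restricted to a convex subset (here, a half-plane tangent to the disc) of the feasible region.

```python
# Toy CFS-style iteration: replace "stay outside the disc" by a half-plane that
# contains the current iterate and lies entirely inside the feasible region,
# solve that convex sub-problem in closed form, and repeat until convergence.
import numpy as np

goal = np.array([0.0, 0.0])                  # convex cost: ||x - goal||^2
obstacle, r = np.array([0.1, 0.0]), 0.5      # non-convex constraint: ||x - obstacle|| >= r
x = np.array([1.0, 0.8])                     # feasible initial guess

for _ in range(100):
    a = (x - obstacle) / np.linalg.norm(x - obstacle)   # outward normal at the iterate
    b = r + a @ obstacle                                # convex subset: {x : a @ x >= b}
    x_new = goal.copy()                                 # unconstrained minimiser of the cost
    if a @ x_new < b:                                   # if it violates the half-plane,
        x_new = x_new + (b - a @ x_new) * a             # project onto its boundary
    if np.linalg.norm(x_new - x) < 1e-9:
        break
    x = x_new

print(x, np.linalg.norm(x - obstacle))   # ~(-0.4, 0), the boundary point nearest the goal
```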




Fig. 4. Illustration of convexification in the CFS algorithm.

With the parallel planner and the optimization algorithms, the robot can interact with the environment safely and finish the tasks efficiently.




Fig. 5. Real time motion planning and control.

Towards General Intelligence: the Safe and Efficient Robot Collaboration System (SERoCS)

We are now working on an advanced version of RSIS in the Mechanical System Control lab: the safe and efficient robot collaboration system (SERoCS), which is supported by National Science Foundation (NSF) Award #1734109. In addition to safe motion planning and control algorithms for safe human-robot interactions (HRI), SERoCS consists of robust cognition algorithms for environment monitoring and optimal task planning algorithms for safe human-robot collaboration. SERoCS will significantly expand the skill sets of co-robots and prevent or minimize occurrences of human-robot and robot-robot collisions during operation, hence enabling harmonious human-robot collaboration in the future.




Fig. 6. SERoCS Architecture.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.

References

C. Liu, and M. Tomizuka, “Algorithmic safety measures for intelligent industrial co-robots,” in IEEE International Conference on Robotics and Automation (ICRA), 2016.
C. Liu, and M. Tomizuka, “Designing the robot behavior for safe human robot interactions”, in Trends in Control and Decision-Making for Human-Robot Collaboration Systems (Y. Wang and F. Zhang (Eds.)). Springer, 2017.
C. Liu, and M. Tomizuka, “Real time trajectory optimization for nonlinear robotic systems: Relaxation and convexification”, in Systems & Control Letters, vol. 108, pp. 56-63, Oct. 2017.
C. Liu, C. Lin, and M. Tomizuka, “The convex feasible set algorithm for real time optimization in motion planning”, arXiv:1709.00627.

Congratulations to Semio, Apellix and Mothership Aeronautics

The Robot Launch global startup competition is over for 2017. We’ve seen startups from all over the world and all sorts of application areas – and we’d like to congratulate the overall winner Semio, and runners-up Apellix and Mothership Aeronautics. All three startups met the judges’ criteria: to be an early-stage platform technology in robotics or AI with great impact, large market potential and a near-term customer pipeline.

Semio from Southern California is a software platform for developing and deploying social robot skills. Ross Mead, founder and CEO of Semio, said he was greatly looking forward to spending more time with The Robotics Hub, and is excited about the potential for Semio moving forward.

Apellix from Florida provides software-controlled aerial robotic systems that use tethered and untethered drones to move workers out of harm’s way – such as window washers on skyscrapers (window-washing drone, windmill-blade cleaning and coating drone), painters on scaffolding (spray-painting drone, graffiti-removal drone), and workers spraying toxic chemicals (corrosion control).

Robert Dahlstrom, founder and CEO of Apellix, said: “As an entrepreneur I strongly believe in startups’ potential to improve lives, create jobs, and make the world a more exciting place. I also know first-hand how difficult and challenging a startup can be (an emotional roller coaster ride) and how valuable the work Robot Launch is.”

Mothership Aeronautics from Silicon Valley has a solar-powered drone capable of ‘infinity cruise’, where more power is generated than consumed. The drone can perform aerial surveillance and inspection of large-scale infrastructure such as pipelines, railways and power lines. Mothership may also fulfil the ‘warehouse in the sky’ vision that both Amazon and Walmart have tried to patent.

Jonathan Nutzati, founder and CEO of Mothership Aero, accepting his Robot Launch trophy from Michael Harries, Investor at The Robotics Hub.

The other awardees are:

  • Kinema Systems: an impressive approach to logistics challenges from the original Silicon Valley team that developed ROS.
  • BotsAndUs: a highly awarded UK startup with a beautifully designed social robot for retail.
  • Fotokite: a smart team from ETH Zurich with a unique approach to using drones in large-scale venues.
  • C2RO: a Canadian startup creating an expansive cloud-based AI platform for service robots.
  • krtkl: a Silicon Valley startup making a high-end embedded board designed for both prototyping and deployment.
  • Tennibot: an Alabama startup with a well-designed tennis-ball-collecting robot that is both portable and cute.

And as mentioned in our previous article, the three startups who won the Robohub Choice award were UniExo, BotsAndUs and Northstar Robotics. All the award winners will be featured on Robohub and get access to the Silicon Valley Robotics accelerator program and cowork space, where the award ceremony took place as part of a larger investor/startup showcase.

 

The Silicon Valley Robotics cowork space is at the newly opened Circuit Launch, and provides more than 30,000 sq ft of hot desks and office spaces with professional prototyping facilities. Access to the space is for interesting robotics, AI, AR/VR and sensor technologies, and can include access to the Silicon Valley Robotics startup accelerator program.

The other startups that pitched on the day were: Vecna, Twisted Field, RoboLoco, Dash Shipping, Tekuma, Sake Robotics and Kinema Systems.

Not all of the startups were from the Bay Area – Dash flew up from LA, and Vecna/Twisted Field from Boston, while Tekuma came from Australia as part of an Australian government startup program.

Paul Ekas presenting SAKE Robotics
Daniel Theobald presenting Vecna Robotics and Twisted Field

Looping quadrotor balances an inverted pendulum

Credit: YouTube

This latest video from the D’Andrea lab shows a quadrotor performing a looping trajectory while balancing an inverted pendulum at the same time.

The video is pretty self-explanatory and includes lots of the technical details – enjoy!

The work, which will be detailed in an upcoming paper, was done by Julien Kohler, Michael Mühlebach, Dario Brescianini, and Raffaello D’Andrea at ETH Zürich. You can learn more about the Flying Machine Arena here.

Holiday robot videos 2017: Part 1

Our first few submissions have now arrived! Have a holiday robot video of your own that you’d like to share? Send your submissions to editors@robohub.org.


“I made 2000 ugly holiday cards with a $100k robot arm” by Simone Giertz


“Making Ideas Come True” by Danish Technological Institute


“Hey, Jibo. Welcome home for the holidays.” by Jibo



“Bake Together” and “Decorate Together” by iRobot

Keep them coming! Email us your holiday robot videos at editors@robohub.org!

Computer systems predict objects’ responses to physical forces

As part of an investigation into the nature of humans’ physical intuitions, MIT researchers trained a neural network to predict how unstably stacked blocks would respond to the force of gravity.
Image: Christine Daniloff/MIT

Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and Machines, a multiuniversity, multidisciplinary project based at MIT that seeks to explain and replicate human intelligence.

Presenting their work at this year’s Conference on Neural Information Processing Systems, Tenenbaum and one of his students, Jiajun Wu, are co-authors on four papers that examine the fundamental cognitive abilities that an intelligent agent requires to navigate the world: discerning distinct objects and inferring how they respond to physical forces.

By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, the researchers might also generate some insights useful for robotic vision systems.

“The common theme here is really learning to perceive physics,” Tenenbaum says. “That starts with seeing the full 3-D shapes of objects, and multiple objects in a scene, along with their physical properties, like mass and friction, then reasoning about how these objects will move over time. Jiajun’s four papers address this whole space. Taken together, we’re starting to be able to build machines that capture more and more of people’s basic understanding of the physical world.”

Three of the papers deal with inferring information about the physical structure of objects, from both visual and aural data. The fourth deals with predicting how objects will behave on the basis of that data.

Two-way street

Something else that unites all four papers is their unusual approach to machine learning, a technique in which computers learn to perform computational tasks by analyzing huge sets of training data. In a typical machine-learning system, the training data are labeled: Human analysts will have, say, identified the objects in a visual scene or transcribed the words of a spoken sentence. The system attempts to learn what features of the data correlate with what labels, and it’s judged on how well it labels previously unseen data.

In Wu and Tenenbaum’s new papers, the system is trained to infer a physical model of the world — the 3-D shapes of objects that are mostly hidden from view, for instance. But then it works backward, using the model to resynthesize the input data, and its performance is judged on how well the reconstructed data matches the original data.

For instance, using visual images to build a 3-D model of an object in a scene requires stripping away any occluding objects; filtering out confounding visual textures, reflections, and shadows; and inferring the shape of unseen surfaces. Once Wu and Tenenbaum’s system has built such a model, however, it rotates it in space and adds visual textures back in until it can approximate the input data.

Indeed, two of the researchers’ four papers address the complex problem of inferring 3-D models from visual data. On those papers, they’re joined by four other MIT researchers, including William Freeman, the Perkins Professor of Electrical Engineering and Computer Science, and by colleagues at DeepMind, ShanghaiTech University, and Shanghai Jiao Tong University.

Divide and conquer

The researchers’ system is based on the influential theories of the MIT neuroscientist David Marr, who died in 1980 at the tragically young age of 35. Marr hypothesized that in interpreting a visual scene, the brain first creates what he called a 2.5-D sketch of the objects it contained — a representation of just those surfaces of the objects facing the viewer. Then, on the basis of the 2.5-D sketch — not the raw visual information about the scene — the brain infers the full, three-dimensional shapes of the objects.

“Both problems are very hard, but there’s a nice way to disentangle them,” Wu says. “You can do them one at a time, so you don’t have to deal with both of them at the same time, which is even harder.”

Wu and his colleagues’ system needs to be trained on data that include both visual images and 3-D models of the objects the images depict. Constructing accurate 3-D models of the objects depicted in real photographs would be prohibitively time consuming, so initially, the researchers train their system using synthetic data, in which the visual image is generated from the 3-D model, rather than vice versa. The process of creating the data is like that of creating a computer-animated film.

Once the system has been trained on synthetic data, however, it can be fine-tuned using real data. That’s because its ultimate performance criterion is the accuracy with which it reconstructs the input data. It’s still building 3-D models, but they don’t need to be compared to human-constructed models for performance assessment.

In evaluating their system, the researchers used a measure called intersection over union, which is common in the field. On that measure, their system outperforms its predecessors. But a given intersection-over-union score leaves a lot of room for local variation in the smoothness and shape of a 3-D model. So Wu and his colleagues also conducted a qualitative study of the models’ fidelity to the source images. Of the study’s participants, 74 percent preferred the new system’s reconstructions to those of its predecessors.
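
For readers unfamiliar with the metric, intersection over union on voxelised shapes takes only a few lines; this NumPy snippet is a generic illustration, not the evaluation code used in the papers.

```python
# IoU between two boolean occupancy grids: overlap divided by combined volume.
import numpy as np

def voxel_iou(pred, target):
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

a = np.zeros((32, 32, 32), dtype=bool); a[8:24, 8:24, 8:24] = True
b = np.zeros((32, 32, 32), dtype=bool); b[12:28, 8:24, 8:24] = True
print(voxel_iou(a, b))   # 0.6: the two cubes share 12 of their 20 occupied slices
```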

All that fall

In another of Wu and Tenenbaum’s papers, on which they’re joined again by Freeman and by researchers at MIT, Cambridge University, and ShanghaiTech University, they train a system to analyze audio recordings of an object being dropped, to infer properties such as the object’s shape, its composition, and the height from which it fell. Again, the system is trained to produce an abstract representation of the object, which, in turn, it uses to synthesize the sound the object would make when dropped from a particular height. The system’s performance is judged on the similarity between the synthesized sound and the source sound.

Finally, in their fourth paper, Wu, Tenenbaum, Freeman, and colleagues at DeepMind and Oxford University describe a system that begins to model humans’ intuitive understanding of the physical forces acting on objects in the world. This paper picks up where the previous papers leave off: It assumes that the system has already deduced objects’ 3-D shapes.

Those shapes are simple: balls and cubes. The researchers trained their system to perform two tasks. The first is to estimate the velocities of balls traveling on a billiard table and, on that basis, to predict how they will behave after a collision. The second is to analyze a static image of stacked cubes and determine whether they will fall and, if so, where the cubes will land.

Wu developed a representational language he calls scene XML that can quantitatively characterize the relative positions of objects in a visual scene. The system first learns to describe input data in that language. It then feeds that description to something called a physics engine, which models the physical forces acting on the represented objects. Physics engines are a staple of both computer animation, where they generate the movement of clothing, falling objects, and the like, and of scientific computing, where they’re used for large-scale physical simulations.
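
As a rough illustration of the representation-then-physics idea, the sketch below describes a stack of cubes as a small data structure (a stand-in for the scene-XML description) and applies a crude static-stability rule in place of a full physics engine. It is not the researchers’ system, just the general shape of the pipeline.

```python
# A "scene description" (positions and sizes of stacked cubes) fed to a very crude
# physics check: a stack topples if the centre of mass of everything above a level
# lies outside the footprint of the supporting cube.
cubes = [                                    # bottom-up: centre x-position and
    {"x": 0.00, "half_width": 0.5},          # half the edge length of each cube
    {"x": 0.30, "half_width": 0.5},
    {"x": 0.95, "half_width": 0.5},
]

def will_fall(stack):
    for i in range(1, len(stack)):
        above = stack[i:]
        com_x = sum(c["x"] for c in above) / len(above)      # centre of mass above
        support = stack[i - 1]
        if abs(com_x - support["x"]) > support["half_width"]:
            return True                                      # unsupported: it falls
    return False

print(will_fall(cubes))   # True: the upper cubes overhang the bottom one too far
```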

After the physics engine has predicted the motions of the balls and boxes, that information is fed to a graphics engine, whose output is, again, compared with the source images. As with the work on visual discrimination, the researchers train their system on synthetic data before refining it with real data.

In tests, the researchers’ system again outperformed its predecessors. In fact, in some of the tests involving billiard balls, it frequently outperformed human observers as well.

FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity For High Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots force mode

Quebec City, Canada, November 13, 2017 – Robotiq has launched a version of its FT 300 Force Torque Sensor that is 10 times more sensitive. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force-control tasks such as finishing, product testing, assembly and precise part insertion.

This force torque sensor comes with updated free URCap software able to feed data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force-insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.

The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes,” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”
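
To give a rough idea of what such a force-controlled task looks like, here is a hedged sketch of a guarded insertion loop. The functions read_wrench() and move_down() are hypothetical placeholders rather than Robotiq or Universal Robots APIs; in practice this logic would run through the URCap and UR Force Mode rather than an external Python script.

```python
# Hypothetical guarded descent: creep downward until the measured axial force
# indicates contact, and abort if the force ever exceeds a safe limit.
import time

CONTACT_N = 5.0        # assumed force (N) that signals the part is seated
MAX_FORCE_N = 20.0     # assumed safety limit (N)

def read_wrench():
    """Placeholder: return (Fx, Fy, Fz, Mx, My, Mz) from the force torque sensor."""
    raise NotImplementedError

def move_down(step_m):
    """Placeholder: command a small Cartesian step toward the part."""
    raise NotImplementedError

def guarded_insert(step_m=0.0005, period_s=0.01):
    while True:
        fx, fy, fz, mx, my, mz = read_wrench()
        if abs(fz) > MAX_FORCE_N:
            raise RuntimeError("excessive contact force, aborting")
        if abs(fz) > CONTACT_N:
            return                       # contact reached: part is seated
        move_down(step_m)
        time.sleep(period_s)
```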

See some of the FT 300’s new capabilities in the following demo videos:

#1 How to calibrate with the FT 300 URCap Dashboard

#2 Linear search  demo

#3 Path recording demo

Visit the FT 300 webpage  or get a quote here

Get the FT 300 specs here

Get more info in the FAQ

Get free Skills to accelerate robot programming of force control tasks.

Get free robot cell deployment resources on leanrobotics.org

* Available with Universal Robots CB3.1 controller only

About Robotiq

Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.

Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.

Media contact

David Maltais, Communications and Public Relations Coordinator

d.maltais@robotiq.com

1-418-929-2513


Press Release Provided by: Robotiq.Com


Swiss drone industry map

The Swiss Drone Industry Map above is an attempt to list and group all companies and institutions in Switzerland that provide a product or service that makes commercial operations of Unmanned Aerial Vehicles possible. An entity can only appear in one box (i.e. main activity) and must be publicly promoting existing or future products and services. Swiss drone research centres, system integrators and pilot schools are not part of the map. Corrections, suggestions and new submissions are welcome!

I’ve added all the links below so you can easily click through and learn more.

Manufacturers
SwissDrones
senseFly
Flyability
Fotokite
Sunflower Labs
Voliro
AgroFly
Skybotix
Aeroscout
Wingtra
Flying Robots

Flight Systems
PX4-Pro
MotionPilot
WindShape
weControl
Rapyuta Robotics
Daedalean
UAVenture

Governance
FOCA
Drohnen Verband
Zurich
DroneLogbook
UAWaero
SGS
Swiss Re

Traffic Management
Global UTM Association
SITA
Flarm
Skyguide
SkySoft
OneSky

Defense & Security
ViaSat
RUAG
Rheinmetall
UMS Skeldar
Aurora Swiss Aerospace
Kudelski
IDQ

Sensors & Payload
Terabee
uBlox
Distran
FixPosition
SkyAware
Insightness
Sensirion
Sensima Technology

Electric & Solar Propulsion
Evolaris Aviation
AtlantikSolar
MecaPlex
H55
Maxon Motor
Faulhaber

Delivery
Matternet – Swiss Post
Dronistics
Redline – Droneport
Deldro

Energy
TwingTec
SkyPull

Analytics
Picterra
Pix4D
Gamaya
MeteoMatics

Entertainment
Verity Studios
AEROTAIN
Anabatic.Aero
LémanTech

Humanitarian
WeRobotics
FSD
Redog
Swiss Fang

High-Altitude
SolarStratos
OpenStratosphere

Robots solving climate change

The two biggest societal challenges of the twenty-first century are also its biggest opportunities: automation and climate change. These forces of mankind and nature intersect beautifully in the alternative energy market. The demise of fossil fuels, with their dark cloud of emissions, is giving way to the rise of solar and wind farms worldwide. Servicing these installations are fleets of robots and drones, opening up the possibility of expanding CleanTech to the most remote regions of the planet.

As 2017 comes to an end, the solar industry has plateaued for the first time in ten years, owing to the budget cuts proposed by the Trump administration. Solar has had quite a run, with an average annual growth rate of more than 65% over the past decade, promoted largely by federal subsidies. The progressive policy of the Obama administration made the US a leader in alternative energy, resulting in a quarter-million new jobs. While the federal government now re-embraces the antiquated allure of fossil fuels, global demand for solar has been rising as infrastructure costs decline by more than half, providing new opportunities even without government incentives.

Prior to the renewable energy boom, unattractive rooftop panels were the most visible image of solar. While Elon Musk and others are developing more aesthetically pleasing roofing materials, the business model of house-by-house conversion has proven inefficient. Instead, the industry is focusing on “utility-scale” solar farms that will be connected to the national grid. Until recently, such farms have been saddled with ballooning servicing costs.

In a report published last month, leading energy risk-management company DNV GL argued that the alternative energy market could benefit greatly from utilizing artificial intelligence (AI) and robotics in designing, developing, deploying and maintaining utility farms. The study, “Making Renewables Smarter: The Benefits, Risks, And Future of Artificial Intelligence In Solar And Wind,” noted that the “fields of resource forecasting, control and predictive maintenance” are ripe for tech disruption. Elizabeth Traiger, co-author of the report, explained, “Solar and wind developers, operators, and investors need to consider how their industries can use it, what the impacts are on the industries in a larger sense, and what decisions those industries need to confront.”

Since solar farms are often located in arid, dusty locations, one of the earliest use cases for unmanned systems was self-cleaning robots. As reported in 2014, Israeli company Ecoppia developed a patented waterless panel-washing platform to keep solar up and running in the desert. Today, Ecoppia is cleaning 10 million panels a month. Eran Meller, Chief Executive of Ecoppia, boasts, “We’re pleased to bring the experience gained over four years of cleaning in multiple sites in the Middle East. Cleaning 80 million solar panels in the harshest desert conditions globally, we expect to continue to play a leading role in this growing market.”

Since Ecoppia began selling commercially, other entrants have appeared in the unmanned maintenance space. This past March, Exosun became the latest to offer autonomous cleaning bots. The track-equipment manufacturer claims that robotic systems can cut production losses by 2%, promising a return on investment within 18 months. After its acquisition of Greenbotics in 2013, US-based SunPower also launched its own mechanized cleaning platform, the Oasis, which combines mobile robots and drones.

SunPower brags that its products are ten times faster than traditional (manual) methods while using 75% less water. While SunPower and Exosun leverage their large sales footprints through their existing servicing and equipment networks, Ecoppia is still the product leader. Its proprietary waterless system is the most cost-effective and connected solution on the market. Via a robust cloud network, Ecoppia can sense weather fluctuations and automatically schedule emergency cleanings. Anat Cohen Segev, Director of Marketing, explains, “Within seconds, we would detect a dust storm hitting the site, the master control will automatically suggest an additional cleaning cycle and within a click the entire site will be cleaned.” According to Segev, the robots remove 99% of the dust on the panels.
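
At heart, the behaviour described is a threshold rule. The snippet below is only a hypothetical illustration of that idea – the threshold, function name and data format are invented, not Ecoppia’s actual cloud logic.

```python
# Hypothetical: queue an extra cleaning cycle when a dust reading crosses a threshold.
DUST_STORM_UG_M3 = 500      # assumed particulate threshold, in micrograms per m^3

def maybe_schedule_cleaning(site, dust_reading_ug_m3, schedule):
    if dust_reading_ug_m3 >= DUST_STORM_UG_M3:
        schedule.append({"site": site, "job": "extra_cleaning_cycle"})
    return schedule

print(maybe_schedule_cleaning("site-A", 720, []))   # [{'site': 'site-A', 'job': 'extra_cleaning_cycle'}]
```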

Drone companies are also entering the maintenance space. Upstart Aerial Power claims to have designed a “SolarBrush” quadcopter that cleans panels. The solar-powered drone is claimed to cut a solar farm’s operational costs by 60%. SolarBrush also promises an 80% saving over existing solutions like Ecoppia, since there are no installation costs. However, Aerial Power has yet to fly its product in the field, as it is still in development. SunPower, meanwhile, is selling its own drone survey platform to assess development sites and oversee field operations. Matt Campbell, Vice President of Power Plant Products for SunPower, stated, “A lot of the beginning of the solar industry was focused on the panel. Now we’re looking at innovation all around the rest of the system. That’s why we’re always surveying new technology — whether it’s a robot, whether it’s a drone, whether it’s software — and saying, ‘How can this help us reduce the cost of solar, build projects faster, and make them more reliable?’”

In 2008, the US Department of Energy published an ambitious proposal, “20% Wind Energy by 2030: Increasing Wind Energy’s Contribution to U.S. Electricity Supply.” Now, thirteen years before that goal, less than 5% of US energy is derived from wind. Developing wind farms is not novel; however, to achieve 20% by 2030 the US needs to begin looking offshore. To put it in perspective, oceanic wind farms could generate more than 2,000 gigawatts of clean, carbon-free energy, or twice as much electricity as Americans currently consume. To date, there is only one wind farm operating off the coast of the United States. While almost every coastal state has proposals for offshore farms, the industry has been stalled by politics and by the hurdles of servicing turbines in dangerous waters.

For more than a decade the United Kingdom has led the development of offshore wind farms. At the University of Manchester, a leading group of researchers has been exploring a number of AI, robotic and drone technologies for remote inspections. The consortium of academics estimates that these technologies could generate more than $2.5 billion by 2025 in the UK alone. The global offshore market could reach $17 billion by 2020, with 80% of the costs coming from operations and maintenance.

Last month, Innovate UK awarded $1.6 million to Perceptual Robotics and VulcanUAV to incorporate drones and autonomous boats into ocean inspections. These startups follow the business model of successful US inspection upstarts like SkySpecs. Launched three years ago, SkySpecs’ autonomous drones claim to reduce turbine inspections from days to minutes. Danny Ellis, SkySpecs Chief Executive, claims, “Customers that could once inspect only one-third of a wind farm can now do the whole farm in the same amount of time.” Last year, British startup Visual Working accomplished the herculean feat of surpassing 2,000 blade inspections.

In the words of Paolo Brianzoni, Chief Executive of Visual Working: “We are not talking about what we intend to accomplish in the near future – but actually performing our UAV inspection service every day out there. Many in the industry are using a considerable amount of time discussing and testing how to use UAV inspections in a safe and efficient way. We have passed that point and in the first half of 2016 alone inspected 250 turbines in the North Sea, averaging more than 10 WTGs per day, while still keeping to the highest quality standards.”

This past summer, Europe achieved another clean-energy milestone with the announcement of the first three offshore wind farms to be built without government subsidies. By bringing down the cost structure, autonomous systems are turning the tide for alternative energy regardless of government investment. Three days before leaving office, President Barack Obama wrote in the journal Science that “evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term.” He declared that it is time to move past common misconceptions that climate policy is at odds with business; “rather, it can boost efficiency, productivity, and innovation.”

Vecna Robotics Wins DHL & Dell Robotics Innovation Challenge 2017 with Tote Retrieval System

Vecna Robotics, a leader in intelligent, next-generation robotic material-handling autonomous ground vehicles (AGVs), was awarded first place in the DHL & Dell Robotics Mobile Picking Challenge 2017. The event was held last week at the DHL Innovation Center in Troisdorf, Germany.

Three very different startups vie for “Robohub Choice”

Three very different robotics startups have been battling it out over the last week to win the “Robohub Choice” award in our annual startup competition. One was social, one was medical and one was agricultural! Also, one was from the UK, one was from Ukraine and one was from Canada. Although nine startups entered the voting, it was clear from the start that it was a three-horse race – thanks to our Robohub readers and the social media efforts of the startups.

The most popular startup was UniExo with 70.6% of the vote, followed by BotsAndUs on 14.8% and Northstar Robotics on 13.2%.

These three startups will be able to spend time in the Silicon Valley Robotics Accelerator/Cowork Space in Oakland, and we hope to have a feature about each startup on Robohub over the coming year. The overall winner(s) of the Robot Launch 2017 competition will be announced on December 15. The grand prize is investment of up to $500,000 from The Robotics Hub, while all award winners get access to the Silicon Valley Robotics accelerator program and cowork space.

UniExo | Ukraine

UniExo aims to help people with injuries and movement problems to restore the motor functions of their bodies with modular robotic exoskeleton devices, without the additional help of doctors.

Thanks to our device and its advantages, we can help these users in rehabilitation. The product gives people with disabilities free movement in a form that is comfortable and safe for them, without outside help, and also serves people in post-operative or post-traumatic recovery.

We can give people a second chance at a normal life, and motivate them to do things for our world that can help other people.

https://youtu.be/kjHN35zasvE

BotsAndUs | UK (@botsandus)

BotsAndUs believe in humans and robots collaborating towards a better life. Our aim is to create physical and emotional comfort with robots to support wide adoption.

In May ‘17 we launched Bo, a social robot for events, hospitality and retail. Bo approaches you in shops, hotels or hospitals, finds out what you need, takes you to it and gives you tips on the latest bargains.

In a short time the business has grown considerably: global brands as customers (British Telecom, Etisalat, Dixons), a government award for our human-robot interaction tech, membership of Nvidia’s Inception program and intuAccelerate (bringing Bo to the UK’s top 10 malls), and more than 15k Bo interactions.

https://youtu.be/jrLaoKShKT4

Northstar Robotics | Canada (@northstarrobot)

Northstar Robotics is an agricultural technology company that was founded by an experienced farmer and robotics engineer.

Our vision is to create the fully autonomous farm which will address the labour shortage problem and lower farm input costs.  We will make this vision a reality by first providing an open hardware and software platform to allow current farm equipment to become autonomous.  In parallel, we are going to build super awesome robots that will transform farming and set the standard for what modern agricultural equipment should be.

https://youtu.be/o2C4Cx-m2es

 

 
