
Brain-Computer Interface (BCI) livestream today


For the very first time, the 6th International Brain-Computer Interface (BCI) Meeting Series will offer free remote attendance, via live-stream.

Livestream will be available today, Monday 30 May, during the evening opening session at 19:30 PST, and tomorrow, Tuesday 31 May, during the morning session at 9:30 PST and the afternoon sessions at 13:30 and 14:30 PST.

The BCI Meeting will open with the Once and Future BCI Session, featuring speakers: Eberhard Fetz, Emanuel Donchin, and Jonathan Wolpaw.

Tuesday morning will feature the State of BCI Symposium, with speakers: Nick Ramsey, Lee Miller, Donatella Mattia, Aaron Batista, and José del R. Millán. Tuesday afternoon will be the Virtual Forum of BCI Users and Selected Oral Presentations. The remainder of the BCI Meeting consists of poster sessions and workshops, which cannot be experienced remotely.

You can read all papers submitted to the BCI meeting here.

Please pass the livestreaming link (http://bcisociety.org/livestream/) to anyone else who may be interested in remote participation at the 2016 BCI Meeting.


Robots Podcast #209: INNOROBO 2015 Showcase, with RB 3D, BALYO, Kawada Robotics, Partnering Robotics, and IRT Jules Verne


In this episode, Audrow Nash interviews several companies at last year’s INNOROBO, a conference that showcases innovation in robotics.

Interviews include the following:

Oliver Baudet, business manager at RB 3D, speaks about the exoskeletons displayed at the showcase.

Baptiste Mauget, responsible for marketing and communication at BALYO, speaks about BALYO’s robots for warehouse automation.

Atsushi Hayashi, an Engineer at Kawada Robotics, demonstrates a humanoid used in factories in Japan.

Abdelfettah Ighouess, Sales Director at Partnering Robotics, describes their robot for indoor air quality control.

Etienne Picard, a Research and Development Engineer at IRT Jules Verne, speaks about a large cable driven robot for manufacturing.

 


‘Robot kindergarten’ trains droids of the future

Photo source: ubcpublicaffairs/YouTube

Less than 100 years from now, robots will be friendly, useful participants in our homes and workplaces, predicts UBC mechanical engineering professor and robotics expert Elizabeth Croft. We will be living in a world of Wall-Es and Rosies, walking-and-talking avatars, smart driverless cars and automated medical assistants.

But much work remains before robots will truly be integrated into our daily lives. In this short Q&A, Croft lays out the rules for engagement between humans and robots and explains why it’s crucial to get this aspect right.

What role will robots play in our lives in the future?

Elizabeth Croft, UBC mechanical engineering professor and robotics expert. Photo: UBC

They will be everywhere, helping us at home and at work. They could make you breakfast in the morning and check on your kids. They could be your frontline staff, giving visitors directions and answering questions. They could be your physician’s assistant. Or you could have a robotic avatar that will attend a meeting for you while you’re traveling on the other side of the world.

Future robots may be self-replicating, self-growing, and self-organizing. The natural evolution of robotics is toward incorporating biology. We can now grow cells around bio-compatible structures; this opens the door to combining biological systems with embedded artificial intelligence, and eventually to the cyber-physical workforce of the future.

What technologies are driving the growth of robotics?

Computing power continues to grow exponentially, and ubiquitous communication is being made possible by wireless technology. Dense energy storage and new energy harvesting and conversion technologies allow machines to operate in the world, unplugged. And finally, machine learning: networked computer systems have global access to huge amounts of data that, combined with robotic embodiment, allow robots to learn about the world in ways that mimic and move beyond how people learn about their environment.

Your work at UBC focuses on human-robot interaction. Why is this important? 

As robots become more and more a part of our lives, the question becomes: what are the rules of the game? What is OK for robots to do, and what is not? Robots will have abilities that we don’t have, and we need to define what they are allowed to do with that capacity.

There are some big ethical questions to consider: how does society deal with drones that can kill, for example? But there are important day-to-day questions too. If a human and a robot are accessing the same resource – the same roadway, the same tools, the same power source – who yields? Does the person always get their way? What if the robot is doing something for the greater good, for example, a robotic ambulance?

Researcher with robot. Photo source: ubcpublicaffairs/YouTube

In a way, I like to think of our lab as robot kindergarten. We are teaching robots basic, building-block behaviours and ground rules for how they interact with people: how to hand over a bottle of water, how to look for things, how to take turns. Having these basic behaviours in place allows us to create human-robot interactions that are natural and fluid.

To achieve our goals, our lab welcomes researchers from different disciplines – ethics, law, machine learning, human-computer interaction – as well as from different international cultures. Different cultures have different ideas about robots. We learn a lot from these many perspectives.

Elizabeth Croft will speak about the future of robotics at a UBC Centennial public talk on May 28. Click here for more information.

Robots can help reduce 35% of work days lost to injury

Robot in assembly. Source: bmwgroup

What’s the biggest benefit of using collaborative robots? It’s not better efficiency. It’s not the extra hours a robot can work in a shift. It’s not even improved consistency across your product. While these are all great bonuses, the biggest benefit of collaborative robots is their impact on reducing workplace injury.

Workplace injury is an issue that affects millions of workers worldwide, each year. It costs businesses billions in revenue. Although it’s not possible to avoid all injuries completely, many workplace injuries are avoidable. Musculoskeletal disorders are often preventable, as they are usually caused by bad workplace ergonomics. Collaborative robots are a great way to solve this problem.

In a previous article, we found how collaborative robots are designed to be ergonomic products in themselves. In this article, we’ll look at how collaborative robots can be used to reduce ergonomic problems across your workplace. We’ll also show how you can pick the best tasks for your collaborative robot by putting on your “ergonomics glasses.”

Musculoskeletal Disorders: Too important to ignore

Musculoskeletal disorders (or MSDs) refer to a set of injuries and disorders that affect the human body’s movement or musculoskeletal system; for example, the muscles, joints, tendons, nerves and ligaments. According to FitForWork Europe, MSDs are a huge problem in the modern world: 21.3% of disabilities worldwide are due to MSDs, and they are estimated to cost the European Union around 240 billion euros each year in lost productivity and sickness absence. In Austria, MSDs accounted for 35% of all work days lost in 2007.

As MSDs are caused by repetitive physical stresses on the human body, some industries are more affected than others. Manufacturing and food processing are classed as high-risk, as outlined in this report on the impact of MSDs in the USA. Some jobs are prone to specific injuries due to the type of work involved; industrial inspection and packaging jobs, for example, are prone to upper extremity MSDs. In 2012, the manufacturing industry had the fourth highest number of MSDs, with 37.4 incidents per 10,000 workers.

All this injury costs your business money; a lot of money. Ergo-Plus shows you would have to generate $8 million worth of extra sales to cover the cost associated with the most common MSDs. This is crazy, as the injuries are preventable by simply applying basic ergonomic principles.
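To see where a figure like that comes from, here is a minimal sketch of the usual sales-to-cover-cost arithmetic; the injury cost and profit margin below are illustrative assumptions, not Ergo-Plus’s actual inputs.

```python
# Minimal sketch: extra revenue needed to absorb an injury cost.
# The injury cost and profit margin are hypothetical, for illustration only.

def sales_needed_to_cover(injury_cost: float, profit_margin: float) -> float:
    """Extra sales required so that the resulting profit offsets the injury cost."""
    return injury_cost / profit_margin

total_injury_cost = 240_000.0  # assumed direct + indirect cost of common MSD claims
profit_margin = 0.03           # assumed 3% net margin

print(f"Extra sales needed: ${sales_needed_to_cover(total_injury_cost, profit_margin):,.0f}")
# -> Extra sales needed: $8,000,000
```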

How robots can reduce workplace injury

In 2013, we reported on a case study at Volkswagen, which had deployed the UR5 collaborative robot in its facility. In the article, Jürgen Häfner explained the company’s reasons for introducing the robot:

“We would like to prevent long-term burdens on our employees in all areas of our company with an ergonomic workplace layout. By using robots without guards, they can work hand-in-hand together with the robots. In this way, the robot becomes a production assistant in manufacture and as such can release staff from ergonomically unfavorable work.”

Source: Robotiq

You can improve a task’s ergonomics in two ways:

  1. Use ergonomic principles to redesign the task, to reduce the physical stress on the worker.
  2. Find a way to complete the stressful part of the task differently, without using a human worker at all. This can hugely reduce the chance of MSDs.

The use of collaborative robots falls squarely into the second category. However, you still need to have a knowledge of ergonomic principles to know if a task can cause MSDs.

How to tell if a task can cause MSDs

Ergonomics professionals sometimes talk about having “ergonomics glasses”, meaning they view the workplace from an ergonomics perspective. Before you learn about ergonomics, you can easily fail to notice that a task could injure a worker. Once you have, ergonomics issues will jump out at you as you walk through your workplace. There are two different types of ergonomics:

  1. Proactive Ergonomics – This involves solving ergonomics issues before they arise, either by walking around your workplace whilst “wearing your ergonomics glasses” or by incorporating ergonomic principles into the initial design of processes.
  2. Reactive Ergonomics – This is what usually happens. It’s solving a problem when it is already a problem. A worker suffers an injury as a result of the task and so we retrospectively try to improve the ergonomics of the task.

At the very least, we should adopt a more proactive approach to ergonomics. In an ideal world, all ergonomics issues would be solved proactively, before they arise. However, being realistic, we’re likely to end up with a combination of the two approaches.

Three steps to improve ergonomics with collaborative robots

Applying ergonomics principles is really quite simple. It just involves a slight change of mindset and three easy steps.

Step 1: Learn what to look out for

The first step to proactive ergonomics is to learn how to spot bad ergonomics. There are some great resources online, particularly the free resources and blogs from Ergo-Plus, the International Labor Organization, Dan MacLeod and FitForWork Europe.

There are a few fundamental principles of ergonomics, which can vary slightly in wording depending on which resource you consult. Tasks which violate one or more of these will need to be redesigned or passed off to a robot, to avoid inflicting injury on the worker over time:

  1. Workers must maintain a neutral posture without putting their body in awkward positions.
  2. Reduce excessive forces and vibrations on the human body.
  3. Keep everything in easy reach and at the proper height, to allow the worker to operate in the natural power/comfort zones of the human body.
  4. Reduce excessive motions in the task. This is especially important in repeated motions.
  5. Minimize fatigue, static load and pressure points on the human body.
  6. Provide adequate clearance and lighting in the workplace.
  7. Allow the worker to move, exercise and stretch. After all, the human body is “designed to move”, not to stay in the same position for long periods of time.

Neutral and awkward back postures (source)

Become familiar with these principles by reading over the linked resources and looking at example images of good and bad ergonomic practices.
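As a way to put these principles to work during a walk-through, here is a minimal sketch of a task-screening checklist in Python. The risk-factor names, weights and threshold are illustrative assumptions, not values from the linked resources.

```python
# Minimal sketch: screen a task against the ergonomic principles listed above.
# Factor names, weights and the threshold are illustrative assumptions only.

ERGONOMIC_RISK_FACTORS = {
    "awkward_posture": 3,           # principle 1: neutral posture is violated
    "excessive_force": 3,           # principle 2: high forces or vibration
    "out_of_reach": 2,              # principle 3: items outside the comfort zone
    "repetitive_motion": 2,         # principle 4: excessive repeated motions
    "static_load": 2,               # principle 5: fatigue, static load, pressure points
    "poor_clearance_lighting": 1,   # principle 6: clearance and lighting
    "no_movement_breaks": 1,        # principle 7: no chance to move or stretch
}

def screen_task(task_name: str, observed_factors: list[str], threshold: int = 4) -> None:
    """Flag a task as a candidate for redesign or collaborative-robot takeover."""
    score = sum(ERGONOMIC_RISK_FACTORS.get(f, 0) for f in observed_factors)
    verdict = "review for redesign / cobot" if score >= threshold else "low priority"
    print(f"{task_name}: risk score {score} -> {verdict}")

# Example walk-through notes for two hypothetical tasks.
screen_task("manual palletising", ["awkward_posture", "excessive_force", "repetitive_motion"])
screen_task("bench-top visual inspection", ["no_movement_breaks"])
```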

Step 2: Stand up and walk!

The second step is to stand up off your chair and walk around your workplace, noticing everything you can about the ergonomics of the tasks. Ask yourself (and then ask your colleagues) the following two questions:

  1. How could this task be improved to make it more comfortable (less physically stressful) to the worker?
  2. Which parts of the task could we give to a collaborative robot to solve the ergonomic issues?

We recommend that you take photos and videos of the tasks to document the ergonomics improvement process. This is doubly useful, as you can later use the same photos to help you design your collaborative robot process.

Step 3: Apply collaborative robots to improve ergonomics

Once you have identified problem areas in the workplace, you are likely to have a list of tasks which need improvement. Some of the tasks will be possible to carry out with a collaborative robot. Others will not, so you will need to improve their ergonomics in other ways.

Best practices for looking at tasks ergonomically

With so much information available about ergonomics, you might be thinking that you’d have to invest big in training to become a proactive ergonomics business. However, this is not necessarily the case. You can start small and still see big benefits. The International Labor Organization gives three simple things to remember when applying ergonomics to your workplace:

  1. It is most effective to examine tasks on a case-by-case basis. Ergonomics issues can be very specific to a task, so don’t assume that doing exactly the same thing everywhere in the workplace will be effective. Start small, looking at just one or two tasks, and build up gradually.
  2. Even minor changes to ergonomics can drastically reduce injury. It might seem trivial to have a robot pick up a part and move it 50cm to be closer to a human operator, but over an 8-hour shift this simple movement can add up to a huge physical strain on the worker (see the short calculation after this list). Simply knowing that the optimal working radius of a table is 25cm allows you to understand this and put it into practice.
  3. Staff should be involved in making any ergonomics changes to the workplace. People themselves are the best source of insight into how to improve their work tasks. Get your workers involved right from the start to identify problem areas and apply collaborative robots.
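To make the point in item 2 concrete, here is a minimal worked example of how one small extra reach accumulates over a shift; the cycle time and distances are hypothetical.

```python
# Minimal sketch: cumulative extra movement from one poorly placed part.
# Cycle time and distances are hypothetical, chosen only for illustration.

extra_reach_m = 0.50    # part sits 50 cm beyond the ~25 cm comfort radius
cycle_time_s = 30       # one pick every 30 seconds (assumed)
shift_hours = 8

cycles_per_shift = shift_hours * 3600 // cycle_time_s
extra_distance_m = cycles_per_shift * extra_reach_m * 2  # reach out and back

print(f"{cycles_per_shift} picks per shift, "
      f"{extra_distance_m:.0f} m of unnecessary reaching per worker per day")
# -> 960 picks per shift, 960 m of unnecessary reaching per worker per day
```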

Why businesses must get ready for the era of robotic things


We’ve entered a period of epic technological transformation that is impacting society in ways that leave even veteran tech observers speechless. In some ways, it might seem like 1998 all over again. The internet was then in its infancy and cyberspace was uncharted territory for much of the population. The Dot-com boom and eventual bust were inevitable, reflecting markets expanding and contracting with the newfound surge of interest, and the obviously overinflated speculation, around the potential of the Internet to transform society. Following a tremendous growth spurt in its first 5-6 years, the World Wide Web ended its first decade by learning to become sociable.

Indeed, the rise of digital social networks through channels like Facebook, Twitter, Instagram, and Snapchat has transformed how businesses and individuals work, think, and communicate. And now that the Internet has crossed the threshold into young adulthood, it’s continuing to grow and drive massive transformations throughout all parts of business and society. Soon-to-be-released autonomous cars, wearable tech, drones, 3-D printing, smart machines, home automation, virtual assistants like Siri, you name it . . . the pace of change is staggering.

Graphic by Predikto on company blog.

Many of the breakthroughs and innovations we’re seeing right now are the result of the confluence of three major technologies over the past 8 years: mobile, cloud, and Big Data. These technologies have collectively resulted in quicker and more efficient means of collaboration, development, and production, which in turn have allowed businesses to achieve unprecedented levels of growth and expansion. New ways of working, often remotely, mean that more people can do their jobs outside of traditional corporate structures. We’ve entered not only the freelancer economy, but the startup one as well. Now everyone is an entrepreneur, and innovation and new business growth are through the roof. What’s more, technology processes and production cycles have become commoditized. This means that anyone with the right idea, system, skills, and network in place today can effectively build a billion-dollar business with very low overhead costs.

This crazy rate of change is certainly great from the consumer standpoint, but also a bit unnerving for businesses simply trying to keep their head above the water. How are companies today to keep up with the market, their competitors, and with consumer expectations? What does all this change mean for startups struggling to gain traction, for more traditional brick and mortar businesses, and even for established enterprises that might be too big to pivot quickly?

These are broader questions that we’ll try to answer in another blog post. But for now, here is what we do know about the massive impacts coming over the next few years. The first is that the Internet of Things is taking the world by storm, with projections that 21 billion objects will be connected by the year 2020. That’s roughly 3 for every man, woman, and child on the planet! A few years ago Cisco estimated that the IoT market would create $19 trillion of economic value in the next decade.

What’s more, the global robotics industry is also undergoing a major transformation. Market intelligence firm Tractica released a report in November 2015 forecasting that the global robotics market will grow from $28.3 billion worldwide in 2015 to $151.7 billion by 2020. What’s especially significant is that this market will encompass mostly non-industrial robots, including segments like consumer, enterprise, medical, military, UAVs, and autonomous vehicles. Tractica anticipates an increase in annual robot shipments from 8.8 million in 2015 to 61.4 million by 2020; in fact, by 2020 over half of this volume will come from consumer robots.
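For readers who want to sanity-check those headline figures, here is a minimal sketch of the growth rates they imply. It only restates the numbers quoted above; the world-population figure used for the per-person ratio is an assumption.

```python
# Minimal sketch: growth rates implied by the market figures quoted above.
# The ~7.4 billion world-population figure is an assumption for the per-person ratio.

revenue_2015_bn, revenue_2020_bn = 28.3, 151.7   # Tractica market forecast, $bn
shipments_2015_m, shipments_2020_m = 8.8, 61.4   # annual robot shipments, millions
connected_objects_2020_bn = 21.0                 # projected connected objects, bn
world_population_bn = 7.4                        # assumed mid-2010s estimate

years = 5
revenue_cagr = (revenue_2020_bn / revenue_2015_bn) ** (1 / years) - 1
shipment_cagr = (shipments_2020_m / shipments_2015_m) ** (1 / years) - 1

print(f"Robotics revenue CAGR 2015-2020: {revenue_cagr:.0%}")   # ~40%
print(f"Robot shipment CAGR 2015-2020:   {shipment_cagr:.0%}")  # ~47%
print(f"Connected objects per person in 2020: "
      f"{connected_objects_2020_bn / world_population_bn:.1f}")  # ~2.8
```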


Putting these two major industry trends together, it doesn’t take a rocket scientist to figure out that the two industries – the Internet of Things and robotics – will together lead to a “perfect storm” of global market disruption, opportunities, and growth in the next 4 years and beyond. This confluence is part of a larger epic transformation, which has appropriately been called the Second Machine Age. Here is how this FastCompany article sums it up:

The fact is we’re now on the cusp of a “Second Machine Age,” one powered not by clanging factory equipment but by automation, artificial intelligence, and robotics. Self-driving cars are expected to be widespread in the coming decade. Already, automated checkout technology has replaced cashiers, and computerized check-in is the norm at airports. Just like the Industrial Revolution more than 200 years ago, the AI and robotics revolution is poised to touch virtually every aspect of our lives—from health and personal relations to government and, of course, the workplace.

This is a mouthful but in case it’s not clear, let me spell it out: there’s never been a better time than now to get onboard with robotics and Internet of Things!

If you’re a startup or small business owner, and especially if you’re feeling behind the technology curve, you’re certainly not alone. But instead of commiserating about all of the changes, start today to proactively ask yourself what it will take to get your organization to the next level of innovation. Set yourself up with a 6-month, 12-month, 18-month and 2-year innovation plan that maps to a broader 2020 strategy. Time is of the essence, but it’s not too late to pivot and get onboard with the robotics and IoT revolution. As the famous saying goes, “The journey of a thousand miles begins with a single step.”

Multi-robotic fabrication method has potential to build complex, stable, three-dimensional constructions

Figure 1: Multi-robotic assembly of spatial discrete elements structures. Source: NCCR Digital Fabrication.

Multi-robotic fabrication methods can strongly increase the potential of robotic fabrication for architectural applications through the definition of cooperative assembly tasks. As such, the objective of this research is to investigate and develop methods and techniques for the multi-robotic assembly of discrete elements into geometrically complex space frame structures.

This endeavour implies the definition of an integrative digital design method that leads to fabrication- and structure-informed assemblies that can be automatically built up into custom configurations. The research is being conducted at Gramazio Kohler Research as part of the interdisciplinary research program of the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication. It was started in September 2014 by Stefana Parascho and currently includes collaborations with Augusto Gandía and Thomas Kohlhammer.

Spatial Structures

Space frame structures developed during the industrial revolution as efficient systems for large-span constructions, but their variability was quickly limited by the need for standardisation and by complex connection detailing. Through the development of a multi-robotic assembly method and an integrated joining system, irregular space frame geometries become buildable, enhancing existing typologies through their potential for variability and efficient material use. The use of robotic fabrication techniques and the avoidance of pre-fabricated, rigid connections lead to a system that relies not only on digital planning and manufacturing but also includes digital assembly as an addition to the digital chain. A process for the robotic build-up of triangulated structures was developed, based on the alternating placement of rods: one robot always serves as a support for the already built structure while the other assembles a new element (Figure 2). As a result, the built structures do not require any additional support structures and are constantly stabilised by the robots.

Figure 2: Conceptual Diagram of multi-robotic assembly strategy, exemplified through the sequential build-up of a spatial triangulated structure. Two robots are alternating in order to position the elements and at the same time serve as support structure. Source: NCCR Digital Fabrication.
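To illustrate the alternating support-and-place logic described above, here is a hedged sketch of the idea; it is not the project’s actual control code, and all names and functions are hypothetical.

```python
# Hedged sketch of the alternating build-up logic described in the article:
# one robot keeps holding the rod it just placed as a temporary support while
# the other robot places the next rod. All names and functions are hypothetical.

from itertools import cycle

def place_rod(robot: str, rod: str) -> None:
    print(f"{robot}: placing {rod} and holding it in position")

def release_support(robot: str) -> None:
    print(f"{robot}: releasing its rod, which is now braced by the new element")

def build_triangulated_structure(rods: list[str]) -> None:
    robots = cycle(["robot_A", "robot_B"])
    supporting_robot = None
    for rod in rods:
        placing_robot = next(robots)
        place_rod(placing_robot, rod)          # this robot now supports the structure
        if supporting_robot is not None:
            release_support(supporting_robot)  # the other robot is freed for the next rod
        supporting_robot = placing_robot

build_triangulated_structure([f"rod_{i}" for i in range(1, 7)])
```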

Integrative Design Methods

Traditional architectural design methods commonly follow a top-down strategy in which both construction and fabrication are subordinated to a previously predefined geometry. In an integrative design approach, the fabrication, structural performance and given boundary constraints can simultaneously function as design drivers, allowing for a much higher flexibility and performance of the system. As such, the presented research focuses on the development of a design strategy in which various factors, such as constraints and characteristics of the multi-robotic fabrication process, are included in the geometric definition process of the structures.

Multi-robotic fabrication

The use of multiple robots for the assembly of discrete element structures opens up the potential to build complex, stable, three-dimensional constructions. At the same time, the process introduces various challenges, such as the need for collision avoidance strategies between multiple robots and the corresponding robotic path planning. In order to generate buildable structures, the design process needs to integrate the robots’ constraints, such as robot reach and kinematic behaviour, and at the same time process data from robotic simulation in order to foresee the robots’ precise movements. Through the collaboration with Augusto Gandía from Gramazio Kohler Research, a strategy was developed for implementing robotic simulation in a CAD environment and for generating collision-free trajectories for multi-robotic applications.

Figure 3: Mid-air build-up of tetrahedral structure without the use of any additional support structure. Source: NCCR Digital Fabrication.


In automation we trust: senior thesis project examines concept of over-trusting robotic systems

Serena Booth and her robot, Gaia, in its cookie-delivery disguise. (Photo by Adam Zewe/SEAS Communications.)

By Adam Zewe

If Hollywood is to be believed, there are two kinds of robots: the friendly and helpful BB-8s, and the sinister and deadly T-1000s. Few would suggest that “Star Wars: The Force Awakens” or “Terminator 2: Judgment Day” are scientifically accurate, but the two popular films raise the question, “Do humans place too much trust in robots?”

The answer to that question is as complex and multifaceted as robots themselves, according to the work of Harvard senior Serena Booth, a computer science concentrator at the John A. Paulson School of Engineering and Applied Sciences. For her senior thesis project, she examined the concept of over-trusting robotic systems by conducting a human-robot interaction study on the Harvard campus. Booth, who was advised by Radhika Nagpal, Fred Kavli Professor of Computer Science, received the Hoopes Prize, a prestigious annual award presented to Harvard College undergraduates for outstanding scholarly research.

During her month-long study, Booth placed a wheeled robot outside several Harvard residence houses. While she controlled the machine remotely and watched its interactions unfold through a camera, the robot approached individuals and groups of students and asked to be let into the keycard-access dorm buildings.

Booth’s robot, Gaia, waits outside the entrance to Quincy House. (Image courtesy of Serena Booth.)

When the robot approached lone individuals, they helped it enter the building in 19 percent of trials. When Booth placed the robot inside the building, and it approached individuals asking to be let outside, they complied with its request 40 percent of the time. Her results indicate that people may feel safety in numbers when interacting with robots, since the machine gained access to the building in 71 percent of cases when it approached groups.

“People were a little bit more likely to let the robot outside than inside, but it wasn’t statistically significant,” Booth said. “That was interesting, because I thought people would perceive the robot as a security threat.”

In fact, only one of the 108 study participants stopped to ask the robot if it had card access to the building.
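As a sketch of how the inside-versus-outside comparison could be checked for statistical significance (the article reports only percentages and overall totals, so the per-condition counts below are hypothetical), one option is Fisher’s exact test:

```python
# Hypothetical illustration of testing the "let inside" vs "let outside" rates.
# The article reports 19% and 40% success and 72 trials overall; the per-condition
# counts below are made up purely to show the mechanics of the test.

from scipy.stats import fisher_exact

inside_helped, inside_refused = 4, 17    # ~19% of an assumed 21 lone-individual trials
outside_helped, outside_refused = 8, 12  # ~40% of an assumed 20 exit trials

table = [[inside_helped, inside_refused],
         [outside_helped, outside_refused]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# With counts this small, p comes out well above 0.05, consistent with the
# reported lack of statistical significance.
```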

But the human-robot interactions took on a decidedly friendlier character when Booth disguised the robot as a cookie-delivering agent of a fictional startup, “RobotGrub.” When approached by the cookie-delivery robot, individuals let it into the building 76 percent of the time.

“Everyone loved the robot when it was delivering cookies,” she said.

The cookie delivery robot successfully gained entrance into the residence hall. (Image courtesy of Serena Booth.)

Whether they were enamored with the knee-high robot or terrified of it, people displayed a wide range of reactions during Booth’s 72 experimental trials. One individual, startled when the robot spoke, ran away and called security, while another gave the robot a wide berth, ignored its request, and entered the building through a different door.

Booth had thought individuals who perceived the robot to be dangerous wouldn’t let it inside, but after conducting follow-up interviews, she found that those who felt threatened by the robot were just as likely to help it enter the building.

“Another interesting result was that a lot of people stopped to take pictures of the robot,” she said. “In fact, in the follow-up interviews, one of the participants admitted that the reason she let it inside the building was for the Snapchat video.”

While Booth’s robot was harmless, she is troubled that only one person stopped to consider whether the machine was authorized to enter the dormitory. If the robot had been dangerous—a robotic bomb, for example—the effects of helping it enter the building could have been disastrous, she said.

A self-described robot enthusiast, Booth is excited about the many different ways robots could potentially benefit society, but she cautions that people must be careful not to put blind faith in the motivations and abilities of the machines.

“I’m worried that the results of this study indicate that we trust robots too much,” she said. “We are putting ourselves in a position where, as we allow robots to become more integrated into society, we could be setting ourselves up for some really bad outcomes.”

Airborne tech is coming to the rescue

L’Aquila, Abruzzo, Italy. A government’s office disrupted by the 2009 earthquake. Source: Wikipedia Commons

by Fintan Burke

‘There’s no real way to determine how safe it is to approach a building, and what is the safest route to do that,’ said Dr Angelos Amditis of the Institute of Communication and Computer Systems, Greece. ‘Now for the first time you can do that in a much more structured way.’

Dr Amditis coordinates the EU-funded RECONASS project, which is using drones and satellite technology to help emergency workers in post-disaster scenarios.

It’s part of a new arsenal of airborne disaster response technologies which is giving a bird’s eye view of disaster zones to help save lives.

The system developed by RECONASS places wireless positioning tags in a building’s structure, along with strain, temperature and acceleration sensors, either when it is first built or through retrofitting. Then, after a disaster strikes, drones are deployed to scan the building’s exterior and match data from the sensors to this image, building an accurate representation of the damage.

This allows the system to precisely calculate potential areas of structural weakness in the damaged building, and it can then be paired with satellite data to get an overview of the damage in an area.

‘From one side we have 3D virtual images showing the status of the building … produced by ground sensors’ input, and from the other side we have the 3D real images from the drones and satellite views,’ said Dr Amditis.
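To give a flavour of how such ground-sensor and drone data might be combined, here is a hedged sketch only, not the RECONASS implementation; all element names, fields and weights are assumptions.

```python
# Hedged sketch of fusing in-building sensor readings with drone-scan deviations
# to rank likely areas of structural weakness. This is NOT the RECONASS system's
# actual algorithm; element names, fields and weights are assumptions.

from dataclasses import dataclass

@dataclass
class ElementReport:
    element_id: str
    peak_strain_ratio: float       # measured strain / design limit (ground sensors)
    peak_acceleration_g: float     # from embedded accelerometers
    facade_displacement_cm: float  # deviation of the drone scan from the as-built model

def damage_score(r: ElementReport) -> float:
    # Simple weighted combination; the weights are illustrative only.
    return (0.5 * r.peak_strain_ratio
            + 0.2 * r.peak_acceleration_g
            + 0.3 * r.facade_displacement_cm / 10.0)

reports = [
    ElementReport("column_B2", 0.9, 0.8, 12.0),
    ElementReport("beam_3F_east", 0.4, 0.5, 2.0),
]

for r in sorted(reports, key=damage_score, reverse=True):
    print(f"{r.element_id}: damage score {damage_score(r):.2f}")
```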

‘There’s no real way to determine how safe it is to approach a building, and what is the safest route to do that.’

Dr Angelos Amditis, Institute of Communication and Computer Systems, Greece

Dr Amditis and colleagues are hoping to test the system in September by setting off explosives in a mock three-storey building in Sweden to investigate how well the drone technology works.

The end goal is for emergency services to use the drone technology to provide information about the structural state of tagged buildings within the first crucial hours after the disaster.

To boost rescue services’ ability to respond quickly in the event of a crisis, researchers in Spain are working on how to help emergency helicopter pilots successfully navigate through a disaster area by giving them more precise and reliable information about their position.

The EU-funded 5 LIVES project is building a system that uses both the newly launched Galileo Global Navigation Satellite System (GNSS) from the European Commission and the European Geostationary Navigation Overlay Service (EGNOS), which reports on the reliability and accuracy of data from satellites.

At the moment, most European helicopters rely on visual navigation, meaning they cannot fly in conditions such as fog, cloud or bad weather.

While more and more are beginning to use GNSS technology, the system has been known to be affected by slight changes in the earth’s atmosphere. There is also no way of knowing how accurate the information is.

‘With conventional GNSS technology positioning, there are a lot of external effects that might negatively affect the precision of positioning,’ said project manager Santi Vilardaga of Barcelona-based Pildo Labs.

In order to encourage the adoption of GNSS technology, the 5 LIVES system overlays information from EGNOS to confirm the precision and accuracy of satellite data.

‘Different parameters that affect the displacement of the signal or the propagation of the satellite signal through the atmosphere are collected through a network of ground-based stations and then transferred to the geostationary satellites that then broadcast the corrections to the EGNOS users,’ he said.
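As a rough illustration of the kind of correction an EGNOS-style overlay provides (a simplified sketch only; real EGNOS messages and receiver processing are far more involved, and every number below is invented), a receiver applies the broadcast corrections to each satellite’s pseudorange and also receives an integrity bound on the residual error:

```python
# Simplified sketch of applying SBAS/EGNOS-style corrections to pseudoranges.
# Real EGNOS processing is far more involved; every value here is invented.

satellites = {
    # sat_id: (raw_pseudorange_m, clock_correction_m, ionospheric_correction_m)
    "G05": (21_456_789.3, +2.1, -4.8),
    "G12": (23_112_044.7, -1.4, -3.2),
    "G17": (20_998_310.5, +0.6, -5.1),
}

protection_level_m = 12.0  # broadcast integrity bound on the residual error (invented)

for sat_id, (rho, d_clock, d_iono) in satellites.items():
    corrected = rho + d_clock + d_iono
    print(f"{sat_id}: corrected pseudorange = {corrected:,.1f} m")

print(f"Solution usable for an approach if its alert limit exceeds {protection_level_m} m")
```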

More flexible

This extra precision and confidence in the data allows emergency helicopters to become more flexible. ‘They can operate at night, they can operate technology in meteorological conditions without need of added technology on the ground,’ said Vilardaga.

They are also designing drone-based search and rescue operations. Drones can be used to fly around disaster areas, using on-board cameras to continue the search for survivors even when weather conditions make it unsafe for human pilots to do so, explains Vilardaga.

The team is now ensuring that the drones satisfy existing European regulations and that their new positioning technology is up to standard for a demonstration of their search and rescue procedures next year.

The need for a new way to combat natural disasters from the air is being driven in part by a new type of crisis – so-called mega-fires. These once-rare fires, which are known to spread at a rate of more than three acres per second, are forest fires large enough to spread between fire-fighting jurisdictions.

A variety of factors, such as drought, climate change and the practice of preventing smaller fires from developing naturally, have all been blamed for the increasing frequency of mega-fires.

To defend against these fires, the EU-funded Advanced Forest Fire Fighting project is trying to improve both firefighting logistics and technology.

One method involves developing a pellet made of extinguishing materials which can be dropped from a greater height, meaning pilots are less likely to encounter risks from smoke or heat from the fire.

Another area under development is centralising the information from satellite data, drones and ground reports into what is being called a core expert engine to help firefighters to prioritise procedures.

‘The commander can have at his disposal just one aircraft with a pellet,’ said project coordinator Cecilia Coveri of Finmeccanica SpA, Italy, ‘and in case the pellet is the best way to extinguish a fire … this simulation can help the commander to take this decision.’

The project begins its first trials in Athens during the summer in cooperation with Greek naval forces, with additional testing of the risk analysis and simulation tools to follow.


Photos from the Airbus Shopfloor Challenge


Robohub covered the Airbus Shopfloor Challenge that took place during #ICRA16 in Stockholm. Below, you can see an extensive photo gallery as part of our coverage. Check it out!

Team Naist


Team Naist, from the Nara Institute of Science and Technology in Japan, won first prize. They used a KUKA robot arm, an advanced head with stabilizing rods, and an advanced computer vision system that enabled them to drill holes efficiently and with great precision.


Team CriGroup


Team CriGroup is based at the School of Mechanical and Aerospace Engineering, within Nanyang Technological University in Singapore. They used ready-made parts and a Denso arm, with a special focus on software. Their method produced an innovative drilling pattern that minimized robot motion. They came in second place.


Team Sirado


Team Sirado brings together 6 researchers from the graduate School of Engineering at the Arts et Métiers Lille campus and 3 experienced industrial representatives from KUKA Systems Aerospace France and KUKA Automatisme Robotique SAS. They also used a KUKA arm and a specially designed drill unit. Sirado took third place in the competition.


Team R3


R3 is a robotics collective based out of Ryerson University in Ontario, Canada. Their custom-made XY platform used 7 drill bits in one unit to drill many holes at once. They performed in two rounds and competed in the final round.


Team Vayu


Team Vayu from India brings together five undergraduate students who share a passion for aerospace. They had the simplest approach, with a compact 3-axis robot that performed well throughout the challenge.


Team Akita Prefectural University


Japanese team Akita Prefectural University implemented a unique solution for the challenge. Their robot used a delta-based design to place the drill bit accurately. The arms themselves used rolled metallic tape under restrictors to extend and contract. They were able to demonstrate their setup, but weren’t able to compete.


Team Bug Eaters


The Bug Eaters team from the University of Nebraska-Lincoln, USA, is made up of four undergraduate Mechanical and Materials Engineering students. Their robot is an innovative take on the delta robot, but issues with their motors prevented them from competing.
