
Wearable system helps visually impaired users navigate

New algorithms power a prototype system for helping visually impaired users avoid obstacles and identify objects. Courtesy of the researchers.

Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths.

White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that uses a 3-D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface to give visually impaired users more information about their environments.

The system could be used in conjunction with or as an alternative to a cane. In a paper they’re presenting this week at the International Conference on Robotics and Automation, the researchers describe the system and a series of usability studies they conducted with visually impaired volunteers.

“We did a couple of different tests with blind users,” says Robert Katzschmann, a graduate student in mechanical engineering at MIT and one of the paper’s two first authors. “Having something that didn’t infringe on their other senses was important. So we didn’t want to have audio; we didn’t want to have something around the head, vibrations on the neck — all of those things, we tried them out, but none of them were accepted. We found that the one area of the body that is the least used for other senses is around your abdomen.”

Katzschmann is joined on the paper by his advisor Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; his fellow first author Hsueh-Cheng Wang, who was a postdoc at MIT when the work was done and is now an assistant professor of electrical and computer engineering at National Chiao Tung University in Taiwan; Santani Teng, a postdoc in CSAIL; Brandon Araki, a graduate student in mechanical engineering; and Laura Giarré, a professor of electrical engineering at the University of Modena and Reggio Emilia in Italy.

Parsing the world

The researchers’ system consists of a 3-D camera worn in a pouch hung around the neck; a processing unit that runs the team’s proprietary algorithms; the sensor belt, which has five vibrating motors evenly spaced around its forward half; and the reconfigurable Braille interface, which is worn at the user’s side.

The key to the system is an algorithm for quickly identifying surfaces and their orientations from the 3-D-camera data. The researchers experimented with three different types of 3-D cameras, which used three different techniques to gauge depth but all produced relatively low-resolution images — 640 pixels by 480 pixels — with both color and depth measurements for each pixel.

The algorithm first groups the pixels into clusters of three. Because the pixels have associated location data, each cluster determines a plane. If the orientations of the planes defined by five nearby clusters are within 10 degrees of each other, the system concludes that it has found a surface. It doesn’t need to determine the extent of the surface or what type of object it’s the surface of; it simply registers an obstacle at that location and begins to buzz the associated motor if the wearer gets within 2 meters of it.
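The paper's code isn't reproduced here, but the surface test described above is simple enough to sketch. The Python snippet below is a minimal illustration, not the researchers' implementation: the 10-degree agreement threshold and the 2-meter alert range come from the description above, while everything else (data layout, helper names) is assumed for the example.

```python
import numpy as np

ANGLE_TOL_DEG = 10.0   # max angular spread between cluster normals (from the article)
ALERT_RANGE_M = 2.0    # buzz the corresponding belt motor inside this range

def cluster_normal(p0, p1, p2):
    """Unit normal of the plane through one three-pixel cluster of 3-D points."""
    n = np.cross(p1 - p0, p2 - p0)
    length = np.linalg.norm(n)
    return n / length if length > 1e-9 else None

def is_surface(clusters):
    """Declare a surface when five nearby clusters agree in orientation within 10 degrees."""
    normals = [cluster_normal(*c) for c in clusters[:5]]
    if len(normals) < 5 or any(n is None for n in normals):
        return False
    ref = normals[0]
    for n in normals[1:]:
        cos_angle = np.clip(abs(np.dot(ref, n)), 0.0, 1.0)  # abs(): flipped normals count as parallel
        if np.degrees(np.arccos(cos_angle)) > ANGLE_TOL_DEG:
            return False
    return True

def should_buzz(clusters):
    """Register an obstacle and buzz if its nearest point is within the alert range."""
    if not is_surface(clusters):
        return False
    nearest = min(np.linalg.norm(p) for cluster in clusters for p in cluster)
    return nearest <= ALERT_RANGE_M
```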

Chair identification is similar but a little more stringent. The system needs to complete three distinct surface identifications, in the same general area, rather than just one; this ensures that the chair is unoccupied. The surfaces need to be roughly parallel to the ground, and they have to fall within a prescribed range of heights.
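Continuing the sketch above (and again not the authors' code), the chair check can be written as a predicate over already-detected surfaces. The three-surface requirement, the near-horizontal orientation, and the height band come from the description above; the surface records are assumed to carry a unit normal and a height above the floor, and the numeric limits are assumptions for illustration.

```python
import numpy as np

SEAT_HEIGHT_RANGE_M = (0.35, 0.60)  # assumed seat-height band; the paper's exact values may differ
MAX_TILT_DEG = 15.0                 # assumed tolerance for "roughly parallel to the ground"
UP = np.array([0.0, 0.0, 1.0])      # gravity-aligned axis

def is_seat_like(normal, height_m):
    """One surface counts toward a chair if it is near-horizontal and at seat height."""
    tilt = np.degrees(np.arccos(np.clip(abs(np.dot(normal, UP)), 0.0, 1.0)))
    low, high = SEAT_HEIGHT_RANGE_M
    return tilt <= MAX_TILT_DEG and low <= height_m <= high

def vacant_chair_found(surfaces):
    """Require three distinct seat-like surfaces detected in the same general area."""
    hits = [s for s in surfaces if is_seat_like(s["normal"], s["height_m"])]
    return len(hits) >= 3
```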

Tactile data

The belt motors can vary the frequency, intensity, and duration of their vibrations, as well as the intervals between them, to send different types of tactile signals to the user. For instance, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction indicated by that particular motor. But when the system is in chair-finding mode, for example, a double pulse indicates the direction in which a chair with a vacant seat can be found.
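As a toy illustration of that signalling scheme (the encoding below is our own guess; only the "closer means stronger and faster" rule and the double pulse for chairs come from the description above):

```python
def obstacle_signal(distance_m, max_range_m=2.0):
    """Closer obstacles -> higher frequency and intensity; all numbers are illustrative."""
    proximity = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return {
        "frequency_hz": 50 + 200 * proximity,   # assumed 50-250 Hz range
        "intensity": 0.2 + 0.8 * proximity,     # normalized 0..1
        "pattern": "continuous",
    }

def chair_signal():
    """In chair-finding mode, a double pulse marks the direction of a vacant seat."""
    return {"frequency_hz": 150, "intensity": 0.8, "pattern": "double_pulse"}
```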

The Braille interface consists of two rows of five reconfigurable Braille pads. Symbols displayed on the pads describe the objects in the user’s environment — for instance, a “t” for table or a “c” for chair. The symbol’s position in the row indicates the direction in which it can be found; the column it appears in indicates its distance. A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide.

In tests, the chair-finding system reduced subjects’ contacts with objects other than the chairs they sought by 80 percent, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86 percent.

May 2017 fundings, acquisitions, IPOs and failures

In May 2017, two robotics-related companies raised $9.5 billion in funding, and 22 others raised $249 million. Acquisitions also continued to be substantial, with Toyota Motor’s $260 million acquisition of Bastian Solutions plus three others (where the amounts weren’t disclosed).

Fundings

  1. Didi Chuxing, the Uber of China, raised $5.5 billion in a round led by SoftBank with new investors Silver Lake Kraftwerk joining previous investors SoftBank, China Merchants Bank and Bank of Communications. According to TechCrunch, this latest round brings the total raised by DiDi to about $13 billion. Uber, by comparison, has raised $8.81 billion.
  2. Nvidia Corp, a Santa Clara, CA-based speciality GPU maker, raised $4 billion (representing a 4.9% stake in the company) according to Bloomberg. Nvidia’s newest chips are focused on providing power for deep learning for self-driving vehicles.
  3. ClearMotion, a Woburn, MA automotive technology startup that’s building shock absorbers with robotic, software-driven adaptive actuators for car stability, has raised $100 million in a Series C round led by a group of JP Morgan clients and NEA, Qualcomm Ventures and more.
  4. Echodyne, a Bellevue, WA developer of radar vision technology used in drones and self-driving cars, has raised $29 million in a Series B round led by New Enterprise Associates and joined by Bill Gates, Madrona Venture Group, and others.
  5. DeepMap, a Silicon Valley mapping startup, raised $25 million in a round led by Accel that included GSR Ventures and Andreessen Horowitz.

    “Autonomous vehicles are tempting us with a radically new future. However, this level of autonomy requires a highly sophisticated mapping and localization infrastructure that can handle massive amounts of data. I’m very excited to work with the DeepMap team, who have the requisite expertise in mapping, vision, and large scale operations, as they build the core technology that will fuel the next generation of transportation,” said Martin Casado, general partner at Andreessen Horowitz.

  6. Hesai Photonics Technology, a transplanted Silicon Valley-to-Shanghai sensor startup, raised $16 million in a Series A round led by Pagoda Investment with participation from Grains Valley Venture Capital, Jiangmen Venture Capital and LightHouse Capital Management. Hesai is developing a hybrid LiDAR device for self-driving cars. Hesai has already partnered with a number of autonomous driving technology and car companies including Baidu, Chinese electric vehicle start-up NIO, and self-driving tech firm UiSee.
  7. Abundant Robotics, a Menlo Park, CA-based automated fruit-picking tech developer, raised $10 million in venture funding. GV (Google Ventures) led the round, and was joined by BayWa AG and Tellus Partners. Existing partners Yamaha Motor Company, KPCB Edge, and Comet Labs also participated.
  8. TriLumina Corp., an Albuquerque, NM-based developer of solid-state automotive LiDAR illumination for ADAS and autonomous driving, closed a $9 million equity and debt financing. Backers included new investors Kickstart Seed Fund and existing stakeholders Stage 1 Ventures, Cottonwood Technology Fund, DENSO Ventures and Sun Mountain Capital.
  9. Bowery Farming, a NYC indoor vertical farm startup, raised $7.5 million (in February) in a seed round led by First Round Capital and including Box Group, Homebrew, Flybridge, Red Swan, RRE, Lerer Hippeau Ventures, and Tom Colicchio – a restaurateur and judge on the reality cooking show Top Chef.
  10. Taranis, an Israel-based precision agriculture intelligence platform raised $7.5 million in Series A funding. Finistere Ventures led the round, and was joined by Vertex Ventures. Existing investors Eshbol Investments, Mindset Ventures, OurCrowd, and Eyal Gura participated.
  11. Ceres Imaging, the Oakland, CA aerial imagery and analytics company, raised a $5 million Series A round of funding led by Romulus Capital.
  12. Stanley Robotics, a Paris-based automated valet parking service developer, raised $4 million in funding. Investors included Elaia Partners, Idinvest Partners and Ville de Demain. Stanley’s new parking robot is a mobile car-carrying lift that moves and tightly parks cars in outdoor locations.
  13. AIRY3D Inc, a Canadian start-up in 3D computer vision, raised $3.5 million in a seed round co-led by CRCM Ventures and R7 Partners. Other investors include WI Harper Group, Robert Bosch Venture Capital, Nautilus Venture Partners and several angel investors that are affiliates of TandemLaunch, the Montreal-based incubator that spun out AIRY3D.
  14. SkyX Systems, a Canada-based unmanned aircraft system developer, raised around $3 million in funding from Kuang-Chi Group.
  15. Catalia Health, a San Francisco-based patient care management company applying robotics to improve personal health, raised $2.5 million in funding. Khosla Ventures led the round.
  16. vHive, an Israeli startup developing software to operate autonomous drone fleets, raised $2 million (in April) in an A round led by StageOne VC and several additional private investors.
  17. Vivacity Labs, a London AI tech and sensor startup, raised $2 million from Tracsis, Downing Ventures and the London Co-Investment Fund and was also granted an additional $1.3 million from Innovate UK to create sensors with built-in machine learning to identify individual road users and manage traffic accordingly.
  18. Bluewrist, a Canadian integrator of vision systems, raised around $1.5 million (in February) from Istuary Toronto Capital.
  19. American Robotics, a Boston-based commercial farming drone system and analytics developer, raised $1.1 million in seed funding. Investors included Brain Robotics Capital.
  20. Kubo, a Danish educational robot startup, raised around $1 million from the Danish Growth Fund. Kubo is an educational robot that helps kids learn coding, math, language and music in a screenless, tangible environment.
  21. Zeals, a Japanese startup which produces interaction software for robots such as Palmi and Sota, has closed a $720k investment from Japanese adtech firm FreakOut Holdings.
  22. Kitty Hawk, a San Francisco drone platform startup, raised $600k in seed money in March from The Flying Object VC.
  23. Kraken Sonar, a Newfoundland marine tech startup, raised around $500k from RDC, a provincial Crown corporation responsible for improving Newfoundland and Labrador’s research and development. The funding will be used to develop the ThunderFish program which will combine smart sonar, laser and optical sensors, advanced pressure tolerant battery and thruster technologies and cutting edge artificial intelligence algorithms integrated onboard a cost effective AUV capable of 20,000 foot depths.
  24. Motörleaf, a Canadian ag sensor, communications and software startup, raised an undisclosed amount in a seed round (in March).

Acquisitions

  1. Toyota Motor Corp paid $260 million to acquire Bastian Solutions, a U.S.-based materials handling systems integrator. Toyota is the world’s No. 1 forklift truck manufacturer in terms of global market share. With this acquisition Toyota is making a “full-scale entry” into the North American logistics technology sector and will also use Bastian’s systems to make its own global supply chain more efficient.
  2. Ctrl.Me Robotics, a Hollywood, CA drone startup, was acquired by Snap, Inc. for “an amount less than $1 million.” Ctrl.Me developed a system for capturing movie-quality aerial video but was recently winding down its operations. Snap acquired its assets and technology as well as talent. Snap already sells its own camera hardware, Spectacles, which captures video for Snap’s mobile app.
  3. Applied Research Associates, Inc. (ARA), an employee-owned scientific research and engineering company, acquired Neya Systems LLC on April 28, 2017. Neya Systems LLC is known for their development of unmanned systems for defense, homeland security, and commercial users. Terms of the deal were not disclosed.
  4. Trimble has acquired Müller-Elektronik and all its subsidiary companies for an undisclosed amount. Müller is a German manufacturer and integrator of farm implement controls, steering kits and precision farming solutions. The transaction is expected to close in Q3 2017; financial terms were not disclosed. Müller was key in the development of the ISOBUS communication protocol found in most tractors and towed implements, which allows one terminal to control several implements and machines, regardless of manufacturer.

IPOs

  1. Gamma 2 Robotics, a security robot maker, launched a $6 million private offering to accredited investors.
  2. Aquabotix, a Fall River, MA-headquartered company, raised $5.5 million from their IPO of UUV (ASX:UUV) on the Australian Securities Exchange (ASX). Aquabotix manufactures commercial and industrial underwater drone/camera systems and has shipped over 350 units worldwide.

Failures

  1. FarmLink LLC
  2. EZ Robotics (CN)

Europe regulates robotics: Summer school brings together researchers and experts in robotics

After a successful first edition in 2016, the next edition of our summer school, The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications, will take place in Pisa at the Scuola Sant’Anna from 3-8 July.

When the Robolaw project came to an end – and we presented our results before the European Parliament – we clearly perceived that a leap was needed not only in some approaches to regulation but also in the way social scientists, as well as engineers, are trained.

Indeed, an adequate understanding of the peculiarities of the systems being studied is required in order to carry out sound technical analysis in law and robotics without being lured into science fiction. A bottom-up approach, like the one adopted by Robolaw and its guidelines, is essential.

Social scientists, and lawyers in particular, often lack such knowledge and thus tend either to make unreasonable assumptions – about technological developments that are far-fetched or simply unrealistic – or to misidentify what the pivotal point of the analysis is going to be. The notion of autonomy is a fitting example. The consequence is not simply bad scientific literature, but potentially inadequate policies and hence wrong decisions – even legislative ones – being adopted, affecting the research and development of new applications while overlooking relevant issues and impairments.

Similarly, engineers working in robotics are often confronted with philosophical and legal debates about the devices they research, which they are not always equipped to engage with. Those debates are nonetheless valuable, for they help identify societal concerns and expectations that can be used to orient research strategically, and engineers ought also to participate and have a say.

Ultimately, it is in everybody’s interest to better address existing and emerging needs, fulfilling desires and avoiding eliciting often ungrounded fears. This is what the European Union understands as Responsible Research and Innovation, but it is also the prerequisite for the diffusion of new applications in society and the emergence of a sound robotics industry. Moreover, the current tendency in EU regulation to favour by-design approaches – whereby privacy or other rights need to be enforced through the very functioning of the device – requires technicians to consider such concerns early on, during the development phase of their products.

A common language thus needs to be created to avoid a Babel-tower effect – one that preserves each discipline’s peculiarities and specificities while allowing close cooperation.

A multidisciplinary approach is required, grounded in philosophy (ethics in particular), law (including law and economics methodologies), economics and innovation management, and engineering.

With that idea in mind, we competed for and won a Jean Monnet grant – a prestigious funding action of the EU Commission, mainly directed towards the promotion of education and teaching activities – with a project titled Europe Regulates Robotics, and organized the first edition of the Summer School The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications in 2016.

The opening event of the Summer School saw the participation of MEP (Member of the European Parliament) Mady Delvaux-Stehres, who presented what was then the draft recommendation – now approved – of the EU Parliament on Civil Law Rules on Robotics; Mihalis Kritikos – a Policy Analyst at the Parliament – who personally contributed to the drafting of that document; and Maria Chiara Carrozza – former Italian Minister of University Education and Research, professor of robotics and member of the Italian Senate – who discussed Italian political initiatives. We also had entrepreneurs, such as Roberto Valleggi, and engineers from industry, such as Arturo Baroncelli of Comau, and from academia, such as Fabio Bonsignorio, who also taught in the course.

Classes dealt with methodologies – the Robolaw approach – notions of autonomy, liability – and different liability models – privacy, benchmarking and robot design, machine ethics and human enhancement through technology, innovation management and technology assessment. Students also had the chance to visit the Biorobotics Institute laboratories in Pontedera (directed by Prof. Paolo Dario) and see many different applications and research being carried out, directly explained by the people who work on them.

The most impressive part was, however, our class. We managed to put together a truly international group of young and bright minds, many of whom were already enrolled in a PhD program – in law, philosophy, engineering or management – coming from universities such as Edinburgh, the London School of Economics, the Sorbonne, Cambridge, Vienna, Bologna, Suor Orsola, Bicocca, Milan, Hannover, Pisa, Pavia and Freiburg. Others came from prominent European robotics companies, or were practitioners, entrepreneurs and policy makers from EU institutions.

At the end of the Summer School, some presented their research on a broad set of extremely interesting topics, such as driverless car liability and innovation management, machine ethics and the trolley problem, anthrobotics and decision-making in natural and artificial systems.

We had lively in-class debates. A true community was created that is still in contact today. Five of our students actively participated in the 2017 European Robotics Forum in Edinburgh, and more are working and publishing on such matters.

We can say we truly achieved our goal! However, the challenge has just begun. We want to reach out to more people and replicate this successful initiative. A second edition of the Summer School will take place again this year in Pisa at the Scuola Sant’Anna from July 3rd to 8th and we are accepting applications until June 7th.

I am certain we will manage again to put together an incredible group of individuals, and I can’t wait to meet our new class. On our side, we are preparing a lot of interesting surprises for them, including the participation of policy makers involved in the regulation of robotics at EU level to provide a first-hand look at what is happening in the EU.

More information about the summer school can be found on our website here.

Registration to the summer school can be found here.

Researcher to develop bio-inspired ‘smart’ knee for prosthetics

A researcher at the University of the West of England (UWE Bristol) is developing a bio-inspired ‘smart’ knee joint for prosthetic lower limbs. Dr Appolinaire Etoundi, based at Bristol Robotics Laboratory, is leading the research and will analyse the functions, features and mechanisms of the human knee in order to translate this information into a new bio-inspired procedure for designing prosthetics.

Dr Etoundi gained his PhD in bio-inspired technologies from the University of Bristol, where he developed a design procedure for humanoid robotic knee joints. He is now turning his attention to nature – a growing area in robotics known as biomimicry, which combines curiosity about how biological systems work with solving complex engineering problems – in order to develop a prototype smart knee joint for prosthetics.

Andy Lewis, a Paralympic triathlon gold medallist (Rio 2016) who wears a lower-limb prosthetic, will try out the new joint once it is developed, to compare its energy consumption and gait efficiency with current prosthetics. Approximately 100,000 knee replacement operations are performed every year in the UK. Lower-limb amputation has a profound effect on daily life, and prostheses must be comfortable and well adapted to the wearer so that daily activities such as walking and running can be maintained.

Looking for inspiration in nature, Dr Etoundi will examine how the human knee works, as well as looking closely at the design of knee replacements used in surgery and at current knee joints in prosthetic limbs.  These three areas of knowledge will inform a procedure for designing a knee that could give greater, more responsive movement, while offering the control and intelligence that comes from robotics.

Dr Etoundi says, “I have spent years designing knee joints for humanoid robots, but the human knee has evolved over millions of years and is incredibly successful.  The human knee is a very complex joint with ligaments, which guide the motion of the knee, and bones that perform the motion.  Current mechanisms in prosthetic knees have a straightforward pin joint with ball bearings that does not have the sophisticated range of motion and stability of the human knee with its cruciate ligaments.

“The complex interaction between the soft tissue (ligaments) and the bones in the knee joint is an area that has yet to be replicated in prosthetics.  We need to understand this better in order to provide a better knee joint for people to use. I will study the different mechanisms within the knee joint and look for ways to translate its beneficial functionalities into a design concept for prosthetics.

“I want to create a prosthetic knee that will give the greatest range of motion with the least friction, enabling walking, climbing stairs, squatting and stability, while also offering important attributes of current prosthetics and the benefits of robotic technology.”

Andy Lewis, who will try out Dr Etoundi’s nature-inspired design says, “I was pleased when Appo approached me. He understands the importance of a good prosthetic for sports people, and it will be interesting to see what he discovers that might make a better prosthetic which is more responsive.  I am looking forward to seeing his early designs next year and trying them out.”

The research team includes Professor Richie Gill (University of Bath), Dr Ravi Vaidyanathan (Imperial College London) and Dr Michael Whitehouse (University of Bristol).

Dr Etoundi is a Senior Lecturer in Mechatronics at UWE Bristol and is a member of the Medical Robotics group at Bristol Robotics Laboratory, which looks at the application of robotic technology in human-controlled and surgical applications.

 

Artificial intelligence: Europe needs a “human in command approach,” says EESC

Credit: EESC

The EU must pursue a policy that ensures the development, deployment and use of artificial intelligence (AI) in Europe works to the benefit, and not to the detriment, of society and social welfare, the Committee said in an own-initiative opinion on the social impact of AI, in which 11 fields are identified for action.

“We need a human-in-command approach to artificial intelligence, in which machines remain machines and people always retain control over these machines,” said rapporteur Catelijne Muller (NL – Workers’ Group). This is not only about technical control: “People can and must decide whether, when and how AI is used in our daily lives, what tasks we entrust to AI, and how transparent and ethical it all is. After all, it is up to us to decide whether we want certain activities to be carried out, or care and medical decisions to be made, by AI, and whether we want to accept that AI might jeopardize our security, privacy and autonomy,” said Ms. Muller.

Artificial intelligence has recently undergone rapid growth. The AI market is currently worth approximately USD 664 million and is expected to grow to USD 38.8 billion by 2025. It is virtually undisputed that artificial intelligence can bring great social benefits: consider applications in sustainable agriculture, environmentally friendly production, safer traffic, a safer financial system, better medicine and more. Artificial intelligence may even contribute to the eradication of disease and poverty.

But the benefits of AI can only be realized if the challenges surrounding it are also addressed. The Committee has identified 11 areas in which AI raises societal challenges: ethics; safety; transparency; privacy and standards; employment; education; (in)equality and inclusiveness; legislation and regulation; governance and democracy; warfare; and superintelligence.

These challenges cannot be left to industry alone; they are a matter for governments, social partners, scientists and companies to address together. The EESC believes that the EU should adopt policy frameworks in this area and should play a global leadership role. “We need pan-European norms and standards for AI, just as we now have for food and household appliances. We need a pan-European code of ethics to ensure that AI systems remain compatible with the principles of human dignity, integrity, freedom and cultural and gender diversity, as well as with fundamental human rights,” said Catelijne Muller, “and we need employment strategies to maintain or create jobs and to ensure that employees remain autonomous and take pleasure in their work.”

The impact of AI on employment is indeed central to the European debate on AI, at a time when unemployment remains high because of the crisis. Although predictions of the share of jobs that will be lost to AI over the next 20 years vary from a modest 5% to a disastrous 100% (a society without jobs), the rapporteur believes, based on a recent McKinsey report, that it is more likely that parts of jobs, rather than complete jobs, will be taken over by AI. In that case, the priority is education, lifelong learning and training, to ensure that workers benefit from these developments rather than fall victim to them.

The EESC opinion also calls for a European AI infrastructure with open-source, privacy-respecting development and learning environments, real-life test environments and high-quality data sets for training and developing AI systems. Artificial intelligence has so far been developed mainly by the “big five” (Amazon, Facebook, Apple, Google and Microsoft). Although these companies favor the open development of AI, and some offer their AI development platforms as open source, full accessibility is not guaranteed. An EU AI infrastructure, possibly combined with European AI certification or labeling, could not only promote the development of responsible and sustainable AI but also strengthen the EU’s competitive advantage.

Livestream: Committee to take stance in the European debate on artificial intelligence

524th plenary session, Main debating chamber, European Parliament. Credit: EESC

Today, the European Economic and Social Committee (EESC) is going to debate its stance in the European discussion on AI and will express conflicting views on certain issues, especially on the question of legal personality for robots. The report, which has been drawn up by a Dutch rapporteur, Ms Catelijne Muller, member of the Workers’ Group, will be debated at the EESC’s plenary in Brussels on 31 May.

Click here to watch the livestream. Live coverage will begin at 14:30 with the debate on AI at 16:00 CEST.

You can also download and read documents related to the referral on the consequences of artificial intelligence for the (digital) single market, production, consumption, employment and society here.

From the EESC website:

“Artificial Intelligence (AI) technologies offer the potential for creating new and innovative solutions to improve people’s lives, grow the economy, and address challenges in health and wellbeing, climate change, safety and security. Like any disruptive technology, however, AI carries risks and presents complex societal challenges in several areas such as labour, safety, privacy, ethics, skills and so on.

A broad approach towards AI, covering all its effects (good and bad) on society as a whole, is crucial. Especially in a time where developments are accelerating.”

RoboCup video series: Rescue league

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. Established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

Robotics isn’t only about playing soccer; it’s also about helping people. This week, we take a look at what it takes to be part of RoboCupRescue. You’ll hear about the history and ambitions of RoboCup from the trustees, and meet inspiring teams from around the world.

Short version:

Long version:

Can’t wait to watch the rest? You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.


Faster, more nimble drones on the horizon

Engineers at MIT have come up with an algorithm to tune a Dynamic Vision Sensor (DVS) camera, simplifying a scene to its most essential visual elements and potentially enabling the development of faster drones. Image: Jose-Luis Olivares/MIT

There’s a limit to how fast autonomous vehicles can fly while safely avoiding obstacles. That’s because the cameras used on today’s drones can only process images so fast, frame by individual frame. Beyond roughly 30 miles per hour, a drone is likely to crash simply because its cameras can’t keep up.

Recently, researchers in Zurich invented a new type of camera, known as the Dynamic Vision Sensor (DVS), that continuously visualizes a scene in terms of changes in brightness, at extremely short, microsecond intervals. But this deluge of data can overwhelm a system, making it difficult for a drone to distinguish an oncoming obstacle through the noise.

Now engineers at MIT have come up with an algorithm to tune a DVS camera to detect only specific changes in brightness that matter for a particular system, vastly simplifying a scene to its most essential visual elements.

The results, which they presented at the IEEE American Control Conference in Seattle, can be applied to any linear system that directs a robot to move from point A to point B as a response to high-speed visual data. Eventually, the results could also help to increase the speeds for more complex systems such as drones and other autonomous robots.

“There is a new family of vision sensors that has the capacity to bring high-speed autonomous flight to reality, but researchers have not developed algorithms that are suitable to process the output data,” says lead author Prince Singh, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We present a first approach for making sense of the DVS’ ambiguous data, by reformulating the inherently noisy system into an amenable form.”

Singh’s co-authors are MIT visiting professor Emilio Frazzoli of the Swiss Federal Institute of Technology in Zurich, and Sze Zheng Yong of Arizona State University.

Taking a visual cue from biology

The DVS camera is the first commercially available “neuromorphic” sensor — a class of sensors that is modeled after the vision systems in animals and humans. In the very early stages of processing a scene, photosensitive cells in the human retina, for example, are activated in response to changes in luminosity, in real time.

Neuromorphic sensors are designed with multiple circuits arranged in parallel, similarly to photosensitive cells, that activate and produce blue or red pixels on a computer screen in response to either a drop or spike in brightness.

Instead of a typical video feed, a drone with a DVS camera would “see” a grainy depiction of pixels that switch between two colors, depending on whether that point in space has brightened or darkened at any given moment. The sensor requires no image processing and is designed to enable, among other applications, high-speed autonomous flight.

Researchers have used DVS cameras to enable simple linear systems to see and react to high-speed events, and they have designed controllers, or algorithms, to quickly translate DVS data and carry out appropriate responses. For example, engineers have designed controllers that interpret pixel changes in order to control the movements of a robotic goalie to block an incoming soccer ball, as well as to direct a motorized platform to keep a pencil standing upright.

But for any given DVS system, researchers have had to start from scratch in designing a controller to translate DVS data in a meaningful way for that particular system.

“The pencil and goalie examples are very geometrically constrained, meaning if you give me those specific scenarios, I can design a controller,” Singh says. “But the question becomes, what if I want to do something more complicated?”

Cutting through the noise

In the team’s new paper, the researchers report developing a sort of universal controller that can translate DVS data in a meaningful way for any simple linear, robotic system. The key to the controller is that it identifies the ideal value for a parameter Singh calls “H,” or the event-threshold value, signifying the minimum change in brightness that the system can detect.

Setting the H value for a particular system can essentially determine that system’s visual sensitivity: A system with a low H value would be programmed to take in and interpret changes in luminosity that range from very small to relatively large, while a high H value would exclude small changes, and only “see” and react to large variations in brightness.
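A minimal sketch of how the event threshold acts as a sensitivity filter (this illustrates the idea described above, not the controller in the paper, and it treats the data as frame-to-frame differences even though a real DVS reports asynchronous per-pixel events):

```python
import numpy as np

def dvs_events(prev_log_intensity, log_intensity, H):
    """
    Emit DVS-style events for one comparison between two brightness maps.
    Returns +1 where brightness rose by at least H, -1 where it fell by at least H,
    and 0 elsewhere (no event). H is the event-threshold value discussed above.
    """
    delta = log_intensity - prev_log_intensity
    events = np.zeros_like(delta, dtype=np.int8)
    events[delta >= H] = 1      # brightening event (the "blue pixel" case)
    events[delta <= -H] = -1    # darkening event (the "red pixel" case)
    return events

# Example: a larger H suppresses small, possibly spurious, fluctuations.
rng = np.random.default_rng(0)
prev = rng.random((480, 640))
curr = prev + rng.normal(0.0, 0.02, size=prev.shape)   # mostly small sensor noise
curr[200:240, 300:340] += 0.5                          # one genuine bright change
print("events with H=0.05:", int(np.abs(dvs_events(prev, curr, 0.05)).sum()))
print("events with H=0.30:", int(np.abs(dvs_events(prev, curr, 0.30)).sum()))
```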

The researchers formulated an algorithm first by taking into account the possibility that a change in brightness would occur for every “event,” or pixel activated in a particular system. They also estimated the probability for “spurious events,” such as a pixel randomly misfiring, creating false noise in the data.

Once they derived a formula with these variables in mind, they were able to work it into a well-known algorithm known as an H-infinity robust controller, to determine the H value for that system.

The team’s algorithm can now be used to set a DVS camera’s sensitivity to detect the most essential changes in brightness for any given linear system, while excluding extraneous signals. The researchers performed a numerical simulation to test the algorithm, identifying an H value for a theoretical linear system, which they found was able to remain stable and carry out its function without being disrupted by extraneous pixel events.

“We found that this H threshold serves as a ‘sweet-spot,’ so that a system doesn’t become overwhelmed with too many events,” Singh says. He adds that the new results “unify control of many systems,” and represent a first step toward faster, more stable autonomous flying robots, such as the Robobee, developed by researchers at Harvard University.

“We want to break that speed limit of 20 to 30 miles per hour, and go faster without colliding,” Singh says. “The next step may be to combine DVS with a regular camera, which can tell you, based on the DVS rendering, that an object is a couch versus a car, in real time.”

This research was supported in part by the Singapore National Research Foundation through the SMART Future Urban Mobility project.

Talking machines: Graphons and “inferencing”

In episode two of season three, Neil takes us through the basics of dropout, we chat about the definition of inference (it’s more about context than you think!), and we hear an interview with Jennifer Chayes of Microsoft.


How a challenging aerial environment sparked a business opportunity

We develop the fastest, smallest and lightest distance sensors for advanced robotics in challenging environments. These sensors are born from a fruitful collaboration with CERN while developing flying indoor inspection systems.

It all began with a challenge: the European Organization for Nuclear Research (CERN) asked if we could use drones to perform fully autonomous inspections within the tunnel of the Large Hadron Collider. If you haven’t seen it, it’s a complex environment; perhaps one of the most unfriendly environments imaginable for fully autonomous drone flight. But we accepted the mission, rolled up our sleeves, and got to work. As you can imagine, the mission was very challenging!

Large Hadron Collider tunnel. Credit: CERN

One of the main issues we faced was finding suitable sensors to put on the drone for navigation and anti-collision. We got everything on the market that we could find and tried to make it work. Ultrasound was too slow and the range too short. Lasers tended to be too big and too heavy, and consumed too much power. Monocular and stereo vision were highly complex, placed a huge computational burden on the system, and even then were prone to failure. It became clear that what we really needed simply didn’t exist! That’s how the concept for the TeraRanger brand of sensors was born.

Having failed to make any of the available sensing technologies work at the performance levels required, we came to the conclusion that we would need to build the sensors from the ground up. It wasn’t easy (and still isn’t), but finally we had something small enough, light enough (8 g), with fast refresh rates and enough range to work well on the drone. Leading academics in robotics could see the sensor’s potential and wanted some for themselves; then more people wanted them, and before too long we had a new business.

Millimetre precision wasn’t vital for the drone application, but the high refresh rates and range were. And by not using a laser emitter we were able to give the sensor a 3 degree field of view, which for many applications proved to be a real boon, giving a smoother flow of data when faced with uneven surfaces and complex and cluttered environments. It also enabled the sensor to be fully eye-safe and the supply current to remain low.

 

Plug and play multi-axis sensing

Knowing that we would often need to use multiple sensors at the same time, we also designed in support for multi-sensor, multi-axis requirements. Using a ‘hub’ we can simultaneously connect up to eight sensors to provide a simple-to-use, plug-and-play approach to multi-sensor applications. By controlling the sequence in which sensors are fired (along with some other parameters), we are able to limit or eliminate the potential for sensor cross-talk and then stream an array of calibrated distance values in millimetres, at high speed. From a user’s perspective this is about as simple as it gets, since the hub also centralises power management.
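Conceptually, the hub behaves like a round-robin scheduler: each sensor fires in turn so that only one emitter is active at a time. The sketch below is our own illustration, not Terabee's firmware or API, and `read_distance_mm()` is a placeholder for whatever interface a given sensor exposes.

```python
import time

def poll_hub(sensors, settle_time_s=0.0005):
    """
    Fire up to eight sensors one after another so that only one emitter is active
    at a time, which limits cross-talk, then return one array of distances in mm.
    `sensors` is a list of objects exposing read_distance_mm() (placeholder API).
    """
    readings = []
    for sensor in sensors:
        readings.append(sensor.read_distance_mm())  # only this sensor is firing now
        time.sleep(settle_time_s)                   # small gap before the next one fires
    return readings
```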

TeraRanger Tower

There’s no need to get in a spin

Using that same concept, we continued to push the boundaries. A significant evolution has been our approach to LiDAR scanning – not just from a hardware point of view (although that is also different) but from a conceptual approach too. We’ve taken the same philosophy of small, lightweight sensors with very high refresh rates (up to 1 kHz) and applied it to create a new style of static LiDAR. Rather than rotating a sensor or using other mechanical methods to move a beam, TeraRanger Tower simultaneously monitors eight axes (or more if you stack multiple units together) and streams an array of data at up to 270 Hz!

Challenging the point-cloud conundrum

With no motors or other moving parts, the hardware itself has many advantages, being silent, lightweight and robust, but there is also a secondary benefit from the data. Traditional thinking in the robotics community is that to perform navigation, Simultaneous Localisation and Mapping (SLAM) and collision avoidance you have to “see” everything around you. Just as we did at the start of our journey, people focus on complex solutions – like stereo vision – gathering millions of data points, which then require complex and resource-hungry processing. The complexity of the solution – and of the algorithms – has the potential to create many failure modes. Having discovered for ourselves that the complicated solution is not always necessary, our approach is different: we monitor fewer points, but we monitor them at very fast refresh rates to ensure that what we think we see is really there. As a result, we build sparser point clouds, but with very reliable data. This requires less complex algorithms and processing and can be done with lighter-weight computing. The result is a more robust, and potentially safer, solution, especially when you can make some assumptions about your environment, or harness odometry data to augment the LiDAR data. Many times we were told you could never do SLAM on just eight points, but we proved that wrong.
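The "fewer points, faster refresh" idea can be pictured with a simple confirmation filter: a point on one sensing axis is treated as a real obstacle only if it persists across the last several high-rate readings. This is a toy sketch of the principle, not TeraRanger code, and the window size and thresholds are arbitrary.

```python
from collections import deque

class AxisObstacleFilter:
    """Confirm an obstacle on one sensing axis only if it persists across readings."""

    def __init__(self, window=10, hits_required=7, range_mm=2000):
        self.history = deque(maxlen=window)   # most recent in-range flags
        self.hits_required = hits_required
        self.range_mm = range_mm

    def update(self, distance_mm):
        """Add one reading; return True once the obstacle is confirmed."""
        self.history.append(distance_mm is not None and distance_mm < self.range_mm)
        return sum(self.history) >= self.hits_required

# At a 270 Hz update rate, ten readings still span only about 37 ms,
# so confirmation adds very little latency.
```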

Coming full circle: There are no big problems, just a lot of little problems

All of this leads back to our original mission. We’ve not solved it yet, but recently we mounted TeraRanger Tower on a drone and proved, for the first time we believe, that a static LiDAR can be used for drone anti-collision. The proof of concept was quickly put together to harness code developed for the open-source APM 3.5 flight controller, with Terabee writing drivers to hook into the codebase. Anti-collision is just one step on the journey to fully autonomous drone flight and we are still on the wild ride of technology, but we are definitely taming the beast!

If you have expertise in drone collision avoidance and wish to help us overcome the remaining challenges, please contact us at teraranger@terabee.com. For more information about Terabee and our TeraRanger brand of sensors, please visit our website.

Live coverage of #ICRA2017

The 2017 edition of the IEEE International Conference on Robotics and Automation (ICRA) is in Singapore! The event kicked off on Monday 29 May and runs until 3 June. ICRA is one of the leading international forums for robotics researchers to present their work.

The conference theme, “Innovation, Entrepreneurship, and Real-world Solutions”, underscores the need for innovative R&D talent, dynamic and goal-driven entrepreneurs and practitioners using robotics and automation technology to solve challenging real-world problems such as shortage of labour, an ageing society, and creating sustainable environments.

ICRA 2017 will introduce a new Robotics Innovation & Entrepreneurship Forum, alongside an industry forum, a government forum, an ASEAN & emerging country forum, a public forum (ICRA-X) centred on the conference theme, and an ethics forum.

Plenary talks feature: Chris Gerdes, Stanford University, USA, presenting on Modeling the possibilities: From the Chalkboard to the Race Track to the World Beyond (Tuesday morning); Hiroaki Kitano, Sony Computer Science Laboratories, Inc., Japan, presenting on Nobel Turing Challenge: Grand Challenge of AI, Robotics, and Systems Biology (Wednesday morning); and Kerstin Vignard, United Nations Institute for Disarmament Research, presenting on Framing the International Discussion on the Weaponization of Increasingly Autonomous Technologies (Thursday morning).

The conference will also host a number of high-profile keynotes, technical paper sessions, workshop and tutorial sessions, and exhibitions.

There will also be four Robot Challenges taking place on 30-31 May.

Workshops & Tutorials

25 workshop/tutorial sessions are available for junior researchers. The sessions are designed to provide interaction and foster collaboration between young researchers, with the opportunity to listen to, and closely interact with, senior experts. The next set of workshops will be on Friday, 2 June.

Special sessions on Emerging Robotics Technology

This year, ICRA 2017 has invited robotics experts to share recent technological advances in the field. These special sessions will focus on novel and creative approaches to designing or developing robots for automation, medical or surgical tasks, and space exploration missions. The event will be held on Tuesday, 30 May.

Audrow will be on site interviewing for upcoming Robots Podcasts; check back for the latest coverage and highlights!

The Drone Center’s Weekly Roundup: 5/29/17

The DJI Spark drone. Image via dronetrest.com

May 22, 2017 – May 28, 2017

News

A U.S. drone strike reportedly killed three members of the Pakistani Taliban. According to the Associated Press, the strike targeted a compound in Khost province, Afghanistan, although other sources indicate that the strike was in Pakistan.

The Trump administration is reportedly seeking new powers from Congress to track and destroy wayward drones inside the United States. A draft of the proposed law obtained by the New York Times would allow the federal government to intercept any drone that is viewed as a threat or is flying over a specially designated area such as military bases. According to the Times, the draft bill is currently being discussed in classified briefings on Capitol Hill.

A judge in North Dakota has acquitted a drone operator arrested at the Dakota Access Pipeline protests last year. Aaron Shawn Turgeon was charged with reckless endangerment after police claimed that he had flown close to a surveillance airplane. Footage from cell phones and from Turgeon’s drone contributed to his acquittal. (Bismarck Tribune)

Commentary, Analysis, and Art

The U.S. House Energy and Commerce Committee held a hearing on disruptive technologies and companies.

At Bellingcat, Nick Waters considers trends in ISIS drone bombing tactics based on a database of 121 strikes.

At the Los Angeles Times, Nabih Bulos examines the role that ISIS drones are playing on the battlefield in Mosul.

The editorial board at the Los Angeles Times argues that drones should not be considered the same as toys, even in the wake of the court ruling that struck down the FAA drone registration database.

At East Pendulum, Henri Kenhmann takes a closer look at the Chinese Air Force’s Wing Loong strike drone squadron.

At the Wall Street Journal, Paul J. Davies writes that profits are eluding drone manufacturers in spite of the popularity of consumer drones.

At CNET, Rick Broida surveys the cheap, $20 quadrotor drones that are currently available on the market.

Meanwhile, at Time, John Patrick Pullen looks for the perfect selfie drone.

At the Verge, Sean O’Kane writes that a new DJI policy will remove functionality from their drones unless the user registers with the company.

At Drone360, David O’Connor argues that online retailers are embracing delivery drones out of a desire to exploit consumer tendencies for instant gratification.

At TechCrunch, Brian Heater argues that in spite of new technologies and systems, consumer drones are not quite “mainstream” yet.

At the Taiwan News, Judy Lin writes that Taiwan’s push to develop a medium-altitude long-endurance surveillance drone is still in its early stages.

At the Augusta Chronicle, Thomas Gardiner writes that a Department of Energy investigation into drone sightings near nuclear sites in Georgia has not confirmed any of the reported sightings.

At the Financial Times, Jennifer Thompson looks at the impact that drones have had on workers in different industries.

At the Australian Financial Review, Andrew Burke examines the role that drones had in making the new Pirates of the Caribbean and considers how drones are changing filmmaking.

Know Your Drone

The Defense Advanced Research Projects Agency and Boeing are teaming up to build a reusable unmanned space plane called Phantom Express. (Popular Mechanics)

At a launch in New York City, commercial drone maker DJI unveiled Spark, a small consumer quadcopter that can be controlled with hand gestures. (TechCrunch)

In a test flight, a General Atomics Aeronautical Systems SkyGuardian drone remained airborne for 48 hours, a new record for a Predator-series aircraft. (Unmanned Systems Technology)

Swedish auto maker Volvo is testing an autonomous garbage truck. (AUVSI)

Swiss drone maker Aeroscout unveiled the Scout B-330, a 50 kg rotary drone that can fly for up to three hours. (GPS World)

Defense firm Textron announced that it has successfully test fired its Fury lightweight precision guided missile from a Shadow tactical drone. (Unmanned Systems Technology)

Israeli defense company UVision has developed a new extended-range loitering munition called the Hero-400EC with an endurance of two hours. (FlightGlobal)

Atmos UAV has unveiled the Marlyn, a vertical take-off and landing fixed-wing drone for commercial applications. (GIM International)

Belarus’s Indela Design Bureau has developed a military vertical take-off and landing drone called Bur. (IHS Jane’s 360)

In a test sponsored by the U.S. Navy, Lockheed Martin launched a Vector Hawk unmanned aerial vehicle from a Marlin MK2 undersea drone. (Popular Mechanics)

Drone maker SwellPro unveiled the Splash Drone 3, a waterproof multirotor drone that can float on water. (The Verge)

Otsaw, a Singapore-based startup, has developed an autonomous security ground robot equipped with a multirotor drone. (Mashable)

The U.S. Naval Undersea Warfare Center is testing a biomimetic minehunting unmanned undersea vehicle. (IHS Jane’s 360)

The U.S. Navy has issued a Request for Information relating to a planned large unmanned surface vehicle program. (FBO)

The U.S. Office of Naval Research and Naval Surface Warfare Command have developed an undersea remotely operated vehicle to assist naval dive teams. (IHS Jane’s 360)

Drones at Work

Chinese retailer JD.com has been granted government approval to operate heavy-load delivery drones along certain fixed routes. (Vox)

A drone crashed into the stands at a San Diego Padres and Arizona Diamondbacks baseball game in San Diego. (Washington Post)

U.S. Senators Feinstein (D-CA), Lee (R-UT), Blumenthal (D-CT), and Cotton (R-AR) have introduced a bill that would grant local governments the authority to regulate drone use. (Press Release)

The European Space Agency conducted a test in which it used a drone to help explore a cave system in Sicily. (Press Release)

Australia’s Civil Aviation Safety Authority has released a mobile app that shows drone operators where they can and cannot fly. (ABC)

Meanwhile, farmers in Australia are using drones to help muster herds of Merino sheep. (ABC)

Somali police have acquired five aerial surveillance drones donated by a former U.S. special operations officer. (Reuters)

The Idaho State Police have acquired four drones for a range of operations. (Idaho State Journal)

A food blogger in New Zealand used a drone to pick up his fried chicken from a KFC for him. (Mashable)

The U.S. National Oceanic and Atmospheric Administration is using three unmanned aircraft to observe atmospheric changes that could lead to severe thunderstorms. (Press Release)

A number of drone companies have teamed up to provide 3D drone imagery following flooding in Colombia. (UAV Expert News)

An Israeli Skylark drone crashed during a flight over southern Lebanon. (The New York Times)

The non-profit Lindbergh Foundation is using AI developed by Neurala to analyze footage from anti-poaching drones. (Engadget)

Park police in New York State used a drone to help rescue a dog from Letchworth Gorge. (Press Release)

The North Dakota National Guard in Fargo is slated to receive two MQ-9 Reaper drones for training this summer. (MPR News)

North Dakota was host to a simulated disaster exercise in order to test the role that drones could play in disaster response. (KVRR)

The Israeli military announced that it will deploy unmanned ground vehicles to patrol its border with the Gaza Strip in the coming years. (The Algemeiner)

The Yuku Baja Muliku Rangers in Queensland, Australia are using drones to conduct environmental inspections. (Innovators Magazine)

Industry Intel

Echodyne, a startup developing radar systems for drones, raised $29 million in a funding round led by Bill Gates. (TechCrunch)

The Defense Advanced Research Projects Agency awarded the University of Washington a base $3.5 million contract for the Aerial Dragnet program. (FBO)

The U.S. Navy awarded Northrop Grumman Systems a $49.4 million advance acquisition contract for components for the MQ-4C Triton surveillance drone. (DoD)

The U.S. Navy awarded Northrop Grumman Systems a $13 million contract for one multi-function active sensor for the MQ-4C Triton surveillance drone. (DoD)

The U.S. Navy awarded Northrop Grumman Systems a $65.5 million contract modification for logistic support and sustainment for the Broad Area Maritime Surveillance-Demonstrator (MQ-4) program. (DoD)

The U.S. Navy awarded Insitu a $1.8 million contract to train service members on the RQ-21A Blackjack drone. (FBO)

The U.S. Navy awarded Raytheon a $14.7 million contract for the AN/AQS-20, a mine hunting sonar that is designed to be towed by unmanned undersea and surface vehicles, as well as by manned platforms. (DoD)   

The U.S. Air Force awarded Radio Hill Technologies a $2.8 million contract for counter-drone systems. (FBO)

The Research & Development Corporation of Newfoundland and Labrador awarded Kraken Sonar Systems $553,609 in funding to support development of the ThunderFish autonomous underwater vehicle. (Press Release)

American Robotics, a company that develops drones for commercial farming applications, secured $1.1 million in a funding round led by Brain Robotics Capital. (Press Release)

DroneSAR, a Dublin-based startup that seeks to develop drone software for emergency response, will receive $55,880 in funding as part of the European Space Agency Business Incubation Centre. (Silicon Republic)

Ontario-based SkyX received $4 million in funding from Kuang-Chi Group to continue developing self-charging drones for long-range industrial inspection missions. (Unmanned Aerial Online)  

L3 Technologies has acquired Open Water Power, a Massachusetts-based company that develops high-density aluminium batteries for unmanned undersea vehicles. (IHS Jane’s 360)

Israeli drone manufacturer Aeronautics will make an initial public offering by mid-June. (IHS Jane’s 360)

BIKI, an autonomous commercial underwater drone equipped with a 4K camera, has reached its fundraising goal on Kickstarter. (News Ledge)

Alta Devices and PowerOasis announced a partnership to develop a solar/lithium-ion hybrid battery for drones. (Drone360)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Robots Podcast #235: Locus Robotics, with Rick Faulk


In this episode, Abate De Mey speaks with Rick Faulk, CEO of Locus Robotics, about warehouse automation with collaborative robots. Locus Robotics increases the productivity of workers in e-commerce warehouses by using robot helpers to transport items that are passed to them by the workers. The lightweight autonomous robots move at a pace similar to their human co-workers and use LIDAR and computer vision to detect people and avoid collisions, which allows people to share the warehouse floor with the robots. The collaborative robotic system is lightweight and can be adapted to existing warehouses with minimal alterations.

 

Rick Faulk
Rick Faulk, CEO at Locus Robotics, leads the executive team and is responsible for overall strategy and execution. He has over thirty years of experience in executive management, sales and marketing for some of the world’s most successful technology companies such as Intronis, j2 Global, WebEx, Intranets.com, Lotus Development, Mzinga, and PictureTel. Rick also currently sits on various boards including INfluitive and Hostway, and is an advisor to a number of early-stage companies.


New Horizon 2020 robotics projects, 2016: BADGER

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features BADGER, a Robot for Autonomous Underground Trenchless Operations, Mapping and Navigation.

Objectives

The goal of the proposed project is the design and development of an integrated underground robotic system capable of autonomously constructing subterranean, small-diameter, highly curved tunnel networks and of localization, mapping and autonomous navigation during its operation. The proposed robotic system will enable the execution of tasks in different application domains of high societal and economic impact, including trenchless construction (cabling and piping installations), search and rescue operations, and remote science and exploration applications.

Expected impact

The expected strategic impact of the BADGER project focuses on:

  1. Introducing advanced robotics technologies, including intelligent control and cognition capabilities, to significantly increase European competitiveness;
  2. Drastically reducing traffic congestion and pollution in European urban environments, thereby increasing people’s quality of life;
  3. Enabling technologies for new potential applications: search and rescue, mining and quarrying, civil applications, mapping, etc.

Partners

UNIVERSITY CARLOS III OF MADRID (ES) 
UNIVERSITY OF GLASGOW (UK)
CENTRE FOR RESEARCH AND TECHNOLOGY HELLAS (GR) 
IDS GEORADAR (I)
SINGULARLOGIC (GR)
TRACTO-TECHNIK (D)
ROBOTNIK (ES) 

Coordinator: Prof. Carlos Balaguer
balaguer@ing.uc3m.es
Twitter: @badger_project

Project website: www.badger-robotics.eu


The Robot Academy: An open online robotics education resource

The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.

Educators are encouraged to use the Academy content to support teaching and learning in class or set them as flipped learning tasks. You can easily create viewing lists with links to lessons or masterclasses. Under Resources, you can download a Robotics Toolbox and Machine Vision Toolbox, which are useful for simulating classical arm-type robotics, such as kinematics, dynamics, and trajectory generation.

The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; see the difficulty rating on each lesson.

Under Masterclasses, students can choose a subject and watch a set of videos related to that particular topic. Single lessons can offer a short training segment or a refresher. Three online courses, Introducing Robotics, are also offered.

Below are examples of a single lesson and a masterclass. We encourage everyone to take a look at the QUT Robot Academy by visiting our website.

Single Lesson

Out and about with robots

In this video, we look at a diverse range of real-world robots and discuss what they do and how they do it.

Masterclass

Robot joint control: Introduction (Video 1 of 12)

In this video, students learn how we make robot joints move to the angles or positions required to achieve the desired end-effector motion. This is the job of the robot’s joint controller. In this lecture, we will discuss the realm of control theory.

Robot joint control: Architecture (video 2 of 12)

In this lecture, we discuss how a robot joint is a mechatronic system comprising motors, sensors, electronics and embedded computing that implements a feedback control system.

Robot joint control: Actuators (video 3 of 12)

Actuators are the components that actually move the robot’s joint. So, let’s look at a few different actuation technologies that are used in robots.

To watch the rest of the video series, visit the Robot Academy website.

