
Uber regrouping after Levandowski firing

Source: Uber

Uber, the global ride-sharing transportation company, has named two replacements to recover from the recent firing of Anthony Levandowski, who headed its Advanced Technologies Group, its Otto trucking unit, and its self-driving team. Levandowski was fired May 30th.

Eric Meyhofer

Meyhofer, a co-founder of Carnegie Robotics and a CMU robotics professor before coming to Uber, was also part of the group that came to Uber from CMU (see below). He has now been named head of Uber’s Advanced Technologies Group (ATG) and will report directly to Uber CEO Travis Kalanick.

ATG is charged with developing the core self-driving technologies for cars and trucks: mapping, perception, safety, and data collection and learning.

Sensors that determine distances are integral to the process. Elon Musk said recently that LiDAR isn’t needed because cameras, sensors, software and high-speed GPUs can do the same tricks at a fraction of the cost. Levandowski favored LiDARs, particularly newly developed solid state LiDAR technologies.

Anthony Levandowski

Levandowski, the previous head of ATG, joined Google to work with Sebastian Thrun on Google Street View, then started a mobile mapping company that experimented with LiDAR technology and another that built a self-driving, LiDAR-equipped car (a Prius). Google acquired both companies, including their IP. In 2016 Levandowski left Google to found Otto, a company making self-driving kits to retrofit semi-trailer trucks. Just as the kit launched, Uber acquired Otto, and Levandowski became head of Uber’s driverless car operation in addition to continuing his work at Otto.

The case that led Uber to fire Levandowski centers on intellectual property, particularly the LiDAR-related technologies at the heart of both Google’s and Uber’s self-driving plans. Getting the cost of perception down to a reasonable level is one of the bigger challenges of self-driving technology, and LiDAR is integral to that plan.

Google’s Waymo self-driving unit alleges in its suit that Uber gave Levandowski $250 million in stock grants in return for bringing Google’s IP to Uber. Uber has called Waymo’s claims baseless and an attempt to slow down a competitor.

Waymo also claims that Uber has a history of “stealing” technology, citing the time in 2015 when Uber hired away 50+ members of the Carnegie Mellon University robotics team – a move reportedly drawing on $5 billion Uber had raised from investors, and one that created havoc at CMU and the National Robotics Engineering Center (NREC), which lost a third of its staff to Uber. The move was preceded by a strategic partnership with CMU to work together on self-driving technologies. Four months later, Uber hired the 50.

The Daily Mail’s headline read:

Carnegie Mellon left decimated after Uber poaches 40 top-rated robotic researchers to help them develop self-driving cars

  • Carnegie Mellon ‘in crisis’ after mass defection of scientists to Uber
  • Uber hope their fleet of taxis will not require drivers in the future
  • Used $5 billion from investors to poach at least 40 from the National Robotics Engineering Center
  • Uber took six principal investigators and 34 engineers

Brian Zajac

Zajac has been on Uber’s self-driving team since 2015 after stints with Shell and the U.S. Army. Now he becomes the new chief of hardware development and reports to Meyhofer.

David Morris, writing for Fortune, wrote:

“Zajac will now bear a great deal of responsibility for cracking the driverless car problem, which Uber CEO Travis Kalanick has described as “existential” to the company. Uber loses huge amounts of money, and many observers think eliminating the cost of drivers is its only realistic path to profitability.”

Bottom line:

Uber has research teams in Silicon Valley, Toronto and Pittsburgh all working to perfect Level 5 autonomous driving capabilities before any of its competitors can. Google, Baidu, Yandex, Didi Chuxing, a few of the Tier 1 component makers, and many others, including all the major car companies, are racing forward with the same intentions. Levandowski’s firing left a big gap in Uber’s self-driving project management and stirred fear among its investors. Uber hopes that these two changes, Meyhofer as overall head and Zajac as hardware chief, will quell fears that it is losing momentum.

Making Pepper walk: Understanding Softbank’s purchase of Boston Dynamics

It is unclear whether Masayoshi Son, Chairman of SoftBank, was one of the 17 million YouTube viewers of Boston Dynamics’ BigDog before acquiring the company for an undisclosed amount this past Thursday. What is clear is that the acquisition of Boston Dynamics by SoftBank is a big deal. SoftBank’s humanoid robot Pepper is trading her dainty wheels for a pair of sturdy legs.

In expressing his excitement for the acquisition, Masayoshi Son said, “Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”

Marc Raibert, CEO of Boston Dynamics, previously sold his company to Google in 2013. Following the departure of Andy Rubin from Google, the internet company expressed buyer’s remorse: Raibert’s company failed to advance from military contractor to commercial enterprise, and it proved very challenging to incorporate Boston Dynamics’ zoo of robots (mechanical dogs, cheetahs, bulls, mules and complex humanoids) into Google’s autonomous strategy. Since Rubin’s exit in 2014, rumored buyers for Boston Dynamics have ranged from Toyota Research to Amazon Robotics. SoftBank represents a new chapter for Raibert, and possibly the entire industry.

Raibert’s statement to the press gave astute readers a peek at what to expect: “We at Boston Dynamics are excited to be part of SoftBank’s bold vision and its position creating the next technology revolution, and we share SoftBank’s belief that advances in technology should be for the benefit of humanity. We look forward to working with SoftBank in our mission to push the boundaries of what advanced robots can do and to create useful applications in a smarter and more connected world.” A quick study of the assets of both companies reveals how Boston Dynamics could help SoftBank in its mission to build robots that benefit humanity.

SoftBank’s premier robot is Pepper, a four-foot-tall social robot deployed mostly in Asia as a customer service agent. Recently, as part of SoftBank’s commitment to the Trump administration to invest $50 billion in the United States, Pepper has been spotted in stores in California. Last year, for example, Pepper proved a valuable asset to Palo Alto’s premier tech retailer B8ta, accounting for a sixfold increase in sales. To date, close to 10,000 Pepper robots are deployed worldwide, mostly in Asian retail stores. However, SoftBank is also the owner of Sprint, with 4,500 cell phone stores across the USA, and a major investor in WeWork, with 140 locations globally serving 100,000 members – could Pepper be the customer service agent or receptionist of the future?

According to SoftBank’s website, Pepper is designed to be a “day-to-day companion,” with its most compelling feature being the ability to perceive emotions. SoftBank boasts that its humanoid is the first robot ever to recognize human moods and adapt its behavior accordingly. While this is extremely relevant to selling techniques, SoftBank is most proud that Pepper was the first robot to be adopted into homes in Japan. Pepper is believed to be more than a point-of-purchase display gimmick: an example of the next generation of caregivers for the rising elderly populations in Japan and the United States. According to the Daily Good, “Pepper could do wonders for the mental engagement and continual monitoring of those in need.” Its under-$2,000 price point also provides an attractive incentive to introduce the robot into new environments; however, wheeled bases are a limitation in homes with cluttered floors, stairs and other unforeseen obstacles.

Boston Dynamics is almost the complete opposite of SoftBank; it is a research group spun out of MIT. Its expertise is not in social robots but in military “proofs of concept” like futuristic combat mules. The company has built some of the most frightening mechanical beasts ever to walk the planet, from metal cheetahs that sprint at over 25 miles per hour, to mechanized dogs that scale mountains with ease, to one of the largest humanoids ever built, which bears an uncanny resemblance to Cyberdyne’s T-800. In a step toward commercialization, Boston Dynamics released its newest monster earlier this year: Handle, a wheeled biped that can easily lift over a hundred pounds and jump higher than LeBron James. Many analysts speculated that this was Boston Dynamics’ attempt to prove its relevance to Google with a possible last-mile delivery bot.

In an IEEE interview when Handle debuted last February, Raibert exclaimed, “Wheels are a great invention. But wheels work best on flat surfaces and legs can go anywhere. By combining wheels and legs, Handle can have the best of both worlds.” IEEE writer Evan Ackerman wondered, after seeing Handle, whether the next generation of Boston Dynamics’ humanoids could feature legs with roller-skate-like shoes. One thing is certain: Boston Dynamics is the undisputed leader in dynamic control and balance systems for complex mechanical designs.

Leading roboticist Dan Kara of ABI Research confirmed that “these [Boston Dynamics] are the world’s greatest experts on legged mobility.”

If walking is the expertise of Raibert’s team and SoftBank is the leader in cognitive robotics with a seemingly endless supply of capital, the combination could yield the first real striding humanoid capable of perceiving human emotions. By 2030 there will be 70 million people over the age of 65 in America, with a considerably smaller number of caregivers. To answer this call, researchers are already converting current versions of Pepper into sophisticated robotic assistants. Last year, Rice University unveiled the “Multi-Purpose Eldercare Robot Assistant (MERA),” essentially a customized version of SoftBank’s robot. MERA is specifically designed to be a home companion for seniors that “records and analyzes videos of a person’s face and calculates vital signs such as heart and breathing rates.” Rice University partnered with IBM’s Aging-in-Place Research Lab to create MERA’s speech technology. The lab’s founder, Susann Keohane, explained that Pepper “has everything bundled into one adorable self.” Now, with Boston Dynamics’ legs, Pepper could be a friend, physical therapist, and life coach walking side by side with its human companion.

Daniel Theobald, founder of healthcare robotics company Vecna Technologies, summed it up best last week: “I think SoftBank has made a major commitment to the future of robotics. They understand that the world economy is going to be driven by robotics more and more.”

Next Tuesday we will dive further into the implications of Softbank’s purchase of Boston Dynamics with Dr. Howard Morgan/First Round Capital, Tom Ryden/MassRobotics and Dr. Eric Daimler/Obama White House at RobotLabNYC’s event on 6/13 @ 6pm WeWork Grand Central (RSVP).

From member to mentor: Life after being a FIRST student

3! 2! 1! Go! Suddenly, robots jerk into motion and zoom across the field to score points, crossing over several types of terrain and shooting balls into high and low goals. Another buzzer sounds, drivers pick up their controls and all six robots—three per alliance—are now under human control. As these huge 120-pound robots score points, cheers ring through a packed stadium, fueled by high school students who worked hard to build their robot in just six weeks. As the match ends, nervous and excited students wait to see who is the winner of the 2016 world championship.

This was my last match as a member of the Girls of Steel FIRST Robotics Competition Team #3504. FIRST (For Inspiration and Recognition of Science and Technology) is a robotics program for students from K-12, and I was in the last division, FRC. The program is about more than introducing students to STEM and giving them hands-on experience; it’s about helping students grow and have a positive impact by recognizing community service efforts, celebrating good values, developing soft skills, and guiding students to pursue higher education.

The next fall, I was off to college at the Illinois Institute of Technology in Chicago, studying to become a mechanical engineer. For the first time in my life, I was on my own. My time was so swept away by schoolwork, clubs, exploring the city, and making new friends that FIRST became a distant memory. Now, I fear that if I hadn’t bumped back into it, I would have lost touch with the program that played such a critical role in my life. While at a robotics club meeting, sign ups for the FRC Midwest Regional Planning Committee were passed around. Wanting to somehow get involved, I signed up.

I had no idea what to expect as I walked into a big conference room with a friend, and fellow FRC alum, for our first committee meeting. As the meeting progressed—densely filled with information and detailed plans for upcoming seasons and younger leagues—I sat there stunned. Whether they were alumni or not, so many people dedicated their time and effort to make this program work for students. It was a wake-up call. During my four years as a member, I took so much for granted and didn’t realize the magnitude of hard work that volunteers and mentors put in for the students. They were the ones who supported and helped make me the person that I am today. So, to all of the volunteers and mentors who may be reading this, thank you for everything, I couldn’t have done it without you.

Just like that, I was hooked on FIRST once again. I kept going to meetings, trying to help as much as I could while making connections with other volunteers. Right before the season started, there were a couple teams who had a shortage of mentors, which is how I found Hawks on the Horizon FRC team #5125. The first meeting I went to with the students opened my eyes; I was so used to being a member, I had no idea how to mentor FRC. Eventually, I learned how to be a different figure in a familiar situation and how to adjust to the differences between a large all-girls team, with many resources, and a small family-like team. Yet, without a doubt, I knew this was my new team. From the very first day, I was welcomed with open arms by the mentors and students, who made sure I came back.

“Who is your role model?” was a common question for me when I was very young, and I’d respond with a superhero. Now it’s never asked, but I have a better answer since I’ve gotten the chance to meet some of them. One of my role models is a student mentor from my old team. Although I haven’t seen her in a long time, I found myself remembering conversations we’d had years ago about some of the challenges of being a student mentor. Knowing I wasn’t alone, and how she’d dealt with some sticky situations, helped me guide students to find their own answers and become comfortable being hands-off myself. I’ve learned so much from past mentors like her, fellow mentors, and the students themselves. I’m happy to have found my FIRST family again.

The Midwest Regional was a whole new experience for me. During high school, my competition days were hectic, filled with fixing the robot, talking to other teams, and cheering for our alliance. At this past regional, it was an entirely different world. I was the student volunteer coordinator, helping run the student ambassador program and talking to people about how to get involved. I only visited my team when I got the chance. I was less aware of specific robots and paid more attention to what happens behind the scenes to ensure the regional runs like a well-oiled machine. I want to be more involved in the process, since these competitions ignited a passion for engineering within me, and I cannot express with words how grateful I am.

Regardless of the changes within the past year, by the end of the competition, I still rocked the uniform. I’m incredibly proud of Hawks on the Horizon, thankful for Girls of Steel, and amazed by everyone in the Midwest Planning Committee. This transition from being a member to mentor has been an amazing journey and it doesn’t stop here. I’ll be working to give back for the rest of my life by helping make programs like FIRST happen for future students. Thank you to everyone who is a part of this organization and to find a team or event, go to firstinspires.org.

If you enjoyed this article, you may also want to read:

Robots Podcast #236: IASP 2016 (Part 3 of 3), with Vadim Kotenev and Vagan Martirosyan

Credit: sk.ru


In this episode, Audrow Nash and Christina Brester conduct interviews at the 2016 International Association of Science Parks and Areas of Innovation conference in Moscow, Russia. They speak with Vadim Kotenev of Rehabot and Motorica about prosthetic hands and rehabilitative devices; and Vagan Martirosyan, CEO of TryFit, a company that uses robotic sensors to help people find shoes that fit them well.

An image of one of the rehabilitative devices from Rehabot.


The robotic platform that scans your feet from TryFit.



New Horizon 2020 robotics projects, 2016: Co4Robots

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. Sixteen of these resulted from the robotics work programme, and one from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features Co4Robots: Achieving Complex Collaborative Missions via Decentralized Control and Coordination of Interacting Robots.

Objectives

Recent applications necessitate coordination of different robots. Current practice is mainly based on offline, centralized planning and tasks are fulfilled in a predefined manner. Co4Robots’ goal is to build a systematic methodology to:

  • accomplish complex task specifications given to a team of potentially heterogeneous robots;
  • develop control schemes appropriate for mobility and manipulation capabilities of the robots;
  • achieve perceptual capabilities that enable robots to localize themselves and estimate the dynamic environment state;
  • integrate all in a decentralized framework.

Expected impact

The envisioned scenarios involve multi-robot services in, e.g., office environments. Although public facilities are to some degree pre-structured, the need for the Co4Robots framework is evident since:

  • it will lead to an improved use of resources and a faster accomplishment of tasks inside workspaces with high social activity;
  • it will contribute towards the vision of more flexible multi-robot applications in both professional and domestic environments, also in view of the “Industry 4.0” vision and the general need to deploy such systems in everyday life scenarios.

Partners

KTH ROYAL INSTITUTE OF TECHNOLOGY
BOSCH
NATIONAL TECHNICAL UNIVERSITY OF ATHENS
PAL ROBOTICS
FOUNDATION FOR RESEARCH AND TECHNOLOGY HELLAS
UNIVERSITY OF GOTHENBURG

Coordinator: Prof. Dimos Dimarogonas
dimos@kth.se
www.facebook.com/co4robots

Project website: www.co4robots.eu

Watch all EU-projects videos


Engineers design drones that can stay aloft for five days

The Jungle Hawk Owl team. Photo: Sally Chapman/MIT

In the event of a natural disaster that disrupts phone and Internet systems over a wide area, autonomous aircraft could potentially hover over affected regions, carrying communications payloads that provide temporary telecommunications coverage to those in need.

However, such unpiloted aerial vehicles, or UAVs, are often expensive to operate, and can only remain in the air for a day or two, as is the case with most autonomous surveillance aircraft operated by the U.S. Air Force. Providing adequate and persistent coverage would require a relay of multiple aircraft, landing and refueling around the clock, with operational costs of thousands of dollars per hour, per vehicle.

Now a team of MIT engineers has come up with a much less expensive UAV design that can hover for longer durations to provide wide-ranging communications support. The researchers designed, built, and tested a UAV resembling a thin glider with a 24-foot wingspan. The vehicle can carry 10 to 20 pounds of communications equipment while flying at an altitude of 15,000 feet. Weighing in at just under 150 pounds, the vehicle is powered by a 5-horsepower gasoline engine and can keep itself aloft for more than five days — longer than any gasoline-powered autonomous aircraft has remained in flight, the researchers say.

The team is presenting its results this week at the American Institute of Aeronautics and Astronautics Conference in Denver, Colorado. The team was led by R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics; and Warren Hoburg, the Boeing Assistant Professor of Aeronautics and Astronautics. Hansman and Hoburg are co-instructors for MIT’s Beaver Works project, a student research collaboration between MIT and the MIT Lincoln Laboratory.

A solar no-go

Hansman and Hoburg worked with MIT students to design a long-duration UAV as part of a Beaver Works capstone project — typically a two- or three-semester course that allows MIT students to design a vehicle that meets certain mission specifications, and to build and test their design.

In the spring of 2016, the U.S. Air Force approached the Beaver Works collaboration with an idea for designing a long-duration UAV powered by solar energy. The thought at the time was that an aircraft, fueled by the sun, could potentially remain in flight indefinitely. Others, including Google, have experimented with this concept, designing solar-powered, high-altitude aircraft to deliver continuous internet access to rural and remote parts of Africa.

But when the team looked into the idea and analyzed the problem from multiple engineering angles, they found that solar power — at least for long-duration emergency response — was not the way to go.

“[A solar vehicle] would work fine in the summer season, but in winter, particularly if you’re far from the equator, nights are longer, and there’s not as much sunlight during the day. So you have to carry more batteries, which adds weight and makes the plane bigger,” Hansman says. “For the mission of disaster relief, this could only respond to disasters that occur in summer, at low latitude. That just doesn’t work.”

The researchers came to their conclusions after modeling the problem using GPkit, a software tool developed by Hoburg that allows engineers to determine the optimal design decisions or dimensions for a vehicle, given certain constraints or mission requirements.

This method is not unique among initial aircraft design tools, but unlike most such tools, which take into account only a few main constraints, Hoburg’s method allowed the team to consider around 200 constraints and physical models simultaneously, and to fit them all together to create an optimal aircraft design.

“This gives you all the information you need to draw up the airplane,” Hansman says. “It also says that for every one of these hundreds of parameters, if you changed one of them, how much would that influence the plane’s performance? If you change the engine a bit, it will make a big difference. And if you change wingspan, will it show an effect?”
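The constraint-driven sizing workflow described above can be sketched in miniature. The toy model below is purely illustrative: every constant, constraint form, and parameter name is an assumption made for this sketch, not a value from the team's actual GPkit model. It grid-searches wingspan and fuel load for the lightest design that meets a five-day endurance target, then re-solves with a 10% better engine to mimic the "change one parameter, see the effect" sensitivity analysis Hansman describes.

```python
# Toy constraint-driven aircraft sizing, in the spirit of (but far simpler
# than) a GPkit model. All constants are illustrative assumptions.

STRUCT_KG_PER_M = 2.0    # structural mass per metre of wingspan (assumed)
PAYLOAD_KG = 7.0         # communications payload mass (assumed)
POWER_COEFF = 0.5        # crude cruise-power coefficient (assumed)
ENDURANCE_REQ_H = 120.0  # five-day endurance requirement

def best_design(sfc_kg_per_wh=0.01):
    """Grid-search wingspan and fuel load for the lightest feasible design.

    Returns (total_mass_kg, wingspan_m, fuel_kg, endurance_h) or None.
    """
    best = None
    for b_tenths in range(40, 121):          # wingspan 4.0 .. 12.0 m
        span = b_tenths / 10.0
        for f_tenths in range(50, 601):      # fuel 5.0 .. 60.0 kg
            fuel = f_tenths / 10.0
            total = STRUCT_KG_PER_M * span + PAYLOAD_KG + fuel
            # Heavier or stubbier designs need more cruise power (toy model).
            power_w = POWER_COEFF * total ** 1.5 / span
            endurance_h = fuel / (power_w * sfc_kg_per_wh)
            if endurance_h >= ENDURANCE_REQ_H and (best is None or total < best[0]):
                best = (total, span, fuel, endurance_h)
    return best

baseline = best_design()                          # nominal engine
better_engine = best_design(sfc_kg_per_wh=0.009)  # 10% lower fuel burn
```

Because the endurance constraint is binding at the optimum, improving the engine's fuel burn lets the search shed fuel mass, which is exactly the kind of trade a full 200-constraint model quantifies formally, alongside the sensitivity of the design to each parameter.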

Framing for takeoff

After determining, through their software estimations, that a solar-powered UAV would not be feasible, at least for long-duration use in any part of the world, the team performed the same modeling for a gasoline-powered aircraft. They came up with a design that was predicted to stay in flight for more than five days, at altitudes of 15,000 feet, in up to 94th-percentile winds, at any latitude.

In the fall of 2016, the team built a prototype UAV, following the dimensions determined by students using Hoburg’s software tool. To keep the vehicle lightweight, they used materials such as carbon fiber for its wings and fuselage, and Kevlar for the tail and nosecone, which houses the payload. The researchers designed the UAV to be easily taken apart and stored in a FedEx box, to be shipped to any disaster region and quickly reassembled.

This spring, the students refined the prototype and developed a launch system, fashioning a simple metal frame to fit on a typical car roof rack. The UAV sits atop the frame as a driver accelerates the launch vehicle (a car or truck) up to rotation speed — the UAV’s optimal takeoff speed. At that point, the remote pilot would angle the UAV toward the sky, automatically releasing a fastener and allowing the UAV to lift off.

In early May, the team put the UAV to the test, conducting flight tests at Plum Island Airport in Newburyport, Massachusetts. For initial flight testing, the students modified the vehicle to comply with FAA regulations for small unpiloted aircraft, which cover drones that fly at low altitude and weigh less than 55 pounds. To reduce the UAV’s weight from 150 pounds to under 55, the researchers simply loaded it with a smaller ballast payload and less gasoline.

In their initial tests, the UAV successfully took off, flew around, and landed safely. Hoburg says there are special considerations that have to be made to test the vehicle over multiple days, such as having enough people to monitor the aircraft over a long period of time.

“There are a few aspects to flying for five straight days,” Hoburg says. “But we’re pretty confident that we have the right fuel burn rate and right engine that we could fly it for five days.”

“These vehicles could be used not only for disaster relief but also other missions, such as environmental monitoring. You might want to keep watch on wildfires or the outflow of a river,” Hansman adds. “I think it’s pretty clear that someone within a few years will manufacture a vehicle that will be a knockoff of this.”

This research was supported, in part, by MIT Lincoln Laboratory.

Finally! Google sells Boston Dynamics to SoftBank

Spotmini by Boston Dynamics. Source: Boston Dynamics/YouTube

In a long-awaited transaction, The New York Times Dealbook announced that SoftBank was buying Boston Dynamics from Alphabet (Google). Also included in the deal is the Japanese startup Schaft. Acquisition details were not disclosed.

Both Boston Dynamics and Schaft were acquired by Google when Andy Rubin was building Google’s robot group through a series of acquisitions. Both companies have continued to develop innovative mobile robots. And both had been on Google’s for-sale list.

Boston Dynamics, a DARPA- and DoD-funded 25-year-old company, designed two- and four-legged robots for the military. Videos of BD’s robots WildCat, BigDog, Cheetah, SpotMini (shown above getting into an elevator) and Handle have been YouTube hits for years. Handle, BD’s most recent robot, is a hybrid with two wheeled legs that can stand, walk, hop, run and roll at up to 9 MPH.

Schaft, a Japanese startup and DARPA Robotics Challenge participant, recently unveiled a two-legged robot that climbs stairs, can carry 125 pounds of payload, moves in tight spaces, and keeps its balance throughout.

SoftBank, through another acquisition (of the French firm Aldebaran, maker of the Nao and Romeo robots), and in a joint venture with Foxconn and Alibaba, has developed and marketed thousands of Pepper robots. Pepper is a cute, humanoid, mobile robot marketed and used as a guide and sales assistant. The addition of Boston Dynamics and Schaft adds talent and technology to SoftBank’s growing robotics efforts, particularly the Tokyo-based Schaft.

“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the information revolution,” said Masayoshi Son, chairman and chief executive of SoftBank.

Teaching ROS quickly to students

Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS to 26 students at once, at a very fast pace. His students, all enrolled in the university’s Master in Computer Science, use only a web browser. They connect to a web page containing the lessons, a ROS development environment and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach ROS programming quickly and to many students. This is what Robot Ignite Academy is made for.

“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere,” says Pfiffner.

Robot Ignite Academy provides a web service which contains the teaching material in text and video format, the simulations of several ROS based robots that the students must learn to program, and the development environment required to build ROS programs and test them on the simulated robot.

Student’s point of view

Students bring their own laptops to class and connect to the online platform. From that moment, their laptop becomes a ROS development machine, ready to develop programs for simulations of many real robots.

The Academy provides the text, videos and examples that the student has to follow. The student then creates her own ROS program and makes the robot perform a specific action, developing ROS programs as if she were on a typical ROS development computer.

The main advantage is that students can use a Windows, Linux or Mac machine to learn ROS; they don’t even have to install ROS on their computers. The laptop’s only prerequisite is a browser, so students avoid all the installation problems that frustrate them (and the teachers!), especially when they are starting out.

After class, students can continue learning at home, in the library, or even at the beach if wifi is available! All their code, learning material and simulations are stored online, so they can access them from anywhere, anytime, using any computer.

Teacher’s point of view

The platform benefits not only the students but also the teachers. Teachers do not have to create and maintain the material, prepare the simulations, or work across multiple different computers. They don’t even have to prepare the exams, which are already provided by the platform!

So what are the teachers for?

By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to the level of each student, and adapting the pace to the different types of students.

This new method of teaching ROS is rapidly gaining traction among universities and high schools that want to provide the latest and most practical teaching to their students. The method, developed by Robot Ignite Academy, combines a practice-based way of teaching with an online learning platform. Together, these make the teaching of ROS a smooth experience and can see students’ knowledge skyrocket.

As user Walace Rosa indicates in his video comment about Robot Ignite Academy:

It is a game changer [in] teaching ROS!

The method is becoming very popular in robotics circles too, and many teachers are using it for younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.

Additionally, the academy provides a free online certification exam with different levels of knowledge certification. Many universities use this exam to certify that their students have learned the material, since the exam is quite demanding.

Some examples of past events

  •  1-week ROS course in Barcelona for SMART-E project team members. This is a private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who needed to get up to speed with ROS fast. From the 8th to the 12th of May 2017.
  •  1 day ROS course for the Col·legi d’Enginyers de Barcelona. The 17th of May 2017.
  •  3-month course for the University of La Salle in Barcelona within the Master in Automatics, Domotics and Robotics. From the 10th of May to the 29th of June 2017.
  •  1 weekend ROS course for teenagers in Bilbao, Spain. The 20th and 21st of May 2017.
  •  We can also organize a special event like these for you and your team.

Helpful ROS videos

Mori: A modular origami robot

Mori pictured in a hand for scale

The fields of modular and origami robotics have become increasingly popular in recent years, with both approaches presenting particular benefits, as well as limitations, to the end user. Christoph Belke and Jamie Paik from RRL, EPFL and NCCR Robotics have recently proposed an elegant new solution that integrates both types of robotics in order to overcome their individual limitations: Mori, a modular origami robot.

Mori is the first example of a robot that combines the concepts behind both origami robots and reconfigurable, modular robots. Origami robotics utilises folding of thin structures to produce single robots that can change their shape, while modular robotics uses large numbers of individual entities to reconfigure the overall shape and address diverse tasks. Origami robots are compact and lightweight but have functional restrictions related to the size and shape of the sheet and how many folds can be created. By contrast, modular robots are more flexible when it comes to shape and configuration, but they are generally bulky and complex.

Singular module

Mori, an origami robot that is modular, merges the benefits of these two approaches and eliminates some of their drawbacks. The presented prototype has the quasi-2D profile of an origami robot (meaning that it is very thin) and the flexibility of a modular robot. By developing a small and symmetrical coupling mechanism with a rotating pivot that provides actuation, each module can be attached to another in any formation. Once connected, the modules can fold up into any desirable shape.

The individual modules have a triangular structure just 6 mm thick and 70 mm wide, weighing only 26 g. Contained within this slender structure are actuators, sensors and an on-board controller, so the only external input required for full functionality is a power source. The researchers at EPFL have thereby managed to create a robot that has the thin structure of an origami robot as well as the functional flexibility of a modular system.

The prototype presents a highly adaptive modular robot and has been tested in three scenarios that demonstrate the system’s flexibility. Firstly, the robots are assembled into a reconfigurable surface, which changes its shape according to the user’s input. Secondly, a single module is manoeuvred through a small gap, using rubber rings embedded into the rotating pivot as wheels, and assembled on the other side into a container. Thirdly, the robot is coupled with feedback from an external camera, allowing the system to manipulate objects with closed-loop control.

Mori as a manoeuvrable surface

With Mori, the researchers have created the first robotic system that can represent reconfigurable surfaces of any size in three dimensions by using quasi-2D modules. The system’s design is adaptable to whatever task is required, be that modulating its shape to repair damage to a structure in space, moulding to a limb weakened after injury in order to provide selective support, or reconfiguring user interfaces, such as changing a table’s surface to represent geographical data. The opportunities are truly endless.

Reference

Christoph H. Belke and Jamie Paik, “Mori: A Modular Origami Robot“, IEEE/ASME Transactions on Mechatronics, doi:10.1109/TMECH.2017.2697310

ICRA 2017 in Singapore: Recap

Image: ICRA 2017

ICRA, the IEEE International Conference on Robotics and Automation, is an annual academic conference covering advances in robotics. It is one of the premier conferences in its field. This year, I was invited to attend its 2017 edition in Singapore.

With superb organization and a beautiful location, the event included talks by leading researchers and companies from all around the world, as well as workshops and an exhibitors’ area. The latter is where I spent most of my time, as I love direct interaction with companies and research centres. At this kind of academic event, compared to trade fairs, you usually have the chance to find technical people who can explain in deep detail all the ins and outs of their products.

The robotics community is still not so big, so we all know each other. I had the pleasure of meeting good friends from companies like Infinium Robotics, PAL Robotics and Fetch Robotics, among others. Infinium Robotics is the company in Singapore where I work as CTO; I already wrote about this great company in one of my previous posts: “Infinium Robotics. Flying Robots“.

PAL Robotics is a company in Spain well known for having developed some of the best humanoid robots in the world. I have had a very good relationship with this company’s team for more than ten years. Great people, well motivated and well managed, who have been able to think outside the box and bravely enter the world of robotic warehouse solutions with robots like Tiago and StockBot.

Fetch Robotics is also one of the big players in robotics solutions for the warehousing industry. I also met other interesting people and had amazing chats with people from companies as key in this field as Amazon Robotics, DJI and Clearpath Robotics. At the end of this post, there is a full list of exhibiting companies.

I saw interesting technology: the rehabilitation exoskeletons from Fourier Intelligence (Shanghai), the Spidar-G 6-DOF haptic interface from the Tokyo Institute of Technology, the haptic systems of Force Dimension and Moog, the dexterous manipulators of Kinova, Kuka, ABB and ITRI, the modular robot components of HEBI Robotics and Keyi Technology, the 3D motion capture technologies of Phoenix Technologies and Optitrack, the educational solutions of Ubitech, GT Robot and Robotis, and many, many others, most of them ROS enabled.

As I usually do at these events, I recorded a video of the exhibition area to provide an idea of the technologies shown there.

Last but not least, I want to thank my friends from SIAA (Singapore International Automation Association) for their kind friendship and support. They also organized the Singapore Pavilion at this event.

Ms LIM Sue Yin, Civic Seh (both SIAA) and Alejandro Alonso (IR/Hisparob VP)
List of exhibitors:

No more playing games: AlphaGo AI to tackle some real world challenges

Playing Go. Image: CC0

Humankind lost another important battle with artificial intelligence (AI) last month when AlphaGo beat the world’s leading Go player Ke Jie by three games to zero.

AlphaGo is an AI program developed by DeepMind, part of Google’s parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved.

Ke Jie described AlphaGo’s skill as “like a God of Go”.

AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. They’ve been described by one Go expert as like “games from far in the future”, which humans will study for years to improve their own play.

Ready, set, Go

Go is an ancient game that essentially pits two players – one playing black pieces, the other white – against each other for dominance on a board usually marked with 19 horizontal and 19 vertical lines.

A typical game of Go: simple to learn but a lifetime to master. Flickr/Alper Cugun, CC BY

Go is a far more difficult game for computers to play than chess, because the number of possible moves in each position is much larger. This makes searching many moves ahead – feasible for computers in chess – very difficult in Go.
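To get a feel for the scale of the difference, here is a rough back-of-the-envelope comparison. The branching factors and game lengths used (about 35 moves over 80 plies for chess, about 250 moves over 150 plies for Go) are commonly cited averages, not exact values:

```python
import math

def tree_size_log10(branching, depth):
    """log10 of branching**depth, the naive game-tree size."""
    return depth * math.log10(branching)

# Commonly cited averages (approximations):
chess = tree_size_log10(35, 80)    # ~35 legal moves, ~80 plies
go = tree_size_log10(250, 150)     # ~250 legal moves, ~150 plies

print(f"chess ≈ 10^{chess:.0f} positions in the full game tree")
print(f"go    ≈ 10^{go:.0f} positions in the full game tree")
```

The gap of well over 200 orders of magnitude is why brute-force lookahead, feasible to a useful depth in chess, breaks down in Go.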

DeepMind’s breakthrough was the development of general-purpose learning algorithms that can, in principle, be trained in more societal-relevant domains than Go.

DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, dramatically reducing energy consumption or inventing revolutionary new materials. It adds:

If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next.

This does open up many opportunities for the future, but challenges still remain.

Neuroscience meets AI

AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. Remarkably, both were originally inspired by how biological brains learn from experience.

In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex.

This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these.

The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.

But to survive in the world, animals need to not only recognise sensory information, but also act on it. Generations of scientists and psychologists have studied how animals learn to take a series of actions that maximise their reward.

This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves actions by maximising the expectation of future reward.
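The core of temporal difference learning fits in a few lines. The following is a classic illustrative example (a five-state random walk), not AlphaGo’s actual code: the value of each state is repeatedly nudged towards the reward received plus the estimated value of the next state.

```python
import random

# TD(0) value learning on a 5-state random walk.
# Episodes start in the middle; stepping off the right end
# yields reward 1, off the left end reward 0.
random.seed(0)
n_states, alpha = 5, 0.1
V = [0.5] * n_states  # initial value estimates

for _ in range(5000):
    s = n_states // 2
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0:
            reward, done = 0.0, True
        elif s2 >= n_states:
            reward, done = 1.0, True
        else:
            reward, done = 0.0, False
        # TD target: reward plus the value of the next state;
        # nudge V[s] a fraction alpha towards it.
        target = reward if done else reward + V[s2]
        V[s] += alpha * (target - V[s])
        if done:
            break
        s = s2

# True values are 1/6, 2/6, ..., 5/6; the estimates settle nearby.
print([round(v, 2) for v in V])
```

The same bootstrapping idea, predicting a value and correcting it with a later, better-informed prediction, is what AlphaGo’s networks apply at vastly larger scale.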

The best moves

By combining deep learning and reinforcement learning in a series of artificial neural networks, AlphaGo first learned human expert-level play in Go from 30 million moves from human games.

But then it started playing against itself, using the outcome of each game to relentlessly refine its decisions about the best move in each board position. A value network learned to predict the likely outcome given any position, while a policy network learned the best action to take in each situation.

Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. It is these countless hours of self-play that led to AlphaGo’s improvement over the past year.

Unfortunately, as yet there is no known way to interrogate the network to directly read out what these key ideas are. Instead we can only study its games and hope to learn from these.

This is one of the problems with using such neural network algorithms to help make decisions in, for instance, the legal system: they can’t explain their reasoning.

We still understand relatively little about how biological brains actually learn, and neuroscience will continue to provide new inspiration for improvements in AI.

Humans can learn to become expert Go players based on far less experience than AlphaGo needed to reach that level, so there is clearly room for further developing the algorithms.

Also much of AlphaGo’s power is based on a technique called back-propagation learning that helps it correct errors. But the relationship between this and learning in real brains is still unclear.

What’s next?

The game of Go provided a nicely constrained development platform for optimising these learning algorithms. But many real world problems are messier than this and have less opportunity for the equivalent of self-play (for instance self-driving cars).

So are there problems to which the current algorithms can be fairly immediately applied?

One example may be optimisation in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimising cost.

As long as the possibilities can be accurately simulated, these algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans. Thus DeepMind’s bold claims seem likely to be realised, and as the company says, we can’t wait to see what comes next.

This article was originally published on The Conversation. Read the original article.

AI for Good Global Summit welcomes “new frontier” for sustainable development

The world’s brightest minds in Artificial Intelligence (AI) and humanitarian action will meet with industry leaders and academics at the AI for Good Global Summit, 7-9 June 2017, to discuss how AI will assist global efforts to address poverty, hunger, education, healthcare and the protection of our environment. In parallel, the event will explore means to ensure the safe, ethical development of AI, protecting against unintended consequences of advances in AI.

View the live webcast at: http://bit.ly/AI-for-Good-Webcast.

The event is co-organized by ITU and the XPRIZE Foundation, in partnership with 20 other United Nations (UN) agencies, and with the participation of more than 70 leading companies and academic and research institutes.

“Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people,” said UN Secretary-General António Guterres. “The time has arrived for all of us – governments, industry and civil society – to consider how AI will affect our future. The AI for Good Global Summit represents the beginnings of our efforts to ensure that AI charts a course that will benefit all of humanity.”

The AI for Good Global Summit will emphasize AI’s potential to contribute to the pursuit of the UN Sustainable Development Goals.

Opening sessions will share expert insight into the state of play in AI, with leading minds in AI giving voice to their greatest ambitions in driving AI towards social good. ‘Breakthrough’ sessions will propose strategies for the development of AI applications and systems able to promote sustainable living, reduce poverty and deliver citizen-centric public services.

“Today, we’ve gathered here to discuss how far AI can go, how much it will improve our lives, and how we can all work together to make it a force for good,” said ITU Secretary-General Houlin Zhao. “This event will assist us in determining how the UN, ITU and other UN Agencies can work together with industry and the academic community to promote AI innovation and create a good environment for the development of artificial intelligence.”

“The AI for Good Global Summit has assembled an impressive, diverse ecosystem of thought leaders who recognize the opportunity to use AI to solve some of the world’s grandest challenges,” said Marcus Shingles, CEO of the XPRIZE Foundation. “We look forward to this Summit providing a unique opportunity for international dialogue and collaboration that will ideally start to pave the path forward for a new future of problem solvers working with XPRIZE and beyond.”

The AI for Good Global Summit will be broadcast globally as well as captioned to ensure accessibility.

You can view the live webcast at: http://bit.ly/AI-for-Good-Webcast.

For more information about this event, please visit: AI for Good Global Summit web page.

RoboCup video series: @Home league

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. Established in 1997, the original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

This week, we consider being part of the RoboCup@Home league. Robots helping at home can certainly ‘feel’ like the future. One day, these robots might help with various tasks around the house. You’ll hear about the history and ambitions of RoboCup from the trustees, and inspiring teams from around the world.

Short version:

Long version:

Want to watch the rest? You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.

 


China’s strategic plan for a robotic future is working: 500+ Chinese robot companies

In 2015, after much research, I wrote about China having 194 robot companies and used screenshots of The Robot Report’s Global Map to show where they were and a chart to show their makeup. We’ve just concluded another research project and have added hundreds of new Chinese companies to the database and global map.

Why is China so focused on robots?

China installed 90,000 robots in 2016, 1/3 of the world’s total and a 30% increase over 2015. Why?

Simply put, China has three drivers moving it toward country-wide adoption of robotics: scale, growth momentum, and money. Startup companies can achieve scale quickly because the domestic market is so large. Further, companies are under pressure to automate, causing double-digit growth in demand for industrial robots (according to the International Federation of Robotics). Third, the government is strongly behind the move.
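The figures quoted above can be sanity-checked with a line of arithmetic (this uses only the numbers already stated: 90,000 installs in 2016, one third of the world total, a 30% increase over 2015):

```python
# Implied totals from the quoted figures.
china_2016 = 90_000
world_2016 = china_2016 * 3           # "1/3 of the world's total"
china_2015 = china_2016 / 1.30        # "a 30% increase over 2015"

print(world_2016)          # implied world installs in 2016
print(round(china_2015))   # implied Chinese installs in 2015
```

That implies a world total of roughly 270,000 installs in 2016 and about 69,000 Chinese installs in 2015, consistent with the growth the article describes.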

Made in China 2025 and 5-Year Plans

Chinese President Xi Jinping has called for “a robot revolution” and initiated the “Made in China 2025” program. More than 1,000 firms, along with a new robotics association, CRIA (Chinese Robotics Industry Alliance), have emerged or begun to transition into robotics to take advantage of the program, according to a 2016 report by the Ministry of Industry and Information Technology. By contrast, according to the same report, the sector was virtually non-existent a decade ago.

Under “Made in China 2025,” and the five-year robot plan launched last April, Beijing is focusing on automating key sectors of the economy including car manufacturing, electronics, home appliances, logistics, and food production. At the same time, the government wants to increase the share of in-country-produced robots to more than 50% by 2020; up from 31% last year.

Robot makers, and companies that automate, are both eligible for subsidies, low-interest loans, tax waivers, rent-free land and other incentives. One such program lured back Chinese engineers working overseas; another oversaw billions of dollars poured into technology parks dedicated to robotics production and related businesses; another encouraged local governments to help regional companies deploy robots in their production processes; and despite its ongoing crackdown on capital outflows, green lights have been given to Chinese companies acquiring Western robotics technology companies.

Many of those acquisitions were reported by The Robot Report during 2016 and are reflected (with little red flags) in the chart reporting the top 15 acquisitions of robotic-related companies:

  1. Midea, a Chinese consumer products manufacturer, acquired KUKA, one of the Big 4 global robot manufacturers
  2. The Kion Group, a predominately Chinese-funded warehousing systems and equipment conglomerate, acquired Dematic, a large European AGV and material handling systems company
  3. KraussMaffei, a big German industrial robots integrator, was acquired by ChemChina
  4. Paslin, a US-based industrial robot integrator, was acquired by Zhejiang Wanfeng Technology, a Chinese industrial robot integrator

China has set goals to be able to make 150,000 industrial robots in 2020; 260,000 in 2025; and 400,000 by 2030. If achieved, the plan should help generate $88 billion over the next decade. China’s stated goal in both their 5-year plan and Made in China 2025 program is to overtake Germany, Japan, and the United States in terms of manufacturing sophistication by 2049, the 100th anniversary of the founding of the People’s Republic of China. To make that happen, the government needs Chinese manufacturers to adopt robots by the millions. It also wants Chinese companies to start producing more of those robots.

Analysts and Critics

Various research reports are predicting that more than 250,000 industrial pick and place, painting and welding robots will be purchased and deployed in China by 2019. That figure represents more than the total global sales of all types of industrial robots in 2014!

Research firms predicting dramatic growth for the domestic Chinese robotics industry are also predicting very low-cost devices. Their reports are contradicted by academics, roboticists and others who point out that there are so many new robot manufacturing companies in China that none will be able to manufacture many thousands of robots per year and thus benefit from scale. Further, many of the components that comprise a robot are intricate and costly, e.g., speed reducers, servo motors and control panels. Consequently, these are purchased from Nabtesco, Harmonic Drive, Sumitomo and other Japanese, German and US companies. Although a few of the startups are attempting to make reduction gears and other similar devices, the lack of component manufacturers in China may put a cap on how low costs can go, and on how much can be done in-country, for the time being.

“We aim to increase the market share of homegrown servomotors, speed reducers and control panels in China to over 30 percent by 2018 or 2019,” said Qu Xianming, an expert with the National Manufacturing Strategy Advisory Committee, which advises the government on plans to upgrade the manufacturing sector. “By then, these indigenous components could be of high enough quality to be exported to foreign countries,” Qu said in an interview with China Daily. “Once the target is met, it will lay down a strong foundation for Chinese parts makers to expand their presence.”

Regardless, China, with governmental directives and incentives, has become the world’s biggest buyer of robots and is also growing a very large in-country industry to make and sell robots of all types.

 

The Robot Report now has over 500 Chinese companies in its online directories and on its Global Map

The Robot Report and its research team have been able to identify over 500 companies that make or are directly involved in making robots in China. The CRIA (China Robot Industry Alliance) and other sources put the number closer to 800. The Robot Report is limited by our own research capabilities, language-translation limitations, and the scarcity of information about robotics companies, their websites and contact people in China.

These companies are combined with other global companies – now totaling over 5,300 – in our online directories and plotted on our global map so that you can research by area. You can explore online and filter in a variety of ways.

Use Google’s directional and +/- markers to navigate, enlarge, and home in on a geographical area of interest (or double-click near where you want to enlarge). Click on one of the colored markers to get a pop-up window with the name, type, focus, location and a link to the company’s website.

[NOTE: the map shows a single entry for each company’s headquarters regardless of how many branches, subsidiaries and factory locations that company might have; consequently, international companies with factories and service centers in China won’t appear. Further note that The Robot Report’s database doesn’t contain companies that just use robots; it focuses on those involved in making robots.]

The Filter pull-down menu lets you choose any one of the seven major categories:

  1. Industrial robot makers
  2. Service robots used by corporations and governments
  3. Service robots for personal and private use
  4. Integrators
  5. Robotics-related start-up companies
  6. Universities and research labs with special emphasis on robotics
  7. Ancillary businesses providing engineering, software, components, sensors and other products and services to the industry.

In the chart below, 500 Chinese companies are tabulated by their business type and area of focus. Your help in making the map as accurate and up-to-date as possible would be greatly appreciated: please send robotics-related companies that we have missed (or are new) to info@therobotreport.com.

 

Localization uncertainty-aware exploration planning

Autonomous exploration and reliable mapping of unknown environments is a major challenge for mobile robotic systems. For many important application domains, such as industrial inspection or search and rescue, the task is further complicated by the fact that such operations often have to take place in GPS-denied environments and possibly visually-degraded conditions.

Source: Dr Kostas Alexis, UNR

In this work, we move away from deterministic approaches to autonomous exploration and propose a localization uncertainty-aware, receding-horizon exploration and mapping planner, verified using aerial robots. This planner follows a two-step optimization paradigm. First, the algorithm finds, in an online-computed random tree, a finite-horizon branch that optimizes the amount of space expected to be explored. The first viewpoint configuration of this branch is selected, but the path towards it is decided through a second planning step. Within that step, a new tree is sampled, admissible branches arriving at the reference viewpoint are found, and the robot’s belief about its state and the tracked landmarks of the environment is propagated along each of them. The branch that minimizes the expected localization uncertainty is selected, the corresponding path is executed by the robot, and the whole process is iteratively repeated.
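The two-step receding-horizon loop described above can be sketched schematically. Every function body below is a placeholder (an assumption), standing in for the paper’s actual tree sampling, volumetric exploration-gain evaluation and belief propagation; only the control flow mirrors the planner:

```python
import random

random.seed(1)

def sample_random_tree(state, n_branches=50):
    # Placeholder: a "branch" is a short list of 1-D viewpoints.
    return [[state + random.uniform(-1, 1) for _ in range(4)]
            for _ in range(n_branches)]

def expected_exploration_gain(branch):
    # Placeholder for the expected volume of newly mapped space.
    return sum(abs(v) for v in branch)

def expected_localization_uncertainty(branch):
    # Placeholder for propagating the robot/landmark belief along
    # the branch and scoring it, e.g. by a covariance trace.
    return sum(v * v for v in branch)

def plan_iteration(state):
    # Step 1: maximize expected exploration; keep only the first
    # viewpoint of the best branch as the next reference.
    tree = sample_random_tree(state)
    best_branch = max(tree, key=expected_exploration_gain)
    reference_viewpoint = best_branch[0]
    # Step 2: re-sample admissible paths towards that viewpoint and
    # choose the one with the lowest expected uncertainty.
    candidates = sample_random_tree(state)
    path = min(candidates, key=expected_localization_uncertainty)
    return reference_viewpoint, path

viewpoint, path = plan_iteration(state=0.0)
```

In the real planner this loop runs repeatedly on board the robot, with each iteration executing the chosen path before replanning.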

The algorithm has been experimentally verified with aerial robotic platforms equipped with a stereo visual-inertial system operating in both well-lit and dark conditions, as shown in our videos:

To enable further developments, research collaboration and consistent comparison, we have released an open source version of our localization uncertainty-aware exploration and mapping planner, experimental datasets and interfaces. To get the code, please visit: https://github.com/unr-arl/rhem_planner

This research was conducted at the Autonomous Robots Lab of the University of Nevada, Reno.

Reference:

Christos Papachristos, Shehryar Khattak, Kostas Alexis, “Uncertainty-aware Receding Horizon Exploration and Mapping using Aerial Robots,” IEEE International Conference on Robotics and Automation (ICRA), May 29-June 3, 2017, Singapore