The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.
The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; each lesson carries a difficulty rating so you can judge whether it suits you. Below are several examples covering image formation and 3D vision.
The geometry of image formation
The real world has three dimensions but an image has only two. We can use linear algebra and homogeneous coordinates to understand what’s going on. This more general approach allows us to model the positions of pixels in the sensor array and to derive relationships between points on the image and points on an arbitrary plane in the scene.
How is an image formed? The real world has three dimensions but an image has only two: how does this happen and what are the consequences? We can use simple geometry to understand what’s going on.
An image is a two-dimensional projection of a three-dimensional world. The big problem with this projection is that big distant objects appear the same size as small close objects. For people, and robots, it’s important to distinguish these different situations. Let’s look at how humans and robots can determine the scale of objects and estimate the 3D structure of the world based on 2D images.
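To make these ideas concrete, here is a minimal sketch (not taken from the Robot Academy lessons) of projecting a 3D point onto an image plane using homogeneous coordinates and a pinhole camera model; the focal length and principal point values are arbitrary, illustrative numbers.

```python
import numpy as np

# Intrinsic camera matrix: focal length f and principal point (cx, cy) in pixels (illustrative values)
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0,  1]])

# A 3D point in the camera frame (metres), written in homogeneous coordinates
P = np.array([0.5, -0.2, 4.0, 1.0])

# 3x4 projection matrix for a camera at the origin looking along +Z
Pi = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

p = Pi @ P                # homogeneous image point
u, v = p[:2] / p[2]       # divide by the third coordinate to get pixel coordinates
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")

# A point twice as far away along the same ray, (1.0, -0.4, 8.0), projects to the same pixel,
# which is exactly why size and distance are ambiguous in a single image.
```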
No matter how great a surgeon is, robotic assistance can bring a higher level of precision to the operating table. The ability to remotely operate a robot that can hold precision instruments greatly increases the accuracy of surgical procedures like thoracoscopic surgery, which is used to treat lung cancer.
Of the two most common types of lung cancer, non-small cell lung cancer (NSCLC) is a good candidate for surgery because the tumors spread slowly and are more localized. Since more than 80 percent of people with lung cancer have NSCLC, surgery is a common treatment.
Lung cancer usually starts when epithelial cells that line the inside of the lungs rapidly reproduce into cancerous cells, creating tumors inside the lungs. These tumors have been traditionally removed directly by the skilled hands of a surgeon. Today, we’re starting to see more tumors being removed by robots that are controlled by skilled surgeons. This process is known as robotic assisted lung surgery.
The key benefits of robotic assisted lung surgery are:
It’s minimally invasive and uses small incisions. Traditional lung surgery procedures utilize a large incision across the chest wall. When using robotic assisted surgery, the incision is about half as long. The robotic arms can be maneuvered more intricately, so the incision doesn’t need to be as large. Often a second incision is made for the removal of tissue.
Patients often experience a faster recovery time. When a patient has a robotic lobectomy, they’re often back to their normal routine in a week’s time. Older patients in their 70s and 80s also experience a good recovery with this type of procedure, which is good news considering they’re usually not great candidates for open surgeries. The small incisions are responsible for a faster recovery time compared to the long recovery time after traditional lung surgery.
There are several robotic assisted procedures used to treat lung cancer:
Video-Assisted Thoracoscopic Surgery (VATS)
Fewer than 5 percent of thoracic surgeries in the US are performed robotically. Rush University is one of a few US medical institutions that offer a full range of robotic-assisted lung cancer procedures, including VATS. In fact, Rush is known as a leader in VATS.
VATS is the most common type of robotic assisted lung surgery. With this procedure, one incision is made to insert the surgical instruments for the removal of tissue, while another incision is used to place the camera. The surgeon maneuvers the surgical instruments while an assistant operates the camera so the surgeon can see what’s happening.
The da Vinci Si robotic system
Rush University is now using the da Vinci Si robotic system for lung surgery; the system was once used only for other, more general procedures.
How the da Vinci Si robotic system works
A surgeon sits at a console to control a four-armed robot that’s been positioned above the patient lying on the operating table. The surgeon observes the scene from a screen that displays images coming from a camera. As the surgeon moves his or her controls, the robot responds accordingly in real-time, and the surgical instruments attached to the robotic arms perform the surgery.
One of Rush University Medical Center’s thoracic surgeons, Gary Chmielewski, says, “any motion I can do with my hands, the robot can simulate inside the patient with more precision and less tissue trauma. It all works together for a better operation that’s easier on the patient.”
Nine years of positive results
A study published in 2014 analyzed the results of different types of robotic lobectomies for treating lung cancer over a period of nine years. The study was designed to evaluate the evolution of technique as well as the robotic technology. The study found a positive trend in outcomes for patients who opted for the upgraded robotic systems compared to the standard systems.
Twenty years ago, who would have thought that robotic assisted lung surgery would become such a popular and effective option?
If robots can be used to help surgeons remove tumors from lung cancer patients, what else is possible with robotics? The possibilities are endless.
A U.S. drone strike reportedly killed eight individuals at an al-Shabab command post in southern Somalia. It is believed to be the first U.S. drone strike in Somalia since President Trump relaxed rules for targeting al-Shabab militants. (New York Times)
Canada announced that it is moving ahead with its acquisition of strike-capable drones. The planned acquisition is part of a $62 billion investment in new military systems and technologies. In a press conference, Canadian Prime Minister Justin Trudeau said that the government will carefully review how to best use the drones prior to deployment. (CBC)
Alphabet will sell Boston Dynamics, which makes quadruped and bipedal robots, as well as robotics firm Schaft, to SoftBank Group, a Japanese telecommunications corporation. Boston Dynamics has become well-known for developing advanced, dexterous robots. Alphabet acquired Boston Dynamics in 2013 and Schaft in 2014. (Recode)
A U.S. fighter jet shot down an Iranian drone near the Al Tanf border crossing between Syria and Iraq. In a statement, the U.S.-led coalition said that the drone had earlier attacked U.S. allies on the ground and that it was approximately the size of a U.S. MQ-1 Predator, indicating that it was likely a Shahed-129 drone. A variety of Iranian drones have supported pro-regime forces in Syria since at least 2012. (The Daily Beast) For more on the drones operating in Syria and Iraq, click here.
Meanwhile, at Over the Horizon, U.S. Air Force Colonel Joseph Campo discusses U.S. drone operations and the psychological toll of those operations on drone pilots.
At Recode, April Glaser writes that the Trump administration’s plan to privatize air traffic control could accelerate the push to develop a national drone tracking system.
In a study published in the online Journal of Unmanned Vehicle Systems, Paul Nesbit examines the growing number of close encounters between drones and manned aircraft in Canada. (Phys.org)
At TechCrunch, Brian Heater looks at how the Robust Adaptive Systems Lab at Carnegie Mellon is developing drones and robots that can work alongside humans.
At CBC, Kyle Bakx visits a village in Canada that is seeking to transform itself into a hub for companies that develop and test drones.
Amazon has been awarded a patent for a safety system for its delivery drones that shuts off the aircraft’s rotors if it detects an imminent collision. (Drone Life)
Researchers at Georgia Tech are developing the Miniature Autonomous Blimp, an indoor unmanned blimp that has a camera that can detect faces and autonomously follow individuals. (IEEE Spectrum)
Navajo County sheriff’s deputies in Arizona used a drone to assist in the search for a man who went missing in a local forest. (White Mountain Independent)
The European Commission officially launched the 5.5 billion Euro European Defence Fund, which will include funding for unmanned systems development and acquisition. (Press Release)
The U.S. Navy awarded Academi Training Center, Insitu, PAE ISR, and AAI contracts for sea and land-based drones for intelligence, surveillance, and reconnaissance. The total value of the contracts is $1.73 billion. (DoD)
The U.S. Special Operations Command awarded AAI Corp. a multiple award with a $475 million maximum order ceiling for Mid-Endurance Unmanned Aircraft Systems. (DoD)
The U.S. Special Operations Command awarded Insitu a multiple award with a $475 million maximum order ceiling for Mid-Endurance Unmanned Aircraft Systems. (DoD)
The U.S. Air Force awarded Radio Hill Technologies a $2.5 million contract for 100 Block 3 Dronebuster counter-UAS detection and jamming systems. (Shephard Media)
The Defense Advanced Research Projects Agency awarded BAE Systems two contracts worth $5.4 million to develop a payload architecture that will enable smaller drones to multitask, a program known as CONCERTO. (IHS Jane’s 360)
The Defense Advanced Research Projects Agency awarded Oregon State University a $6.5 million grant to improve the trustworthiness of artificial intelligence in robots and drones. (Press Release)
Boeing is partnering with Huntington Ingalls, the largest military shipbuilder in the U.S., to help develop Boeing’s Echo Voyager extra-large unmanned undersea vehicle (XLUUV). (Defense One)
Dronefence, a German counter-UAS startup, announced seed funding from Larnabel Ventures, VP Capital, Boundary Holding, and Technology and Business Consulting Group. (Press Release)
Maritime Robotics, a Norwegian company specializing in unmanned maritime vehicles, will provide Seabed Geosolutions with its Mariner unmanned surface vehicle for seismic exploration. (Maritime Journal)
The Duke Energy Foundation awarded Butler Technology and Career Development School in Hamilton, Ohio a $45,000 grant to teach students how to use drones. (WLWT5)
Virginia Governor Terry McAuliffe awarded three $50,000 grants to the Counter-Drone Research Corporation, TruWeather Solutions, and dbS Productions to conduct research into drones and autonomous vehicles. (StateScoop)
Autonomous vehicle and artificial intelligence company Cognata raised $5 million from Emerge, Maniv Mobility, and Airbus Ventures. (TechCrunch)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.
Uber, the global ride-sharing transportation company, has named two replacements to recover from the recent firing of Anthony Levandowski, who headed its Advanced Technologies Group, its Otto trucking unit, and its self-driving team. Levandowski was fired May 30th.
Eric Meyhofer
Meyhofer, who before coming to Uber was the co-founder of Carnegie Robotics and a CMU robotics professor, was also part of the group that came to Uber from CMU (see below). He has now been named to head Uber’s Advanced Technologies Group (ATG) self-driving group and will report directly to Uber CEO Travis Kalanick.
The ATG group is charged with developing the self-driving technologies of mapping, perception, safety, data collection and learning, and self-driving for cars and trucks.
Sensors that determine distances are integral to the process. Elon Musk said recently that LiDAR isn’t needed because cameras, sensors, software and high-speed GPUs can do the same tricks at a fraction of the cost. Levandowski favored LiDARs, particularly newly developed solid state LiDAR technologies.
Anthony Levandowski
Levandowski, the previous head of the ATG, joined Google to work with Sebastian Thrun on Google Street View, started a mobile mapping company that experimented with LiDAR technology and another to build a self-driving LiDAR-using car (a Prius). Google acquired both companies including their IP. In 2016 Levandowski left Google to found Otto, a company making self-driving kits to retrofit semi-trailer trucks. Just as the kit was launched, Uber acquired Otto and Levandowski became the head of Uber’s driverless car operation in addition to continuing his work at Otto.
The Levandowski case, which led Uber to fire him, centers on intellectual property, particularly the LiDAR-related technologies at the heart of both Google’s and Uber’s self-driving plans. Getting the cost of perception down to a reasonable level has been one of the bigger challenges of self-driving technology, and LiDAR is integral to that plan.
Google’s Waymo self-driving unit alleges in its suit that, in return for bringing Google’s IP to Uber, Uber gave Levandowski $250 million in stock grants. Uber has called Waymo’s claims baseless and an attempt to slow down a competitor.
Waymo also claims that Uber has a history of “stealing” technology, pointing to 2015, when Uber hired away 50+ members of the Carnegie Mellon University robotics team – a move that reportedly cost Uber $5 billion and created havoc at CMU and the National Robotics Engineering Center (NREC), which lost a third of its staff to Uber. The move was preceded by a strategic partnership with CMU to work together on self-driving technologies. Four months later, Uber hired the 50.
Carnegie Mellon left decimated after Uber poaches 40 top-rated robotic researchers to help them develop self-driving cars
Carnegie Mellon ‘in crisis’ after mass defection of scientists to Uber
Uber hope their fleet of taxis will not require drivers in the future
Used $5 billion from investors to poach at least 40 from the National Robotics Engineering Center
Uber took six principal investigators and 34 engineers
Brian Zajac
Zajac has been on Uber’s self-driving team since 2015 after stints with Shell and the U.S. Army. Now he becomes the new chief of hardware development and reports to Meyhofer.
“Zajac will now bear a great deal of responsibility for cracking the driverless car problem, which Uber CEO Travis Kalanick has described as “existential” to the company. Uber loses huge amounts of money, and many observers think eliminating the cost of drivers is its only realistic path to profitability.”
Bottom line:
Uber has research teams in Silicon Valley, Toronto and Pittsburgh all working to perfect Level 5 autonomous driving capabilities before any of their competitors are able to duplicate the process. Google, Baidu, Yandex, Didi Chuxing, a few of the Tier 1 component makers, and many others including all the major car companies are racing forward with the same intentions. Levandowski’s firing left a big gap in Uber’s self-driving project management and stoked fear among its investors. Uber hopes that these two changes, Meyhofer as overall head and Zajac as hardware chief, will quell fears that the company is losing momentum.
It is unclear if Masayoshi Son, Chairman of Softbank, was one of the 17 million YouTube viewers of Boston Dynamics’ Big Dog before acquiring the company for an undisclosed amount this past Thursday. What is clear is that the acquisition of Boston Dynamics by Softbank is a big deal. Softbank’s humanoid robot Pepper is trading up her dainty wheels for a pair of sturdy legs.
In expressing his excitement for the acquisition, Masayoshi Son said, “Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”
Marc Raibert, CEO of Boston Dynamics, previously sold his company to Google in 2013. Following the departure of Andy Rubin from Google, the internet company expressed buyer’s remorse. Raibert’s company failed to advance from being a military contractor to a commercial enterprise, and it proved very challenging to incorporate Boston Dynamics’ zoo of robots (mechanical dogs, cheetahs, bulls, mules and complex humanoids) into Google’s autonomous strategy. Since Rubin’s exit in 2014, rumored buyers for Boston Dynamics have ranged from Toyota Research to Amazon Robotics. Softbank represents a new chapter for Raibert, and possibly the entire industry.
Raibert’s statement to the press gave astute readers a peek at what to expect: “We at Boston Dynamics are excited to be part of SoftBank’s bold vision and its position creating the next technology revolution, and we share SoftBank’s belief that advances in technology should be for the benefit of humanity. We look forward to working with SoftBank in our mission to push the boundaries of what advanced robots can do and to create useful applications in a smarter and more connected world.” A quick study of the assets of both companies reveals how Boston Dynamics could help Softbank in its mission to build robots to benefit humanity.
Softbank’s premier robot is Pepper, a four-foot tall social robot that has been deployed mostly in Asia as a customer service agent. Recently, as part of Softbank’s commitment to the Trump administration to invest $50 billion in the United States, Pepper has been spotted in stores in California. As an example, Pepper proved itself a valuable asset last year to Palo Alto’s premier tech retailer B8ta, accounting for a six-fold increase in sales. To date, there are close to 10,000 Pepper robots deployed worldwide, mostly in Asian retail stores. However, Softbank also owns Sprint, with 4,500 cell phone stores across the USA, and is a major investor in WeWork, with 140 locations globally serving 100,000 members – could Pepper be the customer service agent or receptionist of the future?
According to Softbank’s website, Pepper is designed to be a “day-to-day companion,” with its most compelling feature being the ability to perceive emotions. Softbank boasts that its humanoid is the first robot ever to recognize the moods of homo sapiens and adapt its behavior accordingly. While this is extremely relevant for selling techniques, Softbank is most proud of Pepper being the first robot to be adopted into homes in Japan. Pepper is seen as more than a point-of-purchase display gimmick: an example of the next generation of caregivers for the rising elderly populations in Japan and the United States. According to the Daily Good, “Pepper could do wonders for the mental engagement and continual monitoring of those in need.” Its under-$2,000 price point also provides an attractive incentive to introduce the robot into new environments; however, wheeled bases are a limitation in homes with cluttered floors, stairs and other unforeseen obstacles.
Boston Dynamics is almost the complete opposite of Softbank; it is a research group spun out of MIT. Its expertise is not in social robots but in military “proofs of concept” like futuristic combat mules. The company has touted some of the most frightening mechanical beasts ever to walk the planet, from metal cheetahs that sprint at over 25 miles per hour, to mechanized dogs that scale mountains with ease, to one of the largest humanoids ever built, which bears an uncanny resemblance to Cyberdyne’s T-800. In a step towards commercialization, Boston Dynamics released its newest monster earlier this year: Handle, a wheel-legged biped that can easily lift over a hundred pounds and jump higher than LeBron James. Many analysts pontificated that this was Boston Dynamics’ attempt to prove its relevance to Google with a possible last-mile delivery bot.
In an IEEE interview when Handle debuted last February, Raibert exclaimed, “Wheels are a great invention. But wheels work best on flat surfaces and legs can go anywhere. By combining wheels and legs, Handle can have the best of both worlds.” IEEE writer Evan Ackerman wondered, after seeing Handle, whether the next generation of Boston Dynamics’ humanoids could feature legs with roller-skate-like shoes. One thing is certain: Boston Dynamics is the undisputed leader in dynamic control and balance systems for complex mechanical designs.
Leading roboticist Dan Kara of ABI Research confirmed that “these [Boston Dynamics] are the world’s greatest experts on legged mobility.”
If walking is the expertise of Raibert’s team and Softbank is the leader in cognitive robotics with a seemingly endless supply of capital, the combination could produce the first real striding humanoid capable of human-like emotions. By 2030 there will be 70 million people over the age of 65 in America, with a considerably smaller number of caregivers. To answer this call, researchers are already converting current versions of Pepper into sophisticated robotic assistants. Last year, Rice University unveiled a “Multi-Purpose Eldercare Robot Assistant (MERA),” which is essentially a customized version of Softbank’s robot. MERA is specifically designed to be a home companion for seniors that “records and analyzes videos of a person’s face and calculates vital signs such as heart and breathing rates.” Rice University partnered with IBM’s Aging-in-Place Research Lab to create MERA’s speech technology. IBM’s Lab founder, Susann Keohane, explained that Pepper “has everything bundled into one adorable self.” Now, with Boston Dynamics’ legs, Pepper could be a friend, physical therapist, and life coach walking side by side with its human companion.
Daniel Theobald, founder of Vecna Technologies, a healthcare robotics company, summed it up best last week: “I think Softbank has made a major commitment to the future of robotics. They understand that the world economy is going to be driven by robotics more and more.”
Next Tuesday we will dive further into the implications of Softbank’s purchase of Boston Dynamics with Dr. Howard Morgan/First Round Capital, Tom Ryden/MassRobotics and Dr. Eric Daimler/Obama White House at RobotLabNYC’s event on 6/13 @ 6pm WeWork Grand Central (RSVP).
3! 2! 1! Go! Suddenly, robots jerk into motion and zoom across the field to score points, crossing over several types of terrain and shooting balls into high and low goals. Another buzzer sounds, drivers pick up their controls and all six robots—three per alliance—are now under human control. As these huge 120-pound robots score points, cheers ring through a packed stadium, fueled by high school students who worked hard to build their robot in just six weeks. As the match ends, nervous and excited students wait to see who is the winner of the 2016 world championship.
This was my last match as a member of the Girls of Steel FIRST Robotics Competition Team #3504. FIRST (For Inspiration and Recognition of Science and Technology) is a robotics program for students from K-12, and I was in the last division, FRC. The program is about more than introducing students to STEM and giving them hands-on experience; it’s about helping students grow and have a positive impact by recognizing community service efforts, celebrating good values, developing soft skills, and guiding students to pursue higher education.
The next fall, I was off to college at the Illinois Institute of Technology in Chicago, studying to become a mechanical engineer. For the first time in my life, I was on my own. My time was so swept away by schoolwork, clubs, exploring the city, and making new friends that FIRST became a distant memory. Now, I fear that if I hadn’t bumped back into it, I would have lost touch with the program that played such a critical role in my life. While at a robotics club meeting, sign ups for the FRC Midwest Regional Planning Committee were passed around. Wanting to somehow get involved, I signed up.
I had no idea what to expect as I walked into a big conference room with a friend, and fellow FRC alum, for our first committee meeting. As the meeting progressed—densely filled with information and detailed plans for upcoming seasons and younger leagues—I sat there stunned. Alumni or not, so many people dedicated their time and effort to make this program work for students. It was a wake-up call. During my four years as a member, I took so much for granted and didn’t realize the magnitude of hard work that volunteers and mentors put in for the students. They were the ones who supported me and helped make me the person that I am today. So, to all of the volunteers and mentors who may be reading this, thank you for everything; I couldn’t have done it without you.
Just like that, I was hooked on FIRST once again. I kept going to meetings, trying to help as much as I could while making connections with other volunteers. Right before the season started, there were a couple teams who had a shortage of mentors, which is how I found Hawks on the Horizon FRC team #5125. The first meeting I went to with the students opened my eyes; I was so used to being a member, I had no idea how to mentor FRC. Eventually, I learned how to be a different figure in a familiar situation and how to adjust to the differences between a large all-girls team, with many resources, and a small family-like team. Yet, without a doubt, I knew this was my new team. From the very first day, I was welcomed with open arms by the mentors and students, who made sure I came back.
“Who is your role model?” was a common question for me when I was very young, and I’d respond with a superhero. Now it’s never asked, but I have a better answer since I’ve gotten the chance to meet some of them. One of my role models is a student mentor from my old team. Although I haven’t seen her in a long time, I found myself remembering conversations we’d had years ago about some of the challenges of being a student mentor. Knowing I wasn’t alone, and how she’d dealt with some sticky situations, helped me guide students to find their own answers and become comfortable being hands-off myself. I’ve learned so much from past mentors like her, fellow mentors, and the students themselves. I’m happy to have found my FIRST family again.
The Midwest Regional was a whole new experience for me. During high school, my competition days were hectic, filled with fixing the robot, talking to other teams, and cheering for our alliance. At this past regional, it was an entirely different world. I was the student volunteer coordinator, helping run the student ambassador program and talking to people about how to get involved. I only visited my team when I got the chance. I was less aware of specific robots and paid more attention to what happens behind the scenes to ensure the regional runs like a well-oiled machine. I want to be more involved in the process, since these competitions ignited a passion for engineering within me, and I cannot express in words how grateful I am.
Regardless of the changes within the past year, by the end of the competition, I still rocked the uniform. I’m incredibly proud of Hawks on the Horizon, thankful for Girls of Steel, and amazed by everyone on the Midwest Planning Committee. This transition from member to mentor has been an amazing journey, and it doesn’t stop here. I’ll be working to give back for the rest of my life by helping make programs like FIRST happen for future students. Thank you to everyone who is a part of this organization. To find a team or event, go to firstinspires.org.
In this episode, Audrow Nash and Christina Brester conduct interviews at the 2016 International Association of Science Parks and Areas of Innovation conference in Moscow, Russia. They speak with Vadim Kotenev of Rehabot and Motorica about prosthetic hands and rehabilitative devices; and Vagan Martirosyan, CEO of TryFit, a company that uses robotic sensors to help people find shoes that fit them well.
An image of one of the rehabilitative devices from Rehabot.
The robotic platform that scans your feet from TryFit.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. Sixteen of these resulted from the robotics work programme, and one from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features Co4Robots: Achieving Complex Collaborative Missions via Decentralized Control and Coordination of Interacting Robots.
Objectives
Recent applications necessitate coordination of different robots. Current practice is mainly based on offline, centralized planning and tasks are fulfilled in a predefined manner. Co4Robots’ goal is to build a systematic methodology to:
accomplish complex task specifications given to a team of potentially heterogeneous robots;
develop control schemes appropriate for mobility and manipulation capabilities of the robots;
achieve perceptual capabilities that enable robots to localize themselves and estimate the dynamic environment state;
integrate all in a decentralized framework.
Expected impact
The envisioned scenarios involve multi-robot services in e.g. office environments. Although public facilities are to some degree pre-structured, the need for the Co4Robots framework is evident since:
it will lead to an improved use of resources and a faster accomplishment of tasks inside workspaces with high social activity;
it will contribute towards the vision of more flexible multi-robot applications in both professional and domestic environments, also in view of the “Industry 4.0” vision and the general need to deploy such systems in everyday life scenarios.
In the event of a natural disaster that disrupts phone and Internet systems over a wide area, autonomous aircraft could potentially hover over affected regions, carrying communications payloads that provide temporary telecommunications coverage to those in need.
However, such unpiloted aerial vehicles, or UAVs, are often expensive to operate, and can only remain in the air for a day or two, as is the case with most autonomous surveillance aircraft operated by the U.S. Air Force. Providing adequate and persistent coverage would require a relay of multiple aircraft, landing and refueling around the clock, with operational costs of thousands of dollars per hour, per vehicle.
Now a team of MIT engineers has come up with a much less expensive UAV design that can hover for longer durations to provide wide-ranging communications support. The researchers designed, built, and tested a UAV resembling a thin glider with a 24-foot wingspan. The vehicle can carry 10 to 20 pounds of communications equipment while flying at an altitude of 15,000 feet. Weighing in at just under 150 pounds, the vehicle is powered by a 5-horsepower gasoline engine and can keep itself aloft for more than five days — longer than any gasoline-powered autonomous aircraft has remained in flight, the researchers say.
The team is presenting its results this week at the American Institute of Aeronautics and Astronautics Conference in Denver, Colorado. The team was led by R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics; and Warren Hoburg, the Boeing Assistant Professor of Aeronautics and Astronautics. Hansman and Hoburg are co-instructors for MIT’s Beaver Works project, a student research collaboration between MIT and the MIT Lincoln Laboratory.
A solar no-go
Hansman and Hoburg worked with MIT students to design a long-duration UAV as part of a Beaver Works capstone project — typically a two- or three-semester course that allows MIT students to design a vehicle that meets certain mission specifications, and to build and test their design.
In the spring of 2016, the U.S. Air Force approached the Beaver Works collaboration with an idea for designing a long-duration UAV powered by solar energy. The thought at the time was that an aircraft, fueled by the sun, could potentially remain in flight indefinitely. Others, including Google, have experimented with this concept, designing solar-powered, high-altitude aircraft to deliver continuous internet access to rural and remote parts of Africa.
But when the team looked into the idea and analyzed the problem from multiple engineering angles, they found that solar power — at least for long-duration emergency response — was not the way to go.
“[A solar vehicle] would work fine in the summer season, but in winter, particularly if you’re far from the equator, nights are longer, and there’s not as much sunlight during the day. So you have to carry more batteries, which adds weight and makes the plane bigger,” Hansman says. “For the mission of disaster relief, this could only respond to disasters that occur in summer, at low latitude. That just doesn’t work.”
The researchers came to their conclusions after modeling the problem using GPkit, a software tool developed by Hoburg that allows engineers to determine the optimal design decisions or dimensions for a vehicle, given certain constraints or mission requirements.
This method is not unique among initial aircraft design tools, but unlike tools that take into account only a handful of main constraints, Hoburg’s method allowed the team to consider around 200 constraints and physical models simultaneously, and to fit them all together to create an optimal aircraft design.
“This gives you all the information you need to draw up the airplane,” Hansman says. “It also says that for every one of these hundreds of parameters, if you changed one of them, how much would that influence the plane’s performance? If you change the engine a bit, it will make a big difference. And if you change wingspan, will it show an effect?”
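For readers curious about what this kind of tool looks like in practice, here is a minimal sketch in the style of GPkit’s documented Variable/Model interface; the variables, values, and two constraints below are invented for illustration, are far simpler than the roughly 200 constraints the MIT team modeled, and the exact API may differ slightly between GPkit versions.

```python
from gpkit import Variable, Model

# Toy aircraft-sizing problem (illustrative values only)
S     = Variable("S", "m^2", "wing area")
W     = Variable("W", "N", "total weight")
W_0   = Variable("W_0", 1000, "N", "fixed weight (structure, engine, payload)")
sigma = Variable("sigma", 5, "N/m^2", "wing weight per unit area")
rho   = Variable("rho", 1.23, "kg/m^3", "air density")
V     = Variable("V", 30, "m/s", "cruise speed")
C_L   = Variable("C_L", 0.6, "-", "cruise lift coefficient")

constraints = [
    W >= W_0 + sigma * S,               # weight build-up: a bigger wing weighs more
    W <= 0.5 * rho * V**2 * C_L * S,    # lift at cruise must support the weight
]

m = Model(W, constraints)               # minimise total weight subject to the constraints
sol = m.solve(verbosity=0)
print(sol.table())
```

Changing any one value, say the cruise speed, and re-solving immediately shows how the whole design shifts, which is the kind of sensitivity information Hansman describes.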
Framing for takeoff
After determining, through their software estimations, that a solar-powered UAV would not be feasible, at least for long-duration use in any part of the world, the team performed the same modeling for a gasoline-powered aircraft. They came up with a design that was predicted to stay in flight for more than five days, at altitudes of 15,000 feet, in up to 94th-percentile winds, at any latitude.
In the fall of 2016, the team built a prototype UAV, following the dimensions determined by students using Hoburg’s software tool. To keep the vehicle lightweight, they used materials such as carbon fiber for its wings and fuselage, and Kevlar for the tail and nosecone, which houses the payload. The researchers designed the UAV to be easily taken apart and stored in a FedEx box, to be shipped to any disaster region and quickly reassembled.
This spring, the students refined the prototype and developed a launch system, fashioning a simple metal frame to fit on a typical car roof rack. The UAV sits atop the frame as a driver accelerates the launch vehicle (a car or truck) up to rotation speed — the UAV’s optimal takeoff speed. At that point, the remote pilot would angle the UAV toward the sky, automatically releasing a fastener and allowing the UAV to lift off.
In early May, the team put the UAV to the test, conducting flight tests at Plum Island Airport in Newburyport, Massachusetts. For initial flight testing, the students modified the vehicle to comply with FAA regulations for small unpiloted aircraft, which apply to drones flying at low altitude and weighing less than 55 pounds. To reduce the UAV’s weight from 150 to under 55 pounds, the researchers simply loaded it with a smaller ballast payload and less gasoline.
In their initial tests, the UAV successfully took off, flew around, and landed safely. Hoburg says there are special considerations that have to be made to test the vehicle over multiple days, such as having enough people to monitor the aircraft over a long period of time.
“There are a few aspects to flying for five straight days,” Hoburg says. “But we’re pretty confident that we have the right fuel burn rate and right engine that we could fly it for five days.”
“These vehicles could be used not only for disaster relief but also other missions, such as environmental monitoring. You might want to keep watch on wildfires or the outflow of a river,” Hansman adds. “I think it’s pretty clear that someone within a few years will manufacture a vehicle that will be a knockoff of this.”
This research was supported, in part, by MIT Lincoln Laboratory.
In a long-awaited transaction, The New York Times Dealbook announced that SoftBank was buying Boston Dynamics from Alphabet (Google). Also included in the deal is the Japanese startup Schaft. Acquisition details were not disclosed.
Both Boston Dynamics and Schaft were acquired by Google when Andy Rubin was developing Google’s robot group through a series of acquisitions. Both companies have continued to develop innovative mobile robots. And both have been on Google’s for sale list.
Boston Dynamics, a DARPA- and DoD-funded 25-year-old company, designed two- and four-legged robots for the military. Videos of BD’s robots WildCat, Big Dog, Cheetah, SpotMini (shown above getting into an elevator) and Handle have been YouTube hits for years. Handle, BD’s most recent robot, is a two-wheeled, two-legged hybrid that can stand, walk, hop, run and roll at up to 9 MPH.
Schaft, a Japanese startup and participant in the DARPA Robotics Challenge, recently unveiled a two-legged robot that can climb stairs, carry 125 pounds of payload, move in tight spaces and keep its balance throughout.
SoftBank, through another acquisition (of the French company Aldebaran, maker of the Nao and Romeo robots), and in a joint venture with Foxconn and Alibaba, has developed and marketed thousands of Pepper robots. Pepper is a cute, humanoid, mobile robot being marketed and used as a guide and sales assistant. The addition of Boston Dynamics and Schaft to the SoftBank stable adds talent and technology to its growing robotics efforts, particularly the Tokyo-based Schaft.
“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the information revolution,” said Masayoshi Son, chairman and chief executive of SoftBank.
Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS to 26 students at the same time, at a very fast pace. His students, all enrolled in the university’s Master of Computer Science programme, use only a web browser. They connect to a web page containing the lessons, a ROS development environment and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach ROS programming quickly and to many students. This is what Robot Ignite Academy is made for.
“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere,” says Pfiffner.
Robot Ignite Academy provides a web service which contains the teaching material in text and video format, the simulations of several ROS based robots that the students must learn to program, and the development environment required to build ROS programs and test them on the simulated robot.
Student’s point of view
Students bring their own laptops to the class and connect to the online platform. From that moment, their laptop becomes a ROS development machine, ready to develop programs for simulations of many real robots.
The Academy provides the text, the videos and the examples that the student has to follow. Then, the student creates her own ROS program to make the robot perform a specific action, developing it as if she were working on a typical ROS development computer.
The main advantage is that students can use a Windows, Linux or Mac machine to learn ROS. They don’t even have to install ROS on their computers. The only prerequisite for the laptop is a browser, so students avoid all the installation problems that frustrate them (and the teachers!), especially when they are starting out.
After class, students can continue learning at home, at the library, or even at the beach if Wi-Fi is available! All their code, learning material and simulations are stored online, so they can access them from anywhere, at any time, using any computer.
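To give a sense of the kind of exercise students complete in such an environment, here is a minimal rospy node of the sort a beginner might write; it assumes a simulated mobile robot that listens for velocity commands on /cmd_vel, which is a common ROS convention rather than a detail taken from Robot Ignite Academy’s own courses.

```python
#!/usr/bin/env python
# Minimal ROS node: drive a simulated robot forward by publishing velocity commands.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("move_forward")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)          # publish at 10 Hz

    cmd = Twist()
    cmd.linear.x = 0.2             # move forward at 0.2 m/s

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```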
Teacher’s point of view
The advantage of using the platform is not only for the students but also for the teachers. Teachers do not have to create and maintain the material. They do not have to prepare the simulations or work across multiple different computers. They don’t even have to prepare the exams (which are already provided by the platform).
So what are the teachers for?
By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to the level of each student, and adapting the pace to the different types of students.
This new method of teaching ROS is spreading quickly among universities and high schools that want to offer their students the latest and most practical instruction. The method, developed by Robot Ignite Academy, combines a practice-based way of teaching with an online learning platform. Together, these make learning ROS a smooth experience and can make students’ knowledge skyrocket.
As user Walace Rosa indicates in his video comment about Robot Ignite Academy:
It is a game changer [in] teaching ROS!
The method is also becoming very popular in robotics circles, and many teachers are using it with younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.
Additionally, the academy provides a free online certification exam with different levels of knowledge certification. Many universities are using this exam to certify that their students have learned the material, since the exam is quite demanding.
Some examples of past events
A one-week ROS course in Barcelona for SMART-E project team members: a private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who needed to get up to speed with ROS fast, running from the 8th to the 12th of May 2017.
The fields of modular and origami robotics have become increasingly popular in recent years, with both approaches presenting particular benefits, as well as limitations, to the end user. Christoph Belke and Jamie Paik from RRL, EPFL and NCCR Robotics have recently proposed an elegant new solution that integrates both types of robotics in order to overcome their individual limitations: Mori, a modular origami robot.
Mori is the first example of a robot that combines the concepts behind both origami robots and reconfigurable, modular robots. Origami robotics utilises folding of thin structures to produce single robots that can change their shape, while modular robotics uses large numbers of individual entities to reconfigure the overall shape and address diverse tasks. Origami robots are compact and light-weight but have functional restrictions related to the size and shape of the sheet and how many folds can be created. By contrast, modular robots are more flexible when it comes to shape and configuration, but they are generally bulky and complex.
Mori, an origami robot that is modular, merges the benefits of these two approaches and eliminates some of their drawbacks. The presented prototype has the quasi-2D profile of an origami robot (meaning that it is very thin) and the flexibility of a modular robot. By developing a small and symmetrical coupling mechanism with a rotating pivot that provides actuation, each module can be attached to another in any formation. Once connected, the modules can fold up into any desirable shape.
The individual modules have a triangular structure, each measuring just 6 mm thick and 70 mm wide and weighing 26 g. Contained within this slender structure are actuators, sensors and an on-board controller, so the only external input required for full functionality is a power source. The researchers at EPFL have thereby managed to create a robot with the thin structure of an origami robot and the functional flexibility of a modular system.
The prototype presents a highly adaptive modular robot and has been tested in three scenarios that demonstrate the system’s flexibility. Firstly, the robots are assembled into a reconfigurable surface, which changes its shape according to the user’s input. Secondly, a single module is manoeuvred through a small gap, using rubber rings embedded in the rotating pivot as wheels, and assembled on the other side into a container. Thirdly, the robot is coupled with feedback from an external camera, allowing the system to manipulate objects under closed-loop control.
With Mori, the researchers have created the first robotic system that can represent reconfigurable surfaces of any size in three dimensions using quasi-2D modules. The system’s design is adaptable to whatever task is required, be that modulating its shape to repair damage to a structure in space, moulding to a limb weakened after injury in order to provide selective support, or reconfiguring user interfaces, such as changing a table’s surface to represent geographical data. The opportunities are truly endless.
Reference
Christoph H. Belke and Jamie Paik, “Mori: A Modular Origami Robot”, IEEE/ASME Transactions on Mechatronics, doi:10.1109/TMECH.2017.2697310
With superb organization and a beautiful location, the event included talks by leading researchers and companies from all around the world, as well as workshops and an exhibition area. The latter is where I spent most of my time, as I love direct interaction with the companies and research centres. At this kind of academic event, compared with trade fairs, you usually have the chance to find technical people who can explain in deep detail all the ins and outs of their products.
The robotics community is not so big yet, so we still know each other. I had the pleasure of meeting good friends from companies like Infinium Robotics, PAL Robotics and Fetch Robotics, among others. Infinium Robotics is the company in Singapore where I work as CTO; I already wrote about this great company in one of my previous posts, “Infinium Robotics. Flying Robots“.
PAL Robotics is a company in Spain well known for having developed some of the best humanoid robots in the world. I have had a very good relationship with this company’s team for more than ten years. They are great people, well motivated and well managed, who have been able to look outside the box and boldly enter the world of robotic warehouse solutions with robots like TIAGo and StockBot.
Fetch Robotics is also one of the big players in robotics solutions for the warehousing industry. I also met other interesting people and had great chats with people from companies as central to this field as Amazon Robotics, DJI and Clearpath Robotics. At the end of this post, there is a full list of exhibiting companies.
I saw interesting technology, like the rehabilitation exoskeletons from Fourier Intelligence (Shanghai), the Spidar-G 6-DOF haptic interface from the Tokyo Institute of Technology, the haptic systems of Force Dimension and Moog, the dexterous manipulators of Kinova, KUKA, ABB and ITRI, the modular robot components of HEBI Robotics and Keyi Technology, the 3D motion capture technologies of Phoenix Technologies and OptiTrack, the educational solutions of Ubitech, GT Robot and Robotis, and many, many others, most of them ROS-enabled.
As I usually do at these events, I recorded a video of the exhibition area to provide an idea of the technologies shown there.
Last but not least, I want to thank my friends from SIAA (the Singapore Industrial Automation Association) for their kind friendship and support. They also organized the Singapore Pavilion at this event.
Humankind lost another important battle with artificial intelligence (AI) last month when AlphaGo beat the world’s leading Go player Ke Jie by three games to zero.
AlphaGo is an AI program developed by DeepMind, part of Google’s parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved.
AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. They’ve been described by one Go expert as like “games from far in the future”, which humans will study for years to improve their own play.
Ready, set, Go
Go is an ancient game that essentially pits two players – one playing black pieces, the other white – against each other for dominance on a board usually marked with 19 horizontal and 19 vertical lines.
Go is a far more difficult game for computers to play than chess, because the number of possible moves in each position is much larger. This makes searching many moves ahead – feasible for computers in chess – very difficult in Go.
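A rough back-of-the-envelope comparison makes the difference vivid; the branching factors below are approximate, commonly cited averages rather than exact figures.

```python
# Approximate average number of legal moves per position (commonly cited figures)
chess_branching, go_branching = 35, 250
depth = 4  # look only four moves (plies) ahead

print(f"chess positions after {depth} plies: ~{chess_branching ** depth:,}")   # ~1.5 million
print(f"go positions after {depth} plies:    ~{go_branching ** depth:,}")      # ~3.9 billion
```

And real games run to hundreds of moves, so exhaustive look-ahead is hopeless in Go.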
DeepMind’s breakthrough was the development of general-purpose learning algorithms that can, in principle, be trained in domains of greater societal relevance than Go.
DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, dramatically reducing energy consumption or inventing revolutionary new materials. It adds:
If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next.
This does open up many opportunities for the future, but challenges still remain.
Neuroscience meets AI
AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. Remarkably, both were originally inspired by how biological brains learn from experience.
In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex.
This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these.
The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.
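As a toy illustration of that idea (and nothing like AlphaGo’s actual networks), the sketch below passes an input through a stack of simple neuron-like layers, each building on the output of the previous one; the layer sizes and random weights are arbitrary.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input through a stack of simple layers, each a (weights, bias) pair."""
    for W, b in layers:
        x = relu(W @ x + b)   # each layer extracts features from the previous layer's output
    return x

rng = np.random.default_rng(0)
sizes = [64, 32, 16, 8]                      # input -> two hidden layers -> output
layers = [(0.1 * rng.standard_normal((n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

output = forward(rng.standard_normal(64), layers)
print(output.shape)   # (8,)
```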
But to survive in the world, animals need to not only recognise sensory information, but also act on it. Generations of scientists and psychologists have studied how animals learn to take a series of actions that maximise their reward.
This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves actions by maximising its expectation of future reward.
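To make the idea concrete, here is a minimal sketch of tabular TD(0) value learning; the environment interface (reset, actions, step) is hypothetical, and the sketch illustrates only the update rule, not anything specific to AlphaGo.

```python
import random

def td0_value_learning(env, episodes=1000, alpha=0.1, gamma=0.99):
    """Tabular TD(0): nudge V(s) toward the observed reward plus the discounted value of the next state.

    `env` is assumed to expose reset(), actions(state), and step(action) -> (next_state, reward, done).
    """
    V = {}  # state -> estimated value (defaults to 0.0)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = random.choice(env.actions(state))        # simple exploratory policy
            next_state, reward, done = env.step(action)
            v_s = V.get(state, 0.0)
            target = reward + (0.0 if done else gamma * V.get(next_state, 0.0))
            V[state] = v_s + alpha * (target - v_s)           # temporal-difference update
            state = next_state
    return V
```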
The best moves
By combining deep learning and reinforcement learning in a series of artificial neural networks, AlphaGo first learned human expert-level play in Go from 30 million moves from human games.
But then it started playing against itself, using the outcome of each game to relentlessly refine its decisions about the best move in each board position. A value network learned to predict the likely outcome given any position, while a policy network learned the best action to take in each situation.
Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. It is these countless hours of self-play that led to AlphaGo’s improvement over the past year.
Unfortunately, as yet there is no known way to interrogate the network to directly read out what these key ideas are. Instead we can only study its games and hope to learn from these.
This is one of the problems with using such neural network algorithms to help make decisions in, for instance, the legal system: they can’t explain their reasoning.
We still understand relatively little about how biological brains actually learn, and neuroscience will continue to provide new inspiration for improvements in AI.
Humans can learn to become expert Go players based on far less experience than AlphaGo needed to reach that level, so there is clearly room for further developing the algorithms.
Also much of AlphaGo’s power is based on a technique called back-propagation learning that helps it correct errors. But the relationship between this and learning in real brains is still unclear.
What’s next?
The game of Go provided a nicely constrained development platform for optimising these learning algorithms. But many real world problems are messier than this and have less opportunity for the equivalent of self-play (for instance self-driving cars).
So are there problems to which the current algorithms can be fairly immediately applied?
One example may be optimisation in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimising cost.
As long as the possibilities can be accurately simulated, these algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans. Thus DeepMind’s bold claims seem likely to be realised, and as the company says, we can’t wait to see what comes next.
The world’s brightest minds in Artificial Intelligence (AI) and humanitarian action will meet with industry leaders and academia at the AI for Good Global Summit, 7-9 June 2017, to discuss how AI will assist global efforts to address poverty, hunger, education, healthcare and the protection of our environment. The event will in parallel explore means to ensure the safe, ethical development of AI, protecting against unintended consequences of advances in AI.
The event is co-organized by ITU and the XPRIZE Foundation, in partnership with 20 other United Nations (UN) agencies, and with the participation of more than 70 leading companies and academic and research institutes.
“Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people,” said UN Secretary-General António Guterres. “The time has arrived for all of us – governments, industry and civil society – to consider how AI will affect our future. The AI for Good Global Summit represents the beginnings of our efforts to ensure that AI charts a course that will benefit all of humanity.”
The AI for Good Global Summit will emphasize AI’s potential to contribute to the pursuit of the UN Sustainable Development Goals.
Opening sessions will share expert insight into the state of play in AI, with leading minds in AI giving voice to their greatest ambitions in driving AI towards social good. ‘Breakthrough’ sessions will propose strategies for the development of AI applications and systems able to promote sustainable living, reduce poverty and deliver citizen-centric public services.
“Today, we’ve gathered here to discuss how far AI can go, how much it will improve our lives, and how we can all work together to make it a force for good,” said ITU Secretary-General Houlin Zhao. “This event will assist us in determining how the UN, ITU and other UN Agencies can work together with industry and the academic community to promote AI innovation and create a good environment for the development of artificial intelligence.”
“The AI for Good Global Summit has assembled an impressive, diverse ecosystem of thought leaders who recognize the opportunity to use AI to solve some of the world’s grandest challenges,” said Marcus Shingles, CEO of the XPRIZE Foundation. “We look forward to this Summit providing a unique opportunity for international dialogue and collaboration that will ideally start to pave the path forward for a new future of problem solvers working with XPRIZE and beyond.”
The AI for Good Global Summit will be broadcast globally as well as captioned to ensure accessibility.