
Robots in Depth with Sebastian Weisenburger


In this episode of Robots in Depth, Per Sjöborg speaks with Sebastian Weisenburger about how ECHORD++ works, with application-oriented research bridging academia, industry and end users to bring robots to market, under the banner “From lab to market”.

We also hear about Public end-user Driven Technological Innovation (PDTI) projects in healthcare and urban robotics.

This interview was recorded in 2016.

How to mass produce cell-sized robots

This photo shows circles on a graphene sheet where the sheet is draped over an array of round posts, creating stresses that will cause these discs to separate from the sheet. The gray bar across the sheet is liquid being used to lift the discs from the surface.
Image: Felice Frankel

By David L. Chandler

Tiny robots no bigger than a cell could be mass-produced using a new method developed by researchers at MIT. The microscopic devices, which the team calls “syncells” (short for synthetic cells), might eventually be used to monitor conditions inside an oil or gas pipeline, or to search out disease while floating through the bloodstream.

The key to making such tiny devices in large quantities lies in a method the team developed for controlling the natural fracturing process of atomically thin, brittle materials, directing the fracture lines so that they produce minuscule pockets of a predictable size and shape. Embedded inside these pockets are electronic circuits and materials that can collect, record, and output data.

The novel process, called “autoperforation,” is described in a paper published today in the journal Nature Materials, by MIT Professor Michael Strano, postdoc Pingwei Liu, graduate student Albert Liu, and eight others at MIT.

The system uses a two-dimensional form of carbon called graphene, which forms the outer structure of the tiny syncells. One layer of the material is laid down on a surface, then tiny dots of a polymer material, containing the electronics for the devices, are deposited by a sophisticated laboratory version of an inkjet printer. Then, a second layer of graphene is laid on top.

Controlled fracturing

People think of graphene, an ultrathin but extremely strong material, as being “floppy,” but it is actually brittle, Strano explains. Rather than considering that brittleness a problem, the team figured out that it could be used to their advantage.

“We discovered that you can use the brittleness,” says Strano, who is the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “It’s counterintuitive. Before this work, if you told me you could fracture a material to control its shape at the nanoscale, I would have been incredulous.”

But the new system does just that. It controls the fracturing process so that rather than generating random shards of material, like the remains of a broken window, it produces pieces of uniform shape and size. “What we discovered is that you can impose a strain field to cause the fracture to be guided, and you can use that for controlled fabrication,” Strano says.

When the top layer of graphene is placed over the array of polymer dots, which form round pillar shapes, the places where the graphene drapes over the round edges of the pillars form lines of high strain in the material. As Albert Liu describes it, “imagine a tablecloth falling slowly down onto the surface of a circular table. One can very easily visualize the developing circular strain toward the table edges, and that’s very much analogous to what happens when a flat sheet of graphene folds around these printed polymer pillars.”

As a result, the fractures are concentrated right along those boundaries, Strano says. “And then something pretty amazing happens: The graphene will completely fracture, but the fracture will be guided around the periphery of the pillar.” The result is a neat, round piece of graphene that looks as if it had been cleanly cut out by a microscopic hole punch.

Because there are two layers of graphene, above and below the polymer pillars, the two resulting disks adhere at their edges to form something like a tiny pita bread pocket, with the polymer sealed inside. “And the advantage here is that this is essentially a single step,” in contrast to many complex clean-room steps needed by other processes to try to make microscopic robotic devices, Strano says.

The researchers have also shown that other two-dimensional materials in addition to graphene, such as molybdenum disulfide and hexagonal boron nitride, work just as well.

Cell-like robots

Ranging in size from that of a human red blood cell, about 10 micrometers across, up to about 10 times that size, these tiny objects “start to look and behave like a living biological cell. In fact, under a microscope, you could probably convince most people that it is a cell,” Strano says.

This work follows up on earlier research by Strano and his students on developing syncells that could gather information about the chemistry or other properties of their surroundings using sensors on their surface, and store the information for later retrieval. For example, a swarm of such particles could be injected at one end of a pipeline and retrieved at the other to gain data about conditions inside it. While the new syncells do not yet have as many capabilities as the earlier ones, those were assembled individually, whereas this work demonstrates a way of easily mass-producing such devices.

Apart from the syncells’ potential uses for industrial or biomedical monitoring, the way the tiny devices are made is itself an innovation with great potential, according to Albert Liu. “This general procedure of using controlled fracture as a production method can be extended across many length scales,” he says. “[It could potentially be used with] essentially any 2-D materials of choice, in principle allowing future researchers to tailor these atomically thin surfaces into any desired shape or form for applications in other disciplines.”

This is, Albert Liu says, “one of the only ways available right now to produce stand-alone integrated microelectronics on a large scale” that can function as independent, free-floating devices. Depending on the nature of the electronics inside, the devices could be provided with capabilities for movement, detection of various chemicals or other parameters, and memory storage.

There are a wide range of potential new applications for such cell-sized robotic devices, says Strano, who details many such possible uses in a book he co-authored with Shawn Walsh, an expert at Army Research Laboratories, on the subject, called “Robotic Systems and Autonomous Platforms,” which is being published this month by Elsevier Press.

As a demonstration, the team “wrote” the letters M, I, and T into a memory array within a syncell, which stores the information as varying levels of electrical conductivity. This information can then be “read” using an electrical probe, showing that the material can function as a form of electronic memory into which data can be written, read, and erased at will. It can also retain the data without the need for power, allowing information to be collected at a later time. The researchers have demonstrated that the particles are stable over a period of months even when floating around in water, which is a harsh solvent for electronics, according to Strano.
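To make the memory demonstration concrete, here is a minimal sketch, assuming nothing about the actual hardware: it simply models an array of cells whose stored “conductivity level” can be written, read back with a probe, and erased, and which persists between operations. The class and its interface are illustrative inventions, not the authors’ design.

```python
# Toy model of the memory behavior described above: data is stored as
# discrete conductivity levels that persist until erased. Illustrative
# only; this is not the authors' hardware or code.
class ConductivityMemory:
    def __init__(self, size, levels=256):
        self.levels = levels
        self.cells = [0] * size            # 0 = erased state

    def write(self, addr, level):
        assert 0 <= level < self.levels, "level out of range"
        self.cells[addr] = level           # set a conductivity state

    def read(self, addr):
        return self.cells[addr]            # probing senses the stored level

    def erase(self):
        self.cells = [0] * len(self.cells)

# "Write" the letters M, I, T and read them back, as in the demonstration.
mem = ConductivityMemory(size=3)
for addr, ch in enumerate("MIT"):
    mem.write(addr, ord(ch))
print("".join(chr(mem.read(a)) for a in range(3)))   # -> MIT
```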

“I think it opens up a whole new toolkit for micro- and nanofabrication,” he says.

Daniel Goldman, a professor of physics at Georgia Tech, who was not involved with this work, says, “The techniques developed by Professor Strano’s group have the potential to create microscale intelligent devices that can accomplish tasks together that no single particle can accomplish alone.”

In addition to Strano, Pingwei Liu, who is now at Zhejiang University in China, and Albert Liu, a graduate student in the Strano lab, the team included MIT graduate student Jing Fan Yang, postdocs Daichi Kozawa, Juyao Dong, and Volodymyr Koman, Youngwoo Son PhD ’16, and research affiliate Min Hao Wong, as well as Dartmouth College student Max Saccone and visiting scholar Song Wang. The work was supported by the Air Force Office of Scientific Research, and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

How should autonomous vehicles be programmed?

Ethical questions involving autonomous vehicles are the focus of a new global survey conducted by MIT researchers.

By Peter Dizikes

A massive new survey developed by MIT researchers reveals some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences.

The survey has global reach and a unique scale, with over 2 million online participants from over 200 countries weighing in on versions of a classic ethical conundrum, the “Trolley Problem.” The problem involves scenarios in which an accident involving a vehicle is imminent, and the vehicle must opt for one of two potentially fatal options. In the case of driverless cars, that might mean swerving toward a couple of people, rather than a large group of bystanders.

“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” says Edmond Awad, a postdoc at the MIT Media Lab and lead author of a new paper outlining the results of the project. “We don’t know yet how they should do that.”

Still, Awad adds, “We found that there are three elements that people seem to approve of the most.”

Indeed, the most emphatic global preferences in the survey are for sparing the lives of humans over the lives of other animals; sparing the lives of many people rather than a few; and preserving the lives of the young, rather than older people.

“The main preferences were to some degree universally agreed upon,” Awad notes. “But the degree to which they agree with this or not varies among different groups or countries.” For instance, the researchers found a less pronounced tendency to favor younger people, rather than the elderly, in what they defined as an “eastern” cluster of countries, including many in Asia.

The paper, “The Moral Machine Experiment,” is being published today in Nature.

The authors are Awad; Sohan Dsouza, a doctoral student in the Media Lab; Richard Kim, a research assistant in the Media Lab; Jonathan Schulz, a postdoc at Harvard University; Joseph Henrich, a professor at Harvard; Azim Shariff, an associate professor at the University of British Columbia; Jean-François Bonnefon, a professor at the Toulouse School of Economics; and Iyad Rahwan, an associate professor of media arts and sciences at the Media Lab, and a faculty affiliate in the MIT Institute for Data, Systems, and Society.

Awad is a postdoc in the MIT Media Lab’s Scalable Cooperation group, which is led by Rahwan.

To conduct the survey, the researchers designed what they call “Moral Machine,” a multilingual online game in which participants could state their preferences concerning a series of dilemmas that autonomous vehicles might face. For instance: If it comes right down to it, should autonomous vehicles spare the lives of law-abiding bystanders, or, alternately, law-breaking pedestrians who might be jaywalking? (Most people in the survey opted for the former.)

All told, “Moral Machine” compiled nearly 40 million individual decisions from respondents in 233 countries; the survey collected 100 or more responses from 130 countries. The researchers analyzed the data as a whole, while also breaking participants into subgroups defined by age, education, gender, income, and political and religious views. There were 491,921 respondents who offered demographic data.

The scholars did not find marked differences in moral preferences based on these demographic characteristics, but they did find larger “clusters” of moral preferences based on cultural and geographic affiliations. They defined “western,” “eastern,” and “southern” clusters of countries, and found some more pronounced variations along these lines. For instance: Respondents in southern countries had a relatively stronger tendency to favor sparing young people rather than the elderly, especially compared to the eastern cluster.
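As a rough illustration of how cluster-level preferences can be tallied, the sketch below aggregates hypothetical dilemma responses by cluster. The record layout, the sample data, and the simple share-based score are all assumptions for illustration; the published study used a more sophisticated conjoint-analysis approach.

```python
# Illustrative sketch (not the study's code): per cultural cluster,
# compute how often respondents chose to spare the younger group in
# dilemmas where age was the attribute being varied.
from collections import defaultdict

# Hypothetical records: (cluster, attribute_varied, spared_younger)
responses = [
    ("western", "age", True), ("western", "age", True),
    ("eastern", "age", False), ("eastern", "age", True),
    ("southern", "age", True), ("southern", "age", True),
]

tally = defaultdict(lambda: [0, 0])        # cluster -> [spared_young, total]
for cluster, attribute, spared_young in responses:
    if attribute == "age":
        tally[cluster][0] += spared_young  # bool counts as 0/1
        tally[cluster][1] += 1

for cluster, (k, n) in sorted(tally.items()):
    print(f"{cluster}: share sparing the young = {k / n:.2f}")
```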

Awad suggests that acknowledging these types of preferences should be a basic part of informing public-sphere discussion of these issues. For instance, since there is a moderate preference in all regions for sparing law-abiding bystanders rather than jaywalkers, knowing these preferences could, in theory, inform the way software is written to control autonomous vehicles.

“The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule,” he says.

Rahwan, for his part, notes that “public interest in the platform surpassed our wildest expectations,” allowing the researchers to conduct a survey that raised awareness about automation and ethics while also yielding specific public-opinion information.

“On the one hand, we wanted to provide a simple way for the public to engage in an important societal discussion,” Rahwan says. “On the other hand, we wanted to collect data to identify which factors people think are important for autonomous cars to use in resolving ethical tradeoffs.”

Beyond the results of the survey, Awad suggests, seeking public input about an issue of innovation and public safety should continue to become a larger part of the dialogue surrounding autonomous vehicles.

“What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions,” Awad says.

Educating the workforce of the future, in the age of accelerations

I have two kids in college, and one of my biggest concerns is that the knowledge they have labored so hard to acquire will be obsolete by the time they graduate. Our age is driven by the hypersonic acceleration of technology and data, forcing innovative educators to create new pedagogical systems that empower students with the skills today to lead tomorrow.
The new CyberNYC initiative announced last week by the City of New York is just one example of this growing partnership between online platforms and traditional academia in the hope of fostering a new generation of wage earners.

The goal of CyberNYC is to train close to 5% of the city’s working population to become “cyber specialists.” In order to accomplish this lofty objective, the NYCEDC forged an educational partnership with CUNY, NYU, Columbia, Cornell Tech, and iQ4. One of the most compelling aspects of the partnership is the advanced degree program offered by CUNY and Facebook, enabling students to earn a master’s in computer science in just a year through the online educational hosting site edX, which also enables users to stack credentials from other universities.

As Anant Agarwal, CEO of edX, explains, “The workplace is changing more rapidly today than ever before and employers are in need of highly-developed talent. Meanwhile, college graduates want to advance professionally, but are realizing they do not have the career-relevant skills that the modern workplace demands. EdX recognizes this mismatch between business and education for learners, employees and employers. The MicroMasters initiative provides the next level of innovation in learning to address this skills gap by creating a bridge between higher education and industry to create a skillful, successful 21st-century workforce.”

Realizing that not everyone is cut out for higher education, the Big Apple is also working to create boot camps to upskill existing tech operators in a matter of weeks with industry-specific cyber competencies. Fullstack Academy is leading the effort to create a catalogue of intensive boot camps throughout the boroughs. LaGuardia Community College (LAGCC) is also providing free preparatory courses for adults with minimum computing proficiency in order to qualify for Fullstack’s programs. Most importantly, LAGCC will act as a liaison to CyberNYC’s corporate partners to match graduates with open positions.

In 2012, Sebastian Thrun famously declared that “access to higher education should be a basic human right.” Thrun, who left his position running Google X to “democratize education” worldwide by launching a free online open-university platform, Udacity, is now transforming the learning paradigm. The German inventor is no stranger to innovation: in 2011 he unveiled at the TED Conference one of the first self-driving cars, inspired by the loss of his best friend in a car accident. Similar to CyberNYC’s fast-track master’s program, Udacity has teamed up with AT&T and Georgia Tech to offer comparable degrees for less than $7,000 (compared to $26,860 for an on-campus program).

In the words of AT&T’s Chief Executive Randall Stephenson, “We believe that high-quality and 100 percent online degrees can be on par with degrees received in traditional on-campus settings, and that this program could be a blueprint for helping the United States address the shortage of people with STEM degrees, as well as exponentially expand access to computer science education for students around the world.”

In 2003, Reid Hoffman launched LinkedIn, the first business social network. Today, there are more than a half billion profiles (resumes) posted on the site. Last March, Hoffman sat down with University of California President (and former Secretary of Homeland Security) Janet Napolitano to discuss the future of education. The leading advocate for entrepreneurship explained that he believes everyone should be “in permanent beta,” constantly consuming information. Hoffman argues this is the only way individuals and societies will be able to compete in a world driven by data and artificial intelligence. Universities like the UC system, Hoffman suggests, should move toward a cross-disciplinary system. As he puts it, “What we’re actually in fact primarily teaching is that learning how to learn as you get to new areas, not areas where it’s necessarily the apprenticeship model, which is we teach you this thing and you know how to do this one thing. You know how to do this thing really well, but actually, in fact, you’re going to be crossing domains. That’s how I would somewhat shift the focus overall in terms of thinking about it.”

In his book “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations,” Thomas Friedman quotes Nest Labs’ founder, Tony Fadell, as asserting that the future economy rests on businesses’ ability to turn “AI into IA,” or “Intelligent Assistants.” Friedman specifically singled out LinkedIn as one of these IAs, creating human networks that amplify people’s ability to find the best opportunities and the most in-demand skills. To make use of his IA, Hoffman advised Napolitano’s audience to be versatile: “As opposed to thinking about this as a career ladder, a career escalator, to think of it as more of a career jungle gym, that you’re actually going to be changing around in terms of industries. The exact shape of certain different job professions will change, and that you need to be adaptive with that.” He continued, “I do think that the notion that is still too often preached, which is you go to college, you discover your calling, and that’s your job for the next 50 years, that’s gone.” The key to harnessing this trend, says Hoffman, is “to constantly be learning and to be learning new things. Some of them by taking ongoing classes but some of them also by doing, and talking to people and finding out what the relevant things are, and then tracking what’s going on.”

All these efforts are not happening fast enough in the United States to fill the current gap between the 6.9 million job openings and the number of unemployed. While the unemployment rate is at a forty-year low, with only 6.2 million workers out of work, there are still more open job listings than job seekers. The primary reason cited by business leaders across the nation is that the current class of applicants lacks the versatility of skills required for the modern workplace, accelerating the push toward full automation. Eric Schmidt, former Executive Chairman of Google and Alphabet, claims that “Today we all live and work in a new era, the Internet Century, where technology is roiling the business landscape and the pace of change is accelerating.” This Internet Century (and by extension cloud computing and unmanned systems) requires a new type of worker, whom he affectionately calls the “smart creative”: someone who is first and foremost an “adaptive learner.”

The deficiency of graduating “smart creatives” could be the reason why America, which is almost at full employment, is still producing historically low output, resulting in stagnant wages. Mark Zandi, Moody’s Chief Economist, explains, “Wage growth feels low by historical standards and that’s largely because productivity growth is low relative to historical standards. Productivity growth between World War II and up through the Great Recession was, on average, 2 percent per annum. Since the recession 10 years ago, it’s been 1 percent.” The virtuous efforts of CyberNYC and other grassroots initiatives are only the first of many steps toward restructuring America’s educational framework to nurture a culture of smart creatives who are in permanent beta.

Industrial robots increase wages for employees

In addition to increasing productivity, the introduction of industrial robots has increased wages for employees. At the same time, industrial robots have also changed the labour market by increasing the number of job opportunities for highly skilled employees, while opportunities for low-skilled employees are declining.

Join the World MoveIt! Day code sprint on Oct 25, 2018

World MoveIt! Day is an international hackathon to improve the MoveIt! code base, documentation, and community. We hope to close as many pull requests and issues as possible and explore new areas of features and improvements for the now seven-year-old framework. Everyone is welcome to participate from their local workplace, simply by working on open issues. In addition, a number of companies and groups host meetings on their sites all over the world. A video feed will unite the various locations and enable more collaboration. Maintainers will take part in some of these locations.

 

Locations

  • Note that the Tokyo and Singapore locations will have their events on Friday the 26th, not Thursday the 25th.

General Information Contacts

  • Dave Coleman, Nathan Brooks, Rob Coleman // PickNik Consulting

Signup

Please state your intent to join the event on this form. Note that specific locations will have their own signups in addition to this form.

If you aren’t near an organized event we encourage you to have your own event in your lab/organization/company and video conference in to all the other events. We would also like to mail your team or event some MoveIt! stickers to schwag out your robots!

Logistics

What version of MoveIt! should you use?

We recommend the Kinetic LTS branch/release. The Melodic release is also a good choice but is new and has been tested less. The Indigo branch is considered stable and frozen – and only critical bug fixes will be backported.

For your convenience, a VirtualBox image for ROS Kinetic on Ubuntu 16.04 is available here.

Finding Where You Can Help

Suggested areas for improvement are tracked on MoveIt’s GitHub repo via several labels:

  • moveit day candidate labels issues as possible entry points for participants in the event. This list will grow longer before the event.
  • simple improvements indicates the issue can probably be tackled in a few hours, depending on your background.
  • documentation suggests new tutorials, changes to the website, etc.
  • assigned aids developers to find issues that are not already being worked on.
  • no label – of course issues that are not marked can still be worked on during World MoveIt! Day, though they will likely take longer than one day to complete.

If you would like to help the MoveIt! project by tackling an issue, claim the issue by commenting “I’ll work on this” and a maintainer will add the label “assigned”. Feel free to ask further questions in each issue’s comments. The developers will aim to reply to WMD-related questions before the event begins.

If you have ideas and improvements for the project, please add your own issues to the tracker, using the appropriate labels where applicable. It’s fine if you want to then claim them for yourself.

Further needs for documentation and tutorials improvement can be found directly on the moveit_tutorials issue tracker.

Other larger code sprint ideas can be found on this page. While they will take longer than a day, these ideas might provide a good reference for other things to contribute to on WMD.

Documentation

Improving our documentation is at least as important as fixing bugs in the system. Please add to our Sphinx and Markdown-based documentation within our packages and on the MoveIt! website. If you have extensively studied an aspect of MoveIt! that is not currently well documented, please convert your notes into a pull request in the appropriate location. If you’ve started a conversation on the mailing list or elsewhere in which a more experienced developer explained a concept, consider converting that answer into a pull request to help others with the same question in the future.

For more details on modifying documentation, see Contributing.

Video Conference and IRC

Join the conversation on IRC with #moveit at irc.freenode.net. For those new to IRC try this web client.

Join the video conference on Appear.In

Sponsorship

We’d like to thank the following sponsors:

PickNik Consulting

Iron Ox

Fraunhofer IPA

ROS-Industrial Asian Pacific Consortium

Tokyo Opensource Robotics Kyokai Association

OMRON SINIC X Corporation

Southwest Research Institute


Robots in Depth with Nicola Tomatis

In this episode of Robots in Depth, Per Sjöborg speaks with Nicola Tomatis about his long road into robotics and how BlueBotics handles indoor navigation and integrates it in automated guided vehicles (AGV).

Like many, Nicola started out tinkering when he was young, and then got interested in computer science as he wanted to understand it better.

Nicola gives us an overview of indoor navigation and its challenges. He shares a number of interesting projects, including professional cleaning and intralogistics in hospitals. We also find out what someone who wants to use indoor navigation and AGVs should think about.

This interview was recorded in 2016.

Models of dinosaur movement could help us build stronger robots and buildings

Researchers are using computer simulations to estimate how 11 different species of extinct archosaurs, such as Batrachotomus, might have moved. Image credit: John Hutchinson

By Sandrine Ceurstemont

From about 245 to 66 million years ago, dinosaurs roamed the Earth. Although well-preserved skeletons give us a good idea of what they looked like, the way their limbs worked remains a bigger mystery. But computer simulations may soon provide a realistic glimpse into how some species moved and inform work in fields such as robotics, prosthetics and architecture.

John Hutchinson, a professor of evolutionary biomechanics from the Royal Veterinary College in Hertfordshire, UK, and his colleagues are investigating the locomotion of the earliest, small dinosaurs, as part of the five-year-long Dawndinos project which began in 2016.

‘These dinosaurs have been hugely neglected,’ Prof. Hutchinson said. ‘People – including me – have mostly been studying the celebrity dinosaurs like T. rex.’

About 225 million years ago, during the late Triassic period, these small dinosaurs were in the minority, whereas the bigger crocodile-like animals that lived alongside them were more numerous and diverse. Dinosaurs somehow went on to thrive while most other animals from that period became extinct.

Compared to their quadrupedal, heavily built contemporaries, what stands out about these early dinosaurs is that they had an erect posture and could, at least intermittently, walk on two limbs. One theory is that their style of locomotion gave them a survival edge.

‘The idea of this project is to test that idea,’ Prof. Hutchinson said.

The team has started to develop computer simulations to estimate how 11 different species of extinct archosaurs – the group of animals that includes crocodiles, birds, their relatives and dinosaurs – might have moved. They will focus on five different types of motion: walking, running, turning, jumping and standing.

Simulations

To test whether their simulations are accurate, the researchers plan to give the same treatment to their living relatives – crocodiles and birds – as well. They will then compare the results to actual measurements of motion to determine how good their computer models of extinct animals are.

‘It will be the first time we ground-truth (test with empirical evidence) these methods very rigorously with the best possible data we can get,’ Prof. Hutchinson said.

So far, they’ve modelled the movement of a Mussaurus – an early cousin of giant plant-eating sauropod dinosaurs such as Brontosaurus. The Mussaurus was much smaller and researchers wanted to see whether it moved on four legs like its larger relatives. The first reconstructions of the animal had it on four legs because it had quite big arms, said Prof. Hutchinson.

Using scans of well-preserved fossils from Argentina, they were able to produce new models of its movement. Prof. Hutchinson and his team found that it was in fact bipedal. It couldn’t have walked on four legs since the palms of its front limbs faced inwards and the forearm joints weren’t capable of rotating downwards. Therefore, it wouldn’t have been able to plant its front legs on the ground.

‘It wasn’t until we put the bones together in a 3D environment and tried playing with their movements that it became clear to us that this wasn’t an animal with very mobile arms and hands,’ Prof. Hutchinson said.

After modelling the large forearm of the Mussaurus, the Dawndinos team realised that it could not be used for walking. Video courtesy: John Hutchinson

Robotics

The simulations produced during the project could be useful for zoologists. But they could have less obvious applications too, for example, helping to improve how robots move, according to Prof. Hutchinson.

Accurate models are needed to replicate the motion of animals, which robotics researchers often take inspiration from. Mimicking a crocodile, for example, could be of interest to create a robot that can both swim and walk on land.

Prof. Hutchinson also regularly gets contacted by film and documentary makers who are interested in using his simulations to create realistic animations. ‘It’s hard to make bigger, or unusual, animals move correctly if the physics isn’t right,’ Prof. Hutchinson said.

Understanding the locomotion of the very largest dinosaurs is the aim of a project being undertaken by paleobiology researcher Alexandra Houssaye and her colleagues from France’s National Centre for Scientific Research and the National Museum of Natural History in Paris. Through their Gravibone project, which began last year, they want to pin down the limb bone adaptations that allow large animals to carry a heavy skeleton.

‘We really want to understand what (bone features) are linked to being massive,’ Dr Houssaye said.

Massive

So far, research has shown that the long bones in the limbs of bigger animals are more robust than those of smaller animals. But this general trend has only been superficially observed. The outer and inner bone structures have adapted over time to help support animals’ weight. For example, whereas smaller terrestrial animals have hollow limb bones, massive ones like elephants, rhinos and hippos have connective tissue in the middle.

Among the largest animals and their ancestors there are also other differences. The limb bones of modern rhinos, for example, are short and heavy. But their prehistoric relative Indricotherium, the largest land mammal that ever lived, had a less stocky skeleton. ‘It’s interesting to see that the biggest didn’t have the most massive (frame),’ Dr Houssaye said.

The team is studying both living and extinct animals, focussing on elephants, rhinos, hippos, prehistoric mammals and dinosaurs such as sauropods – a group that includes the biggest terrestrial animals of all time.

So far, they have compared the ankle bones of horses, tapirs, rhinos and fossils of rhinos’ ancestors. They found that for animals of the same mass there were differences depending on whether they were short and stout or had longer limbs. In less stocky animals, the two ankle bones tended to be more distinct, whereas they were more strongly connected in those that were massively built, probably to reinforce the articulation.

‘It’s not only the mass (of the animal) but how the mass is distributed on the body,’ said Dr Houssaye. ‘For us that was interesting.’

3D modelling

Their next step will be to scan different limb bones and analyse their inner structure. They will also use 3D modelling to figure out how much weight different parts of the bones can handle in different spots, for example.

The results from the project could help make more efficient prosthetics for people and animals, Dr Houssaye said. Designers will be able to better understand how different features of limb bones, such as thickness and orientation, relate to their strength, enabling them to create materials that are lighter but more resistant. 

Similarly, Dr Houssaye has also had interest from the construction industry which is looking for new types of materials and more effective building techniques. Pillars supporting heavy buildings, for example, could be made using less material by improving their inner structure instead.

‘How a skeleton adapts (to heavy weight) has implications for construction,’ Dr Houssaye said. ‘(Architects) are trying to create structures that are able to support heavy weight.’

The research in this article was funded by the European Research Council.

A step toward personalized, automated smart homes


MIT researchers have built a system that takes a step toward fully automated smart homes, by identifying occupants even when they’re not carrying mobile devices. Image: Chelsea Turner, MIT

By Rob Matheson

Developing automated systems that track occupants and self-adapt to their preferences is a major next step for the future of smart homes. When you walk into a room, for instance, a system could set the temperature to your preference. Or when you sit on the couch, a system could instantly flick the television to your favorite channel.

But enabling a home system to recognize occupants as they move around the house is a more complex problem. Recently, systems have been built that localize humans by measuring the reflections of wireless signals off their bodies. But these systems can’t identify the individuals. Other systems can identify people, but only if they’re always carrying their mobile devices. Both systems also rely on tracking signals that could be weak or get blocked by various structures.

MIT researchers have built a system that takes a step toward a fully automated smart home by identifying occupants, even when they’re not carrying mobile devices. The system, called Duet, uses reflected wireless signals to localize individuals. But it also incorporates algorithms that ping nearby mobile devices to predict the individuals’ identities, based on who last used the device and their predicted movement trajectory. It also uses logic to figure out who’s who, even in signal-denied areas.

“Smart homes are still based on explicit input from apps or telling Alexa to do something. Ideally, we want homes to be more reactive to what we do, to adapt to us,” says Deepak Vasisht, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author on a paper describing the system that was presented at last week’s Ubicomp conference. “If you enable location awareness and identification awareness for smart homes, you could do this automatically. Your home knows it’s you walking, and where you’re walking, and it can update itself.”

Experiments done in a two-bedroom apartment with four people and an office with nine people, over two weeks, showed the system can identify individuals with 96 percent and 94 percent accuracy, respectively, including when people weren’t carrying their smartphones or were in blocked areas.

But the system isn’t just a novelty. Duet could potentially be used to recognize intruders or ensure visitors don’t enter private areas of your home. Moreover, Vasisht says, the system could capture behavioral-analytics insights for health care applications. Someone suffering from depression, for instance, may move around more or less, depending on how they’re feeling on any given day. Such information, collected over time, could be valuable for monitoring and treatment.

“In behavioral studies, you care about how people are moving over time and how people are behaving,” Vasisht says. “All those questions can be answered by getting information on people’s locations and how they’re moving.”

The researchers envision that their system would be used with explicit consent from anyone who would be identified and tracked with Duet. If needed, they could also develop an app for users to grant or revoke Duet’s access to their location information at any time, Vasisht adds.

Co-authors on the paper are: Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; former CSAIL researcher Anubhav Jain ’16; and CSAIL PhD students Chen-Yu Hsu and Zachary Kabelac.

Tracking and identification

Duet is a wireless sensor, about a foot and a half square, installed on a wall. It incorporates a floor map with annotated areas, such as the bedroom, kitchen, bed, and living room couch. It also collects identification tags from the occupants’ phones.

The system builds upon a device-based localization system built by Vasisht, Katabi, and other researchers that tracks individuals within tens of centimeters, based on wireless signal reflections from their devices. It does so by using a central node to calculate the time it takes the signals to hit a person’s device and travel back. In experiments, the system was able to pinpoint where people were in a two-bedroom apartment and in a café.
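The round-trip-time idea itself is simple, even though the real system’s signal processing is far more involved than this back-of-the-envelope sketch (the numbers are purely illustrative):

```python
# Distance from round-trip time: the signal travels to the device and
# back, so the one-way distance is half the round trip at light speed.
C = 3.0e8                                  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    return C * t_seconds / 2.0

# A round trip of ~33.4 nanoseconds corresponds to a device ~5 m away.
print(distance_from_round_trip(33.4e-9))   # ~5.0 m
```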

The system, however, relied on people carrying mobile devices. “But in building [Duet] we realized, at home you don’t always carry your phone,” Vasisht says. “Most people leave devices on desks or tables, and walk around the house.”

The researchers combined their device-based localization with a device-free tracking system, called WiTrack, developed by Katabi and other CSAIL researchers, that localizes people by measuring the reflections of wireless signals off their bodies.

Duet locates a smartphone and correlates its movement with individual movement captured by the device-free localization. If both are moving in tightly correlated trajectories, the system pairs the device with the individual and, therefore, knows the identity of the individual.

To ensure Duet knows someone’s identity when they’re away from their device, the researchers designed the system to capture the power profile of the signal received from the phone when it’s used. That profile changes, depending on the orientation of the signal, and that change can be mapped to an individual’s trajectory to identify them. For example, when a phone is used and then put down, the system will capture the initial power profile. Then it will estimate how the power profile would look if it were still being carried along a path by a nearby moving individual. The closer the changing power profile correlates to the moving individual’s path, the more likely it is that individual owns the phone.
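A minimal sketch of that pairing idea, under heavy simplifying assumptions that are mine rather than the paper’s (1-D positions, a plain Pearson correlation, an arbitrary threshold):

```python
# Pair a phone with a person when their trajectories move in lockstep:
# device-based localization gives the phone's track, device-free tracking
# gives each person's track, and correlation decides the match.
import math

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def pair_device_to_person(device_track, person_tracks, threshold=0.9):
    """device_track and each person track: 1-D positions over time."""
    best_id, best_r = None, threshold
    for person_id, track in person_tracks.items():
        r = correlation(device_track, track)
        if r > best_r:
            best_id, best_r = person_id, r
    return best_id            # None if nothing is correlated enough

phone = [0.0, 0.5, 1.1, 1.6, 2.2]
people = {"Alisha": [0.1, 0.6, 1.0, 1.7, 2.1],
          "Betsy": [2.0, 2.0, 1.9, 2.0, 2.0]}
print(pair_device_to_person(phone, people))   # -> Alisha
```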

Logical thinking

One final issue is that structures such as bathroom tiles, television screens, mirrors, and various metal equipment can block signals.

To compensate for that, the researchers incorporated probabilistic algorithms to apply logical reasoning to localization. To do so, they designed the system to recognize entrance and exit boundaries of specific spaces in the home, such as doors to each room, the bedside, and the side of a couch. At any moment, the system will recognize the most likely identity for each individual in each boundary. It then infers who is who by process of elimination.

Suppose an apartment has two occupants: Alisha and Betsy. Duet sees Alisha and Betsy walk into the living room, by pairing their smartphone motion with their movement trajectories. Both then leave their phones on a nearby coffee table to charge — Betsy goes into the bedroom to nap; Alisha stays on the couch to watch television. Duet infers that Betsy has entered the bed boundary and didn’t exit, so must be on the bed. After a while, Alisha and Betsy move into, say, the kitchen — and the signal drops. Duet reasons that two people are in the kitchen, but it doesn’t know their identities. When Betsy returns to the living room and picks up her phone, however, the system automatically re-tags the individual as Betsy. By process of elimination, the other person still in the kitchen is Alisha.
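The elimination step in that example fits in a few lines. The sketch below is purely illustrative: it keeps a set of candidate identities per anonymous track and prunes the other sets whenever one identity is re-confirmed, for instance when Betsy picks up her phone.

```python
# Process-of-elimination identity tracking, as in the Alisha/Betsy example.
def eliminate(candidates, person, confirmed_identity):
    """candidates: dict mapping anonymous track -> set of possible identities."""
    candidates[person] = {confirmed_identity}
    for other, ids in candidates.items():
        if other != person:
            ids.discard(confirmed_identity)   # nobody else can be this identity
    return candidates

# Two unidentified people enter the kitchen; either could be Alisha or Betsy.
candidates = {"person_1": {"Alisha", "Betsy"},
              "person_2": {"Alisha", "Betsy"}}
# person_1 returns to the living room and picks up Betsy's phone.
eliminate(candidates, "person_1", "Betsy")
print(candidates)   # person_2's set has collapsed to {"Alisha"}
```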

“There are blind spots in homes where systems won’t work. But, because you have a logical framework, you can make these inferences,” Vasisht says.

“Duet takes a smart approach of combining the location of different devices and associating it to humans, and leverages device-free localization techniques for localizing humans,” says Ranveer Chandra, a principal researcher at Microsoft, who was not involved in the work. “Accurately determining the location of all residents in a home has the potential to significantly enhance the in-home experience of users. … The home assistant can personalize the responses based on who all are around it; the temperature can be automatically controlled based on personal preferences, thereby resulting in energy savings. Future robots in the home could be more intelligent if they knew who was where in the house. The potential is endless.”

Next, the researchers aim for long-term deployments of Duet in more spaces and to provide high-level analytic services for applications such as health monitoring and responsive smart homes.

What’s the legacy of Rethink Robotics?

Baxter – Rethink Robotics

With the recent demise of Rethink Robotics, there were dozens of testimonials that the company revolutionized industrial robotics and kickstarted the collaborative robotics trend. There is no doubt that Baxter and Sawyer were truly innovative and more sophisticated than the average industrial robot. They were also safer than most other cobots, though at the expense of precision. So was Rethink Robotics the pioneer of collaborative robots?

Rethink Robotics was certainly one of the first companies to enter the collaborative robots market. However, I don’t think that the company had a major impact on that industry. Certainly, Rethink’s marketing team did an exceptional job promoting the concept of cobots, and thanks to Baxter (which sold for only US$25,000, roughly a quarter of the price of Kawada’s Nextage, featured in the above photo), hundreds of research labs were able to safely train students in the field of robotics and AI. However, Rethink’s technology itself did not influence any other robot manufacturer.

Baxter was released in September 2012, but it was only in May 2013 that our lab managed to acquire one of the very first Baxters outside the USA. (We now have two of them: one is the research version, the other is the industrial model.) By that time, Universal Robots had already sold more than a thousand cobots. It was only in March 2015 that Rethink Robotics introduced the more practical Sawyer. Two months later, Denmark-based Universal Robots was acquired by Teradyne, a neighbor of Rethink Robotics, for $350M. By that time, Universal Robots had sold more than 4,000 cobots, whereas Rethink had shipped only several hundred Baxters, mostly to academia.

Today, there are dozens of cobot models that closely “resemble” the 25,000 (!) robot arms already sold by Universal Robots. And even the latest version of the UR arms, while more sophisticated than the EasyBot (the very first version of the UR5), shares none of the advanced features of Rethink’s cobots (many of which are patented, of course). The e-Series UR cobots have no embedded camera, no seven degrees of freedom, no torque sensors in the joints, no series elastic actuators, and no self-collision avoidance algorithms. Universal Robots started with a simple, easy-to-use programming interface, a relatively standard, easy-to-manufacture robot arm, and some basic safety functionality. The company then gradually improved its product and, most importantly, its distribution network. Rethink Robotics, by contrast, immediately released a technology that had never been tested before.

Rethink Robotics developed cobots that were far too different from other industrial robots, both from a technological and a user point of view. Targeting users who may not be engineers with a robotics background does not mean you must build a perfectly safe, friendly-faced robot that can only be programmed through demonstration. I truly feel for the company’s engineers; they did create two revolutionary robots that marked the robotics field and will continue to be mentioned in every future textbook on robotics. However, I doubt that there will be much interest in the company’s patent portfolio and other intellectual property, which are currently for sale.

Learning acrobatics by watching YouTube

By Xue Bin (Jason) Peng and Angjoo Kanazawa

Whether it’s everyday tasks like washing our hands or stunning feats of acrobatic prowess, humans are able to learn an incredible array of skills by watching other humans. With the proliferation of publicly available video data from sources like YouTube, it is now easier than ever to find video clips of whatever skills we are interested in.
...
Simulated characters imitating skills from YouTube videos.

A staggering 300 hours of videos are uploaded to YouTube every minute. Unfortunately, it is still very challenging for our machines to learn skills from this vast volume of visual data. Most imitation learning approaches require concise representations, such as those recorded from motion capture (mocap). But getting mocap data can be quite a hassle, often requiring heavy instrumentation. Mocap systems also tend to be restricted to indoor environments with minimal occlusion, which can limit the types of skills that can be recorded. So wouldn’t it be nice if our agents could also learn skills by watching video clips?

In this work, we present a framework for learning skills from videos (SFV). By combining state-of-the-art techniques in computer vision and reinforcement learning, our system enables simulated characters to learn a diverse repertoire of skills from video clips. Given a single monocular video of an actor performing some skill, such as a cartwheel or a backflip, our characters are able to learn policies that reproduce that skill in a physics simulation, without requiring any manual pose annotations.
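In spirit, the pipeline first recovers a reference motion from the video via pose estimation, then trains a control policy whose reward grows as the simulated character’s pose matches that reference. The sketch below shows only a reward of that flavor; the exponential-of-squared-error form, the scale constant, and the pose representation are illustrative assumptions, not the paper’s exact objective.

```python
import math

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """Reward for matching the video-derived reference pose:
    1.0 at a perfect match, decaying toward 0 as pose error grows."""
    err = sum((s - r) ** 2 for s, r in zip(sim_pose, ref_pose))
    return math.exp(-scale * err)

# A simulated pose close to the reference recovered from video scores high.
ref = [0.0, 1.2, -0.4]      # stand-in joint angles from pose estimation
sim = [0.1, 1.1, -0.5]
print(imitation_reward(sim, ref))   # ~0.94
```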


What happens to human driven cars in the robocar world?


I love to talk about the coming robocar world. Over the next few decades, more and more trips will be made in robocars, and more and more people will reduce or give up car ownership to live the robotaxi life. This won’t be instantaneous, and it will happen in some places decades before it happens in others, but I think it’s coming.

But what of the driver of the regular car? What lies ahead for those who love driving and want to own a traditional car? I often see people declare that nobody will own cars in the future, and that human driving will even be banned. Is that realistic?

Nobody restricts human driving for quite some time

The transition to robocars must be gradual, one car at a time, at least in most of the world. That means lots of human driven cars on all the roads for decades to come.

Some people predict that human driving will quickly be banned. This won’t happen in most places, simply because there will still be lots of places robocars don’t go, because it’s not commercially viable to certify them there. In addition, there will be strong political opposition. At a rough guess, around a third of people never have a car accident in their lives. What is the justification for taking away their licences?

When I give talks on robocars, I usually get some people telling me they can’t imagine why anybody would drive or own a car in the future, and others declaring that only a minority will give up the fun, freedom and control of manual driving. The real answer will be a mix. Though to those who tell me that Americans love cars too much to ever give them up, I ask how it is that car-loving Californians can move to Manhattan and give up car ownership in 15 minutes.

Restricted zones

We might see the rise of robocar-only lanes in certain places. There might be a special highway lane, where faster driving is allowed, and platooning takes place.

More dramatic would be the designation of certain downtown areas as robocar only at certain times or all the time. It’s not unusual for there to be driving restrictions in downtowns, particularly “old city” downtowns with small streets as found in Europe. Sometimes downtown streets are converted to pedestrian malls as well. It’s possible to imagine the robocars being deemed well behaved enough to go into these restricted areas.

We might also see robocars allowed to access, under certain rules, the private right-of-way used by transit lines, particularly bus rapid transit paths. It’s also possible rail lines and tunnels could get partially paved to allow robocars to use them when transit vehicles are not. They can be trusted not to interfere, and they can also drive reliably on thin strips of pavement — like rails for tires — if necessary.

This is not taking anything away from regular cars, but rather it’s giving the robocars a privilege the traditional cars never got.

More threatening to the human driver might be time restrictions or congestion restrictions. Once robocars provably cause less congestion, we might see congestion taxes on human drivers, or limitations on human driving during congested times, just as we sometimes see for trucks.

New infrastructure only for robocars

Many years down the road, cities might realize that when building new infrastructure, there are advantages in making it only for robocars, or for certain classes of robocars, such as lightweight electric single passenger cars. You can build very small tunnels for them, or much cheaper bridges and elevated roadways. They can drive reliably on a thin lane, saving a lot of money. Your old car won’t go there, but then again, it can’t go there now.

It’s also possible we might see housing neighbourhoods that take advantage of some things robocars can do, like serve the homes only with a single lane back alley, which comes with occasional wide spots for cars to do perfectly timed passing of one another on the short lanes.

Like many people, I think that robocars will be electric. As we get more electric cars, they might get privileges on the road you don’t get if you drive a manual gas guzzler. This even includes things like driving into buildings.

The fun roads will still be fun roads

The vast majority of the changes due to robocars will be on urban streets and highways. Commuting roads. Even those who love to drive don’t tend to love the urban commute. The country roads, the scenic coastal and mountain routes — these will stay pretty much the same for a long time to come. In fact, you might like having a robocar drive you out of the city to places where it’s fun to drive, and then take the wheel (or switch to a sportscar) to enjoy those great drives.

New traffic control and virtual infrastructure

Robocars must mix with regular cars, but they are able to do a better job at helping reduce traffic congestion by obeying on-the-fly rules about road use. They make use of “virtual infrastructure” — things like maps that show everything about the roads and their rules.

Drivers of regular cars can participate in this as well, with just a smartphone. If the day comes when cities want to meter or charge for roads using computer networks, the robocars will be ready to do it, but so will the ordinary drivers as long as they have that phone.

Parking will dwindle, but get cheaper for robocars

Robotaxis don’t need to park that much — they would rather be out working on another ride. Private robocars will drop their masters off at the door and then go find the cheapest parking nearby. That parking doesn’t have to be right nearby, they will shop around in a way that human drivers can’t. And park themselves densely in valet style. So they won’t pay that much for parking. You’ll probably have to pay more since you need to park your car right where you’re going while they don’t. You also take up more space.

Many parking spaces on the street might be restricted to robocars at certain times. That’s because robocars don’t park, they “stand,” able to move away at any time. A robocar can stand in front of a driveway or even a fire hydrant. And we might see important streets that only let robocars park on them as rush hour approaches, because they can be trusted to reliably clear the parking lanes and make them open for traffic. Robocars can also double park where there is room, because they can get out of the way if the “blocked” car wants to get out.

Trucks

The existence of cheap delivery robots might change how much people want trucks or vehicles with lots of cargo space. If you know that, when you want to move a big thing, you can easily call up a delivery robot, you might reduce your desire for cargo space. On the other hand, if you want a van with everything in it all the time, you might still go for that.

Your insurance might go down a bit

The price of insurance is based on how frequently people like you have accidents. Unless you think you will have more accidents, your insurance won’t go up. In fact, as the roads get safer and more orderly, you might have fewer accidents. And late model manually driven cars will still be loaded up with new accident prevention features, some of them a result of robocar research. That means your insurance will get a bit cheaper. Of course, the robocars will drive more safely than you, and pay even less for the cost of accidents. If not, they’re doing it wrong.

Some kids never get a licence

Kids are alive today who will never get a licence, thanks to robocars. That’s easy to predict, because kids are alive today who are not getting licences thanks to transit and Uber. This will grow. As it grows, it will become more normal, which means more effort to accommodate them, and a bit more pressure against human driving in some areas and times. Parents will cut back on the idea of giving a teenager a car, preferring to offer them a “card” which provides robotaxi service. A time will come when preferring to drive manually will be looked on as a touch odd, a habit of the older generation and the enthusiast.

Still plenty of places without robocars

As I outline in my timeline for deployment, robocars will roll out on very different schedules based on where you are. There will be cities that have heavy robotaxi penetration in the 2020s, while other cities have next to none. There will be countries which have no robocars even in the 2030s.

That’s good news for people who want to give up car ownership and sell their existing car. Even if they might have a hard time selling a used car in a city where car ownership is decreasing, there will still be plenty of places, both in the USA and around the world, where the car can be sold.

In addition, robotaxi service will not be available in rural areas for a very long time. Out there you will find only traditional cars and privately owned robocars. There will be many places to enjoy driving and sell cars.

Conversely, there is talk that countries like China, which have been known to do bold projects and are building brand-new cities, might declare a city to be all-robocars before too long. While that’s possible, it is not going to affect how you drive outside that city.

Privately owned robocars will have a steering wheel available, though if it’s for rare use it may be more like a video game wheel. You’ll need it to drive places robocars are not rated to go, like off-road or the dirt road to grandpa’s house — or just the many places the robocar doesn’t have maps and safety certification for. This probably won’t be “fun” sports driving, but it could be. Taxis won’t have these wheels (except for use by staff if the vehicle fails) but might drop you off next to a vehicle that has a wheel if that’s needed to complete your trip.

Society may not tolerate your mistakes

The biggest threat to the lover of driving is that while you won’t get any more dangerous, you will be more dangerous by comparison. You might not get away with things you could get away with before when everybody was dangerous.

It’s possible that you might lose your licence after one or two accidents. Perhaps even after just one DUI or a few serious tickets. Today, society doesn’t want to take away people’s licences because that destroys their mobility. In the future, it won’t — you will still be able to get around. So society may not tolerate you making big mistakes.

This also might happen more to people as they get older. Again, today we don’t want to take away a senior’s licence even when they are half-blind, because that ends their mobile life. Not any more.

The more distant future

As time passes, a new generation will grow up without learning how to drive. A couple of generations from now, manual driving may be mostly a sport or affectation. Still allowed, but uncommon. After all, people still ride horses a century after they stopped being used for transportation.

In this far away time, driving may be seen mostly as a sport, not a means of transportation. Racetracks and special roads will still exist, but for fun, not travel.
