Archive 09.10.2018


25 women in robotics you need to know about – 2018

From driving rovers on Mars to improving farm automation for Indian women, once again we’re bringing you a list of 25 amazing women in robotics! These women cover all aspects of the robotics industry, spanning research, product, and policy. They are founders and leaders, investigators and activists. They are early career stage and emeritus. There is a role model here for everyone! And there is no excuse – ever – not to have a woman speaking on a panel on robotics and AI.

But to start, here’s some news about previous nominees (and this is just a sample because we’ve showcased over 125 women so far and this is our 6th year).

In 2013, Melonee Wise was just launching her first startup! Since then she’s raised $48 million USD for Fetch Robotics, whose Fetch and Freight robots are rolling out in warehouses all over the world! Maja Mataric’s startup Embodied Inc has raised $34.4 million for home companion robots. Amy Villeneuve has moved from President and COO of Kiva Systems and VP at Amazon Robotics to the boards of directors of four new robotics startups. And Manuela Veloso joined the Corporate & Investment Bank J.P. Morgan as head of Artificial Intelligence (AI) Research.

In 2014, Sampriti Bhattacharya was a PhD student at MIT; since then she has turned her research into a startup, Hydroswarm, and been named one of Forbes’ 30 most powerful young change agents. Noriko Kageki moved from Kawada Robotics Corp in Japan to join the very female-friendly Omron Adept Technologies in Silicon Valley.

2015’s Cecilia Laschi and Barbara Mazzolai are driving the Preparatory Action for a European Robotics Flagship, which has the potential to become a 1B EUR project. The goal is to make new robots and AIs that are ethically, socially, economically, energetically, and environmentally responsible and sustainable. And PhD candidate Kavita Krishnaswamy, who depends on robots to travel, has received Microsoft, Google, Ford and NSF fellowships to help her design robots for people with disabilities.

2015’s Hanna Kurniawati was a Keynote Speaker at IROS 2018 (Oct 1-5) in Madrid, Spain, as were nominees Raquel Urtasun, Jamie Paik, Barbara Mazzolai, Anca Dragan, Amy Loutfi (and many more women!). 2017’s Raia Hadsell was the Plenary Speaker at ICRA 2018 (May 21-25) in Brisbane, Australia. And while it’s great to see so many women showcased this year at robotics conferences – don’t forget 2015, when the entire ICRA organizing committee was composed of women.

ICRA 2015 Organizing Committee

2016’s Vivian Chu finished her Social Robotics PhD and founded the robotics startup Diligent Robotics with her supervisor Dr Andrea Thomaz (featured in 2013). Their hospital robot Moxi was just featured on the BBC. And 2016’s Gudrun Litzenberger was just awarded the Engelberger Award by the RIA, joining 2013’s Daniela Rus. (The RIA is finally recognizing the role of women after we/Andra pointed out that they’d given out 120 awards and only 1 was to a woman – Bala Krishnamurthy in 2007 – and it is now also offering grants for eldercare robots and women in robotics.)

We try to cover the whole globe, not just the whole career journey, for women in robotics – so we welcome a nominee from Ashesi University in Ghana to this year’s list! So, without further ado, here are the 25 Women In Robotics you need to know about – 2018 edition!

Crystal Chao

Chief Scientist of AI/Robotics – Huawei

Crystal Chao is Chief Scientist at Huawei and the Global Lead of Robotics Projects, overseeing a team that operates in Silicon Valley, Boston, Shenzhen, Beijing, and Tokyo. She has worked with every part of the robotics software stack across her previous roles, including a stint at X, Google’s moonshot factory. In 2012, Chao won the ICMI Outstanding Doctoral Consortium Paper Award for her PhD work at Georgia Tech, where she developed an architecture for social human-robot interaction (HRI) called CADENCE (Control Architecture for the Dynamics of Natural Embodied Coordination and Engagement), enabling a robot to collaborate fluently with humans using dialogue and manipulation.

Sougwen Chung

Interdisciplinary Artist

Sougwen Chung is a Chinese-born, Canadian-raised artist based in New York. Her work explores the mark-made-by-hand and the mark-made-by-machine as an approach to understanding the interaction between humans and computers. Her speculative critical practice spans installation, sculpture, still image, drawing, and performance. She is a former research fellow at MIT’s Media Lab and an inaugural member of NEW INC, the first museum-led art and technology incubator, launched by The New Museum. She received a BFA from Indiana University and a master’s diploma in interactive art from Hyper Island in Sweden.

Emily Cross

Professor of Social Robotics / Director of SoBA Lab

Emily Cross is a cognitive neuroscientist and dancer. As the Director of the Social Brain in Action Laboratory (www.soba-lab.com), she explores how our brains and behaviors are shaped by different kinds of experience throughout our lifespans and across cultures. She is currently the Principal Investigator on the European Research Council Starting Grant entitled ‘Social Robots’, which runs from 2016-2021.

Rita Cucchiara

Full Professor / Head of AImage Lab

Rita Cucchiara is Full Professor of Computer Vision at the Department of Engineering “Enzo Ferrari” of the University of Modena and Reggio Emilia, where since 1998 she has led AImageLab, a lab devoted to computer vision and pattern recognition, AI and multimedia. She coordinates the RedVision Lab UNIMORE-Ferrari for human-vehicle interaction. She was President of the Italian Association in Computer Vision, Pattern Recognition and Machine Learning (CVPL) from 2016 to 2018, and is currently Director of the Italian CINI Lab in Artificial Intelligence and Intelligent Systems. In 2018 she was the recipient of the IAPR Maria Petrou Prize.

Sanja Fidler

Assistant Professor / Director of AI at NVIDIA

Sanja Fidler is Director of AI at NVIDIA’s new Toronto lab, conducting cutting-edge research in machine learning, computer vision, graphics, and the intersection of language and vision. She remains Assistant Professor at the Department of Computer Science, University of Toronto. She is the recipient of the Amazon Academic Research Award (2017) and the NVIDIA Pioneer of AI Award (2016). She completed her PhD in computer science at the University of Ljubljana in 2010, and has served as a Program Chair of the 3DV conference, and as an Area Chair of CVPR, EMNLP, ICCV, ICLR, and NIPS.

Kanako Harada

ImPACT Program Manager

Kanako Harada is Program Manager of the ImPACT program “Bionic Humanoids Propelling New Industrial Revolution” of the Cabinet Office, Japan. She is also Associate Professor in the departments of Bioengineering and Mechanical Engineering, School of Engineering, at the University of Tokyo, Japan. She obtained her M.Sc. in Engineering from the University of Tokyo in 2001, and her Ph.D. in Engineering from Waseda University in 2007. She worked for Hitachi Ltd., the Japan Association for the Advancement of Medical Equipment, and Scuola Superiore Sant’Anna, Italy, before joining the University of Tokyo. Her research interests include surgical robots and surgical skill assessment.

Jessica Hodgins

Professor / FAIR Research Mgr and Operations Lead

Jessica Hodgins is a Professor in the Robotics Institute and Computer Science Department at Carnegie Mellon University, and the new lead of Facebook’s AI Research (FAIR) lab in Pittsburgh. The FAIR lab will focus on robotics, lifelong learning systems, teaching machines to reason, and AI in support of creativity. From 2008 to 2016, Hodgins founded and ran research labs for Disney, rising to VP of Research and leading the labs in Pittsburgh and Los Angeles. She received her Ph.D. in Computer Science from Carnegie Mellon University in 1989. She has received an NSF Young Investigator Award, a Packard Fellowship, a Sloan Fellowship, the ACM SIGGRAPH Computer Graphics Achievement Award, and in 2017 she was awarded the Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics. Her groundbreaking research focuses on computer graphics, animation, and robotics, with an emphasis on generating and analyzing human motion.

Heather Justice

Mars Exploration Rover Driver

Heather Justice has the dream job title of Mars Exploration Rover Driver, and is a Software Engineer at NASA JPL. As a 16-year-old watching the first Rover landing on Mars, she said: “I saw just how far robotics could take us and I was inspired to pursue my interests in computer science and engineering.” Justice graduated from Harvey Mudd College with a B.S. in computer science in 2009 and earned an M.S. from the Robotics Institute at Carnegie Mellon University in 2011, along the way interning at three different NASA centers and working in a variety of research areas including computer vision, mobile robot path planning, and spacecraft flight rule validation.

Sue Keay

COO

Sue Keay is the Chief Operating Officer of the ACRV and in 2018 launched Australia’s first National Robotics Roadmap at Parliament House. A university medallist and Jaeger scholar, Sue has more than 20 years’ experience in the research sector, managing and ensuring impact from multidisciplinary R&D programs and teams. She has a PhD in Earth Sciences from the Australian National University and was an ARC post-doctoral fellow at the University of Queensland, before turning to science communication, research management, research commercialisation, and IP management. Keay is a graduate of the Australian Institute of Company Directors, and chairs the IP and Commercialisation Committee for the Board of the CRC for Optimising Resource Extraction. In 2017, Keay was also named one of the first Superstars of STEM by Science & Technology Australia.

Erin Kennedy

Founder

Erin Kennedy is a robot maker and the founder of Robot Missions, an organization that empowers communities to embark on missions aimed at helping our planet using robots. She designed and developed a robot to collect shoreline debris, replicable anywhere with a 3D printer. Kennedy studied digital fabrication at the Fab Academy, and worked with a global team at MIT on a forty-eight-hour challenge during Fab11 to build a fully functional submarine. A former fellow in social innovation and systems thinking at the MaRS Discovery District’s Studio Y, Kennedy has been recognized as a finalist in the Lieutenant Governor’s Visionaries Prize (Ontario), and her previous robotic work has been featured in Forbes, Wired, and IEEE Spectrum, and on the Discovery Channel.

Katherine Kuchenbecker

Director at Max Planck Institute for Intelligent Systems / Associate Professor

Katherine J. Kuchenbecker is Director and Scientific Member at the Max Planck Institute for Intelligent Systems in Stuttgart, on leave from the Department of Computer and Information Science at UPenn. Kuchenbecker received her PhD in Mechanical Engineering from Stanford University in 2006. She has received the IEEE Robotics and Automation Society Academic Early Career Award, the NSF CAREER Award, and the Best Haptic Technology Paper award at the IEEE World Haptics Conference. Her keynote at RSS 2018 is online. Kuchenbecker’s research expertise is in the design and control of robotic systems that enable a user to touch virtual objects and distant environments as though they were real and within reach, uncovering new opportunities for haptics in interactions between humans, computers, and machines.

Jasmine Lawrence

Technical Program Manager – Facebook

Jasmine Lawrence currently serves as a Technical Program Manager on the Building 8 team at Facebook, a research lab developing hardware projects in the style of DARPA. Previously, she served as a Technical Program Manager at SoftBank Robotics, where she led a multidisciplinary team creating software for social, humanoid robots. Before that she was a Program Manager at Microsoft on the HoloLens Experience team and the Xbox Engineering team. Lawrence earned her B.S. in Computer Science from the Georgia Institute of Technology, and her M.S. in Human Centered Design & Engineering from the University of Washington. At the age of 13, after attending an NFTE BizCamp, Jasmine founded EDEN BodyWorks to meet her own need for affordable natural hair and body care products. After almost 14 years in business, her products are available at Target, Wal-Mart, CVS, Walgreens, Amazon.com, Kroger, HEB, and Sally Beauty Supply stores, to name a few.

Jade Le Maître

CTO & CoFounder – Hease Robotics

Jade Le Maître spearheads the technical side of Hease Robotics, which builds robots for the retail industry and customer service. With a background in engineering and having conducted a research project on human-robot interaction, Le Maître found her passion working in the science communication sector. She then cofounded Hease Robotics to bring the robotics experience to the consumer.

Laura Margheri

Programme Manager and Knowledge Transfer Fellow – Imperial College London

Laura Margheri develops the scientific program and manages the research projects at the Aerial Robotics Laboratory at Imperial College London, managing international and multidisciplinary partnerships. Before joining Imperial College, she was a project manager and postdoctoral fellow at the BioRobotics Institute of the Scuola Superiore Sant’Anna. Margheri has an M.S. in Biomedical Engineering (with Honours) and a PhD in BioRobotics (with Honours). She is also a member of the IEEE RAS Technical Committee on Soft Robotics and of the euRobotics Topic Group on Aerial Robotics, with interdisciplinary expertise in bio-inspired robotics, soft robotics, and aerial robotics. Since the beginning of 2014 she has been the Chair of the Women In Engineering (WIE) Committee of the IEEE Robotics & Automation Society.

Brenda Mboya

Undergraduate Student – Ashesi University Ghana

Brenda Mboya is just finishing a B.S. in Computer Science at Ashesi University in Ghana. A technology enthusiast who enjoys working with young people, she also volunteers in VR at Ashesi University, with Future of Africa, Tech Era, and as a coach with the Ashesi Innovation Experience (AIX). Mboya was a Norman Foster Fellow in 2017, one of 10 scholars chosen from around the world to attend a one-week robotics atelier in Madrid. “Through this conference, the great potential robotics has, especially in Africa, has been reaffirmed in my mind,” said Mboya.

Katja Mombaur

Professor at the Institute of Computer Engineering (ZITI) – Heidelberg University

Katja Mombaur is coordinator of the newly founded Heidelberg Center for Motion Research and full professor at the Institute of Computer Engineering (ZITI), where she is head of the Optimization in Robotics & Biomechanics (ORB) group and the Robotics Lab. She holds a diploma degree in Aerospace Engineering from the University of Stuttgart and a Ph.D. degree in Mathematics from Heidelberg University. Mombaur is PI in the European H2020 project SPEXOR. She coordinated the EU project KoroiBot and was PI in MOBOT and ECHORD–GOP, and founding chair of the IEEE RAS technical committee on Model-based optimization for robotics. Her research focuses on the interactions of humans with exoskeletons, prostheses, and external physical devices.

Devi Murthy

CEO – Kamal Kisan

Devi Murthy has a Bachelor’s degree in Engineering from Drexel University, USA, and a Master’s in Entrepreneurship from IIM Bangalore. She has over 6 years of experience in product development and business development at Kamal Bells, a sheet metal fabrications and components manufacturing company. In 2013 she founded Kamal Kisan, a for-profit social enterprise that works to improve farmer livelihoods through smart mechanization interventions that help farmers adopt modern agricultural practices and cultivate high-value crops while reducing input costs, making farming more profitable and sustainable.

Sarah Osentoski

COO – Mayfield Robotics

Sarah Osentoski is COO at Mayfield Robotics, maker of Kuri, ‘the adorable home robot’. Previously she was the manager of the Personal Robotics Group at the Bosch Research and Technology Center in Palo Alto, CA. Osentoski is one of the authors of Robot Web Tools. She was also a postdoctoral research associate at Brown University, working with Chad Jenkins in the Brown Robotics Laboratory. She received her Ph.D. from the University of Massachusetts Amherst, under Sridhar Mahadevan. Her research interests include robotics, shared autonomy, web interfaces for robots, reinforcement learning, and machine learning. Osentoski was featured as one of the Silicon Valley Business Journal’s 2017 “Women of Influence”.

Kirstin H. Petersen

Assistant Professor – Cornell University

Kirstin H. Petersen is Assistant Professor of Electrical and Computer Engineering. She is interested in design and coordination of bio-inspired robot collectives and studies of their natural counterparts, especially in relation to construction. Her thesis work on a termite-inspired robot construction team made the cover of Science, and was ranked among the journal’s top ten scientific breakthroughs of 2014. Petersen continued on to a postdoc with Director Metin Sitti at the Max Planck Institute for Intelligent Systems 2014-2016, and became a fellow with the Max Planck ETH Center for Learning Systems in 2015. Petersen started the Collective Embodied Intelligence Lab in 2016 as part of the Electrical and Computer Engineering department at Cornell University, and has field memberships in Computer Science and Mechanical Engineering.

Kristin Y. Pettersen

Professor Department of Engineering Cybernetics – NTNU

Kristin Ytterstad Pettersen (1969) is a Professor at the Department of Engineering Cybernetics, and holds a PhD and an MSc in Engineering Cybernetics from NTNU. She is also a Key Scientist at the Center of Excellence: Autonomous marine operations and systems (NTNU AMOS) and an Adjunct Professor at the Norwegian Defence Research Establishment (FFI). Her research interests include nonlinear control theory and motion control, in particular for marine vessels, AUVs, robot manipulators, and snake robots. She is also Co-Founder and Board Member of Eelume AS, a company that develops technology for subsea inspection, maintenance, and repair. In 2017 she received the Outstanding Paper Award from IEEE Transactions on Control Systems Technology, and in 2018 she was appointed Member of the Academy of the Royal Norwegian Society of Sciences and Letters.

Veronica Santos

Assoc. Prof. of UCLA Mechanical & Aerospace Engineering / Principal Investigator – Director of the UCLA Biomechatronics Laboratory

Veronica J. Santos is an Associate Professor in the Mechanical and Aerospace Engineering Department at UCLA, and Director of the UCLA Biomechatronics Lab. She is one of 16 individuals selected for the Defense Science Study Group (DSSG), a two-year opportunity for emerging scientific leaders to participate in dialogues related to US security challenges. She received her B.S. from UC Berkeley in 1999 and her M.S. and Ph.D. degrees in Mechanical Engineering, with a biometry minor, from Cornell University in 2007. Santos was a postdoctoral research associate at the Alfred E. Mann Institute for Biomedical Engineering at USC, where she worked on a team developing a novel biomimetic tactile sensor for prosthetic hands. She then directed the ASU Mechanical and Aerospace Engineering Program and ASU Biomechatronics Lab. Santos has received many honors and awards for both research and teaching.

Casey Schulz

Systems Engineer – Omron Adept

Casey Schulz is a Systems Engineer at Omron Adept Technologies (OAT). She currently leads the engineering and design verification testing for a new mobile robot. Prior to OAT, Schulz worked at several Silicon Valley startups, a biotech consulting firm, and the National Ignition Facility at Lawrence Livermore National Laboratory. Casey received her M.S. in Mechanical Engineering from Carnegie Mellon University in 2009 for NSF-funded research in biologically inspired mobile robotics. She received her B.S. from Santa Clara University in 2008, where she built a proof-of-concept urban search and rescue mobile robot. Her focus is the development of new robotics technologies to better society.

Kavitha Velusamy

Senior Director Computer Vision – BossaNova Robotics

Kavitha Velusamy is the Senior Director of Computer Vision at BossaNova Robotics, where she builds robot vision applications. Previously, she was a Senior Manager at NVIDIA, where she managed a global team responsible for delivering computer vision and deep learning software for self-driving vehicles. Prior to this, she was Senior Manager at Amazon, where she wrote the “far field” white paper that defined the device side of Amazon Echo – its vision, its architecture and its price points – and got approval from Jeff Bezos to build a team and lead Amazon Echo’s technology from concept to product. She holds a PhD in Signal Processing/Electrical Communication Engineering from the Indian Institute of Science.

Martha Wells

Author

Martha Wells is a New York Times bestselling author of sci-fi and speculative fiction. Her Hugo award-winning series, The Murderbot Diaries, is about a self-aware security robot that hacks its “governor module”. Known for her world-building narratives, and detailed descriptions of fictional societies, Wells brings an academic grounding in anthropology to her fantasy writing. She holds a B.A. in Anthropology from Texas A&M University, and is the winner of over a dozen awards and nominations for fiction, including a Hugo Award, Nebula Award, and Locus Award.

Andie Zhang

Global Product Manager – ABB Collaborative Robotics

Andie Zhang is Global Product Manager of Robotics at ABB, where she has full global ownership of a portfolio of industrial robot products, develops strategy for the company’s product portfolio, and drives product branding. Zhang’s previous experience includes 10+ years working for world-leading companies in supply chain, quality, marketing and sales management. She holds a Master’s in Engineering from KTH in Stockholm. Her focus is on collaborative applications for robots and user-centered interface design.

Join more than 700 women in our global online community https://womeninrobotics.org and find or host your own Women in Robotics event locally! Women In Robotics is a grassroots not-for-profit organization supported by Robohub and Silicon Valley Robotics.

And don’t forget to browse previous years’ lists, add all these women to Wikipedia (let’s have a Wikipedia Hackathon!), or nominate someone for inclusion next year!

A fleet of miniature cars for experiments in cooperative driving

The deployment of connected, automated, and autonomous vehicles presents us with transformational opportunities for road transport. These opportunities reach beyond single-vehicle automation: by enabling groups of vehicles to jointly agree on maneuvers and navigation strategies, real-time coordination promises to improve overall traffic throughput, road capacity, and passenger safety. However, coordinated driving for intelligent vehicles remains a challenging research problem, and testing new approaches is cumbersome. Developing full-scale facilities for safe, controlled vehicle testing is massively expensive and requires a vast amount of space. One approach to facilitating experimental research and education is to build low-cost testbeds that incorporate fleets of down-sized, car-like mobile platforms.

Following this idea, our lab (with key contributions by Nicholas Hyldmar and Yijun He) developed a multi-car testbed that allows for the operation of tens of vehicles within the space of a moderately large robotics laboratory. This testbed facilitates the development of coordinated driving strategies in dense traffic scenarios, and enables us to test the effects of vehicle-vehicle interactions (cooperative as well as non-cooperative). Our robotic car, the Cambridge Minicar, is based on a 1:24 model of an existing commercial car. The Minicar is an Ackermann-steering platform, and one of very few openly available designs. It is built from off-the-shelf components (with the exception of one laser-cut piece), costs approximately US $76 in its basic configuration, and is especially attractive for robotics labs that already possess telemetry infrastructure. Its low cost enables the composition of large fleets, which can be used to test navigation strategies and driver models. Our Minicar design and code are available in an open-source repository (https://github.com/proroklab/minicar).
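For readers new to the term, “Ackermann steering” means the Minicar steers its front wheels and drives like a scaled-down road car rather than a turn-in-place differential-drive robot. Here is a minimal kinematic sketch of what that implies, using the standard bicycle-model approximation (illustrative only, not code from the Minicar repository; the wheelbase value is an assumed figure for a 1:24-scale car):

```python
import math

def ackermann_step(x, y, heading, speed, steering_angle, dt, wheelbase=0.12):
    """Advance a bicycle-model approximation of an Ackermann-steered car.

    Because only the front wheels steer, the car moves along circular
    arcs and cannot rotate in place. The 0.12 m wheelbase is an assumed
    value for illustration, not a measured Minicar parameter.
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steering_angle) * dt
    return x, y, heading

# Example: 2 seconds of driving at 0.5 m/s with 15 degrees of steering.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = ackermann_step(*state, speed=0.5,
                           steering_angle=math.radians(15), dt=0.02)
```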

The movie above demonstrates the applicability of the testbed for large-fleet experimentation by implementing different driving schemes that lead to distinct traffic behaviors. Notably, in experiments on a fleet of 16 Minicars, we show the benefits of cooperative driving: when traffic disruptions occur, instead of queuing, a cooperative Minicar communicates its intention to lane-change; following vehicles in the new lane reduce their speeds to make space for this projected maneuver, hence maintaining traffic flow (and throughput), whilst ensuring safety.
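As a toy illustration of that yielding behaviour, the sketch below shows how following vehicles in the target lane might respond to a broadcast lane-change intention. All names, gap sizes, and gains are hypothetical; the testbed’s actual controllers and driver models are described in the paper and repository:

```python
from dataclasses import dataclass

@dataclass
class Car:
    car_id: int
    lane: int
    position: float  # longitudinal position along the track (m)
    speed: float     # commanded speed (m/s)

def yield_for_lane_change(cars, merging_car, target_lane,
                          gap=0.5, slowdown=0.6):
    """Open a gap for a car that has announced a lane change.

    Every car already in `target_lane` within `gap` metres behind the
    merging car scales its speed down by `slowdown`, preserving flow
    instead of forcing a queue. Numbers are illustrative, not values
    from the Cambridge testbed.
    """
    for car in cars:
        distance_behind = merging_car.position - car.position
        if (car.lane == target_lane and car.car_id != merging_car.car_id
                and 0.0 < distance_behind < gap):
            car.speed *= slowdown
    merging_car.lane = target_lane  # merge once the gap is open

cars = [Car(0, lane=0, position=2.0, speed=0.6),
        Car(1, lane=1, position=1.8, speed=0.6)]
yield_for_lane_change(cars, cars[0], target_lane=1)
```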

New York: The gateway to industry 4.0

As Hurricane Florence raged across the coastline of North Carolina, 600 miles north the 174th Attack Wing National Guard base in Syracuse, New York was on full alert. Governor Cuomo had just hung up with Defense Secretary Mattis, having agreed to ready the airbase’s MQ-9 drone force to “provide post-storm situational awareness for the on-scene commanders and emergency personnel on the ground.” Suddenly, the entire country turned to the Empire State as the epicenter for unmanned search & rescue operations.


Located a few miles from the 174th is the Genius NY Accelerator, which boasts the largest competition for unmanned systems in the world. Previous winners of its one-million-dollar prize include AutoModality and FotoKite. One of Genius’ biggest financial backers is Empire State Development (ESD). Last month, I moderated a discussion in New York City between Sharon Rutter of the ESD, Peter Kunz of Boeing HorizonX and Victor Friedberg of FoodShot Global. These three investors spanned the gamut of early stage funders of autonomous machines. I started our discussion by asking if they think New York is poised to take a leading role in shaping the future of automation. While Kunz and Friedberg shared their own perspectives as corporate and social impact investors respectively, Rutter singled out one audience participant in particular as representing the future of New York’s innovation venture scene.

Andrew Hong of ff Venture Capital sat quietly in front of the presenters, yet his firm has been loudly reshaping the Big Apple’s approach to investing in mechatronics for almost a decade (with the ESD as a proud limited partner). Founded in 2008 by John Frankel, formerly of Goldman Sachs, ff has deployed capital in more than 100 companies with market values of over $6 billion. As the original backer of crowd-funding site Indiegogo, ff could be credited as a leading contributor to a new suite of technologies. As Frankel explains, “We like hardware if it is a vector to selling software, as recurring models based on services lead to better economics for us than one-off hardware sales.” In the spirit of fostering greater creativity for artificial intelligence software, ff collaborated with New York University in 2016 to start the NYU/ffVC AI NexusLab — the country’s first AI accelerator program between a university and a venture fund. NexusLab culminated in the Future Labs AI Summit in 2017. Frankel describes how this technology is influencing the future of autonomy, “As we saw that AI was coming into its own we looked at AI application plays and that took us deeper into cyber security, drones and robotics. In addition, both drones and robotics benefited as a byproduct of the massive investment into mobile phones and their embedded sensors and radios.  Thus we invested in a number of companies in the space (Skycatch, PlusOne Robotics, Cambrian Intelligence and TopFlight Technologies) and continue to look for more.”

Recently, ff VC bolstered its efforts to support the growth of an array of cognitive computing systems by opening a new state-of-the-art headquarters in the Empire State Building and expanding its venture partner program. In addition to providing seed capital to startups, ff VC has distinguished itself for more than a decade by augmenting technical founders with robust back-office services, especially accounting and financial management. Last year, ff also widened its industry venture partner program with the addition of Dr. Kathryn Hume to its network. Dr. Hume is probably best known for her work as the former president of Fast Forward Labs, a leading advisory to Fortune 500 companies in utilizing data science and artificial intelligence. I am pleased to announce that I have decided to join Dr. Hume and the ff team as a venture partner to widen their network in the robotics industry. I share Frankel’s vision that today we are witnessing “massive developments in AI and ML that have led to unprecedented demand for automation solutions across every industry.”


ff’s commitment is not an isolated example across the Big Apple but part of a growing invigorated community of venture capitalists, academics, inventors, and government sponsors. In a few weeks, New York City Economic Development Corporation (NYCEDC) will officially announce the winner of a $30 million investment grant to boost the city’s cybersecurity ecosystem. CyberNYC will include a new startup accelerator, city-wide programming, educational curricula, up-skilling/job placement, and a funding network for home-grown ventures. As NYCEDC President and CEO James Patchett explains, “The de Blasio Administration is investing in cybersecurity to both fuel innovation, and to create new, accessible pathways to jobs in the industry. We’re looking for big-thinking proposals to help us become the global capital of cybersecurity and to create thousands of good jobs for New Yorkers.” The Mayor’s office projects that its initiative will create 100,000 new jobs over the next ten years, enabling NYC to fully maximize the opportunities of an autonomous world.

The inspiration for CyberNYC could probably be found in the sands of the Israeli desert town of Beer Sheva. In the past decade, this Bedouin city in the Holy Land has been transformed from tents into a high-tech engine for cybersecurity, remote sensing and automation technologies. At the center of this oasis is Cyber Labs, a government-backed incubator created by Jerusalem Venture Partners (JVP). Next week, JVP will kick off its New York City “Hub” with a $1 million competition called “New York Play” to bridge the opportunities between Israeli and NYC entrepreneurship. In the words of JVP’s Chairman and Founder, Erel Margalit, “JVP’s expansion to New York and the launch of New York Play are all about what’s possible. As New York becomes America’s gateway for international collaboration and innovation, JVP, at the center of the “Startup Nation,” will play a significant role boosting global partnerships to create solutions that better the world and drive international business opportunities.”

Looking past the skyscrapers, I reflect on Margalit’s image of New York as a “Gateway” to the future of autonomy.  Today, the wheel of New York City is turning into a powerful hub, connected throughout America’s academic corridor and beyond, with spokes shooting in from Boston, Pittsburgh, Philadelphia, Washington DC, Silicon Valley, Europe, Asia and Israel. The Excelsior State is pulsing with entrepreneurial energy fostered by the partnerships of government, venture capital, academia and industry. As ff VC’s newest venture partner, I personally am excited to play a pivotal role in helping them harness the power of acceleration for the benefit of my city and, quite possibly, the world.

Come learn how New York’s retail industry is utilizing robots to drive sales at the next RobotLab on “Retail Robotics” with Pano Anthos of XRC Labs and Ken Pilot, formerly President of Gap, on October 17th. RSVP today.

Rethink Robotics closes its doors

Baxter Robot from Rethink Robotics – Source: YouTube

Rethink Robotics shut down this week, closing the chapter on a remarkable journey making collaborative robots a reality.

This will come as a surprise, and with sadness, to the robotics community. I fondly remember interviewing Rodney Brooks, co-founder of the company, back in 2012 for the Robots Podcast. I’d toured the laboratory and was impressed by how human-centered the robot design was. You could show the robot an object and it would memorise it; you could teach it new tasks just by moving its arms; and its facial expressions made the process intuitive. Students in laboratories around the world (including ours) spent countless hours learning and doing state-of-the-art research with their Baxter robots.

There is no word on what happens next, although I hope this is the start of a new crazy project… 10 years before its time.

You can read more about this story at The Robot Report and the IEEE Automaton blog.


Model helps robots navigate more like humans do

MIT researchers have devised a way to help robots navigate environments more like humans do.

By Rob Matheson

When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.

MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Popular motion-planning algorithms will create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: robots can’t leverage information about how they or other agents acted previously in similar environments.
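To make that tree-building concrete, here is a bare-bones sketch of the classic RRT loop for a 2-D point robot. This is the textbook algorithm, not the researchers’ implementation; the 10×10 world and the `collision_free` callback are assumptions for illustration:

```python
import math
import random

def rrt(start, goal, collision_free, iters=2000, step=0.5, goal_tol=0.5):
    """Grow a search tree from `start` until a node reaches `goal`.

    `collision_free(a, b)` must report whether the straight segment
    between two points is valid. Each iteration samples a point in a
    10x10 world, finds the nearest tree node, and extends toward it.
    """
    parents = {start: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(parents, key=lambda node: math.dist(node, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        if d <= step:
            new = sample
        else:  # take a fixed-size step toward the sample
            new = tuple(a + step * (b - a) / d for a, b in zip(nearest, sample))
        if collision_free(nearest, new):
            parents[new] = nearest
            if math.dist(new, goal) < goal_tol:
                path = [new]  # walk back up the tree to the start
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return path[::-1]
    return None  # no path found within the iteration budget

# Obstacle-free toy usage: every segment is declared valid.
path = rrt((0.0, 0.0), (9.0, 9.0), collision_free=lambda a, b: True)
```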

“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”

The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.

In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.

“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD candidate in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”

Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.

Trading off exploration and exploitation

Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.

The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.

The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
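A minimal sketch of that confidence gating, assuming a hypothetical learned model that returns a predicted next sample and a confidence score (the paper’s actual network and its integration with RRT* are more involved):

```python
def guided_sample(model, observation, sample_uniform, threshold=0.8):
    """Pick the planner's next sample point.

    `model(observation)` is assumed to return (predicted_point,
    confidence). High confidence exploits past experience; otherwise
    the planner falls back to uniform sampling, i.e. plain exploration.
    """
    predicted_point, confidence = model(observation)
    if confidence >= threshold:
        return predicted_point  # exploit the learned prediction
    return sample_uniform()     # explore like a traditional planner
```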

For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and gives the robot a sense of its surroundings so it can quickly find the goal.

Results in the paper are based on the chances that a path is found after some time, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers’ model plotted far shorter and more consistent paths than a traditional planner, and did so more quickly.

Working with multiple agents

In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.

“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”

Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.

Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.

“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.

More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.

Could an artificial intelligence be considered a person under the law?

Humans aren't the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.

Scientists develop smart technology for synchronized 3-D printing of concrete

Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a technology whereby two robots can work in unison to 3-D-print a concrete structure. This method of concurrent 3-D printing, known as swarm printing, paves the way for a team of mobile robots to print even bigger structures in the future. Developed by Assistant Professor Pham Quang Cuong and his team at NTU's Singapore Centre for 3-D Printing, this new multi-robot technology is reported in Automation in Construction. The NTU scientist was also behind the Ikea Bot project earlier this year, in which two robots assembled an Ikea chair in about nine minutes.