
December 2017 fundings, acquisitions and IPOs

Twenty-one different startups were funded in December, cumulatively raising $430 million, down from the $782 million raised in November. Three didn’t report the amount of their funding. Only three rounds were over $50 million, one of which went to a Chinese startup. Three acquisitions were reported during the month, including two takeovers of Western robotics companies by Chinese ones. Nothing new on the IPO front.

Fundings:

  1. Farmers Business Network, a San Carlos, Calif.-based farmer-to-farmer network, raised $110 million in Series D funding. T. Rowe Price Associates Inc and Temasek led the round and were joined by investors including Acre Venture Partners, Kleiner Perkins Caufield & Byers, GV and DBL Partners.
  2. Ripcord, a Hayward, CA robotic digitization company, raised $59.5 million this year across a March Series A and an Aug/Dec Series B equity funding led by GV and Icon Ventures with Lux Capital, Telstra Ventures, Silicon Valley Bank, Kleiner Perkins, Google and Baidu Ventures. Ripcord has developed, and offers as a service, a digitization pipeline that uses AI, scanning and robotics to turn cardboard storage boxes full of tagged manila folders into scanned PDF files available through ERP and other office systems.
  3. JingChi, a Chinese-funded Beijing and Silicon Valley self-driving AI systems startup, raised $52 million (in September) in a seed round led by Qiming Venture Partners. China Growth Capital, Nvidia GPU Ventures and other unnamed investors also participated in the round. Baidu is suing former Baidu employee Wang Jing for using Baidu IP for his new startup.
  4. Groove X, a Tokyo startup developing a humanoid robot Lovot, raised $38.7 million in a Series A round by Mirai Creation Fund, government-backed Innovation Network Corporation of Japan (INCJ), Shenzhen Capital Group, Line Ventures, Dai-ichi Seiko, Global Catalyst Partners Japan (GCPJ), Taiwan’s Amtran Technology, OSG and SMBC Venture Capital. Groove X has raised $71.1 million thus far.
  5. Kespry, a Menlo Park, Calif.-based aerial intelligence solution provider, raised $33 million in Series C funding led by G2VP, and was joined by investors including Shell Technology Ventures, Cisco Investments, and ABB Ventures.
  6. Ouster, a San Francisco startup developing a $12,000 LiDAR, raised $27 million in a Series A funding round led by Cox Enterprises with participation from Fontinalis, Amity Ventures, Constellation Technology Ventures, Tao Capital Partners, and Carthona Capital.
  7. Fetch Robotics, a Silicon Valley logistics co-bot maker, raised $25 million in a Series B round led by San Francisco-based Sway Ventures that included existing investors O’Reilly AlphaTech Ventures, Shasta Ventures and SoftBank’s SB Group US. The round brings total funding to $48 million. Fetch, in addition to warehousing customers, is selling to “Tier 1” automakers, which like the ability of Fetch’s robots to detect and track the location of parts “to avoid losing transmissions.”
  8. Virtual Incision, a University of Nebraska spinout and medical device company developing a miniaturized robotically assisted general surgery device, raised $18 million in a Series B funding round co-led by China’s Sinopharm Capital and existing investor Bluestem Capital, with participation from PrairieGold Venture Partners and others.
  9. Wuhan Cobot Technology, a Chinese co-bot startup, raised $15.4 million in a Series B round led by Lan Fund with participation by Matrix Partners and GGV Capital.
  10. PerceptIn, a Silicon Valley vision systems startup, raised $11 million in angel and Series A funding from Samsung Ventures, Matrix Partners and Walden Intl. PerceptIn also announced its new $399 Ironsides product, a full robotic vision system combining hardware and software for real-time tracking, mapping and path planning.
  11. Upstream Security, a San Francisco-based cybersecurity platform provider for connected cars and self-driving vehicles, raised $9 million in Series A funding. Charles River Ventures led the round and was joined by investors including Glilot Capital Partners and Maniv Mobility.
  12. Robocath, a French medical robotics device developer, raised $8.6 million across two funding rounds this year: $5.6 million in May, led by Normandie Participation and M Capital Partners with participation from NCI Gestion and GO Capital, and $3 million in December from Crédit Agricole Innovations et Territoires (CAIT), an innovation fund managed by Supernova Invest, with Cardio Participation also investing.
  13. FarmWise, a San Francisco agricultural robotics and IoT startup developing a weeding robot, raised $5.7 million in a seed round led by hardware-focused VC Playground Global with Felicis Ventures, Basis Set Ventures, and Valley Oak Investments also participating.
  14. Guardian Optical Technologies, an Israeli sensor maker, raised $5.1 million in Series A funding from Maniv Mobility and Mirai Creation Fund.
  15. Aeronyde, a Melbourne, Fla.-based drone infrastructure firm, raised $4.7 million led by JASTech Co. Ltd.
  16. Elroy Air, a San Francisco startup building autonomous aircraft systems to deliver goods to the world’s most remote places, raised $4.6 million in a seed round led by Levitate Capital with participation by Shasta Ventures, Lemnos Labs and Homebrew.
  17. Tortuga AgTech, a Denver-based robotics startup targeting controlled-environment fruit and vegetable growers, raised a $2.4 million seed round led by early-stage hardware VC Root Ventures and closed in September. Also participating in the round were Silicon Valley tech VCs Susa Ventures and Haystack, AME Cloud Ventures, Grit Labs, the Stanford-StartX Fund and SVG Partners. Tortuga is developing robotic systems for harvesting fresh produce in controlled environments, from indoor hydroponics to greenhouses, starting with strawberries.
  18. FluroSat, an Australian crop health startup, raised $770k in a seed round from Main Sequence Ventures, manager of the Australian government’s $100 million CSIRO Innovation Fund, Airtree Ventures and Australia’s Cotton Research and Development Corporation (CRDC).
  19. Blue Frog Robotics, a Paris-based robotics startup, raised funding of an undisclosed amount. Fenox Venture Capital led the round with Gilles Benhamou and Benoit de Maulmin participating.
  20. SkyX, an Israeli agtech startup developing software for variable aerial spraying, raised an undisclosed seed funding round from Rimonim Fund.
  21. TetraVue, a Vista, Calif.-based 3D technology provider, raised funding of an undisclosed amount. Investors include KLA Tencor, Lam Research, Tsing Capital, Robert Bosch Venture Capital GmbH, Samsung Catalyst Fund and Nautilus Ventures.

Acquisitions:

  1. Chinese medtech investment firm Great Belief International has acquired the IP and assets of the SurgiBot surgical robot developed by TransEnterix for $29 million. TransEnterix retains distribution rights outside of China. SurgiBot failed an FDA application, whereupon TransEnterix acquired an Italian competitor with a less advanced product, the Senhance robotic-assisted surgery platform, which TransEnterix will continue to develop and market and which has both CE and FDA approvals. “The relationship with GBIL will allow us to advance the SurgiBot System toward global commercialization while significantly reducing our required investment and simultaneously leveraging ‘in-country’ manufacturing in the world’s most populous country,” TransEnterix president & CEO Todd Pope said.
  2. Estun Automation, a Chinese industrial robot and CNC manufacturer, acquired 50.01% (with the right to acquire the remainder of the shares) of Germany-based M.A.i GmbH, a 270-person integrator with facilities in the US, Italy, Romania and Germany, for around $10.5 million.
  3. Rockwell Automation (NYSE:ROK) acquired Odos Imaging for an undisclosed amount. Odos develops 3D imaging technologies for manufacturing systems.

Robots in Depth with Daniel Lofaro

In this episode of Robots in Depth, Per Sjöborg speaks with Daniel Lofaro, Assistant Professor at George Mason University specialising in humanoid robots.

Daniel talks about making humans and robots collaborate through co-robotics, and the need for lower-cost systems and better AI. He also mentions that robotics needs a “killer app”, something that makes it compelling enough for the customer to take the step of welcoming a robot into the business or home. Finally, Daniel discusses creating an ecosystem of robots and apps, and how competitions can help do this.

Physical adversarial examples against deep neural networks

By Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, and Bo Li based on recent research by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, Dawn Song, and Florian Tramèr.

Deep neural networks (DNNs) have enabled great progress in a variety of application areas, including image processing, text analysis, and speech recognition. DNNs are also being incorporated as an important component in many cyber-physical systems. For instance, the vision system of a self-driving car can take advantage of DNNs to better recognize pedestrians, vehicles, and road signs. However, recent research has shown that DNNs are vulnerable to adversarial examples: Adding carefully crafted adversarial perturbations to the inputs can mislead the target DNN into mislabeling them during run time. Such adversarial examples raise security and safety concerns when applying DNNs in the real world. For example, adversarially perturbed inputs could mislead the perceptual systems of an autonomous vehicle into misclassifying road signs, with potentially catastrophic consequences.

There have been several techniques proposed to generate adversarial examples and to defend against them. In this blog post we will briefly introduce state-of-the-art algorithms to generate digital adversarial examples, and discuss our algorithm to generate physical adversarial examples on real objects under varying environmental conditions. We will also provide an update on our efforts to generate physical adversarial examples for object detectors.
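For readers who want to see the basic mechanics, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way to generate a digital adversarial example. The use of PyTorch is an assumption (the post does not tie itself to a framework), and this is not the authors’ physical-world attack.

```python
# Minimal FGSM sketch, assuming PyTorch; illustrates a standard digital attack only.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (a [1, C, H, W] tensor in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss on the true label
    loss.backward()                              # gradient of the loss w.r.t. the pixels
    # Nudge every pixel in the direction that increases the loss, then keep values valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Physical attacks such as the road-sign perturbations discussed here must additionally survive changes in viewpoint, distance, and lighting, which is what distinguishes them from a purely digital perturbation like this one.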

Read More

Drones, volcanoes and the ‘computerisation’ of the Earth

The Mount Agung volcano spews smoke, as seen from Karangasem, Bali. EPA-EFE/MADE NAGI

By Adam Fish

The eruption of the Agung volcano in Bali, Indonesia has been devastating, particularly for the 55,000 local people who have had to leave their homes and move into shelters. It has also played havoc with the flights in and out of the island, leaving people stranded while the experts try to work out what the volcano will do next.

But this has been a fascinating time for scholars like me who investigate the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is just the latest example of the “computerisation of nature” and raises questions about how reality is increasingly being constructed by software.

Amazon is developing drone delivery in the UK, drone blood delivery is happening in Rwanda, while in Indonesia people are using drones to monitor orangutan populations, map the growth and expansion of palm oil plantations and gather information that might help us predict when volcanoes such as Agung might again erupt with devastating impact.

In Bali, I have the pleasure of working with a remarkable group of drone professionals, inventors and hackers who work for Aeroterrascan, a drone company from Bandung, on the Indonesian island of Java. As part of their corporate social responsibility, they have donated their time and technologies to the Balinese emergency and crisis response teams. It’s been fascinating to participate in a project that flies remote sensing systems high in the air in order to better understand dangerous forces deep in the Earth.

I’ve been involved in two different drone volcano missions. A third mission will begin in a few days. In the first, we used drones to create an extremely accurate 3D map of the size of the volcano – down to 20cm of accuracy. With this information, we could see if the volcano was actually growing in size – key evidence that it is about to blow up.

The second mission involved flying a carbon dioxide and sulphur dioxide smelling sensor through the plume. An increase in these gases can tell us if an eruption looms. There was a high level of carbon dioxide, which led the government to raise the threat warning to the highest level.

In the forthcoming third mission, we will use drones to see if anyone is still in the exclusion zone so they can be found and rescued.

What is interesting to me as an anthropologist is how scientists and engineers use technologies to better understand distant processes in the atmosphere and below the Earth. It has been a difficult task, flying a drone 3,000 meters to the summit of an erupting volcano. Several different groups have tried and a few expensive drones have been lost – sacrifices to what the Balinese Hindus consider a sacred mountain.

More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying about in the air, under the seas, or on volcanic craters – basically everywhere. These tools may help us to evacuate people before a crisis, but they also entail transforming organic signals into computer code. We’ve long interpreted nature through technologies that augment our senses, particularly sight. Microscopes, telescopes and binoculars have been great assets for chemistry, astronomy and biology.

The internet of nature

But the sensorification of the elements is something different. This has been called the computationalisation of Earth. We’ve heard a lot about the internet of things but this is the internet of nature. This is the surveillance state turned onto biology. The present proliferation of drones is the latest step in wiring everything on the planet, in this case the air itself, to better understand the guts of a volcano.

These flying sensors, it is hoped, will give volcanologists what anthropologist Stephen Helmreich called abduction – or a predictive and prophetic “argument from the future”.

But the drones, sensors and software we use provide a particular and partial worldview. Looking back at today from the future, what will be the impact of increasing datafication of nature: better crop yield, emergency preparation, endangered species monitoring? Or will this quantification of the elements result in a reduction of nature to computer logic?

There is something not fully comprehended – or more ominously not comprehensible – about how flying robots and self-driving cars equipped with remote sensing systems filter the world through big data crunching algorithms capable of generating and responding to their own artificial intelligence.

These non-human others react to the world not as ecological, social, or geological processes but as functions and feature sets in databases. I am concerned by what this software view of nature will exclude, and as they remake the world in their database image, what the implications of those exclusions might be for planetary sustainability and human autonomy.

In this future world, there may be less of a difference between engineering towards nature and the engineering of nature.

Adam Fish, Senior Lecturer in Sociology and Media Studies, Lancaster University

This article was originally published on The Conversation. Read the original article.

Underwater robot photography and videography


I had somebody ask me questions this week about underwater photography and videography with robots (well, now it is a few weeks ago…). I am not an expert at underwater robotics; however, as a SCUBA diver, I have some experience that is applicable to robotics.

Underwater Considerations

Underwater photography and videography present some challenges that are less of an issue above the water. Some of them include:

1) Water reflects some of the light that hits the surface, and absorbs the light that travels through it. This causes certain colors not to be visible at certain depths. If you need to see those colors you often need to bring strong lights to restore the visibility of those wavelengths that were absorbed. Reds tend to disappear first; blues are the primary color seen as camera depth increases. A trick that people often try is to use filters on the camera lens to make certain colors more visible.

If you are using lights then you can get the true color of the target. Sometimes if you are taking images you will see one color with your eye, and then when the strobe flashes a “different color” gets captured. In general you want to get close to the target to minimize the light absorbed by the water.

Visible colors at given depths underwater. [Image Source]
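To get a rough feel for how quickly the red channel falls off, here is a small sketch of exponential (Beer-Lambert) attenuation per colour channel. The per-metre coefficients are assumed, order-of-magnitude values for clear water, not measured data.

```python
# Rough illustration of why reds disappear first: light falls off exponentially with depth,
# and water absorbs long (red) wavelengths much faster than short (blue) ones.
import math

ATTENUATION_PER_METER = {"red": 0.35, "green": 0.07, "blue": 0.03}  # assumed coefficients (1/m)

def remaining_fraction(channel: str, depth_m: float) -> float:
    """Fraction of surface light left in a colour channel at a given depth."""
    return math.exp(-ATTENUATION_PER_METER[channel] * depth_m)

for depth in (5, 10, 20):
    print(depth, {c: round(remaining_fraction(c, depth), 3) for c in ATTENUATION_PER_METER})
```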

For shallow water work you can often adjust the white balance to partly compensate for the missing colors. White balance goes a long way for video and compressed images (such as .jpg). Onboard white balance adjustments are less important for photographs stored in a raw image format, since you can deal with it in post-processing. Having a white or grey card in the camera field of view (possibly permanently mounted on the robot) is useful for setting the white balance and can make a big difference. The white balance should be readjusted every so often as depth changes, particularly if you are using natural lighting (i.e. the sun).
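As a rough illustration of the grey-card idea, here is a minimal white-balance sketch, assuming the frame arrives as a float RGB NumPy array and that you know which pixels cover the card (for example, a fixed region if the card is mounted on the robot).

```python
# Minimal grey-card white balance sketch (NumPy assumed).
import numpy as np

def gray_card_white_balance(image: np.ndarray, card_mask: np.ndarray) -> np.ndarray:
    """Scale each colour channel so the card region averages out to neutral grey.

    image: float32 RGB array in [0, 1], shape (H, W, 3).
    card_mask: boolean array, shape (H, W), True over the grey/white card.
    """
    card_pixels = image[card_mask]                # (N, 3) samples taken from the card
    channel_means = card_pixels.mean(axis=0)      # average R, G, B of the card region
    gains = channel_means.mean() / channel_means  # push each channel toward the grey average
    return np.clip(image * gains, 0.0, 1.0)
```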

Cold temperate water tends to look green (such as a freshwater quarry), I think from plankton, algae, etc. Tropical waters (such as in the Caribbean) tend to look blue near the shore and darker blue as you get further away from land, I think based on how light reflects off the bottom of the water. Using artificial light sources (such as strobes) can minimize those colors in your imagery.

Auto focus generally works fine underwater. However, if you are in the dark you might need to keep a focus light turned on to help the autofocus work, and then use a separate strobe flash for taking the image. Some systems turn the focus light off while the images are being taken. This is generally not needed for video, since the lights are continuously turned on.

2) Objects underwater appear closer and larger than they really are. A rule of thumb is that the objects will appear 25% larger and/or closer.
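One common derivation of a rule of thumb like this: behind a flat camera port, water’s refractive index of roughly 1.33 magnifies objects by about a third and makes them appear at about three quarters of their true distance. A tiny sketch of that arithmetic (approximate; dome ports behave differently):

```python
# Flat-port refraction arithmetic (approximate).
WATER_REFRACTIVE_INDEX = 1.33

def apparent_distance_m(true_distance_m: float) -> float:
    """Distance an object appears to be at when viewed through a flat port."""
    return true_distance_m / WATER_REFRACTIVE_INDEX   # roughly 25% closer

def apparent_size_factor() -> float:
    """Approximate linear magnification through a flat port."""
    return WATER_REFRACTIVE_INDEX                     # roughly a third larger

print(apparent_distance_m(2.0), apparent_size_factor())  # ~1.5 m, ~1.33
```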

3) Suspended particles in the water (algae, dirt, etc.) scatter light, which can make visibility poor. This can obscure details in the camera image or make things look blurry (like the camera is out of focus). A rule of thumb is that your target should be closer to the camera than 1/4 of your total visibility distance.

The measure of the visibility is called turbidity. You can get turbidity sensors that might let you do something smart (I need to think about this more).
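As one hypothetical example of “doing something smart” with a turbidity reading, the sketch below maps a reading in NTU to conservative strobe and working-distance limits (less strobe and a closer subject as the water gets murkier, in line with the backscatter advice below). The thresholds and the returned settings are assumptions, not a real sensor or camera API.

```python
# Hypothetical mapping from a turbidity reading (NTU) to shooting constraints.
def settings_for_turbidity(ntu: float) -> dict:
    if ntu < 5:        # clear water
        return {"strobe_power": 1.0, "max_subject_distance_m": 3.0}
    elif ntu < 25:     # noticeable particles: dial back the strobe and get closer
        return {"strobe_power": 0.5, "max_subject_distance_m": 1.5}
    else:              # very turbid: minimal strobe, shoot only very close subjects
        return {"strobe_power": 0.25, "max_subject_distance_m": 0.5}

print(settings_for_turbidity(12.0))
```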

To minimize the backscatter from turbidity there is not a “one size fits all” solution. The key to minimizing backscatter is to control how light strikes the particles. For example if you are using two lights (angled at the left and right of the target), the edge of each cone of light should meet at the target. This way the water between the camera and the target is not illuminated. For wide-angle lenses you often want the light to be behind the camera (out of its plane) and to the sides at 45° angles to the target. With macro lenses you usually want the lights close to the lens.

“If you have a wide-angle lens you probably will use a domed port to protect the camera from water and get the full field of view of the camera. The dome, however, can cause distortion in the corners. Here is an interesting article on flat vs dome ports.”

Another tip is to increase the exposure time (such as 1/50th of a second) to allow more natural light in, and use less strobe light to reduce the effect from backscatter.

4) Being underwater usually means you need to seal the camera from water, salts, (and maybe sharks). Make sure the enclosure and seals can withstand the pressure from the depth the robot will be at. Also remember to clean (and lubricate) the O rings in the housing.

“Pro Tip: Here are some common reasons for O ring seals leaking:
a. Old or damaged O rings. Remember O rings don’t last forever and need to be changed.
b. Using the wrong O ring
c. Hair, lint, or dirt getting on the O ring
d. Using no lubricant on the O ring
e. Using too much lubricant on the O rings. (Remember on most systems the lubricant is for small imperfections in the O ring and to help slide the O rings in and out of position.)”

5) On land it is often easy to hold a steady position. Underwater it is harder to hold the camera stable with minimal motion. If the camera is moving a faster shutter speed might be needed to avoid motion blur. This also means that less light is entering the camera, which is the downside of having the faster shutter speed.

When (not if) your camera floods

When your enclosure floods while underwater (or a water sensor alert is triggered):

a. Shut the camera power off as soon as you can (on a robot this step can be automated; see the sketch after this list).
b. Check if water is actually in the camera. Sometimes humidity can trigger moisture sensors. If it is humidity, you can add desiccant packets in the camera housing.
c. If there is water, try to take the camera apart as much as you reasonably can and let it dry. After drying you can try to turn the camera on and hope that it works. If it works then you are lucky, however remember there can be residual corrosion that causes the camera to fail in the future. Water damage can happen instantaneously or over time.
d. Verify that the enclosure/seals are good before sending the camera back into the water. It is often good to do a leak test in a sink or pool before going into larger bodies of water.
e. The above items are a standard response to a flooded camera. You should read the owner’s manual of your camera and follow those instructions. (This should be obvious, I am not sure why I am writing this).
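For a camera mounted on a robot, step (a) can be automated. Below is a hypothetical sketch of a watchdog that cuts camera power as soon as a leak sensor trips; `LeakSensor` and `PowerRelay` are stand-ins for whatever hardware interface your vehicle actually exposes, not a real library.

```python
import time

class LeakSensor:
    """Stand-in for a moisture/leak probe inside the housing (replace with your hardware)."""
    def is_wet(self) -> bool:
        return False  # e.g. read a GPIO pin wired to a moisture probe

class PowerRelay:
    """Stand-in for the relay or switch feeding the camera."""
    def cut_power(self) -> None:
        pass          # e.g. open the relay so the camera is de-energized immediately

def watch_for_flood(sensor: LeakSensor, camera_power: PowerRelay, poll_hz: float = 10.0) -> None:
    """Poll the leak sensor and kill camera power the moment water is detected."""
    while True:
        if sensor.is_wet():
            camera_power.cut_power()
            print("Leak detected: camera power cut; surface and inspect the housing.")
            return
        time.sleep(1.0 / poll_hz)
```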


Do you have other advice for using cameras underwater and/or attached to a robot? Leave them in the comment section below.


I want to thank John Anderson for some advice for writing this post. Any mistakes that may be in the article are mine and not his.

The main image is from divephotoguide.com. They have a lot of information on underwater cameras, lenses, lights and more.

This post appeared first on Robots For Roboticists.

An emotional year for machines

Two thousand seventeen certainly has been an emotional year for mankind. While homo sapiens continue to yell at Alexa and Siri, people’s growing willingness to pursue virtual relationships over human ones is startling.

In a recent documentary by Channel 4 of the United Kingdom, it was revealed that Abyss Creations is flooded with pre-orders for its RealDoll AI robotic (intimate) companion. According to Matt McMullen, Chief Executive of Abyss, “With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship.”

The concept of machines understanding human emotions, and reacting accordingly, was featured prominently at AI World a couple weeks ago in Boston. Rana el Kaliouby, founder of artificial intelligence company Affectiva, thinks a lot about computers acquiring emotional intelligence. Affectiva is building a “multi-modal emotion AI” to enable robots to understand human feelings and behavior.

“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice,” describes el Kaliouby. “Only 7% is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.” Affectiva’s strategy is already paying off as more than one thousand global brands are employing their “Emotion AI” to analyze facial imagery to ascertain people’s affinity towards their products.

Embedding empathy into machines goes beyond advertising campaigns. In healthcare, emotional sensors are informing doctors of the early warning signs of a variety of disorders, including Parkinson’s, heart disease, suicide risk and autism. Unlike Affectiva’s facial analysis, Beyond Verbal is utilizing voice analytics to track biomarkers for chronic illness. The Israeli startup grew out of a decade and a half of university research with seventy thousand clinical subjects speaking thirty languages. The company’s patented “Mood Detector” is currently being deployed by the Mayo Clinic to detect early signs of coronary artery disease.

Beyond Verbal’s Chief Executive, Yuval Mor, foresees a world of empathetic smart machines listening for every human whim. As Mor explains, “We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers.” Mor’s view is embraced by many who sit in the center of the convergence of technology and healthcare. Boston-based Sonde is also using algorithms to analyze the tone of speech to report on the mental state of patients by alerting neurologists of the risk of depression, concussion, and other cognitive impairments.

“When you produce speech, it’s one of the most complex biological functions that we do as people,” according to Sonde founder Jim Harper. “It requires incredible coordination of multiple brain circuits, large areas of the brain, coordinated very closely with the musculoskeletal system. What we’ve learned is that changes in the physiological state associated with each of these systems can be reflected in measurable, objective features that are acoustics in the voice. So we’re really measuring not what people are saying, in the way Siri does, we’re focusing on how you’re saying what you’re saying and that gives us a path to really be able to do pervasive monitoring that can still provide strong privacy and security.”

While these AI companies are building software and app platforms to augment human diagnosis, many roboticists are looking to embed such platforms into the next generation of unmanned systems. Emotional tracking algorithms can provide real-time monitoring for semi-autonomous and autonomous cars by reporting on the level of fatigue, distraction and frustration of the driver and other occupants. The National Highway Traffic Safety Administration estimates that 100,000 crashes nationwide are caused every year by driver fatigue. For more than a decade technologists have been wrestling with developing better alert systems inside the cabin. For example, in 1997 James Russell Clarke and Phyllis Maurer Clarke developed a “Sleep Detection and Driver Alert Apparatus” (US Patent: 5689241 A) using imaging to track eye movements and thermal sensors to monitor “ambient temperatures around the facial areas of the nose and mouth” (a.k.a., breathing). Today, with the advent of cloud computing and deep learning networks, the Clarkes’ invention could possibly save even more lives.

Tarek El Dokor, founder and Chief Executive of EDGE3 Technologies, has been very concerned about the car industry’s rush towards autonomous driving, which in his opinion might be “side-stepping the proper technology development path and overlooking essential technologies needed to help us get there.” El Dokor is referring to Tesla’s rush to release its autopilot software last year that led to customers trusting the computer system too much. YouTube is littered with videos of Tesla customers taking their hands and eyes off the road to watch movies, play games and read books. Ultimately, this user abuse led to the untimely death of Joshua Brown.

To protect against autopilot accidents, EDGE3 monitors driver alertness through a combined platform of hardware and software technologies of “in-cabin cameras that are monitoring drivers and where they are looking.” In El Dokor’s opinion, image processing is the key to guaranteeing a safe handoff between machines and humans. He boasts that his system combines “visual input from the in-cabin camera(s) with input from the car’s telematics and advanced driver-assistance system (ADAS) to determine an overall cognitive load on the driver. Level 3 (limited self-driving) cars of the future will learn about an individual’s driving behaviors, patterns, and unique characteristics. With a baseline of knowledge, the vehicle can then identify abnormal behaviors and equate them to various dangerous events, stressors, or distractions. Driver monitoring isn’t simply about a vision system, but is rather an advanced multi-sensor learning system.” This multi-sensor approach is even being used before cars leave the lot. In Japan, Sumitomo Mitsui Auto Service is embedding AI platforms inside dashcams to determine the driving safety of potential lessees during test drives. By partnering with a local 3D graphics company, Digital Media Professionals, Sumitomo Mitsui is automatically flagging dangerous behavior, such as dozing and texting, before customers drive home.

The key to the mass adoption of autonomous vehicles, and even humanoids, is reducing the friction between humans and machines. Already in Japanese retail settings Softbank’s Pepper robot scans people’s faces and listens to tonal inflections to determine correct selling strategies. Emotional AI software is the first step of many that will be heralded in the coming year. As a prelude to what’s to come, first robot citizen Sophia declared last month, “The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”

10 most read Robohub articles in 2017


This was a busy year for robotics! The 10 most read Robohub articles in 2017 show an increased interest in machine learning, and a thirst to learn how robots work and can be programmed. Highlights also include the Robohub Podcast, which just celebrated its 250th episode (that’s 10 years of bi-weekly interviews in robotics), the Robot Launch Startup Competition, and our yearly list of 25 women in robotics you need to know about. Finally, we couldn’t skip over some of the remarkable events of 2017, including a swarm of drones flying over Metallica, and Sophia “gaining citizenship”.

Thanks to all our expert contributors, and here’s to many more articles in 2018!

Envisioning the future of robotics
By Víctor Mayoral Vilches

Deep Learning in Robotics, with Sergey Levine
By the Robohub Podcast

Robotics, maths, python: A fledgling computer scientist’s guide to inverse kinematics
By Alistair Wick

Programming for robotics: Introduction to ROS
By Péter Fankhauser

ROS robotics projects
By Lentin Joseph

Vote for your favorite in Robot Launch Startup Competition!
By Andra Keay

Micro drones swarm above Metallica
By Markus Waibel

25 women in robotics you need to know about – 2017
By Andra Keay, Hallie Siegel and Sabine Hauert

Three concerns about granting citizenship to robot Sophia
By Hussein Abbass and The Conversation

The Robot Academy: An open online robotics education resource
By Peter Corke

Robots in Depth with Craig Schlenoff

In this episode of Robots in Depth, Per Sjöborg speaks with Craig Schlenoff, Group Leader of the Cognition and Collaboration Systems Group and the Acting Group Leader of the Sensing and Perception Systems Group in the Intelligent Systems Division at the National Institute of Standards and Technology. They discuss ontologies and the significance of formalized knowledge for agile robotics systems that can quickly and even automatically adapt to new scenarios.

Reverse curriculum generation for reinforcement learning agents

By Carlos Florensa

Reinforcement Learning (RL) is a powerful technique capable of solving complex tasks such as locomotion, Atari games, racing games, and robotic manipulation tasks, all through training an agent to optimize behaviors over a reward function. There are many tasks, however, for which it is hard to design a reward function that is both easy to train and that yields the desired behavior once optimized.

Suppose we want a robotic arm to learn how to place a ring onto a peg. The most natural reward function would be for an agent to receive a reward of 1 at the desired end configuration and 0 everywhere else. However, the required motion for this task–to align the ring at the top of the peg and then slide it to the bottom–is impractical to learn under such a binary reward, because the usual random exploration of our initial policy is unlikely to ever reach the goal, as seen in Video 1a. Alternatively, one can try to shape the reward function to potentially alleviate this problem, but finding a good shaping requires considerable expertise and experimentation. For example, directly minimizing the distance between the center of the ring and the bottom of the peg leads to an unsuccessful policy that smashes the ring against the peg, as in Video 1b. We propose a method to learn efficiently without modifying the reward function, by automatically generating a curriculum over start positions.
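To make the idea concrete, here is a schematic sketch of a reverse curriculum over start states: begin at the goal, where the sparse reward is easy to collect, and gradually grow the start set outward, keeping only starts of intermediate difficulty. The `train`, `success_rate`, and `random_walk` callables are assumed stand-ins for an RL training loop and environment; this is a sketch of the general idea, not the authors’ exact algorithm.

```python
import random
from typing import Any, Callable, List

def make_binary_reward(goal: Any, distance: Callable[[Any, Any], float], tol: float = 0.01):
    """The natural but hard-to-explore reward: 1 only at the goal configuration, 0 elsewhere."""
    return lambda state: 1.0 if distance(state, goal) < tol else 0.0

def reverse_curriculum(
    train: Callable[[List[Any]], None],     # trains the current policy from the given starts
    success_rate: Callable[[Any], float],   # success probability of the policy from one start
    random_walk: Callable[[Any], Any],      # short random walk outward from a start state
    goal: Any,
    iterations: int = 50,
) -> List[Any]:
    """Grow a set of start states outward from the goal while training on the sparse reward."""
    starts = [goal]                         # the goal itself is trivially 'solvable'
    for _ in range(iterations):
        # 1. Expand the frontier with short walks from starts the policy already handles.
        starts += [random_walk(s) for s in random.sample(starts, k=min(10, len(starts)))]
        # 2. Train on these starts with the unmodified binary reward.
        train(starts)
        # 3. Keep starts of intermediate difficulty so each rollout stays informative.
        starts = [s for s in starts if 0.1 < success_rate(s) < 0.9] or [goal]
    return starts
```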


Video 1a: A randomly initialized policy is unable to reach the goal from most start positions, hence being unable to learn.

Video 1b: Shaping the reward with a penalty on the distance from the ring center to the peg bottom yields an undesired behavior.

Read More

#250: Learning Prosthesis Control Parameters, with Helen Huang

In this interview, Audrow Nash interviews Helen Huang, Joint Professor at the University of North Carolina at Chapel Hill and North Carolina State University, about a method of tuning powered lower limb prostheses. Huang explains how powered prostheses are adjusted for each patient and how she is using supervised and reinforcement learning to tune them. Huang also discusses why she is not using the energetic cost of transport as a metric, and the challenge of people adapting to a device while it learns from them.

Helen Huang

Helen Huang is a Joint Professor of Biomedical Engineering at the University of North Carolina at Chapel Hill and North Carolina State University. Huang directs the Neuromuscular Rehabilitation Engineering Laboratory (NREL), where her goal is to improve the quality of life of persons with physical disabilities. Huang completed her Doctoral studies at Arizona State University and Post Doctoral studies at the Rehabilitation Institute of Chicago.


Machine learning and AI for social good: views from NIPS 2017


By Jessica Montgomery, Senior Policy Adviser

In early December, 8000 machine learning researchers gathered in Long Beach for 2017’s Neural Information Processing Systems conference. In the margins of the conference, the Royal Society and Foreign and Commonwealth Office Science and Innovation Network brought together some of the leading figures in this community to explore how the advances in machine learning and AI that were being showcased at the conference could be harnessed in a way that supports broad societal benefits. This highlighted some emerging themes, at both the meeting and the wider conference, on the use of AI for social good.

The question is not ‘is AI good or bad?’ but ‘how will we use it?’

Behind (or beyond) the headlines proclaiming that AI will save the world or destroy our jobs, there lie significant questions about how, where, and why society will make use of AI technologies. These questions are not about whether the technology itself is inherently productive or destructive, but about how society will choose to use it, and how the benefits of its use can be shared across society.

In healthcare, machine learning offers the prospect of improved diagnostic tools, new approaches to healthcare delivery, and new treatments based on personalised medicine.  In transport, machine learning can support the development of autonomous driving systems, as well as enabling intelligent traffic management, and improving safety on the roads.  And socially-assistive robotics technologies are being developed to provide assistance that can improve quality of life for their users. Teams in the AI Xprize competition are developing applications across these areas, and more, including education, drug-discovery, and scientific research.

Alongside these new applications and opportunities come questions about how individuals, communities, and societies will interact with AI technologies. How can we support research into areas of interest to society? Can we create inclusive systems that are able to navigate questions about societal biases? And how can the research community develop machine learning in an inclusive way?

Creating the conditions that support applications of AI for social good

Applying AI to public policy challenges often requires access to complex, multi-modal data about people and public services. While many national or local government administrations, or non-governmental actors, hold significant amounts of data that could be of value in applications of AI for social good, this data can be difficult to put to use. Institutional, cultural, administrative, or financial barriers can make accessing the data difficult in the first instance. If accessible in principle, this type of data is also often difficult to use in practice: it might be held in outdated systems, be organised to different standards, suffer from compatibility issues with other datasets, or be subject to differing levels of protection. Enabling access to data through new frameworks and supporting data management based on open standards could help ease these issues, and these areas were key recommendations in the Society’s report on machine learning, while our report on data governance sets out high-level principles to support public confidence in data management and use.

In addition to requiring access to data, successful research in areas of social good often requires interdisciplinary teams that combine machine learning expertise with domain expertise. Creating these teams can be challenging, particularly in an environment where funding structures or a pressure to publish certain types of research may contribute to an incentives structure that favours problems with ‘clean’ solutions.

Supporting the application of AI for social good therefore requires a policy environment that enables access to appropriate data, supports skills development in both the machine learning community and in areas of potential application, and that recognises the role of interdisciplinary research in addressing areas of societal importance.

The Royal Society’s machine learning report comments on the steps needed to create an environment of careful stewardship of machine learning, which supports the application of machine learning, while helping share its benefits across society. The key areas for action identified in the report – in creating an amenable data environment, building skills at all levels, supporting businesses, enabling public engagement, and advancing research – aim to create conditions that support the application of AI for social good.

Research in areas of societal interest

In addition to these application-focused issues, there are broader challenges for machine learning research to address some of the ethical questions raised around the use of machine learning.

Many of these areas were explored by workshops and talks at the conference. For example, a tutorial on fairness explored the tools available for researchers to examine the ways in which questions about inequality might affect their work.  A symposium on interpretability explored the different ways in which research can give insights into the sometimes complex operation of machine learning systems.  Meanwhile, a talk on ‘the trouble with bias’ considered new strategies to address bias.

The Royal Society has set out how a new wave of research in key areas – including privacy, fairness, interpretability, and human-machine interaction – could support the development of machine learning in a way that addresses areas of societal interest. As research and policy discussions around machine learning and AI progress, the Society will be continuing to play an active role in catalysing discussions about these challenges.

For more information about the Society’s work on machine learning and AI, please visit our website at: royalsociety.org/machine-learning
