Archive 06.03.2019


#281: On Demand Drone Deliveries, with Yariv Bash



In this episode Abate interviews Yariv Bash from Flytrex. Yariv discusses how Flytrex works in cooperation with local businesses to use drones to rapidly transport goods within a city. A practical application is the delivery of food from local restaurants. Yariv also discusses Flytrex’s plans for its US$7.5 million Series B round of funding.

Yariv Bash

Yariv Bash is the CEO of Flytrex, a drone technology company providing comprehensive, autonomous drone delivery systems that enable any business, from SMBs to e-commerce giants, to integrate instant, autonomous on-demand drone delivery into their offerings. Flytrex operates an end-to-end drone logistics service. Prior to Flytrex, Yariv was the founder and CEO of SpaceIL, the Israeli team in the Google Lunar XPRIZE competition, whose mission was to land an unmanned spacecraft on the moon for a fraction of the traditional cost. Under Yariv’s management, the non-profit SpaceIL raised more than $30 million.


A smart soft orthosis for a stronger back

When workers in Germany call in sick, back pain is often to blame. It frequently affects employees in logistics, manufacturing and services, where physically strenuous movements are part of the daily routine. To help prevent back problems, Fraunhofer researchers have developed ErgoJack, a smart soft orthosis that supports workers with real-time motion detection. A prototype of the smart vest will be presented live at Hannover Messe from April 1 through 5, 2019, at Booth C24 in Hall 17.

A prosthetic that restores the sense of where your hand is

Researchers have developed a next-generation bionic hand that allows amputees to regain their proprioception. The results of the study, which have been published in Science Robotics, are the culmination of ten years of robotics research.

The next-generation bionic hand, developed by researchers from EPFL, the Sant’Anna School of Advanced Studies in Pisa and the A. Gemelli University Polyclinic in Rome, enables amputees to regain a very subtle, close-to-natural sense of touch. The scientists managed to reproduce the feeling of proprioception, which is our brain’s capacity to instantly and accurately sense the position of our limbs during and after movement – even in the dark or with our eyes closed.

The new device allows patients to reach out for an object on a table and to ascertain an item’s consistency, shape, position and size without having to look at it. The prosthesis has been successfully tested on several patients and works by stimulating the nerves in the amputee’s stump. The nerves can then provide sensory feedback to the patients in real time – almost like they do in a natural hand.

The findings have been published in the journal Science Robotics. They are the result of ten years of scientific research coordinated by NCCR Professor Silvestro Micera, who teaches bioengineering at EPFL and the Sant’Anna School of Advanced Studies, and Paolo Maria Rossini, director of neuroscience at the A. Gemelli University Polyclinic in Rome. NCCR Robotics supported the research, together with the European Commission and the Bertarelli Foundation.

Sensory feedback

Current myoelectric prostheses allow amputees to regain voluntary motor control of their artificial limb by exploiting residual muscle function in the forearm. However, the lack of any sensory feedback means that patients have to rely heavily on visual cues. This can prevent them from feeling that their artificial limb is part of their body and make it more unnatural to use.
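To make the control pathway concrete, here is a minimal, illustrative sketch in Python of how a surface-EMG signal might be turned into a proportional control command: remove the offset, rectify, low-pass filter to get an envelope, then normalise. The sampling rate, filter order and cutoff below are assumptions for illustration, not parameters from any particular prosthesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_to_command(emg, fs=1000.0, cutoff=5.0):
    """Turn a raw surface-EMG trace into a 0..1 proportional control
    command: remove the DC offset, rectify, low-pass filter to get an
    envelope, then normalise. All parameters here are illustrative."""
    rectified = np.abs(emg - np.mean(emg))             # rectified muscle activity
    b, a = butter(2, cutoff / (fs / 2), btype="low")   # 2nd-order low-pass filter
    envelope = filtfilt(b, a, rectified)               # smooth envelope
    peak = envelope.max()
    return np.clip(envelope / peak, 0.0, 1.0) if peak > 0 else envelope

# Example: simulated EMG with a burst of activity between 0.5 s and 1.5 s
t = np.arange(0, 2.0, 1 / 1000.0)
emg = np.random.randn(t.size) * (0.1 + 0.9 * ((t > 0.5) & (t < 1.5)))
command = emg_to_command(emg)
print(round(float(command.max()), 2))   # close to 1.0 during the burst
```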

Recently, a number of research groups have managed to provide tactile feedback to amputees, leading to improved function and prosthesis embodiment. But this latest study has taken things one step further.

“Our study shows that sensory substitution based on intraneural stimulation can deliver both position feedback and tactile feedback simultaneously and in real time,” explains Micera. “The brain has no problem combining this information, and patients can process both types in real time with excellent results.”

Intraneural stimulation re-establishes the flow of external information using electric pulses sent by electrodes inserted directly into the amputee’s stump. Patients then have to undergo training to gradually learn how to translate those pulses into proprioceptive and tactile sensations.
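The article does not specify how sensor readings were encoded into stimulation in this study, but a hedged sketch helps illustrate the idea: a normalised reading from the prosthesis (say, fingertip force or joint angle) is mapped onto a pulse frequency within a bounded range. The linear mapping and the 10–100 Hz bounds below are assumptions for illustration only, not the study’s actual encoding.

```python
def sensor_to_pulse_rate(reading, min_hz=10.0, max_hz=100.0):
    """Map a normalised sensor reading in [0, 1] (e.g. fingertip force
    or joint angle from the prosthesis) onto a stimulation pulse
    frequency. The linear mapping and the 10-100 Hz range are
    illustrative assumptions."""
    reading = max(0.0, min(1.0, reading))      # clamp to the valid range
    return min_hz + reading * (max_hz - min_hz)

# Example: light touch vs. firm grasp
print(sensor_to_pulse_rate(0.1))   # ~19 Hz
print(sensor_to_pulse_rate(0.9))   # ~91 Hz
```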

This technique enabled two amputees to regain high proprioceptive acuity, with results comparable to those obtained in healthy subjects. The simultaneous delivery of position information and tactile feedback allowed the two amputees to determine the size and shape of four objects with a high level of accuracy (75.5%).

“These results show that amputees can effectively process tactile and position information received simultaneously via intraneural stimulation,” says Edoardo D’Anna, EPFL researcher and lead author of the study.

The technologies pioneered by this study will be further explored during the third and final phase of NCCR Robotics, which will run until 2022 with Micera leading the Wearable Robotics research programme. “During the next phase, we plan to expand the use of implanted systems for prosthetics and rehabilitation, and the implants we used in this experiment will be tested in combination with different wearable devices and for different applications,” explains Micera.

Literature
E. D’Anna, G. Valle, A. Mazzoni, I. Strauss, F. Iberite, J. Patton, F. Petrini, S. Raspopovic, G. Granata, R. Di Iorio, M. Controzzi, C. Cipriani, T. Stieglitz, P. M. Rossini, and S. Micera, “A closed-loop hand prosthesis with simultaneous intraneural tactile and position feedback”, Science Robotics.

AI for society: creating AI that supports equality, transparency, and democracy

By Jessica Montgomery, Senior Policy Adviser

The Royal Society’s artificial intelligence (AI) programme explores the frontiers of AI technologies, and their implications for individuals, communities, and society.

As part of our programme of international science and policy dialogue about AI, last year we worked with the American Academy of Arts and Sciences to bring together leading researchers from across disciplines to consider the implications of AI for equality, transparency, and democracy.

This blog post gives some of the key points from the discussions, which are summarised in more detail in the workshop discussions note (PDF).

AI for society

Today’s AI technologies can help create highly accurate systems, which are able to automate sophisticated tasks. As these technologies progress, researchers and policymakers are grappling with questions about how well these technologies serve society’s needs.

Experience of previous waves of transformative technological change shows that, even when society has a good understanding of the issues new technologies present, it is challenging to create a common vision for the future, to set in place measures that align the present with that desired future, and to engage collective action in ways that help bring it into being.

Where might this common vision for the future come from? International conventions on human rights and sustainable development already point to areas of internationally-agreed need for action, and a growing list of organisations have produced guidelines or principles aiming to shape the development of AI for societal benefit. However, the mechanisms for putting these principles into practice remain unclear. At the same time, there are many other ways in which the things that societies value seem to be contested. In this context, what might AI mean for core values like fairness, transparency, or democracy?

Fairness and equality

Real-world data is messy: it contains missing entries, it can be skewed or subject to sampling errors, and it is often re-purposed for different types of analysis. These sampling errors or other issues in data collection can skew the outputs of a machine learning system, influencing how well it works for different communities of users.

The models created by a machine learning system can also generate unfair outputs, even if trained on accurate data. In recruitment, for example, systems that make predictions about the outcomes of job offers or training can be influenced by biases arising from social structures that are embedded in data at the point of collection. A lack of diversity in the tech community can compound these technical issues, if it reduces the extent to which developers are aware of potential biases when designing machine learning systems.
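A small, self-contained sketch can make this mechanism concrete. The example below trains a simple classifier on synthetic data in which one group is heavily under-sampled, then reports accuracy per group; the data, group labels and model choice are invented for illustration and do not reflect any real recruitment system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-feature synthetic data; the true decision boundary differs
    between groups via `shift` (illustrative only)."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training sample; group B is heavily under-sampled.
Xa, ya = make_group(2000, shift=0.2)
Xb, yb = make_group(50, shift=-1.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets for each group.
for name, shift in [("group A", 0.2), ("group B", -1.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
# Accuracy is typically much lower for the under-represented group B.
```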

In seeking to resolve these issues, both technology-enabled and human-led solutions can play a role. For example:

  • Initiatives to address issues of bias in datasets, for example Datasheets for Datasets (PDF), set out recommended usage for datasets that are made available for open use.
  • A combination of technical and domain insights can improve the performance of machine learning systems on ‘real world’ problems. This requires teams of people from a range of backgrounds and areas of expertise.

Interpretability and transparency

The terms ‘interpretability’ and ‘transparency’ mean different things to different people (PDF), and words such as interpretable, explainable, intelligible, transparent or understandable are often used interchangeably, or inconsistently, in debates about AI. But is this variability in meaning problematic?

There are many reasons why users or developers might want to understand why an AI system reached a decision: interpretability can help developers improve system design; it can help users assess risk or understand how a system might fail; and it might be necessary for regulatory or legal standards.

And there are different approaches to creating interpretable systems. Some AI is interpretable by design, while in other cases researchers can create tools to interrogate complex AI systems.
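To make that distinction concrete, here is an illustrative sketch contrasting a model that is interpretable by design (a shallow decision tree whose rules can be read directly) with post-hoc interrogation of a more opaque model (permutation importance on a random forest). The dataset and feature names are placeholders, not drawn from any system discussed here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["f0", "f1", "f2", "f3"]   # placeholder feature names

# Interpretable by design: a shallow tree whose rules are readable as-is.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))

# Post-hoc interrogation: permutation importance on a more opaque model.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")   # how much shuffling each feature hurts accuracy
```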

The significance attached to interpretability, and the type of interpretability that is desirable, will likely depend on the context and user. So the absence of a clear definition of interpretability might ultimately be helpful, if it encourages researchers to reach out to a range of stakeholders to understand what different communities need from AI systems.

Democracy and civil society

Early in the development of digital technologies, a great hope had been that they would enable people to connect and build communities in new ways, strengthening society and promoting new forms of citizen engagement. To some extent, this goal has been achieved: people have an opportunity to communicate with much broader – or much narrower – groups in ways that were not previously possible.

Many countries today are grappling with the unintended consequences of these networks. The information echo chambers that have long existed in the physical world have found new manifestations in algorithmically-enabled filter bubbles, and the anonymity afforded by digital interactions has raised new questions about the trustworthiness of online information.

Public and policy debates about AI and democracy have tended to concentrate on how changing patterns of news consumption might shape people’s political opinions. In response, could AI be used to improve the circulation of information, providing people with trustworthy information necessary to inform political debate? Perhaps, but insights from behavioural and social sciences show that the process of making political choices is influenced by emotional and social forces, as well as information.

Democracy is more than the exchange of information in campaigns and elections. It draws from a collection of institutions and civic interactions. Democracy persists because institutions preserve it: in the press and the electoral process, but also in courts, in schools, in hospitals, and more. If democracy resides in institutions, then how can AI support them? There is a need for spaces where people can develop civic networks or new civic institutions that allow people from different backgrounds to engage as citizens in common endeavours, offline as well as online.

To read more about the key questions raised by this discussion, check out the meeting note (PDF), and you can also read more about the Society’s AI programme.

 

TÜV Rheinland Robot Integrator Program

The industry’s first comprehensive Robot Integrator Program saves robot integrators significant time and cost by allowing them to mark each cell compliant with ANSI/RIA R15.06 with the TÜV Rheinland Mark. As opposed to traditional certification or on-site field labeling, TÜV Rheinland’s Robot Integrator Program certifies the knowledge and skill-set of robot integrators in addition to testing robotic cells and processes against ANSI/RIA R15.06. This reduces the need for frequent on-site or off-site testing and allows manufacturers to apply a single TÜV Rheinland label to multiple cells. The Robot Integrator Program individually assesses a robot integrator’s understanding of the ANSI/RIA R15.06 standard along with the ability to consistently produce compliant robot cells. Following the requirements and procedures of the new program enables robot integrators to produce individually compliant robotic cells under one serialized TÜV Rheinland Mark, which meets the National Electrical Code and allows acceptance by Authorities Having Jurisdiction (AHJ) and end users.

How to break down work into tasks that can be automated

Virtually every organization is wrestling with and experimenting with automation. But most are missing the benefits that come from deep and systemic change. One of the largest failings, in our estimation, is that organizations aren't spending the time necessary to deeply understand the work they're considering automating. They aren't deconstructing jobs so that the specific tasks that can be automated can be identified. And without that deconstruction, companies risk significant collateral damage and diminished ROI as they attempt to automate entire jobs.
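As a hedged illustration of what deconstructing a job might look like in practice, the sketch below represents a job as a list of tasks scored against a few commonly cited automation criteria (repetitiveness, rule-basedness, frequency). The criteria, weights and example tasks are invented for illustration and are not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    repetitive: float   # 0..1, how repetitive the task is
    rule_based: float   # 0..1, how fully the task follows explicit rules
    frequency: float    # 0..1, how often the task occurs

def automation_score(task, weights=(0.4, 0.4, 0.2)):
    """Weighted suitability score in 0..1; weights are illustrative assumptions."""
    w_rep, w_rule, w_freq = weights
    return w_rep * task.repetitive + w_rule * task.rule_based + w_freq * task.frequency

# A hypothetical job deconstructed into tasks (examples are invented).
job = [
    Task("enter invoice data", repetitive=0.9, rule_based=0.9, frequency=0.8),
    Task("negotiate with suppliers", repetitive=0.2, rule_based=0.1, frequency=0.3),
    Task("schedule deliveries", repetitive=0.7, rule_based=0.6, frequency=0.6),
]

# Rank tasks by how promising they look for automation.
for task in sorted(job, key=automation_score, reverse=True):
    print(f"{task.name}: {automation_score(task):.2f}")
```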