Lobe.ai Review
Lobe.ai just launched its open beta, and the short story is that you should go try it out. I was lucky enough to test it during the closed beta, so I figured I should write a short review.
Making AI more understandable and accessible is something I spend a lot of time on, and Lobe is without a doubt right up my alley. The tagline is “machine learning made simple”, and that is exactly what they deliver.
Overall it’s a great tool, and I see it as a real advance in AI technology: it makes AI and deep learning models even more accessible than the AutoML wave already has.
So what is Lobe.ai exactly?
Lobe.ai is an AutoML tool, which means you can build AI models without coding. In Lobe’s case it only handles image classification: you give Lobe a set of labelled images, and Lobe automatically finds the best model to classify them.
Lobe has also been acquired by Microsoft, which I think is a pretty smart move. The big clouds can be difficult to get started with, and Microsoft’s current AutoML offering in particular only handles tabular data and still requires a good degree of technical skill to get started.
It’s free. I don’t really understand the business model yet, but so far the software is free. That is pretty cool, but I’m still curious how they plan to generate revenue to keep up the good work.
Features
Image classification
So far Lobe has only one main feature, and that’s training an image classification network. It does that pretty well: in all the tests I have done, I have gotten decent results with very little training data.
Speed
The speed is insane. Models are trained in what feels like about a minute, which is a really cool feature. You can also decide to train for longer to get better accuracy.
Export
You can export the model to Core ML, TensorFlow, or TensorFlow Lite, and they also provide a local API.
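To give a sense of how an export fits into a pipeline, here is a minimal inference sketch against a TensorFlow Lite export. The model path, label list, image file and input scaling are illustrative assumptions, not Lobe’s documented interface; check the files and signature Lobe actually writes for your project.

```python
# Minimal sketch: running a TensorFlow Lite export from Python.
# The paths, labels and preprocessing below are assumptions for illustration;
# adjust them to match your exported files.
import numpy as np
from PIL import Image
import tensorflow as tf

MODEL_PATH = "lobe_export/saved_model.tflite"   # hypothetical export path
LABELS = ["cat", "dog"]                         # hypothetical label order

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the image to the model's expected input size and scale pixels to [0, 1].
_, height, width, _ = input_details[0]["shape"]
image = Image.open("test.jpg").convert("RGB").resize((width, height))
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

# Run inference and report the highest-scoring label.
interpreter.set_tensor(input_details[0]["index"], batch)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print(LABELS[int(np.argmax(scores))], float(np.max(scores)))
```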
Use Cases
I’m planning to use Lobe for both hobby and commercial projects. For commercial use, I’m going to use it for three main purposes:
Producing models
Since the quality is good and the export options are production-ready, I see no reason not to use Lobe in production when helping clients with AI projects. You might think it’s better to hand-build models for commercial use, but my honest experience is that many simple problems should be solved with the simplest solution first. You might stay with a model built in Lobe for at least the first few iterations of a project, and sometimes forever.
Teaching
As I teach people about applied AI, Lobe is going to be one of my go-to tools from now on. It makes AI development tangible and accessible, and you can play around with it to get a feel for AI without having to code. When project and product managers get to try developing models themselves, I expect a lot more understanding of edge cases and unforeseen problems.
Selling
When trying to convince a potential customer to invest in AI development, you easily run into decision makers who have no understanding of AI. By showing Lobe and doing live tests, I hope to level the discussion, since there’s a better chance we are now talking about the same thing.
Compared with other AutoML solutions
The bad:
Less insights
In short, you don’t get any model analysis like you would with, for example, Kortical.
Less options
As mentioned, Lobe only offers image classification. Compared to Google AutoML, which also covers object recognition, text, tabular data, video and so on, it is still limited in use cases.
The good:
It’s Easy
This is the core selling point for Lobe, and it nails it. Lobe is so easy to use that it could easily be used to teach third graders.
It’s Fast
Model building is so fast that you barely have time to get a glass of water while it trains.
The average:
Quality
When I compared a model I built in Google AutoML to one I built in Lobe, Google’s seemed a bit better, but not by far. That said, the Google model took me 3 hours to train versus minutes with Lobe.
Future possibilities
As I see it, Lobe.ai can go in two different directions in the future. They can either cover a bigger part of the pipeline and let you build small apps on top of the models, or they can add more types of models, such as tabular models or text classification. Both directions could be pretty interesting, and whichever they choose, I’m looking forward to testing it out.
Conclusion
In conclusion, Lobe.ai is a great step forward for accessible AI. Already in its beta it’s very impressive, and it will surely be the first in a new niche of AI.
It doesn’t get easier than this, and with the export functionality it’s actually a good candidate for many commercial products.
Make sure you test it out, even if it’s just for fun.
Lily the barn owl reveals how birds fly in gusty winds
Scientists from the University of Bristol and the Royal Veterinary College have discovered how birds are able to fly in gusty conditions – findings that could inform the development of bio-inspired small-scale aircraft.
“Birds routinely fly in high winds close to buildings and terrain – often in gusts as fast as their flight speed. So the ability to cope with strong and sudden changes in wind is essential for their survival and to be able to do things like land safely and capture prey,” said Dr Shane Windsor from the Department of Aerospace Engineering at the University of Bristol.
“We know birds cope amazingly well in conditions which challenge engineered air vehicles of a similar size but, until now, we didn’t understand the mechanics behind it,” said Dr Windsor.
The study, published in Proceedings of the Royal Society B, reveals how bird wings act as a suspension system to cope with changing wind conditions. The team, which included Bristol PhD student Nicholas Durston and researchers Jialei Song and James Usherwood from Dongguan University of Technology in China and the RVC respectively, used an innovative combination of high-speed, video-based 3D surface reconstruction, computed tomography (CT) scans, and computational fluid dynamics (CFD) to understand how birds ‘reject’ gusts through wing morphing, i.e. by changing the shape and posture of their wings.
In the experiment, conducted in the Structure and Motion Laboratory at the Royal Veterinary College, the team filmed Lily, a barn owl, gliding through a range of fan-generated vertical gusts, the strongest of which was as fast as her flight speed. Lily is a trained falconry bird who is a veteran of many nature documentaries, so wasn’t fazed in the least by all the lights and cameras. “We began with very gentle gusts in case Lily had any difficulties, but soon found that – even at the highest gust speeds we could make – Lily was unperturbed; she flew straight through to get the food reward being held by her trainer, Lloyd Buck,” commented Professor Richard Bomphrey of the Royal Veterinary College.
“Lily flew through the bumpy gusts and consistently kept her head and torso amazingly stable over the trajectory, as if she was flying with a suspension system. When we analysed it, what surprised us was that the suspension-system effect wasn’t just due to aerodynamics, but benefited from the mass in her wings. For reference, each of our upper limbs is about 5% of our body weight; for a bird it’s about double, and they use that mass to effectively absorb the gust,” said joint lead-author Dr Jorn Cheney from the Royal Veterinary College.
“Perhaps most exciting is the discovery that the very fastest part of the suspension effect is built into the mechanics of the wings, so birds don’t actively need to do anything for it to work. The mechanics are very elegant. When you strike a ball at the sweetspot of a bat or racquet, your hand is not jarred because the force there cancels out. Anyone who plays a bat-and-ball sport knows how effortless this feels. A wing has a sweetspot, just like a bat. Our analysis suggests that the force of the gust acts near this sweetspot and this markedly reduces the disturbance to the body during the first fraction of a second. The process is automatic and buys just enough time for other clever stabilising processes to kick in,” added joint lead-author, Dr Jonathan Stevenson from the University of Bristol.
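For readers curious about the mechanics behind the bat-and-ball analogy, here is a textbook centre-of-percussion calculation; it is my own illustration of the “sweetspot” idea, not the model used in the paper.

```latex
% Illustrative centre-of-percussion ("sweetspot") calculation: a standard
% rigid-body result, not the paper's model of the owl's wing.
% A rigid bat or wing of mass m is held at a pivot; its centre of mass lies a
% distance c from the pivot, with moment of inertia I_cm about the centre of
% mass and I_P about the pivot. An impulse J applied a distance b from the
% pivot changes the centre-of-mass velocity by J/m and the angular velocity by
% J(b - c)/I_cm, so the velocity induced at the pivot is
\[
  \Delta v_{\text{pivot}} \;=\; \frac{J}{m} \;-\; c\,\frac{J\,(b - c)}{I_{\text{cm}}},
\]
% which vanishes (no jar at the hand, or at the bird's body) when the impulse
% lands at the centre of percussion
\[
  b \;=\; c + \frac{I_{\text{cm}}}{m\,c} \;=\; \frac{I_P}{m\,c},
\]
% using the parallel-axis theorem I_P = I_cm + m c^2.
```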
Dr Windsor said the next step for the research, which was funded by the European Research Council (ERC), Air Force Office of Scientific Research and the Wellcome Trust, is to develop bio-inspired suspension systems for small-scale aircraft.
International conference on intelligent robots and systems (IROS)
This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available from the platform on demand, with access available from 25 October to 25 November 2020.
IROS conferences have traditionally had a theme and this year is no different with the emphasis being on “consumer robotics and our future”. You can sign up here.
Plenaries
IROS will feature three plenary talks. The speakers and topics are as follows:
- Danica Kragic – Robotics and artificial intelligence impacts on the fashion industry
- Cynthia Breazeal – Living with social robots: from research to commercialization and back
- Yukie Nagai – Cognitive development in humans and robots: new insights into intelligence
Keynote speakers
There are also nine keynote talks covering a number of topics.
- Frank Dellaert – Air, sea, and space robots
- Anya Petrovskaya – Driverless vehicles and field robots
- Ashish Deshpande – Rehabilitation robotics
- Jonathan Hurst – Humanoids
- I-Ming Chen – Food handling robotics
- Steve LaValle – Perception, action and control
- Nikolaus Correll – Grasping, haptics and end-effectors
- Andrea Thomaz – Human-robot interaction
- Sarah Bergbreiter – Design, micro and bio-inspired robotics
Technical talks
The technical talks have been divided into 12 topic areas.
- Air, sea, and space robots
- Driverless vehicles and field robots
- Medical, cellular, micro and nano robots
- Humanoids, exoskeletons, and rehab robots
- Localization, mapping and navigation
- Dynamics, control and learning
- Design, mechanisms, actuators, soft and bio-inspired robots
- Perception, action, and cognition
- Grasping, haptics and end-effectors
- Human-robot interaction, teleoperation, and virtual reality
- Swarms and multi-robots
- Industry 4.0
Each talk will feature a digest slide, a pre-recorded video presentation, and the paper’s PDF. These will be available from 25 October, so keep an eye on the website.
Workshops
There are a whopping 35 workshops to choose from. These have on-demand content and also live sessions (dates vary so visit the webpages below for more information about each specific workshop).
- 3rd workshop on proximity perception in robotics: towards multi-modal cognition, Stefan Escaida Navarro*, Stephan Mühlbacher-Karrer, Hubert Zangl, Keisuke Koyama, Björn Hein, Ulrike Thomas, Hosam Alagi, Yitao Ding, Christian Stetco
- Bringing geometric methods to robot learning, optimization and control, Noémie Jaquier*, Leonel Rozo, Søren Hauberg, Hans-Peter Schröcker, Suvrit Sra
- 12th IROS20 workshop on planning, perception and navigation for intelligent vehicles, Philippe Martinet*, Christian Laugier, Marcelo H Ang Jr, Denis Fernando Wolf
- Robot-assisted training for primary care: how can robots help train doctors in medical examinations?, Thrishantha Nanayakkara*, Florence Ching Ying Leong, Thilina Dulantha Lalitharatne, Liang He, Fumiya Iida, Luca Scimeca, Simon Hauser, Josie Hughes, Perla Maiolino
- Workshop on animal-robot interaction, Cesare Stefanini and Donato Romano*
- Ergonomic human-robot collaboration: opportunities and challenges, Wansoo Kim*, Luka Peternel, Arash Ajoudani, Eiichi Yoshida
- New advances in soft robots control, Concepción A. Monje*, Egidio Falotico, Santiago Martínez de la Casa
- Autonomous system in medicine: current challenges in design, modeling, perception, control and applications, Hang Su, Yue Chen*, Jing GUO, Angela Faragasso, Haoyong Yu, Elena De Momi
- MIT MiniCheetah workshop, Sangbae Kim*, Patrick M. Wensing, Inhyeok Kim
- Workshop on humanitarian robotics, Garrett Clayton*, Raj Madhavan, Lino Marques
- Robotics-inspired biology, Nick Gravish*, Kaushik Jayaram, Chen Li, Glenna Clifton, Floris van Breugel
- Robots building robots. Digital manufacturing and human-centered automation for building consumer robots, Paolo Dario*, George Q. Huang, Peter Luh, MengChu Zhou
- Cognitive robotic surgery, Michael C. Yip, Florian Richter*, Danail Stoyanov, Francisco Vasconcelos, Fanny Ficuciello, Emmanuel B Vander Poorten, Peter Kazanzides, Blake Hannaford, Gregory Scott Fischer
- Application-driven soft robotic systems: Translational challenges, Sara Adela Abad Guaman, Lukas Lindenroth, Perla Maiolino, Agostino Stilli*, Kaspar Althoefer, Hongbin Liu, Arianna Menciassi, Thrishantha Nanayakkara, Jamie Paik, Helge Arne Wurdemann
- Reliable deployment of machine learning for long-term autonomy, Feras Dayoub*, Tomáš Krajník, Niko Sünderhauf, Ayoung Kim
- Robotic in-situ servicing, assembly, and manufacturing, Craig Carignan*, Joshua Vander Hook, Chakravarthini Saaj, Renaud Detry, Giacomo Marani
- Benchmarking progress in autonomous driving, Liam Paull*, Andrea Censi, Jacopo Tani, Matthew Walter, Felipe Codevilla, Sahika Genc, Sunil Mallya, Bhairav Mehta
- ROMADO: RObotic MAnipulation of Deformable Objects, Miguel Aranda*, Juan Antonio Corrales Ramon, Pablo Gil, Gonzalo Lopez-Nicolas, Helder Araujo, Youcef Mezouar
- Perception, learning, and control for autonomous agile vehicles, Giuseppe Loianno*, Davide Scaramuzza, Sertac Karaman
- Planetary exploration robots: challenges and opportunities, Hendrik Kolvenbach*, William Reid, Kazuya Yoshida, Richard Volpe
- Application-oriented modelling and control of soft robots, Thomas George Thuruthel*, Cosimo Della Santina, Seyedmohammadhadi Sadati, Federico Renda, Cecilia Laschi
- State of the art in robotic leg prostheses: where we are and where we want to be, Tommaso Lenzi*, Robert D. Gregg, Elliott Rouse, Joost Geeroms
- Workshop on perception, planning and mobility in forestry robotics (WPPMFR 2020), João Filipe Ferreira* and David Portugal
- Why robots fail to grasp? – failure ca(u)ses in robot grasping and manipulation, Joao Bimbo*, Dimitrios Kanoulas, Giulia Vezzani, Kensuke Harada
- Trends and advances in machine learning and automated reasoning for intelligent robots and systems, Abdelghani Chibani, Craig Schlenoff, Yacine Amirat*, Shiqi Zhang, Jong-Hwan Kim, Ferhat Attal
- Learning impedance modulation for physical interaction: insights from humans and advances in robotics, Giuseppe Averta*, Franco Angelini, Meghan Huber, Jongwoo Lee, Manolo Garabini
- New horizons of robot learning – from industrial challenges to future capabilities, Kim Daniel Listmann* and Elmar Rueckert
- Robots for health and elderly care (RoboHEC), Leon Bodenhagen*, Oskar Palinko, Francois Michaud, Adriana Tapus, Julie Robillard
- Wearable SuperLimbs: design, communication, and control, Harry Asada*
- Human movement understanding for intelligent robots and systems, Emel Demircan*, Taizo Yoshikawa, Philippe Fraisse, Tadej Petric
- Construction and architecture robotics, Darwin Lau*, Yunhui Liu, Tobias Bruckmann, Thomas Bock, Stéphane Caro
- Mechanisms and design: from inception to realization, Hao Su*, Matei Ciocarlie, Kyu-Jin Cho, Darwin Lau, Claudio Semini, Damiano Zanotto
- Bringing constraint-based robot programming to real-world applications, Wilm Decré*, Herman Bruyninckx, Gianni Borghesan, Erwin Aertbelien, Lars Tingelstad, Darwin G. Caldwell, Enrico Mingo, Abderrahmane Kheddar, Pierre Gergondet
- Managing deformation: a step towards higher robot autonomy, Jihong Zhu*, Andrea Cherubini, Claire Dune, David Navarro-Alarcon
- Social AI for human-robot interaction of human-care service robots, Ho Seok Ahn*, Hyungpil Moon, Minsu Jang, Jongsuk Choi
Robot challenges
Another element of the conference that sounds interesting is the robot challenges. There are three of these and you should be able to watch the competitions in action next week.
- Open cloud robot table organization challenge (OCRTOC). This competition focusses on table organisation tasks. Participants will need to organize the objects in the scene according to a target configuration. This competition will be broadcast on 25-27 October.
- 8th F1Tenth autonomous Grand Prix @ IROS 2020. This competition will take the form of a virtual race with standardised vehicles and hardware. The qualifying phase is a timed trial. The Grand Prix phase pits virtual competitors against each other on the same track. The race will be broadcast on 27 October.
- Robotic grasping and manipulation competition. There are two sections to this competition. In the first the robot has to make five cups of iced Matcha green tea. The second involves disassembly and assembly using a NIST Task Board.
Six high-tech companies to join the Innovation Hub at Les Roches Crans-Montana hospitality school
Electronic design tool morphs interactive objects
By Rachel Gordon
We’ve come a long way since the first 3D-printed item, an eye wash cup, to now being able to rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids.
While many of these objects can be freely designed and quickly made, embedding electronics such as sensors, chips, and tags usually requires designing the form and the electronics separately, making it difficult to create items where the added functions are easily integrated with the form.
Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate an object’s shape and electronic function in one cohesive space, to add existing sensors to early-stage prototypes.
The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.
MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item.
To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.
“MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.”
MorphSensor in action
Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware.
For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, users would first design or import the 3D face mask model and sensor modules from either MorphSensor’s database or online open-sourced files. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.
Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit.
Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.
To test their system, the team iterated on EarPods for sleep tracking, which only took 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice, by integrating a temperature sensor with the ring geometry. In addition, they manipulated an N95 mask to monitor its substrate contamination, enabling it to alert its user when the mask needs to be replaced.
In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further, where future versions could potentially merge electrical logic of multiple sensor modules together to eliminate redundant components and circuits and save space (or preserve the object form).
Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium.
This material is based upon work supported by the National Science Foundation.
Boston Dynamics to give Spot a robot arm and charging station
Distributed Deep Learning training: Model and Data Parallelism in Tensorflow
Women in Robotics panel celebrating Ada Lovelace Day
We’d like to share the video from our 2020 Ada Lovelace Day celebration of Women in Robotics. The speakers were all on this year’s list, last year’s list, or nominated for next year’s list, and they present a range of cutting-edge robotics research and commercial products. They are also all representatives of the new organization Black in Robotics, which makes this video doubly powerful. Please enjoy the impactful work of:
Dr Ayanna Howard – Chair of Interactive Computing, Georgia Tech
Dr Carlotta Berry – Professor Electrical and Computer Engineering at Rose-Hulman Institute of Technology
Angelique Taylor – PhD student in Health Robotics at UCSD and Research Intern at Facebook
Dr Ariel Anders – roboticist and first technical hire at Robust.AI
Moderated by Jasmine Lawrence – Product Manager at X the Moonshot Factory
Follow them on Twitter at @robotsmarts, @DRCABerry, @Lique_Taylor, @Ariel_Anders and @EDENsJasmine.
Some of the takeaways from the talk were collected by Jasmine Lawrence at the end of the discussion and include the encouragement that you’re never too old to start working in robotics. While some of the panelists knew from an early age that robotics was their passion, for others it was a discovery later in life, particularly as robotics has a fairly small academic footprint compared to its impact in the world.
We also learned that Dr Ayanna Howard has a book available, “Sex, Race and Robots: How to Be Human in the Age of AI”.
Another insight from the panel was that as the only woman in the room, and often the only person of color too, the pressure was on to be mindful of the impact on communities of new technologies, and to represent a diversity of viewpoints. This knowledge has contributed to these amazing women focusing on robotics projects with significant social impact.
And finally, contrary to popular opinion, girls and women can be just as competitive as their male counterparts and really enjoy robotics competitions, as long as they are treated with respect. That means letting them build and program, not just manage social media.
You can sign up for the Women in Robotics online community here, or the newsletter here. And please enjoy the stories of 2020’s “30 women in robotics you need to know about”, as well as the previous years’ lists!