[UPDATE] A list of resources, articles, and opinion pieces relating to large language models & robotics
Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / Licenced by CC-BY 4.0.
We’ve collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series. (The previous versions are here: v1 | v2.)
What LLMs are and how they work
- What are Generative AI models?, Kate Soule, video from IBM Technology.
- Introduction to Large Language Models, John Ewald, video from Google Cloud Tech.
- What is GPT-4 and how does it differ from ChatGPT?, Alex Hern, The Guardian.
- What Is ChatGPT Doing … and Why Does It Work?, Stephen Wolfram.
- Understanding Large Language Models — A Transformative Reading List, Sebastian Raschka.
- How ChatGPT is Trained, video by Ari Seff.
- ChatGPT – what is it? How does it work? Should we be excited? Or scared?, Deep Dhillon, The Radical AI podcast.
- Everything you need to know about ChatGPT, Joanna Dungate, Turing Institute Blog.
- Turing video lecture series on foundation models: Session 1 | Session 2 | Session 3 | Session 4.
- Bard: What is Google’s Bard and how is it different to ChatGPT?, BBC.
- Bard FAQs, Google.
- Large Language Models from scratch | Large Language Models: Part 2, videos from Graphics in 5 minutes.
- What are Large Language Models (LLMs)?, video from Google for Developers.
- Risks of Large Language Models (LLM), Phaedra Boinodiris, video from IBM Technology.
- How ChatGPT and Other LLMs Work—and Where They Could Go Next, David Nield, Wired.
- What are Large Language Models, Machine Learning Mastery.
- How To Delete Your Data From ChatGPT, Matt Burgess, Wired.
- 5 Ways ChatGPT Can Improve, Not Replace, Your Writing, David Nield, Wired.
- AI prompt engineering: learn how not to ask a chatbot a silly question, Callum Bains, The Guardian.
Journal, conference, arXiv, and other articles
- Scientists’ Perspectives on the Potential for Generative AI in their Fields, Meredith Ringel Morris, arXiv.
- LaMDA: Language Models for Dialog Applications, Romal Thoppilan et al, arXiv.
- What Language Model to Train if You Have One Million GPU Hours?, Teven Le Scao et al, arXiv.
- Alpaca: A Strong, Replicable Instruction-Following Model, Rohan Taori et al.
- Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, Irene Solaiman, Christy Dennison, NeurIPS 2021.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Emily Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, FAccT 2021.
- A Survey of Large Language Models, Wayne Xin Zhao et al, arXiv.
- A Watermark for Large Language Models, John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein, arXiv.
- Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision, Milagros Miceli, Martin Schuessler, Tianling Yang, Proceedings of the ACM on Human-Computer Interaction.
- AI classifier for indicating AI-written text, OpenAI.
- Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling, Stella Biderman et al, arXiv.
- GPT-4 Technical Report, OpenAI, arXiv.
- GPT-4 System Card, OpenAI.
- BloombergGPT: A Large Language Model for Finance, Shijie Wu et al, arXiv.
- Evading Watermark based Detection of AI-Generated Content, Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong, arXiv.
- PaLM 2 Technical Report, Google.
- Large language models (LLM) and ChatGPT: what will the impact on nuclear medicine be?, Ian L. Alberts, Lorenzo Mercolli, Thomas Pyka, George Prenosil, Kuangyu Shi, Axel Rominger, and Ali Afshar-Oromieh, Eur J Nucl Med Mol Imaging.
- Ethics of large language models in medicine and medical research, Hanzhou Li, John T Moon, Saptarshi Purkayastha, Leo Anthony Celi, Hari Trivedi and Judy W Gichoya, The Lancet.
- Science in the age of large language models, Abeba Birhane, Atoosa Kasirzadeh, David Leslie & Sandra Wachter, Nature.
- Standardizing chemical compounds with language models, Miruna T Cretu, Alessandra Toniato, Amol Thakkar, Amin A Debabeche, Teodoro Laino and Alain C Vaucher, Machine Learning: Science and Technology.
- How to keep text private? A systematic review of deep learning methods for privacy-preserving natural language processing, Samuel Sousa & Roman Kern, Artificial Intelligence Review.
- Material transformers: deep learning language models for generative materials design, Nihang Fu, Lai Wei, Yuqi Song, Qinyang Li, Rui Xin, Sadman Sadeed Omee, Rongzhi Dong, Edirisuriya M Dilanga Siriwardane and Jianjun Hu, Machine Learning: Science and Technology.
- Large language models encode clinical knowledge, Karan Singhal et al, Nature.
- SELFormer: molecular representation learning via SELFIES language models, Atakan Yüksel, Erva Ulusoy, Atabey Ünlü and Tunca Doğan, Machine Learning: Science and Technology.
- GPT-4 + Stable-Diffusion = ?: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models, Long Lian, Boyi Li, Adam Yala, and Trevor Darrell, BAIR blog.
Newspaper, magazine, university website, and blog post articles
- Why exams intended for humans might not be good benchmarks for LLMs like GPT-4, Ben Dickson, Venture Beat.
- Does GPT-4 Really Understand What We’re Saying?, David Krakauer, Nautilus.
- Large language models are biased. Can logic help save them?, Rachel Gordon, MIT News.
- Ecosystems graph for ML models and their relationships, researchers at Stanford University.
- ChatGPT struggles with Wordle puzzles, which says a lot about how it works, Michael G. Madden, The Conversation.
- AIhub coffee corner: Large language models for scientific writing, AIhub.
- ChatGPT Is a Blurry JPEG of the Web, Ted Chiang, The New Yorker.
- ChatGPT, Galactica, and the Progress Trap, Abeba Birhane and Deborah Raji, Wired.
- ChatGPT can’t lie to you, but you still shouldn’t trust it, Mackenzie Graham, The Conversation.
- AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you, Chirag Shah, The Conversation.
- A small step for research but a giant leap for utility, Interview with Fredrik Heintz, Linköping University.
- ChatGPT threatens language diversity. More needs to be done to protect our differences in the age of AI, Collin Bjork, The Conversation.
- Column: Afraid of AI? The startups selling it want you to be, Brian Merchant, Los Angeles Times.
- Three ways AI chatbots are a security disaster, Melissa Heikkilä, MIT Tech Review.
- OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic, Billy Perrigo, TIME.
- Misplaced fears of an ‘evil’ ChatGPT obscure the real harm being done, John Naughton, The Guardian.
- Darktrace warns of rise in AI-enhanced scams since ChatGPT release, Mark Sweney, The Guardian.
- Lawmakers struggle to differentiate AI and human emails, Kate Blackwood, Cornell Chronicle.
- Colombian judge says he used ChatGPT in ruling, Luke Taylor, The Guardian.
- Bhashini: At your service an Indian language chatbot powered by ChatGPT, video from The Economic Times.
- ChatGPT & Co.: Conversational abilities of large language models, Marisa Tschopp, Luca Gafner, Teresa Windlin, Yelin Zhang, SCIP.
- AI machines aren’t ‘hallucinating’. But their makers are, Naomi Klein, The Guardian.
- Google launches new AI PaLM 2 in attempt to regain leadership of the pack, Alex Hern, The Guardian.
- Executives fear accidental sharing of corporate data with ChatGPT: Report, Victor Dey, Venture Beat.
- Letter from the editor: How generative AI is shaping the future of journalism and our newsroom, Michael Nuñez, Venture Beat.
- The inside story of how ChatGPT was built from the people who made it, Will Douglas Heaven, MIT Tech Review.
- A chatbot that asks questions could help you spot when it makes no sense, Melissa Heikkilä, MIT Tech Review.
- Building LLM applications for production, Chip Huyen.
- Generative AI Won’t Revolutionize Search — Yet, Ege Gurdeniz and Kartik Hosanagar, Harvard Business Review.
- If AI image generators are so smart, why do they struggle to write and count?, Seyedali Mirjalili, The Conversation.
- ‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models, Niamh Rowe, The Guardian.
- OpenAI launches web crawling GPTBot, sparking blocking effort by website owners and creators, Bryson Masse, Venture Beat.
- Why it’s impossible to build an unbiased AI language model, Melissa Heikkilä, MIT Technology Review.
- How to Use Generative AI Tools While Still Protecting Your Privacy, David Nield, Wired.
- Don’t quit your day job: Generative AI and the end of programming, Mike Loukides, Venture Beat.
- OpenAI adds ‘huge set’ of ChatGPT updates, including suggested prompts, multiple file uploads, Carl Franzen, Venture Beat.
- Ageism, sexism, classism and more: 7 examples of bias in AI-generated images, T.J. Thomson and Ryan J. Thomas, The Conversation.
- ‘Open’ alternatives to ChatGPT are on the rise, but how open is AI really?, Radboud University.
- Visual captions: Using large language models to augment video conferences with dynamic visuals, Ruofei Du and Alex Olwal, Google.
- Why watermarking AI-generated content won’t guarantee trust online, Claire Leibowicz, MIT Technology Review.
Reports
- ChatGPT And More: Large Scale AI Models Entrench Big Tech Power, part of AI Now Institute report.
Podcasts and video discussions
- The Limitations of ChatGPT with Emily M. Bender and Casey Fiesler, Radical AI Podcast.
- CLAIRE AQuA: “ChatGPT and Large Language Models”, CLAIRE.
- Su Lin Blodgett on Creating Just Language Technologies, The Good Robot Podcast.
- The TWIML AI Podcast with Sam Charrington has a number of episodes on the topic of LLMs and generative AI.
Focus on LLMs and education
- Opinion: ChatGPT – what does it mean for academic integrity?, Giselle Byrnes, Massey University.
- Debate: ChatGPT offers unseen opportunities to sharpen students’ critical skills, Erika Darics, Lotte van Poppel, The Conversation.
- ChatGPT and cheating: 5 ways to change how students are graded, Louis Volante, Christopher DeLuca, Don A. Klinger, The Conversation.
- ChatGPT: students could use AI to cheat, but it’s a chance to rethink assessment altogether, Sam Illingworth, The Conversation.
- A Teacher’s Prompt Guide to ChatGPT, @herfteducator.
- Should educators worry about ChatGPT?, interview with Jodi Heckel, Illinois University.
- Large language models challenge the future of higher education, Silvia Milano, Joshua A. McGrane & Sabina Leonelli, Nature.
- ChatGPT, (We need to talk), Q&A with Vaughan Connolly and Steve Watson, University of Cambridge.
- Don’t fret about students using ChatGPT to cheat – AI is a bigger threat to educational equality, Collin Bjork, The Conversation.
- Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions, Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, and Javaid Sheikh, JMIR Medical Education.
- Large Language Models and Education, Maastricht University.
Relating to art and other creative processes
- ‘ChatGPT said I did not exist’: how artists and writers are fighting back against AI, Vanessa Thorpe, The Guardian.
- AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers, Lynne Parker, Casey Greene, Daniel Acuña, Kentaro Toyama, Mark Finlayson, The Conversation.
- Is there a way to pay content creators whose work is used to train AI? Yes, but it’s not foolproof, Brendan Paul Murphy, The Conversation.
- ChatGPT is the push higher education needs to rethink assessment, Sioux McKenna, Dan Dixon, Daniel Oppenheimer, Margaret Blackie, Sam Illingworth, The Conversation.
- AI Art: How artists are using and confronting machine learning, YouTube video from the Museum of Modern Art.
- ‘We got bored waiting for Oasis to re-form’: AIsis, the band fronted by an AI Liam Gallagher, Rich Pelley, The Guardian.
- Photographer admits prize-winning image was AI-generated, Jamie Grierson, The Guardian.
- The folly of making art with text-to-image generative AI, Ahmed Elgammal, The Conversation.
- Computer-written scripts and deepfake actors: what’s at the heart of the Hollywood strikes against generative AI, Jasmin Pfefferkorn, The Conversation.
- Actors are really worried about the use of AI by movie studios – they may have a point, Dominic Lees, The Conversation.
Pertaining to robotics
- ChatGPT for Robotics: Design Principles and Model Abilities, Microsoft.
- Inner Monologue: Embodied Reasoning through Planning with Language Models, Wenlong Huang et al., arXiv.
- PaLM-E: An embodied multimodal language model, Danny Driess, Google.
- Consciousness, Embodiment, Language Models (with Professor Murray Shanahan), YouTube video from Machine Learning Street Talk.
- RoCo: Dialectic Multi-Robot Collaboration with Large Language Models, Zhao Mandi, Shreeya Jain and Shuran Song, arXiv.
- Awesome LLM robotics, GitHub repository containing a curated list of papers on using LLMs for robotics and reinforcement learning.
- How can LLMs transform the robotic design process?, Francesco Stella, Cosimo Della Santina and Josie Hughes, Nature.
Misinformation, fake news and the impact on journalism
- Misinformation Monitor: March 2023, focus on GPT-4, NewsGuard.
- A fake news frenzy: why ChatGPT could be disastrous for truth in journalism, Emily Bell, The Guardian.
- Defending Against Neural Fake News, Rowan Zellers et al, arXiv.
- Doctored Sunak picture is just latest in string of political deepfakes, Dan Milmo, The Guardian.
Regulation and policy
- ‘Political propaganda’: China clamps down on access to ChatGPT, Helen Davidson, The Guardian.
- Chatbots, deepfakes, and voice clones: AI deception for sale, USA Federal Trade Commission blog post.
- ‘I didn’t give permission’: Do AI’s backers care about data law breaches?, Alex Hern and Dan Milmo, The Guardian.
- Italy’s ChatGPT ban attracts EU privacy regulators, Supantha Mukherjee, Elvira Pollina and Rachel More, Reuters.
- Training large generative AI models based on publicly available personal data: a GDPR conundrum that the AI act could solve, Sebastião Barros Vale, The Digital Constitutionalist.
- ChatGPT: what the law says about who owns the copyright of AI-generated content, Sercan Ozcan, Joe Sekhon and Oleksandra Ozcan, The Conversation.
- ChatGPT and lawful bases for training AI: a blended approach?, Sophie Stalla-Bourdillon and Pablo Trigo Kramcsák, The Digital Constitutionalist.
- How can we imagine the generative AIs regulatory scheme? Perspectives from Asia, Kuan-Wei Chen, The Digital Constitutionalist.
Robots are everywhere – improving how they communicate with people could advance human-robot collaboration

‘Emotionally intelligent’ robots could improve their interactions with people. Andriy Onufriyenko/Moment via Getty Images
By Ramana Vinjamuri (Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County)
Robots are machines that can sense the environment and use that information to perform an action. You can find them nearly everywhere in industrialized societies today. There are household robots that vacuum floors and warehouse robots that pack and ship goods. Lab robots test hundreds of clinical samples a day. Education robots support teachers by acting as one-on-one tutors, assistants and discussion facilitators. And medical robots such as prosthetic limbs can enable someone to grasp and pick up objects with their thoughts.
Figuring out how humans and robots can collaborate to effectively carry out tasks together is a rapidly growing area of interest to the scientists and engineers who design robots, as well as to the people who will use them. For successful collaboration between humans and robots, communication is key.

Robotics can help patients recover physical function in rehabilitation. BSIP/Universal Images Group via Getty Images
How people communicate with robots
Robots were originally designed to undertake repetitive and mundane tasks and operate exclusively in robot-only zones like factories. Robots have since advanced to work collaboratively with people, which has created a need for new ways for humans and robots to communicate with each other.
Cooperative control is one way to transmit information and messages between a robot and a person. It involves combining human abilities and decision making with robot speed, accuracy and strength to accomplish a task.
For example, robots in the agriculture industry can help farmers monitor and harvest crops. A human can control a semi-autonomous vineyard sprayer through a user interface, as opposed to manually spraying their crops or broadly spraying the entire field and risking pesticide overuse.
Robots can also support patients in physical therapy. Patients who had a stroke or spinal cord injury can use robots to practice hand grasping and assisted walking during rehabilitation.
Another form of communication, emotional intelligence perception, involves developing robots that adapt their behaviors based on social interactions with humans. In this approach, the robot detects a person’s emotions when collaborating on a task, assesses their satisfaction, then modifies and improves its execution based on this feedback.
For example, if the robot detects that a physical therapy patient is dissatisfied with a specific rehabilitation activity, it could direct the patient to an alternate activity. Facial expression and body gesture recognition ability are important design considerations for this approach. Recent advances in machine learning can help robots decipher emotional body language and better interact with and perceive humans.
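The feedback loop described above can be sketched in a few lines. This is a purely illustrative toy, not any real robot's control code; the emotion labels and activity names are hypothetical.

```python
# Illustrative sketch of emotion-adaptive behavior (hypothetical labels):
# if the detected emotion signals dissatisfaction, switch to an
# alternative rehabilitation activity; otherwise keep the current one.

def next_activity(current, detected_emotion, alternatives):
    """Return the activity the robot should direct the patient to next."""
    if detected_emotion in ("frustrated", "bored"):
        # Patient seems dissatisfied: pick the first alternative that
        # differs from the current activity.
        for activity in alternatives:
            if activity != current:
                return activity
    return current  # satisfied patients continue the current activity

print(next_activity("grip_training", "frustrated",
                    ["grip_training", "assisted_walking"]))
# prints: assisted_walking
```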
Robots in rehab
Questions like how to make robotic limbs feel more natural and capable of more complex functions like typing and playing musical instruments have yet to be answered.
I am an electrical engineer who studies how the brain controls and communicates with other parts of the body, and my lab investigates in particular how the brain and hand coordinate signals between each other. Our goal is to design technologies like prosthetic and wearable robotic exoskeleton devices that could help improve function for individuals with stroke, spinal cord and traumatic brain injuries.
One approach is through brain-computer interfaces, which use brain signals to communicate between robots and humans. By accessing an individual’s brain signals and providing targeted feedback, this technology can potentially improve recovery time in stroke rehabilitation. Brain-computer interfaces may also help restore some communication abilities and physical manipulation of the environment for patients with motor neuron disorders.

Brain-computer interfaces could allow people to control robotic arms by thought alone. Ramana Kumar Vinjamuri, CC BY-ND
The future of human-robot interaction
Effective integration of robots into human life requires balancing responsibility between people and robots, and designating clear roles for both in different environments.
As robots are increasingly working hand in hand with people, the ethical questions and challenges they pose cannot be ignored. Concerns surrounding privacy, bias and discrimination, security risks and robot morality need to be seriously investigated in order to create a more comfortable, safer and trustworthy world with robots for everyone. Scientists and engineers studying the “dark side” of human-robot interaction are developing guidelines to identify and prevent negative outcomes.
Human-robot interaction has the potential to affect every aspect of daily life. It is the collective responsibility of both the designers and the users to create a human-robot ecosystem that is safe and satisfactory for all.

Ramana Vinjamuri receives funding from National Science Foundation.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Robot assistants in the operating room promise safer surgery

Advanced robotics can help surgeons carry out procedures where there is little margin for error. © Microsure BV, 2022
In a surgery in India, a robot scans a patient’s knee to figure out how best to carry out a joint replacement. Meanwhile, in an operating room in the Netherlands, another robot is performing highly challenging microsurgery under the control of a doctor using joysticks.
Such scenarios look set to become more common. At present, some manual operations are so difficult they can be performed by only a small number of surgeons worldwide, while others are invasive and depend on a surgeon’s specific skill.
Advanced robotics is providing tools that have the potential to enable more surgeons to carry out such operations, and to do so with a higher rate of success.
‘We’re entering the next revolution in medicine,’ said Sophie Cahen, chief executive officer and co-founder of Ganymed Robotics in Paris.
New knees
Cahen leads the EU-funded Ganymed project, which is developing a compact robot to make joint-replacement operations more precise, less invasive and – by extension – safer.
The initial focus is on a type of surgery called total knee arthroplasty (TKA), though Ganymed is looking to expand to other joints including the shoulder, ankle and hip.
Ageing populations and lifestyle changes are accelerating demand for such surgery, according to Cahen. Interest in Ganymed’s robot has been expressed in many quarters, including distributors in emerging economies such as India.
‘Demand is super-high because arthroplasty is driven by the age and weight of patients, which is increasing all over the world,’ Cahen said.
Arm with eyes
Ganymed’s robot will aim to perform two main functions: contactless localisation of bones and collaboration with surgeons to support joint-replacement procedures.
It comprises an arm mounted with ‘eyes’, which use advanced computer-vision-driven intelligence to examine the exact position and orientation of a patient’s anatomical structure. This avoids the need to insert invasive rods and optical trackers into the body.
“We’re entering the next revolution in medicine.”
– Sophie Cahen, Ganymed
Surgeons can then perform operations using tools such as sagittal saws – used for orthopaedic procedures – in collaboration with the robotic arm.
The ‘eyes’ aid precision by providing so-called haptic feedback, which prevents the movement of instruments beyond predefined virtual boundaries. The robot also collects data that it can process in real time and use to hone procedures further.
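The virtual-boundary idea can be illustrated with a simple clamp: any commanded tool position outside a predefined safe region is projected back onto it. This is a minimal sketch of the general concept, not Ganymed's actual haptic algorithm; the box-shaped boundary is an assumption made for illustration.

```python
# Illustrative virtual boundary (assumed box-shaped): commanded tool-tip
# positions are clamped so the instrument never moves outside the
# predefined safe region.

def clamp_to_boundary(pos, lo, hi):
    """Clamp a commanded (x, y, z) position to the box [lo, hi] per axis."""
    return tuple(min(max(p, l), h) for p, l, h in zip(pos, lo, hi))

# A command reaching outside the 4 cm cube is pulled back to its face.
safe = clamp_to_boundary((5.0, -2.0, 0.5), (0.0, 0.0, 0.0), (4.0, 4.0, 4.0))
print(safe)  # prints: (4.0, 0.0, 0.5)
```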
Ganymed has already carried out a clinical study of the bone-localisation technology on 100 patients, and Cahen said it achieved the desired precision.
‘We were extremely pleased with the results – they exceeded our expectations,’ she said.
Now the firm is performing studies on the TKA procedure, with hopes that the robot will be fully available commercially by the end of 2025 and become a mainstream tool used globally.
‘We want to make it affordable and accessible, so as to democratise access to quality care and surgery,’ said Cahen.
Microscopic matters
Robots are being explored not only for orthopaedics but also for highly complex surgery at the microscopic level.
The EU-funded MEETMUSA project has been further developing what it describes as the world’s first surgical robot for microsurgery certified under the EU’s ‘CE’ regulatory regime.
Called MUSA, the small, lightweight robot is attached to a platform equipped with arms able to hold and manipulate microsurgical instruments with a high degree of precision. The platform is suspended above the patient during an operation and is controlled by the surgeon through specially adapted joysticks.
In a 2020 study, surgeons reported using MUSA to treat breast-cancer-related lymphedema – a chronic condition that commonly occurs as a side effect of cancer treatment and is characterised by a swelling of body tissues as a result of a build-up of fluids.

MUSA’s robotic arms. Microsure BV, 2022
To carry out the surgery, the robot successfully sutured – or connected – tiny lymph vessels measuring 0.3 to 0.8 millimetre in diameter to nearby veins in the affected area.
‘Lymphatic vessels are below 1 mm in diameter, so it requires a lot of skill to do this,’ said Tom Konert, who leads MEETMUSA and is a clinical field specialist at robot-assisted medical technology company Microsure in Eindhoven, the Netherlands. ‘But with robots, you can more easily do it. So far, with regard to the clinical outcomes, we see really nice results.’
Steady hands
When such delicate operations are conducted manually, they are affected by slight shaking in the hands, even with highly skilled surgeons, according to Konert. With the robot, this problem can be avoided.
MUSA can also significantly scale down the surgeon’s general hand movements rather than simply repeating them one-to-one, allowing for even greater accuracy than with conventional surgery.
‘When a signal is created with the joystick, we have an algorithm that will filter out the tremor,’ said Konert. ‘It downscales the movement as well. This can be by a factor-10 or 20 difference and gives the surgeon a lot of precision.’
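The two operations Konert describes – smoothing out tremor and downscaling motion – can be sketched as an exponential moving-average low-pass filter followed by a constant scale factor. This is an illustrative approximation, not Microsure's actual algorithm; the parameter values are hypothetical.

```python
# Illustrative joystick pipeline (hypothetical parameters): a low-pass
# filter attenuates high-frequency tremor, then a constant factor
# downscales the motion before it reaches the robot arm.

class JoystickFilter:
    """Smooth a joystick signal and downscale it for the robot arm."""

    def __init__(self, alpha=0.1, scale=0.1):
        self.alpha = alpha    # smoothing factor: smaller = stronger tremor rejection
        self.scale = scale    # 0.1 ~ factor-10 downscaling, 0.05 ~ factor-20
        self._smoothed = 0.0

    def step(self, raw):
        """Process one joystick displacement sample; return the arm command."""
        # Exponential moving average acts as a simple low-pass filter.
        self._smoothed += self.alpha * (raw - self._smoothed)
        return self.scale * self._smoothed

# A steady 1.0 joystick deflection converges to a 0.1 arm displacement.
f = JoystickFilter(alpha=0.1, scale=0.1)
for _ in range(300):
    command = f.step(1.0)
```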
In addition to treating lymphedema, the current version of MUSA – the second, after a previous prototype – has been used for other procedures including nerve repair and soft-tissue reconstruction of the lower leg.
Next generation
Microsure is now developing a third version of the robot, MUSA-3, which Konert expects to become the first one available on a widespread commercial basis.
“When a signal is created with the joystick, we have an algorithm that will filter out the tremor.”
– Tom Konert, MEETMUSA
This new version will have various upgrades, such as better sensors to enhance precision and improved manoeuvrability of the robot’s arms. It will also be mounted on a cart with wheels rather than a fixed table to enable easy transport within and between operating theatres.
Furthermore, the robots will be used with exoscopes – novel high-definition digital camera systems. This will allow the surgeon to view a three-dimensional screen through goggles in order to perform ‘heads-up microsurgery’ rather than the less comfortable process of looking through a microscope.
Konert is confident that MUSA-3 will be widely used across Europe and the US before a 2029 target date.
‘We are currently finalising product development and preparing for clinical trials of MUSA-3,’ he said. ‘These studies will start in 2024, with approvals and start of commercialisation scheduled for 2025 to 2026.’
MEETMUSA is also looking into the potential of artificial intelligence (AI) to further enhance robots. However, Konert believes that the aim of AI solutions may be to guide surgeons towards their goals and support them in excelling, rather than to achieve completely autonomous surgery.
‘I think the surgeon will always be there in the feedback loop, but these tools will definitely help the surgeon perform at the highest level in the future,’ he said.
Research in this article was funded via the EU’s European Innovation Council (EIC).
This article was originally published in Horizon, the EU Research and Innovation magazine.