Iranian tech prodigies battle it out with robots
Robot Talk Episode 131 – Empowering game-changing robotics research, with Edith-Clare Hall
Claire chatted to Edith-Clare Hall from the Advanced Research and Invention Agency (ARIA) about accelerating scientific and technological breakthroughs.
Edith-Clare Hall is a PhD student at the University of Bristol, Frontier Specialist at ARIA, and leader of Women in Robotics UK. She focuses on the critical interfaces where interconnected systems meet, working to close the gap between academic research and real-world deployment to unlock cyber-physical autonomy. At ARIA, she works as a technical generalist, accelerating breakthroughs across emerging and future programmes. Her PhD research focuses on creating bespoke robotic systems that deliver support for people with progressive conditions such as motor neurone disease (MND).
First Autonomous Mobile Robots Roll Off the Line at Rockwell Automation’s Milwaukee Headquarters
Bionic leg’s pilot performance spotlights its technology and the role of teamwork
How tiny drones inspired by bats could save lives in dark and stormy conditions
Human-centric soft robotics flip the script on ‘The Terminator’
Rugged electric actuators provide reliable steering for extensible, self-driving platform
A flexible lens controlled by light-activated artificial muscles promises to let soft machines see
This rubbery disc is an artificial eye that could give soft robots vision. Image credit: Corey Zheng/Georgia Institute of Technology.
By Corey Zheng, Georgia Institute of Technology and Shu Jia, Georgia Institute of Technology
Inspired by the human eye, our biomedical engineering lab at Georgia Tech has designed an adaptive lens made of soft, light-responsive, tissuelike materials.
Adjustable camera systems usually require a set of bulky, moving, solid lenses and a pupil in front of a camera chip to adjust focus and intensity. In contrast, human eyes perform these same functions using soft, flexible tissues in a highly compact form.
Our lens, called the photo-responsive hydrogel soft lens, or PHySL, replaces rigid components with soft polymers acting as artificial muscles. The polymers are composed of a hydrogel, a water-based polymer material. This hydrogel muscle changes the shape of a soft lens to alter the lens’s focal length, a mechanism analogous to the ciliary muscles in the human eye.
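To see why reshaping a lens shifts its focus, the textbook thin-lens lensmaker’s equation is a useful sketch. The snippet below is illustrative only and is not the authors’ model: the refractive index and radii of curvature are hypothetical values chosen for a generic soft polymer lens, not measured PHySL parameters.

```python
# Illustrative sketch: thin-lens lensmaker's equation, showing how changing a
# lens's surface curvature (what the hydrogel muscle does mechanically)
# shifts its focal length. All numbers below are hypothetical.

def focal_length_mm(n: float, r1_mm: float, r2_mm: float) -> float:
    """Thin-lens focal length: 1/f = (n - 1) * (1/R1 - 1/R2).
    Sign convention: R1 > 0 and R2 < 0 for a biconvex lens."""
    inv_f = (n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm)
    return 1.0 / inv_f

n = 1.41  # assumed refractive index of a soft polymer lens (hypothetical)

# Relaxed state: a strongly curved, symmetric biconvex lens
f_relaxed = focal_length_mm(n, r1_mm=5.0, r2_mm=-5.0)

# When the artificial muscle flattens the lens, the radii of curvature grow
f_flattened = focal_length_mm(n, r1_mm=8.0, r2_mm=-8.0)

print(f"relaxed:   f = {f_relaxed:.2f} mm")
print(f"flattened: f = {f_flattened:.2f} mm")  # flatter surfaces -> longer focal length
```

Under these assumed numbers, flattening the lens lengthens the focal distance, which is the same lever the ciliary muscles pull in a human eye when shifting focus between near and far objects.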
The hydrogel material contracts in response to light, allowing us to control the lens without touching it by projecting light onto its surface. This property also allows us to finely control the shape of the lens by selectively illuminating different parts of the hydrogel. By eliminating rigid optics and structures, our system is flexible and compliant, making it more durable and safer in contact with the body.
Why it matters
Artificial vision using cameras is commonplace in a variety of technological systems, including robots and medical tools. The optics needed to form a visual system are still typically restricted to rigid materials using electric power. This limitation presents a challenge for emerging fields, including soft robotics and biomedical tools that integrate soft materials into flexible, low-power and autonomous systems. Our soft lens is particularly well suited to these applications.
Soft robots are machines made with compliant materials and structures, taking inspiration from animals. This additional flexibility makes them more durable and adaptive. Researchers are using the technology to develop surgical endoscopes, grippers for handling delicate objects and robots for navigating environments that are difficult for rigid robots.
The same principles apply to biomedical tools. Tissuelike materials can soften the interface between body and machine, making biomedical tools safer by making them move with the body. These include skinlike wearable sensors and hydrogel-coated implants.

What other research is being done in this field
This work merges concepts from tunable optics and soft “smart” materials. While these materials are often used to create soft actuators – parts of machines that move – such as grippers or propulsors, their application in optical systems has faced challenges.
Many existing soft lens designs depend on liquid-filled pouches or actuators requiring electronics. These factors can increase complexity or limit their use in delicate or untethered systems. Our light-activated design offers a simpler, electronics-free alternative.
What’s next
We aim to improve the performance of the system using advances in hydrogel materials. New research has yielded several types of stimuli-responsive hydrogels with faster and more powerful contraction abilities. We aim to incorporate the latest material developments to improve the physical capabilities of the photo-responsive hydrogel soft lens.
We also aim to show its practical use in new types of camera systems. In our current work, we developed a proof-of-concept, electronics-free camera using our soft lens and a custom light-activated, microfluidic chip. We plan to incorporate this system into a soft robot to give it electronics-free vision. This system would be a significant demonstration for the potential of our design to enable new types of soft visual sensing.
The Research Brief is a short take on interesting academic work.
Corey Zheng, PhD Student in Biomedical Engineering, Georgia Institute of Technology and Shu Jia, Assistant Professor of Biomedical Engineering, Georgia Institute of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Artificial muscles use ultrasound-activated microbubbles to move
New algorithm lets autonomous drones work together to transport heavy, changing payloads
Accelerating discovery with the AI for Math Initiative
Building a Better RFP: A Practical Guide to Choosing the Right Automation Partner
Robots you can wear like clothes: Automatic weaving of ‘fabric muscle’ brings commercialization closer
Delivering the agent workforce in high-security environments
Governments and enterprises alike are feeling mounting pressure to deliver value with agentic AI while maintaining data sovereignty, security, and regulatory compliance. The move to self-managed environments offers all of the above but also introduces new complexities that require a fundamentally new approach to AI stack design, especially in high security environments.
Managing an AI infrastructure means taking on the full weight of integration, validation, and compliance. Every model, component, and deployment must be vetted and tested. Even small updates can trigger rework, slow progress, and introduce risk. In high-assurance environments, there is added weight of doing all this under strict regulatory and data sovereignty requirements.
What’s needed is an AI stack that delivers both flexibility and assurance in on-prem environments, enabling complete lifecycle management anywhere agentic AI is deployed.
In this post, we’ll look at what it takes to deliver the agentic workforce of the future in even the most secure and highly regulated environments, the risks of getting it wrong, and how DataRobot and NVIDIA have come together to solve these challenges.
With the recently announced Agent Workforce Platform and NVIDIA AI Factory for Government reference design, organizations can now deploy agentic AI anywhere, from commercial clouds to air-gapped and sovereign installations, with secure access to NVIDIA Nemotron reasoning models and complete lifecycle control.
Fit-for-purpose agentic AI in secure environments
No two environments are the same when it comes to building an agentic AI stack. In air-gapped, sovereign, or mission-critical environments, every component, from hardware to model, must be designed and validated for interoperability, compliance, and observability.
Without that foundation, projects stall as teams spend months testing, integrating, and revalidating tools. Budgets expand while timelines slip, and the stack grows more complex with each new addition. Teams often end up choosing between the tools they had time to vet, rather than what best fits the mission.
The result is a system that not only misaligns with business needs but is also costly to operate, where simply maintaining and updating components can slow operations to a crawl.
Starting with validated components and a composable design addresses these challenges by ensuring that every layer—from accelerated infrastructure to development environments to agentic AI in production—operates securely and reliably as one system.
A validated solution from DataRobot and NVIDIA
DataRobot and NVIDIA have shown what is possible by delivering a fully validated, full-stack solution for agentic AI. Earlier this year, we introduced the DataRobot Agent Workforce Platform, a first-of-its-kind solution that enables organizations to build, operate, and govern their own agentic workforce.
Co-developed with NVIDIA, this solution can be deployed in on-prem and even air-gapped environments, and is fully validated for the NVIDIA Enterprise AI Factory for Government reference architecture. This collaboration gives organizations a proven foundation for developing, deploying, and governing their agentic AI workforce across any environment with confidence and control.
This means flexibility and choice at every layer of the stack, and every component that goes into agentic AI solutions. IT teams can start with their unique infrastructure and choose the components that best fit their needs. Developers can bring the latest tools and models to where their data sits, and rapidly test, develop, and deploy where it can provide the most impact while ensuring security and regulatory rigor.
With the DataRobot Workbench and Registry, users gain access to more than 80 NVIDIA NIM microservices, prebuilt templates, and assistive development tools that accelerate prototyping and optimization. Tracing tables and a visual tracing interface make it easy to compare at the component level and then fine-tune the performance of full workflows before agents move to production.
With easy access to NVIDIA Nemotron reasoning models, organizations can deliver a flexible and intelligent agentic workforce wherever it’s needed. NVIDIA Nemotron models merge the full-stack engineering expertise of NVIDIA with truly open-source accessibility, to empower organizations to build, integrate, and evolve agentic AI in ways that drive rapid innovation and impact across diverse missions and industries.
When agents are ready, organizations can deploy and monitor them with just a few clicks, integrating with existing CI/CD pipelines, applying real-time moderation guardrails, and validating compliance before going live.
The NVIDIA AI Factory for Government provides a trusted foundation for DataRobot with a full stack, end-to-end reference design that brings the power of AI to highly regulated organizations. Together, the Agent Workforce Platform and NVIDIA AI Factory deliver the most comprehensive solution for building, operating, and governing intelligent agentic AI on-premises, at the edge, and in the most secure environments.
Real-world agentic AI at the edge: Radio Intelligence Agent (RIA)
Deepwave, DataRobot, and NVIDIA have brought this validated solution to life with the Radio Intelligence Agent (RIA). This joint solution transforms radio frequency (RF) signals into actionable analysis, simply by asking a question.
Deepwave’s AIR-T sensors capture and process RF signals locally, removing the need to transmit sensitive data off-site. NVIDIA’s accelerated computing infrastructure and NIM microservices provide the secure inference layer, while NVIDIA Nemotron reasoning models interpret complex patterns and generate mission-ready insights.
DataRobot’s Agent Workforce Platform orchestrates and manages the lifecycle of these agents, ensuring each model and microservice is deployed, monitored, and audited with full control. The result is a sovereign-ready RF Intelligence Agent that delivers continuous, proactive awareness and rapid decision support at the edge.
This same design can be adapted across use cases such as predictive maintenance, financial stress testing, cyber defense, and smart-grid operations. Here are just a few applications for high-security agentic systems:
| Industrial & energy (edge / on-prem) | Federal & secure environments | Financial services |
| --- | --- | --- |
| Pipeline fault detection and predictive maintenance | Signal intelligence processing for secure comms monitoring | Cutting-edge trading research |
| Oil rig operations monitoring and safety compliance | Classified data analysis in air-gapped environments | Credit risk scoring with controlled data residency |
| Critical infra smart grid anomaly detection and reliability assurance | Secure battlefield logistics and supply chain optimization | Anti-money laundering (AML) with sovereign data handling |
| Remote mining site equipment health monitoring | Cyber defense and intrusion detection in restricted networks | Stress testing and scenario modeling under compliance controls |
Agentic AI built for the mission
Success in operationalizing agentic AI in high-security environments means going beyond balancing innovation with control. It means efficiently delivering the right solution for the job, where it’s needed, and keeping it running to the highest performance standards. It means scaling from one agentic solution to an agentic workforce with complete visibility and trust.
When every component, from infrastructure to orchestration, works together, organizations gain the flexibility and assurance needed to deliver value from agentic AI, whether in a single air-gapped edge solution or an entire self-managed agentic AI workforce.
With NVIDIA AI Factory for Government providing the trusted foundation and DataRobot’s Agent Workforce Platform delivering orchestration and control, enterprises and agencies can deploy agentic AI anywhere with confidence, scaling securely, efficiently, and with complete visibility.
To learn more about how DataRobot can help advance your AI ambitions, visit us at datarobot.com/government.
The post Delivering the agent workforce in high-security environments appeared first on DataRobot.