
I developed an app that uses drone footage to track plastic litter on beaches

By Gerard Dooly, University of Limerick

Plastic pollution is one of those problems everyone can see, yet few know how to tackle it effectively. I grew up walking the beaches around Tramore in County Waterford, Ireland, where plastic debris has always been part of the coastline, including bottles, fragments of fishing gear and food packaging.

According to the UN, 19-23 million tonnes of plastic end up in lakes, rivers and seas every year, polluting ecosystems and damaging animal habitats.

Community groups do tremendous work cleaning these beaches, but they’re essentially walking blind, guessing where plastic accumulates, missing hot spots, repeating the same stretches while problem areas may go untouched.

Years later, working in marine robotics at the University of Limerick, I began developing tools to support marine clean-up and help communities find plastic pollution along our coastline.

The question seemed straightforward: could we use drones to show people exactly where the plastic is? And could we turn finding plastic litter on beaches and cleaning it up into something people enjoy – in other words, “gamify” it? Could we also build on other ways drones have been used previously, such as tracking wildfires or identifying shipwrecks?

Building the technology

At the University of Limerick’s Centre for Robotics and Intelligent Systems, my team combined drone-based aerial surveillance with machine-learning algorithms (a type of artificial intelligence) to map where plastic was being littered. We paired this with a free mobile app that gives volunteers precise GPS coordinates for targeted clean-ups.

The technical challenge was more complex than it appeared. Training computer vision models to detect a bottle cap from 30 metres altitude, while distinguishing it from similar objects like seaweed, driftwood, shells and weathered rocks, required extensive field testing and checks of the accuracy of the detection system.

The development hasn’t been straightforward. Early versions of the algorithm struggled with shadows and mistook driftwood for plastic bottles. We spent months refining the system through trial and error on beaches around Clare and Galway; it can now spot pieces of plastic as small as 1cm.

We conducted hundreds of test flights across Irish coastlines under varying environmental conditions (different lighting, tidal states and weather patterns), building a robust training dataset.

Ireland’s plastic problem

The urgency of this work becomes clear when you look at the Marine Institute’s research. Ireland’s 3,172 kilometres of coastline, the longest per capita in Europe, faces a deepening crisis.

A 2018 study found that 73% of deep-sea fish in Irish waters had ingested plastic particles. More than 250 species, including seabirds, fish, marine turtles and mammals, have been reported to ingest large items of plastic.

The costs go beyond harming wildlife, and the economic impact can be significant.

Our drone surveys revealed that some stretches of coast accumulate plastic at rates five to ten times higher than neighbouring areas, driven by ocean currents and river mouths. Without systematic monitoring, these hotspots go unaddressed.

Making the technology accessible

The plastic detection platform accepts drone imagery from any source, such as ordinary people flying their own drones.

Processing requires only standard laptop software. Users upload footage and receive GPS coordinates showing detected plastic locations. The mobile app, available free on iOS and Android, displays these locations as an interactive map.

A piece of plastic litter on a beach.
Plastic is regularly found on beaches around Europe. Author’s own image.

Community groups, schools and individuals can see and locate nearby plastic pollution, saving significant time.

It has already been tested with five community groups around Ireland with positive results: on average, 30 pieces of plastic were spotted per ten-minute drone flight, though numbers vary by location.

Working through the EU-funded BluePoint project, which is tackling plastic pollution of coastlines around Europe, we’ve distributed over 30 drones to partners across Ireland and Europe, including county councils and environmental organisations.

The technology has been deployed in areas including Spanish Point in County Clare, where the local Tidy Towns group (litter-picking volunteers) was named joint Clean Coast Community Group of the Year 2024.

Organising a litter pick. Video by Propeller BIC (Waterford).

The wider waste story

This is part of a broader European effort to address plastic pollution. Partners such as the sports store Decathlon are exploring how to transform recovered beach plastics into new consumer products – sports equipment, textiles and components.

The challenge isn’t just collection. Beach plastics arrive contaminated with sand and salt, in mixed types and grades. Our ongoing research characterises what’s actually found on Irish coastlines, providing manufacturers with data to design appropriate sorting and recycling processes.

The open source software platforms and the drone technology have already been used in nine countries, engaging more than 2,000 people. Pilot programmes are running in France, Spain, Portugal, Brazil and the UK. What began as a question about making beach clean-ups more effective has evolved into a practical system connecting citizen action to environmental outcomes.

Community feedback from pilots has been overwhelmingly positive. Groups report that the drone-derived GPS coordinates transform clean-up work. One participating Tidy Towns group said that volunteers now head straight to flagged locations.

Groups have also reported increased participation; the gamification aspect appeals to families and participants who might not otherwise volunteer. Additionally, the data we’ve gathered so far is being used by local authorities to understand litter patterns and inform policy decisions around waste management and coastal protection.

Gerard Dooly, Assistant Professor in Engineering, University of Limerick

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Accelerating Digital Transformation in Automotive Parts Manufacturing with Autonomous Forklifts

For automotive parts manufacturers, reliable and high-quality component supply is essential to ensure vehicle safety and performance. The need for intelligent and digital transformation across manufacturing and intralogistics operations has become increasingly urgent.

NVIDIA CORPORATION (NASDAQ: NVDA)

Institutional Equity Research
Date of Report: February 26, 2026
Analyst: Lead Equity Research Analyst
Rating: BUY | Price Target: $265 | Current Price: ~$196 | Implied Upside: ~35%
Data sourced from: NVIDIA Q4 FY2026 Earnings Release (Feb. 25, 2026), Q4 FY2026 Earnings Call Transcript (Motley Fool / Investing.com, Feb. 25, 2026), Yahoo Finance market...

The post NVIDIA CORPORATION (NASDAQ: NVDA) appeared first on 1redDrop.

Soft-robotic glove uses 37 actuators to cut hand swelling by up to 25%

A new glove with more than three dozen actuators across all five fingers and the palm, developed by Cornell researchers, aims to reduce swelling for people suffering from edema. The glove, known as EdemaFlex, was proven safe for unsupervised home use in a seven-participant study, with hand volume decreasing by up to 25% after one 30-minute session.

Researchers expose critical security vulnerability in autonomous drones

University of California, Irvine computer scientists have discovered a critical security vulnerability in autonomous target-tracking drones that could have far-reaching implications for public safety, border security and personal privacy. The UC Irvine team demonstrated how attackers could use an ordinary umbrella to manipulate drones, drawing the aircraft close enough to capture them or cause them to crash.

Translating music into light and motion with robots

Image taken from the YouTube video created by the authors (see below).

A system developed by researchers at the University of Waterloo lets people collaborate with groups of robots to create works of art inspired by music.

The new technology features multiple wheeled robots about the size of soccer balls that trail coloured light as they move within a fixed area on the floor in response to key features of music including tempo and chord progression.

A camera records the co-ordinated light trails as they snake within that area, which serves as the canvas for the creation of a “painting,” or visual representation of the emotional content of a particular piece of music.

“Basically, we programmed a swarm of robots to paint based on musical input,” said Dr Gennaro Notomista, a professor of electrical and computer engineering at Waterloo.

“The result is a cohesive system that not only processes musical input, but also co-ordinates multiple painting robots to create adaptive, expressive art that reflects the emotional essence of the music being played.”

The robots represent emotion as they “listen” to music through the colours, intensity and width of their light trails, as well as their position on the canvas and the speed at which they move within it.

People can simultaneously influence a painting in progress using controls to change the width of light trails and their location on the virtual canvas.

“We included the human control input to allow people and robots to work together,” said Notomista, whose interests include the intersection of art and technology. “The human painter should complement and be complemented by what the robots do.”

The first challenge for researchers was developing an algorithm to control multiple robots within a given area. They tested the system with up to 12 robots, but it can be scaled to handle any number.

Step two involved creating technology to extract and analyze musical features that express emotion so they can then be translated into light trails that appropriately represent them.

Lessons learned during the project have potential applications in other areas requiring the control and co-ordination of multiple robots working in unison, such as environmental monitoring, precision agriculture, search and rescue missions, and planetary exploration.

The research also reflects the University of Waterloo’s Global Futures initiative, which advances interdisciplinary work that considers how emerging technologies can shape society, culture and the human experience.

Later, Notomista plans to enlist professional painters and musicians to explore the possibilities of the new tool in user studies and stage public exhibitions.

A paper on the system, Music-driven Robot Swarm Painting, by Notomista and Jingde Cheng, a former Waterloo graduate student, was presented at the 2025 IEEE International Conference on Advanced Robotics and its Social Impacts.

The AI Tax Is Real. Use Design to Get Your Refund.

AI doesn’t just add work; it changes work in ways that are now empirically undeniable. The HBR article “AI Doesn’t Reduce Work—It Intensifies It” validates what I called the “AI Tax” nearly a year ago: AI increases the volume, velocity, […]

The post The AI Tax Is Real. Use Design to Get Your Refund. appeared first on TechSpective.

The Blacklist Paradox: Why the Pentagon is Threatening its Only Working AI

The Department of War is currently playing a high-stakes game of chicken with Anthropic, the San Francisco AI darling known for its “safety-first” mantra. As of February 17, 2026, Defense Secretary Pete Hegseth is reportedly “close” to designating Anthropic a […]

The post The Blacklist Paradox: Why the Pentagon is Threatening its Only Working AI appeared first on TechSpective.

Rise of the rice robots—creating active smart materials

Rice becomes weaker when compressed quickly, while staying stronger under slow pressure—a discovery enabling scientists to design a new material that could be used to build "soft" robots that change stiffness automatically and protective gear that adapts to impact speed. Researchers harnessed this effect to design a new "metamaterial"—an artificially engineered composite structure designed to behave in ways impossible for natural materials.

Engineers demonstrate lightweight ‘exoskeleton’ that helps stroke survivors walk

Hemiparesis, a condition in which impaired motor control, muscle weakness and spasticity affect one side of the body, is a leading cause of disability in the United States. It occurs in 80% of stroke survivors, and the resulting loss of mobility and quality of life affects millions of individuals.

How to build resilient agentic AI pipelines in a world of change

Change is the only constant in enterprise AI. If your data workflows aren’t built to handle it, you’re setting your entire operation up for failure.

Most data pipelines are brittle, breaking when data or infrastructures slightly change. That downtime can cost millions (upwards of $540,000 per hour), lead to compliance gaps that invite lawsuits, and ultimately result in failed AI initiatives that never make it past proof of concept.

But resilient agentic AI pipelines can adapt, recover, and keep delivering value even as everything around them changes. These systems maintain performance and recover without manual intervention, even when data drift, regulation changes, or infrastructure failures happen. 

Resilient pipelines reduce downtime costs, improve compliance, and accelerate AI deployment. Fragile ones do the opposite.

Why resilient AI pipelines matter in changing environments

When a traditional software application breaks, you might lose some functionality. But when an AI pipeline breaks, you lose trust from wrong recommendations and bad predictions.

The proof is in the numbers: organizations report up to 40% less downtime and 30% cost savings with smarter, more proactive AI systems.

How fragile and resilient pipelines compare:

  • Monitoring and response: manual monitoring and reactive fixes vs. automated anomaly detection and proactive responses
  • System reliability: single points of failure vs. redundant, self-healing components
  • Architectural flexibility: rigid architectures that break under change vs. adaptive designs that evolve with business needs
  • Security and compliance: governance as an afterthought vs. built-in compliance and security
  • Deployment strategy: vendor lock-in and environment dependencies vs. cloud-agnostic, portable deployments

Resilient systems keep learning, adapting, and delivering value. That’s exactly why enterprise AI platforms like DataRobot build resilience into every layer of the stack. When the only constant is accelerating change, your AI either adapts or becomes obsolete.

Identifying vulnerabilities and failure points

Waiting for something to break and then scrambling to fix it is backward and ultimately hurts operations. Organizations that systematically evaluate risks at each stage of the pipeline can identify potential failure points before they become costly outages.

For AI pipelines, vulnerabilities cluster around three core categories: 

Data drift and pipeline breakdowns

Data drift is the silent killer of AI systems.

Your model was trained on historical data that reflected specific patterns, distributions, and relationships. But data evolves, customer behavior shifts, and market conditions change. Constantly. Suddenly, your model is making predictions based on an outdated reality.

For example, an e-commerce recommendation engine trained on shopping data pre-pandemic would completely miss the shift toward home fitness equipment and remote work tools. The model is operating on wildly outdated assumptions.

The warning signs are clear if you know where to look. Changes in your input data features, population stability index (PSI) scores above threshold, and gradual drops in model accuracy are all signs of drift in progress.

But monitoring isn’t enough. You need automated responses through machine learning pipelines that trigger retraining when drift detection crosses predetermined thresholds. Set up backtesting to validate new models against recent data before deployment, with rollback processes that can quickly revert to previous model versions if performance degrades.

It’s impossible to prevent drift completely. But you can detect it early and respond automatically, keeping your AI aligned with changing reality.
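To make the drift check concrete, here is a minimal sketch of a PSI calculation in plain Python. The bin count, the 0.2 alert level, and the sample data are illustrative conventions, not values prescribed by any particular platform:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time) sample
    and a current (production) sample of one numeric feature. Scores above
    roughly 0.2 are a common, but not universal, signal of serious drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        # small floor avoids log(0) on empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e_frac = bucket_fracs(expected)
    a_frac = bucket_fracs(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

baseline = [0.1 * i for i in range(100)]        # feature values at training time
shifted = [0.1 * i + 4.0 for i in range(100)]   # drifted production values

assert psi(baseline, baseline) < 0.01  # identical data: score near zero
assert psi(baseline, shifted) > 0.2    # shifted data: over the alert level
```

In practice you would compute this per feature on a schedule and treat scores above your chosen ceiling as a signal for the automated retraining response described above.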

Model decay and technical debt

Model decay happens when shortcuts accumulate into larger systemic problems.

Every AI project starts with good intentions, along with organized code, clear notes, proper tracking, and thorough testing. But when deadlines approach, the pressure builds. Shortcuts start to creep in, and data tweaks become quick fixes. Models inevitably get messy, and the documentation never quite catches up.

Before you know it, you’re dealing with technical debt that makes your pipelines fragile and nearly impossible to maintain.

Ad hoc models that can’t be easily reproduced, feature logic buried in uncommented code, and deployment processes that depend on historical knowledge all point to (eventual) decay. And when your original developer leaves, that institutional knowledge walks out the door with them.

The fix takes proactive discipline: 

  • Enforce modular code architecture that separates data processing, feature engineering, model training, and deployment logic. 
  • Keep detailed documentation for every model and feature transformation. 
  • Use MLflow or similar tools for version control that tracks models, as well as the data and code that created them.

This gets you closer to operational resilience. When you can quickly understand, modify, and redeploy any component of your pipeline, you can adapt to change without breaking everything else.
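As a rough illustration of that versioning discipline (a toy stand-in, not MLflow's actual API), a registry can tie every model version to the data hash and code revision that produced it, and keep a promotion history that makes rollback a one-step operation:

```python
import hashlib

class ModelRegistry:
    """Toy registry: each version records the data and code that produced
    it, and the promotion history enables one-step rollback."""
    def __init__(self):
        self.versions = {}   # version number -> provenance metadata
        self.history = []    # promotion order, newest last

    def register(self, model_blob, data_snapshot, code_revision):
        version = len(self.versions) + 1
        self.versions[version] = {
            "model": model_blob,
            "data_hash": hashlib.sha256(data_snapshot).hexdigest(),
            "code_revision": code_revision,
        }
        return version

    @property
    def production(self):
        return self.history[-1] if self.history else None

    def promote(self, version):
        self.history.append(version)

    def rollback(self):
        if len(self.history) > 1:
            self.history.pop()  # revert to the previously promoted version

registry = ModelRegistry()
v1 = registry.register(b"weights-v1", b"january-training-set", "commit-a1b2")
v2 = registry.register(b"weights-v2", b"february-training-set", "commit-c3d4")
registry.promote(v1)
registry.promote(v2)
registry.rollback()  # v2 underperforms: revert to the last known-good version
assert registry.production == v1
```

The names and values here are hypothetical; the point is that every serving model can be traced back to, and rebuilt from, its exact inputs.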

Governance gaps and security risks

Governance is a business-critical requirement that, when missing, creates massive risk and potentially catastrophic vulnerabilities:

  • Weak access controls mean unauthorized users can modify production models. 
  • Missing audit trails make it impossible to track changes or investigate incidents. 
  • Unmanaged bias can lead to discriminatory outcomes that trigger lawsuits. 

Poor data lineage tracking makes compliance reporting a nightmare. GDPR, CCPA, and industry-specific regulations are just the beginning. More AI-specific legislation (like the EU AI Act and Executive Order 14179) is coming, and at some point, compliance won’t be optional.

A strong governance checklist includes:

  • Role-based access control (RBAC) that enforces least-privilege principles
  • Detailed audit logging that tracks every model change and prediction (and why it made each decision)
  • End-to-end encryption for data at rest and in transit
  • Automated fairness audits that detect and flag potential bias
  • Complete data lineage tracking, from data source to prediction

Of course, AI governance solutions aren’t just in place to check off compliance boxes. They ultimately build trust with customers, regulators, and internal stakeholders who need to know your AI systems are operating safely and ethically.

Designing adaptive pipeline architectures

Architecture is where resilience is won or lost.

Monolithic, tightly coupled systems might seem simpler to build, but they’re disasters waiting to happen. When one component fails, everything else does too. When you need to update a single model, you risk breaking the entire pipeline, leading to months of re-architecting.

Adaptive architectures are inherently resilient. They’re modular, cloud-ready, and designed to self-heal, anticipating change rather than resisting it.

Modular components for rapid updates

Modular design is your first line of defense against cascading failures.

Break up those monolithic pipelines into discrete, loosely connected components. Each component should have a single responsibility, well-defined interfaces, and the ability to be updated on its own.

Microservices also enable resource optimization, letting you scale only the components that need extra compute (e.g., a GPU-intensive tool) rather than the full system.

Containerization makes this practical. Docker containers keep each component contained with its dependencies, making them portable and version-controlled. Kubernetes orchestrates these containers, handling scaling, health checks, and resource allocation automatically.

The payoff is agility. When you need to update a single component, you can deploy changes without touching anything else, allocating resources precisely where they’re needed as you scale.

Cloud-native and hybrid harmony

Pure cloud deployments offer scalability and managed services, but many enterprises still need on-premises components for data sovereignty, latency requirements, or regulatory compliance. Solely on-premises deployments offer control, but lack cloud flexibility and managed AI services.

Hybrid architectures give you both. Your most important data stays on-premises, while compute-intensive training happens in the cloud. Secure on-premises AI handles sensitive workloads, while cloud services provide elastic scaling for batch processing.

The aim with this type of setup is standardization. Use Kubernetes for consistent workflow orchestration across environments, with APIs designed to work the same whether they’re calling on-premises or cloud services.

When your pipelines can run anywhere, you can avoid vendor lock-in, keep your negotiating power, and optimize costs by moving workloads to the most efficient environment.

Self-healing mechanisms for resilience

Implement self-healing mechanisms to keep your systems running smoothly without constant human intervention:

  • Build health checks into every component. Monitor response times, accuracy metrics, data quality scores, and resource utilization to make sure services are performing correctly.
  • Put circuit breakers in place that automatically block off failing components before they can cascade failures throughout your system. If your feature engineering service starts timing out, the circuit breaker prevents it from bringing down other services.
  • Design automatic rollback mechanisms. When a new model deployment shows degraded performance, your system should automatically revert to the previous version while alerting the operations team.
  • Add intelligent resource reallocation. When demand spikes for specific models, automatically scale those services while maintaining resource limits for the overall system.

These mechanisms can reduce your mean time to recovery (MTTR) from hours to minutes. But more importantly, they often prevent outages entirely by catching and resolving issues before they impact end users.
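A circuit breaker is simple enough to sketch in a few lines of Python; the failure threshold and reset window below are placeholder values you would tune per service:

```python
import time

class CircuitBreaker:
    """Trips open after `max_failures` consecutive errors; while open,
    calls fail fast instead of hammering a struggling service. After
    `reset_after` seconds, one trial call is allowed through."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow a trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the count
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60)

def flaky_feature_service():  # hypothetical failing dependency
    raise TimeoutError("feature service timed out")

for _ in range(2):
    try:
        breaker.call(flaky_feature_service)
    except TimeoutError:
        pass

# The breaker is now open: callers get an instant error, not a slow timeout.
try:
    breaker.call(flaky_feature_service)
except RuntimeError as err:
    assert "circuit open" in str(err)
```

Production libraries add concurrency safety and richer half-open behavior, but the cascade-stopping idea is exactly this.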

Automating monitoring, retraining, and governance

When you’re managing dozens (or hundreds) of models across multiple environments, manual monitoring is impossible. Human-driven retraining introduces delays and inconsistencies, while manual governance creates compliance gaps and audit headaches.

Automation helps you maintain continuous performance and compliance as your AI systems grow.

Real-time observability

You can’t manage what you can’t measure, and you can’t measure what you can’t see. AI observability gives you real-time visibility into model performance, data quality, prediction accuracy, and business impact through metrics like: 

  • Prediction latency and throughput
  • Model accuracy and drift indicators
  • Data quality scores and distribution shifts
  • Resource utilization and cost per prediction
  • KPIs tied to AI decisions

That said, metrics without action are just dashboards. So set up proactive alerting based on thresholds that adapt to normal variation while catching anomalies. Then have escalation paths that route different types of issues to the right teams, as well as automated responses for common scenarios.

You want to know about problems before your customers do, and resolve them before they impact the business.
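One way to sketch thresholds that adapt to normal variation is a rolling mean-and-standard-deviation band; the window size and the 3-sigma width here are illustrative defaults, not recommendations from any specific tool:

```python
from collections import deque
import statistics

class AdaptiveAlert:
    """Flags a metric only when it leaves a band of k standard deviations
    around its recent rolling mean, so the threshold tracks the metric's
    normal variation instead of sitting at a fixed number."""
    def __init__(self, window=50, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value):
        alert = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history) or 1e-9
            alert = abs(value - mean) > self.k * std
        self.history.append(value)
        return alert

monitor = AdaptiveAlert()
# Latency hovering around 100ms with mild noise: no alerts fire.
assert not any(monitor.observe(100 + (i % 5)) for i in range(30))
# A sudden spike to 500ms falls far outside the learned band.
assert monitor.observe(500)
```

The same pattern applies to accuracy, data-quality scores, or cost per prediction; only the metric being observed changes.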

Automated retraining

There’s no question about whether your models will need retraining. All models degrade over time, so retraining needs to be proactive and automatic.

Set up clear triggers for retraining, like accuracy dropping below defined thresholds, drift detection scores exceeding acceptable ranges, or data volume reaching predetermined refresh intervals. Don’t rely on calendar-based retraining schedules. They’re either too frequent (wasting resources) or not frequent enough (missing critical changes).
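Those triggers can be composed into a single decision function; every threshold in this sketch is a placeholder to be tuned per model:

```python
def should_retrain(accuracy, accuracy_floor,
                   drift_score, drift_ceiling,
                   new_rows, refresh_rows):
    """Combine the three retraining triggers into one decision.
    Returns (decision, reasons) so the reasons can be logged."""
    reasons = []
    if accuracy < accuracy_floor:
        reasons.append("accuracy below floor")
    if drift_score > drift_ceiling:
        reasons.append("drift above ceiling")
    if new_rows >= refresh_rows:
        reasons.append("enough new data accumulated")
    return (len(reasons) > 0, reasons)

# Illustrative numbers: accuracy is fine, but drift has crossed its ceiling.
retrain, why = should_retrain(accuracy=0.91, accuracy_floor=0.88,
                              drift_score=0.34, drift_ceiling=0.2,
                              new_rows=12_000, refresh_rows=50_000)
assert retrain
assert why == ["drift above ceiling"]
```

Logging the reasons alongside the decision also gives you the audit trail the governance section below calls for.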

Use AutoML for consistent, repeatable retraining processes, along with strong backtesting that validates new models against recent data before deployment. Shadow deployments let you compare new model performance against current production models using real-world traffic.

This creates a continuous learning loop where your AI systems adapt to changing conditions automatically, maintaining performance without manual intervention.

Embedded governance

Trying to add governance after your pipeline is built? Too late. It needs to be baked in from the start, or you’re gambling with compliance violations and broken trust.

Automate your documentation with model cards that capture training data, metrics, limitations, and use cases. Run bias detection on every new version to catch fairness issues before deployment, and log every change, every deployment, every prediction. When regulators come knocking, you’ll need that paper trail.

Lock down access so only the right people can make changes, but keep it collaborative enough that work actually gets done. And automate your compliance reports so audits don’t become months-long nightmares.

Done right, governance runs silently in the background. Your data scientists and engineers work freely, and every model still meets your standards for performance, fairness, and compliance. 

Preparing for multi-cloud and hybrid deployments

When your AI pipelines are stuck with specific cloud providers or on-premises infrastructure, you lose flexibility, negotiating power, and the ability to optimize for changing business needs.

Environment-agnostic pipelines prevent vendor lock-in and support global operations across different regulatory and performance requirements, letting you optimize costs by moving workloads to the most efficient environment. They also provide redundancy that protects against bottlenecks like provider outages or service disruptions.

Build this portability in from Day 1. 

Use infrastructure-as-code tools like Terraform to define your environments declaratively. Helm charts keep Kubernetes deployments working consistently across providers, while CI/CD pipelines can deploy to any target environment with configuration changes rather than code changes.

Plan your redundancy strategies carefully. Implement active-passive replication for critical models with automatic failover, and set up load balancing that can route traffic between multiple environments. Design data synchronization that keeps your training and serving data consistent across locations.
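Active-passive failover can be illustrated with a toy router; the two endpoint functions are hypothetical stand-ins for a cloud serving environment and an on-premises replica:

```python
class FailoverRouter:
    """Routes requests to the active endpoint; on failure, promotes the
    passive replica and retries, so callers never see the outage."""
    def __init__(self, active, passive):
        self.active, self.passive = active, passive

    def predict(self, features):
        try:
            return self.active(features)
        except Exception:
            # Automatic failover: the passive replica becomes active.
            self.active, self.passive = self.passive, self.active
            return self.active(features)

def cloud_model(features):      # hypothetical primary endpoint
    raise ConnectionError("provider outage")

def onprem_model(features):     # hypothetical passive replica
    return "prediction"

router = FailoverRouter(cloud_model, onprem_model)
assert router.predict([1, 2, 3]) == "prediction"
assert router.active is onprem_model  # replica promoted after the outage
```

Real deployments do this at the load-balancer or service-mesh layer with health checks rather than exception handling, but the routing logic is the same shape.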

Getting your AI infrastructure right means building for portability from the beginning, not trying to retrofit it later.

Ensuring compliance and security at scale

Fragile systems build walls around the perimeter and hope that nothing gets through. Resilient systems assume attackers will get in and plan accordingly with: 

  • Data encryption everywhere — at rest, in transit, in use
  • Granular access controls that limit who can do what
  • Continuous scanning for vulnerabilities in containers, dependencies, and infrastructure

Match your compliance needs to actual controls. SOC 2 requires audit logs and access management. ISO 27001 demands incident response plans. GDPR enforces privacy by design. Industry regulations each have their own specific requirements.

The cheapest fix is the earliest fix, so adopt DevSecOps practices that catch security issues during development, not after, when they can cost exponentially more to resolve. Build security and compliance checks into every stage using your machine learning project checklist. Retrofitting protection after the fact means you’re already losing the battle.

Incident response strategies for AI pipelines

Failures will happen. The question is whether you’ll respond quickly and effectively, or whether you’ll scramble in crisis mode while your business suffers.

Proactive incident response minimizes impact through preparation, not reaction. You need playbooks, tools, and processes ready before you need them.

Playbooks for containment and recovery

Every type of AI incident needs a specific response playbook with clear triage steps, escalation paths, rollback procedures, and communication templates. Here are some examples:

  • For pipeline outages: Immediate health checks to isolate the failure, automatic traffic routing to backup systems, rollback to last known good configuration, and transparent stakeholder communication about impact and recovery timeline
  • For accuracy drops: Model performance validation against recent data, comparison with shadow deployments or A/B tests, decision on rollback versus emergency retraining, and documentation of root cause for future prevention
  • For security breaches: Immediate isolation of affected systems, assessment of the data exposure, notification of legal and compliance teams, and coordinated response with existing security operations

Close any gaps by testing these playbooks regularly through simulated incidents. Update based on lessons learned, and keep them easily accessible to all team members who might need them.

Cross-team collaboration

AI incidents are “all-hands-on-deck” efforts that depend on collaboration between data science, engineering, operations, security, legal, and business stakeholders.

Set up shared dashboards that give all teams visibility into system health and incident status, and create dedicated incident response channels in Slack or Microsoft Teams that automatically include the right people based on incident type. Tools like PagerDuty can help with alerting and coordination, while Jira is useful for incident tracking and post-mortem analysis.

A coordinated response ensures everyone knows their role and has access to the information they need, so they can resolve issues quickly — without stepping on each other’s toes.

Driving real business outcomes with resilient AI

Resilient pipelines allow you to deploy with confidence, knowing your systems will adapt to changing conditions. They reduce operational costs and deliver faster time-to-value through automation, self-healing capabilities, and increased uptime and reliability, which ultimately builds trust with customers and stakeholders.

Most importantly, they enable AI at scale. When you’re not constantly reacting to broken pipelines, you can focus on building new capabilities, expanding to new use cases, and driving innovation that creates a competitive advantage.

DataRobot’s enterprise platform builds this resilience into every layer of the stack, from automated monitoring and retraining to built-in governance and security, reinforcing your systems so they keep delivering value no matter what changes around them. Find out how AI leaders leverage DataRobot’s enterprise platform to make resilience the default, not an aspiration.

The post How to build resilient agentic AI pipelines in a world of change appeared first on DataRobot.

Are Your Robot Frames Wearing Out Too Fast — and Is the Finish to Blame?

Choosing an appropriate finish is critical to preserving vulnerable mechanisms, such as robot frames, which can deteriorate with the wrong finish. Uncover when and how to use anodization and powder coatings for robotics in the most productive and sustainable ways.