
‘OCTOID,’ a soft robot that changes color and moves like an octopus

Underwater octopuses change their body color and texture in the blink of an eye to blend perfectly into their surroundings when evading predators or capturing prey. They transform their bodies to match the colors of nearby corals or seaweed, turning blue or red, and move by softly curling their arms or snatching prey.

Teaching robot policies without new demonstrations: interview with Jiahui Zhang and Jesse Zhang

The ReWiND method, which consists of three phases: learning a reward function, pre-training, and using the reward function and pre-trained policy to learn a new language-specified task online.

In their paper ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations, which was presented at CoRL 2025, Jiahui Zhang, Yusen Luo, Abrar Anwar, Sumedh A. Sontakke, Joseph J. Lim, Jesse Thomason, Erdem Bıyık and Jesse Zhang introduce a framework for learning robot manipulation tasks solely from language instructions without per-task demonstrations. We asked Jiahui Zhang and Jesse Zhang to tell us more.

What is the topic of the research in your paper, and what problem were you aiming to solve?

Our research addresses the problem of enabling robot manipulation policies to solve novel, language-conditioned tasks without collecting new demonstrations for each task. We begin with a small set of demonstrations in the deployment environment, train a language-conditioned reward model on them, and then use that learned reward function to fine-tune the policy on unseen tasks, with no additional demonstrations required.

Tell us about ReWiND – what are the main features and contributions of this framework?

ReWiND is a simple and effective three-stage framework designed to adapt robot policies to new, language-conditioned tasks without collecting new demonstrations. Its main features and contributions are:

  1. Reward function learning in the deployment environment
    We first learn a reward function using only five demonstrations per task from the deployment environment.

    • The reward model takes a sequence of images and a language instruction, and predicts per-frame progress from 0 to 1, giving us a dense reward signal instead of sparse success/failure.
    • To expose the model to both successful and failed behaviors without having to collect demonstrations of failures, we introduce a video rewind augmentation: for a video segment V(1:t), we choose an intermediate point t1, reverse the segment V(t1:t) to create V(t:t1), and append it to the original sequence. This generates a synthetic sequence that resembles “making progress then undoing progress,” effectively simulating failed attempts.
    • This allows the reward model to learn a smoother and more accurate dense reward signal, improving generalization and stability during policy learning.
  2. Policy pre-training with offline RL
    Once we have the learned reward function, we use it to relabel the small demonstration dataset with dense progress rewards. We then train a policy offline using these relabeled trajectories.
  3. Policy fine-tuning in the deployment environment
    Finally, we adapt the pre-trained policy to new, unseen tasks in the deployment environment. We freeze the reward function and use it as the feedback for online reinforcement learning. After each episode, the newly collected trajectory is relabeled with dense rewards from the reward model and added to the replay buffer. This iterative loop allows the policy to continually improve and adapt to new tasks without requiring any additional demonstrations.
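
The video rewind augmentation from step 1 can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the frame-array layout and the linear progress labels are assumptions:

```python
import numpy as np

def rewind_augment(frames: np.ndarray, t1: int) -> np.ndarray:
    """Create a synthetic 'failure' clip from a successful one.

    frames: (T, H, W, C) array of video frames V(1:t).
    t1: intermediate index where progress starts being undone.
    Returns the original clip followed by the reversed tail V(t:t1),
    i.e. a sequence that makes progress and then undoes it.
    """
    rewound = frames[t1:][::-1]            # V(t:t1): the tail played backwards
    return np.concatenate([frames, rewound], axis=0)

def rewind_labels(T: int, t1: int) -> np.ndarray:
    """Per-frame progress targets for the augmented clip: rise to 1, then fall back."""
    up = np.linspace(0.0, 1.0, T)          # monotone progress for V(1:t)
    down = up[t1:][::-1]                   # progress being undone
    return np.concatenate([up, down])
```

Training the reward model on such rewound clips alongside real demonstrations exposes it to "progress being undone" without collecting any extra data.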

Could you talk about the experiments you carried out to test the framework?

We evaluate ReWiND in both the MetaWorld simulation environment and the Koch real-world setup. Our analysis focuses on two aspects: the generalization ability of the reward model and the effectiveness of policy learning. We also compare how well different policies adapt to new tasks under our framework, demonstrating significant improvements over state-of-the-art methods.

(Q1) Reward generalization – MetaWorld analysis
We collect a MetaWorld dataset of 20 training tasks, each with 5 demos, plus 17 related but unseen tasks for evaluation. We train the reward function on this MetaWorld dataset together with a subset of the OpenX dataset.

We compare ReWiND to LIV[1], LIV-FT, RoboCLIP[2], VLC[3], and GVL[4]. For generalization to unseen tasks, we use video–language confusion matrices. We feed the reward model video sequences paired with different language instructions and expect the correctly matched video–instruction pairs to receive the highest predicted rewards. In the confusion matrix, this corresponds to the diagonal entries having the strongest (darkest) values, indicating that the reward function reliably identifies the correct task description even for unseen tasks.
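
As a sketch of this evaluation, assume a hypothetical reward_model(video, instruction) callable that returns a scalar score for a video–instruction pair; the confusion matrix and its diagonal check can then be computed as:

```python
import numpy as np

def video_language_confusion(reward_model, videos, instructions):
    """Score every video against every instruction.

    Entry [i, j] is the reward assigned to video i paired with
    instruction j; a good reward model puts the largest value of
    each row on the diagonal (the correct video-instruction match).
    """
    n = len(videos)
    M = np.empty((n, n))
    for i, video in enumerate(videos):
        for j, instr in enumerate(instructions):
            M[i, j] = reward_model(video, instr)
    return M

def diagonal_accuracy(M: np.ndarray) -> float:
    """Fraction of videos whose best-scoring instruction is the correct one."""
    return float(np.mean(np.argmax(M, axis=1) == np.arange(len(M))))
```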

Video-language reward confusion matrix. See the paper for more information.

For demo alignment, we measure the correlation between the reward model’s predicted progress and the actual time steps in successful trajectories using Pearson r and Spearman ρ. For policy rollout ranking, we evaluate whether the reward function correctly ranks failed, near-success, and successful rollouts. Across these metrics, ReWiND significantly outperforms all baselines—for example, it achieves 30% higher Pearson correlation and 27% higher Spearman correlation than VLC on demo alignment, and delivers about 74% relative improvement in reward separation between success categories compared with the strongest baseline LIV-FT.
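
The demo-alignment metrics can be computed directly from the reward model's per-frame predictions. A minimal NumPy sketch (Spearman via a rank transform, ignoring ties):

```python
import numpy as np

def alignment_metrics(predicted_progress):
    """Correlate predicted progress with time on a successful demo.

    In a successful trajectory, true progress grows with the frame
    index, so a well-aligned reward model's per-frame predictions
    should correlate strongly with the time step itself.
    """
    p = np.asarray(predicted_progress, dtype=float)
    t = np.arange(len(p), dtype=float)
    pearson = np.corrcoef(p, t)[0, 1]             # linear correlation
    ranks = np.argsort(np.argsort(p)).astype(float)
    spearman = np.corrcoef(ranks, t)[0, 1]        # rank (monotonicity) correlation
    return pearson, spearman
```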

(Q2) Policy learning in simulation (MetaWorld)
We pre-train on the same 20 tasks and then evaluate RL on 8 unseen MetaWorld tasks for 100k environment steps.

Using ReWiND rewards, the policy achieves an interquartile mean (IQM) success rate of approximately 79%, representing a ~97.5% improvement over the best baseline. It also demonstrates substantially better sample efficiency, achieving higher success rates much earlier in training.
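
The interquartile mean averages only the middle 50% of per-run scores, making it more robust to outlier seeds than a plain mean. A minimal version:

```python
import numpy as np

def iqm(scores) -> float:
    """Interquartile mean: average of the middle 50% of scores.

    Discards the bottom and top quartiles before averaging, so a few
    catastrophic or lucky seeds don't dominate the aggregate.
    """
    x = np.sort(np.asarray(list(scores), dtype=float))
    n = len(x)
    lo, hi = n // 4, n - n // 4
    return float(x[lo:hi].mean())
```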

(Q3) Policy learning in real robot (Koch bimanual arms)
Setup: a real-world tabletop bimanual Koch v1.1 system with five tasks, including in-distribution, visually cluttered, and spatial-language generalization tasks.
We use 5 demos for the reward model and 10 demos for the policy in this more challenging setting. With about 1 hour of real-world RL (~50k env steps), ReWiND improves average success from 12% → 68% (≈5× improvement), while VLC only goes from 8% → 10%.

Are you planning future work to further improve the ReWiND framework?

Yes, we plan to extend ReWiND to larger models and further improve the accuracy and generalization of the reward function across a broader range of tasks. In fact, we already have a workshop paper extending ReWiND to larger-scale models.

In addition, we aim to make the reward model capable of directly predicting success or failure, without relying on the environment’s success signal during policy fine-tuning. Currently, even though ReWiND provides dense rewards, we still rely on the environment to indicate whether an episode has been successful. Our goal is to develop a fully generalizable reward model that can provide both accurate dense rewards and reliable success detection on its own.

References

[1] Yecheng Jason Ma et al. “LIV: Language-image representations and rewards for robotic control.” International Conference on Machine Learning. PMLR, 2023.
[2] Sumedh Sontakke et al. “RoboCLIP: One demonstration is enough to learn robot policies.” Advances in Neural Information Processing Systems 36 (2023): 55681–55693.
[3] Minttu Alakuijala et al. “Video-Language Critic: Transferable reward functions for language-conditioned robotics.” arXiv:2405.19988 (2024).
[4] Yecheng Jason Ma et al. “Vision language models are in-context value learners.” The Thirteenth International Conference on Learning Representations, 2025.

About the authors

Jiahui Zhang is a Ph.D. student in Computer Science at the University of Texas at Dallas, advised by Prof. Yu Xiang. He received his M.S. degree from the University of Southern California, where he worked with Prof. Joseph Lim and Prof. Erdem Bıyık.

Jesse Zhang is a postdoctoral researcher at the University of Washington, advised by Prof. Dieter Fox and Prof. Abhishek Gupta. He completed his Ph.D. at the University of Southern California, advised by Prof. Jesse Thomason and Prof. Erdem Bıyık at USC, and Prof. Joseph J. Lim at KAIST.

Aerial microrobot can fly as fast as a bumblebee

In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can't reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.

New control system teaches soft robots the art of staying safe

Imagine having a continuum soft robotic arm bend around a bunch of grapes or broccoli, adjusting its grip in real time as it lifts the object. Unlike traditional rigid robots that generally aim to avoid contact with the environment as much as possible and stay far away from humans for safety reasons, this arm senses subtle forces, stretching and flexing in ways that mimic more of the compliance of a human hand. Its every motion is calculated to avoid excessive force while achieving the task efficiently.

New robotic eyeball could enhance visual perception of embodied AI

Embodied artificial intelligence (AI) systems are robotic agents that rely on machine learning algorithms to sense their surroundings, plan their actions and execute them. A key aspect of these systems are visual perception modules, which allow them to analyze images captured by cameras and interpret them.

Small Business AI Adoption Statistics for 2025: A Comprehensive Analysis

 


Small business AI adoption is surging in 2025, with the traditional large-small enterprise gap rapidly closing. Throughout this analysis, “small business” refers to companies with fewer than 500 employees, following the U.S. Small Business Administration’s standard definition, with most data focusing on businesses under 100 employees. New data from the SBA Office of Advocacy, U.S. Chamber of Commerce, and leading vendor surveys reveals SMBs are not just experimenting—they’re achieving measurable ROI through strategic AI implementation. However, significant barriers around skills training and data readiness persist, creating opportunities for businesses ready to lead.

Key Headlines:

  • Small business AI adoption jumped from 6.3% to 8.8% in six months (SBA/BTOS data)
  • 96% of SMBs plan to adopt emerging technologies including AI (U.S. Chamber 2025)
  • 63% of current AI users deploy it daily, saving 20+ hours monthly (Thryv 2025)
  • Skills gaps remain the #1 barrier, affecting 46% of business leaders (McKinsey)

SMB AI Use Surging in 2025: Headline Figures

The data is unequivocal: small business AI adoption is accelerating at an unprecedented pace. Multiple authoritative sources confirm this trend, though adoption rates vary significantly based on survey methodology and definitions.

The Numbers by Source — 2025

Source                   | Current AI Use | Sample Size         | Key Finding
SBA Office of Advocacy   | 8.8%           | 200,000 businesses  | Gap with large firms shrinking rapidly
U.S. Chamber of Commerce | 58%            | 3,350 SMB leaders   | Up from 40% in 2024, 2x since 2023
Thryv Survey             | 55%            | SMB leaders         | 41% increase year-over-year
Salesforce Research      | 75%            | 3,350 SMBs globally | Experimenting or fully implemented

Sources: SBA Business Trends and Outlook Survey (BTOS), U.S. Chamber Empowering Small Business Report 2025, Thryv AI and Small Business Adoption Survey, Salesforce SMB Trends Report 6th Edition [1,2,3]

Research Methodology Note: The variance in adoption rates reflects different survey approaches—government data (BTOS) uses stricter definitions of “production AI use,” while vendor surveys include experimentation and pilot programs.

Adoption Levels & Momentum: The Gap is Closing

  • SBA/BTOS Analysis: Large-Small Gap Shrinking Fast

The most encouraging trend in the data comes from the SBA Office of Advocacy’s longitudinal analysis. In February 2024, large businesses used AI at 1.8 times the rate of small businesses (11.1% vs 6.3%). By August 2025, this gap had shrunk dramatically—small business usage reached 8.8% while large business adoption actually declined slightly to 10.5%. [1]

Key Insight: Small businesses may only be about a year behind large enterprises in AI adoption, a remarkable improvement from previous technology adoption cycles like broadband internet, where SMBs lagged by decades.

  • U.S. Chamber 2025: Massive Intent to Adopt

The U.S. Chamber’s latest research delivers the most striking headline: 96% of small business owners plan to adopt emerging technologies, including AI. This represents unprecedented intention to embrace new technology among traditionally cautious SMB operators. [2]

Current adoption statistics from the Chamber study:

  • 58% currently use generative AI (up from 40% in 2024)
  • More than double the adoption rate from 2023
  • 82% of AI-using SMBs increased workforce over the past year
  • 77% say limits on AI would negatively impact growth and operations

  • Vendor Data: Daily Use Patterns Emerging

Thryv 2025 Survey Results (labeled as vendor data):

  • 63% use AI daily among current adopters
  • 58% report saving over 20 hours per month
  • 66% save between $500-$2,000 monthly through AI implementation
  • 41% increase in adoption year-over-year (from 39% to 55%)

Salesforce 2024 Findings:

  • 75% of SMBs experimenting with AI, with 36% fully implemented
  • 91% of AI-using SMBs report revenue increases
  • Growing SMBs are 1.8x more likely to invest in AI than declining SMBs
  • 78% say AI will be a “game-changer” for their company [3]

Barriers & Enablers: Skills Gap Dominates Concerns

While adoption momentum builds, significant obstacles persist. Research consistently identifies capability confidence and training gaps as primary adoption barriers.

Top SMB AI Adoption Barriers — 2025

Barrier                      | Percentage Affected | Primary Source
Skills/Training Gaps         | 46%                 | McKinsey Research [4]
“Not Applicable to Business” | 82%                 | SBA (businesses <5 employees)
Budget Constraints           | 34%                 | Various surveys
Data Readiness Issues        | 28%                 | Salesforce SMB Report
Security Concerns            | 22%                 | Multiple sources

Critical Finding: The belief that AI isn’t applicable to their business dominates among the smallest SMBs (under 5 employees), with 82% citing this as their primary reason for non-adoption. However, this drops significantly as business size increases, suggesting an education rather than applicability issue. [1]

Manufacturing SMBs: Where AI Lands First

Manufacturing small businesses show particular promise for AI adoption, with specific use cases gaining traction:

Top Manufacturing AI Applications:

  1. Quality Control & Inspection – 98-99.5% accuracy rates in defect detection
  2. Predictive Maintenance – 90-95% accuracy in failure prediction
  3. Production Scheduling – 80-90% efficiency improvements in target achievement
  4. Supply Chain Optimization – 15-25% cost reduction potential

Manufacturing SMBs face unique advantages: existing process data, clear ROI metrics, and immediate applicability to daily operations. However, they also confront the steepest skills gap—68% of manufacturers report difficulty finding qualified employees, up from 56% in 2023. [5]

ROI & Roadmap: Quick Wins and Strategic Implementation

The businesses succeeding with AI follow predictable patterns in their implementation approach, focusing on data foundation, integrated systems, and measured deployment.

Proven ROI Metrics from Early Adopters

Time and Cost Savings:

  • 58% save 20+ hours monthly (Thryv survey data)
  • 66% report $500-$2,000 monthly savings
  • 87% say AI helps scale operations (Salesforce)
  • 86% see improved profit margins (Salesforce)

Revenue and Growth Impact:

  • 91% report revenue increases among AI-using SMBs
  • 82% of AI users increased workforce over the past year
  • 78% call AI a “game-changer” for their business

90-Day SMB AI Implementation Roadmap

Phase     | Timeline   | Key Activities                                                                      | Success Metrics
Discovery | Days 1-30  | Data audit & quality assessment; use case identification; current system inventory | Defined ROI targets; priority use cases identified
Planning  | Days 31-60 | Vendor selection & comparison; training program design; change management strategy  | Implementation partner selected; team training scheduled
Pilot     | Days 61-90 | Limited deployment; performance monitoring; user feedback collection                | Measurable efficiency gains; user adoption >80%

Data Readiness: The Foundation Factor

Successful SMB AI implementations prioritize data foundation over technology selection. Research shows:

  • 74% of growing SMBs are increasing data management investments vs. 47% of declining SMBs
  • 85% of IT professionals confirm AI outputs are only as good as data inputs
  • 66% of all SMBs plan to increase data management investment next year [3]

Vendor Selection Criteria: Growing SMBs prioritize AI capabilities first (40% say “extremely important”) versus price-focused evaluation by struggling businesses (23% prioritize AI capabilities).

Get Your SMB AI Roadmap in 30 Days

The statistics are clear: SMB AI adoption is not a future trend—it’s happening now. The businesses that thrive will be those that move beyond experimentation to strategic, data-driven implementation.

Ready to join the 91% of AI-using SMBs seeing revenue growth?

USM Business Systems specializes in practical AI implementations for small and medium manufacturing businesses. Our proven methodology delivers measurable results in 30 days, not months.

 

→ Schedule Your AI Strategy Session

→ Download Our SMB AI Implementation Checklist

 

Our SMB/Manufacturing AI Solutions help you navigate vendor selection, data preparation, and change management—turning AI statistics into your competitive advantage.

 


Researchers develop new method for modeling complex sensor systems

A research team at Kumamoto University (Japan) has unveiled a new mathematical framework that makes it possible to accurately model systems using multiple sensors that operate at different sensing rates. This breakthrough could pave the way for safer autonomous vehicles, smarter robots, and more reliable sensor networks.

Optimizing Wheel Drives for AGVs and AMRs: What OEMs Need to Know About Motion Control

The motor and actuator selection behind each wheel can make or break the success of the entire system. In this post, we’ll explore the core challenges in mobile robot drive systems and how customized motion control solutions from DINGS' Motion USA can help you meet them.

The Future of Learning: Role of AI Agents in Education Apps Explained


Imagine a classroom where every student has a personal tutor who knows their strengths and learns at their pace. That is the reality AI agents are creating in today’s education system. With the global AI in education market projected to reach $30 billion by 2032, these intelligent tools are no longer optional add-ons; they’re becoming the backbone of personalized, engaging, and future-ready learning experiences.

In this article, we will discuss the role of AI agents in educational apps, the advantages of AI agents in education, and the potential future impact of AI learning. 

What Are AI Agents in Education?

AI agents are more than just computer programs: they’re digital learning partners that sense, learn, and act with purpose. In education, their true value lies in adapting to each student’s needs, providing personalized guidance, and engaging through interactive conversations. Unlike static software, AI agents in education continuously improve through personalized interactions, empowering learners, boosting engagement, and making education more effective and accessible.

Examples of AI agents used in education include:

  • Real-time chat tutors.
  • Adaptive learning systems that adjust lesson difficulty based on student performance.
  • Grading assistants that automate student grading and provide feedback.
  • AI classroom management assistants that help teachers monitor students’ engagement and performance.

Key Roles of AI Agents in Education Apps

  1. Personalized Learning Paths 

No two people learn the same way: some absorb best by seeing, others by doing. Standardized learning paths often fail to address this individuality. AI agents close this gap by tracking each learner’s performance, identifying strengths and weaknesses, and creating personalized lesson plans that truly match their unique learning style.

 

  2. Intelligent Tutoring Systems

Intelligent AI agents act like 24/7 personal tutors, adjusting to each student’s pace, identifying struggles, and reshaping lessons in real time to create a personalized path. They don’t just deliver answers; they guide with explanations, break complex topics into simple steps, and keep learners motivated with instant feedback, making education more engaging, efficient, and tailored than ever before.

 

  3. Real-Time Feedback and Assessment

A top benefit of AI agents in education apps is instant feedback. Students no longer have to wait days for teachers to review their work before they can improve. And because routine assessment is automated, teachers are freed to focus on more sophisticated activities such as mentorship and fostering critical thinking.

 

  4. Enhancing Engagement Through Gamification

AI assistants boost student engagement by using gamification like points, levels, and challenges, tailored to each learner, making progress rewarding and learning more interactive. This keeps students motivated and consistent in their learning journey.

 

  5. Language Translation and Accessibility

NLP-enabled AI agents enhance education applications by breaking down language and accessibility barriers. They can provide real-time translation, subtitles, and even audio reading aids for visually impaired learners, so that every student is served. This makes learning accessible to learners across the globe.

 

  6. Teacher Support and Classroom Management

AI agents do not just assist students; they support teachers, too. AI frees teachers from routine activities like attendance, grading, and progress tracking so they can concentrate on instruction and student engagement. It can even suggest teaching adjustments based on classroom data.

Recommended To Read: Top 50 AI Companies in US, India & Europe

Top Benefits of AI Agents in Education Apps

The impact of AI on educational software can be summarized in three dimensions:

For Students:

  • Personalized learning paths
  • Tutoring assistance 24/7
  • Higher motivation and engagement
  • Accessibility regardless of location or ability

For Teachers: 

  • Reduced administrative load
  • Real-time feedback on student progress
  • Autonomy to work on high-level teaching
  • Support for coping with changing classroom conditions

For Institutions:

  • Scalable learning platforms
  • Low-cost delivery of instruction
  • Enhanced student performance and retention
  • Data-driven decision-making

 

The Future of AI Agents in Educational Apps Development

With AI in education projected to grow at over 45% CAGR by 2030, the future of AI agents in educational app development looks highly promising. They will power adaptive learning paths, real-time feedback, gamification, and language translation, making education more inclusive, engaging, and effective for learners worldwide.

  • Hyper-Personalization: AI agents will go beyond pace-based learning, adapting to students’ attention spans, emotional states, and thought patterns for deeper personalization.
  • Immersive Learning: Through AR/VR integration, AI agents will guide learners in virtual science labs, historical reconstructions, and interactive simulations for hands-on experiences.
  • Emotional Intelligence: By detecting frustration, distraction, or excitement, AI agents will adjust teaching styles in real time to keep learners engaged.
  • Global Collaboration: AI-powered platforms will connect students worldwide, encouraging cross-cultural teamwork and collaborative problem-solving.
  • Lifelong Learning: As reskilling and upskilling become essential, AI agents will serve as continuous learning companions, helping individuals adapt to evolving careers and industries.

How Does the Best AI Development Partner Help Organizations Like Yours?

Choosing the right AI development partner can make all the difference in creating impactful educational apps. The best partners bring deep technical expertise, proven experience in AI integration, and a focus on delivering tailored solutions that drive engagement, accessibility, and learning outcomes. They not only build intelligent systems but also ensure scalability, security, and continuous innovation, helping your organization stay ahead in the rapidly evolving edtech landscape.

What Makes USM Business Systems a Unique AI Development Partner?

  • Proven Expertise: Extensive experience in AI and machine learning across industries.
  • EdTech Focus: Deep understanding of educational technology and learner-centric solutions.
  • Personalization & Engagement: Design AI agents that adapt to individual learning styles and boost student engagement.
  • Scalable & Secure Solutions: Build apps that grow with your user base while maintaining top-level security.
  • Continuous Innovation: Ensure your apps remain cutting-edge with ongoing optimization and feature enhancements.
  • Measurable Outcomes: Deliver educational solutions that produce tangible learning improvements.

 

Conclusion

The integration of AI agents into education apps is no longer a futuristic concept; they are transforming how students learn, how teachers teach, and how schools operate. From immersive experiences and emotional intelligence to global collaboration and lifelong learning, AI agents are redefining education. As an AI development company, we empower educational apps to leverage these agents, creating smarter, future-ready learning experiences for every learner.

 


AUCTION – FACILITY CLOSURE – MAJOR ROBOTICS AUTOMATION COMPANY

BTM Industrial is a leading asset disposition company assisting manufacturing companies with their surplus asset needs. Founded in 2011, it is a fully licensed-and-regulated, commission-based auction and liquidation company. The company’s full asset disposition programs provide customers with the ability to efficiently manage all aspects of their surplus and achieve higher value.

BTM Industrial – the industry leader in assisting companies with surplus assets.


Artificial tendons give muscle-powered robots a boost

Our muscles are nature's actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate "biohybrid robots" made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers.