
Top Ten Stories in AI Writing, Q3 2025

As we close in on the third anniversary of ChatGPT’s release to the world – November 30, 2022 – the chatbot, along with others like it, is well on the way to transforming the world as we know it.

My only hope is they don’t screw it up.

On the plus side, after nearly three years of turning to ChatGPT for writing, brainstorming – and research that can be confirmed with hotlinks – the magic of ChatGPT is still as fresh as the day it was born.

No matter how many times I type a question or prompt into ChatGPT or similar AI, its ability to respond with often incredibly insightful and artfully written prose still feels fantastical to me.

Unfortunately, AI makers have taken to mixing that verifiable magic with a healthy dose of smoke and mirrors, leaving many users wondering: What’s real and what’s snake oil?

As the new year unfolded, for example, we were promised that 2025 would be the ‘Year of the AI Agent,’ a wondrous new AI application that would work autonomously on our behalf, completing multi-step tasks for us without the need of supervision.

Instead, we were given AI’s version of vaporware: Extremely unreliable applications that often only get part of the job done, if we’re lucky — or worse, report back to us that the task we assigned was simply too difficult to complete.

Meanwhile, the release of ChatGPT-5, pre-packaged as ‘Beyond AI’s Next Big Thing,’ landed with a thud, sporting an AI personality so bland and off-putting that its maker raced to reinstate the earlier version it was supposed to replace – ChatGPT-4o – lest scores of ChatGPT users jump ship.

The problem with repeatedly burning consumers with those kinds of empty promises is that they often walk away in complete disgust, characterizing the companies behind the digital head-fakes as charlatans not worth dealing with on any level.

And with AI, that’s the real crime.

OpenAI, Google, Anthropic, xAI and similar – they truly have come up with incredibly dazzling technology that if released in the late 1600s probably would have been seen as the work of witches.

But if they continue to mix the proven magic of AI with ‘wouldn’t it be nice’ ideas portrayed as ‘finished products you can trust,’ they risk discrediting the entire industry — and setting back the widespread adoption of AI by business and society by years.

As an avid, daily user of AI who deeply appreciates what AI can actually do, I truly hope that does not happen.

In the meantime, here are the stories that emerged in Q3 that helped drive the aforementioned trend – as well as a number of bright spots:

*ChatGPT’s Top Use at Work: Writing: A new study by ChatGPT’s maker finds that writing is the number one use for the tool at work.

Observes the study’s lead researcher Aaron Chatterji: “Work usage is more common from educated users in highly paid professional occupations.”

Another major study finding: Once mostly embraced by men, ChatGPT is now popular with women.

Specifically, researchers found that by July 2025, 52% of ChatGPT users had names that could be classified as feminine.

*Bringing in ChatGPT for Email: The Business Case: While AI coders push the tech to ever-loftier heights, one thing we already know for sure is that AI can write emails at a world-class level — in a flash.

True, long-term, AI may one day usher in a world in which AI-powered machines do all the work while we enjoy lives resplendent with abundance.

But in the here and now, AI is already saving businesses and organizations serious coin in terms of slashing time spent on email, synthesizing ideas in new ways, ending email drudgery as we know it and boosting staff morale.

Essentially: There are all sorts of reasons for businesses and organizations to bring in bleeding-edge AI tools like ChatGPT, Gemini, Claude and similar to take over the heavy lifting when it comes to email.

This piece offers up the Top Ten.

*ChatGPT-Maker Brings Back ChatGPT-4o, Other Legacy AI Engines: Responding to significant consumer backlash, OpenAI has restored access to GPT-4o and other legacy models that were popular before the release of GPT-5.

Essentially, many users were turned off by GPT-5’s initial personality, which was perceived as cold, distant and terse.

Observes writer Will Knight: “The backlash has sparked a fresh debate over the psychological attachments some users form with chatbots trained to push their emotional buttons.”

*ChatGPT Plus Users Get Meeting Recording, Transcripts, Summaries: Users of ChatGPT Plus can now use the AI to quickly record meetings – as well as generate transcripts and summaries of those meetings.

Dubbed ‘Record Mode,’ the feature was previously only available to users of higher-tier ChatGPT subscriptions.

Observes writer Lance Whitney: The AI “converts the spoken audio into a text transcript. From there, you can tell ChatGPT to analyze or summarize the content — and ask specific questions about the topics discussed.”

*New Claude Sonnet 4.5: 61% Reliability in Agent Mode: Anthropic is out with an upgrade to its flagship AI that offers 61% reliability when used as an agent for everyday computing tasks.

Essentially, that means when you use Sonnet 4.5 as an agent to complete an assignment featuring multi-step tasks like opening apps, editing files, navigating Web pages and filling out forms, it will complete those assignments for you 61% of the time.

One caveat: That reliability metric – known as the OSWorld-Verified Benchmark – is based on Sonnet 4.5’s performance in a sandbox environment, where researchers pit the AI against a set of pre-programmed, digital encounters that never change.

Out on the Web – where things can get unpredictable very quickly – performance could be worse.

Bottom line: If an AI agent that finishes three-out-of-every-five tasks turns your crank, this could be the AI you’ve been looking for.

*Skepticism Over the ‘Magic’ of AI Agents Persists: Despite blue-sky promises, AI agents – ostensibly designed to handle tasks autonomously for you on the Web and elsewhere – are still getting a bad rap.

Observes writer Rory Bathgate: “Let’s be very clear here: AI agents are still not very good at their ‘jobs’ — or at least pretty terrible at producing returns-on-investment.”

In fact, tech market research firm Gartner is predicting that 40% of agents currently used by business will be ‘put out to pasture’ by 2027.

*AI Agents: Still Not Ready for Prime Time?: Add Futurism Magazine to the growing list of naysayers who believe AI agents are being over-hyped.

Ideally, AI agents are designed to work independently on a number of tasks for you – such as researching, writing and continually updating an article – all on their own.

But writer Joe Wilkins finds that “the failure rate is absolutely painful,” with OpenAI’s AI agent failing 91% of the time, Meta’s AI agent failing 93% of the time and Google’s AI agent failing 70% of the time.

*Coming Soon: ChatGPT With Ads: If you’re a ChatGPT user who has often looked wistfully at the platform and fantasized, “If only this thing had ads,” you’re in luck.

Observes writer Andrew Cain: “OpenAI is building a team to transform ChatGPT into an advertising platform, leveraging its 700 million users for in-house ad tools like campaign management and real-time attribution.

“Led by ex-Facebook exec Fidji Simo, this move aims to compete with Google and Meta, though it risks user trust and privacy concerns.

“Rollout is eyed for 2026.”

*Google’s New ‘Nano Banana’ Image Editor: Cool Use Cases: The fervor over Google’s new image editor continues to rage across the Web, as increasing numbers of users are entranced by its power and surgical precision.

One of the new tool’s most impressive features: The ability to stay true to the identity of a human face – no matter how many times it remakes that image.

For a quick study, check out these videos on YouTube, which show you scores of ways to use the new editor – officially known as Gemini 2.5 Flash Image:

–Google Gemini 2.5 Flash Image (Nano Banana) – 20 Creative Use Cases

–15 New Use Cases with Nano Banana

–The Ultimate Guide to Gemini 2.5 Flash (Nano Banana)

–New Gemini 2.5 Flash Image is Insane & Free

–Nano Banana Just Crushed Image Editing

*Grammarly Gets Serious Chops as Writing Tool: Best known as a proofreading and editing solution, Grammarly has repositioned itself as a full-fledged AI writer.

Essentially, the tool has been significantly expanded with a new document editor designed to nurture an idea into a full-blown article, blog post, report and similar – with the help of a number of AI agents.

Dubbed Grammarly ‘Docs,’ the AI writer promises to amplify your idea every step of the way – without stepping on your unique voice.


Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.


The post Top Ten Stories in AI Writing, Q3 2025 appeared first on Robot Writers AI.

The agent workforce: Redefining how work gets done 

The real future of work isn’t remote or hybrid — it’s human + agent. 

Across enterprise functions, AI agents are taking on more of the execution of daily work while humans focus on directing how that work gets done. Less time spent on tedious admin means more time spent on strategy and innovation — which is what separates industry leaders from their competitors.

These digital coworkers aren’t your basic chatbots with brittle automations that break when someone changes a form field. AI agents can reason through problems, adapt to new situations, and help achieve major business outcomes without constant human handholding.

This new division of labor is enhancing (not replacing) human expertise, empowering teams to move faster and smarter with systems designed to support growth at scale.

What is an agent workforce, and why does it matter?

An “agent workforce” is a collection of AI agents that operate like digital employees within your organization. Unlike rule-based automation tools of the past, these agents are adaptive, reasoning systems that can handle complex, multi-step business processes with minimal supervision.

This shift matters because it’s changing the enterprise operating model: You can push more work through fewer hands — and do it faster, at a lower cost, and without increasing headcount.

Traditional automation understands very specific inputs, follows predetermined steps (based on those initial inputs), and gives predictable outputs. The problem is that these workflows break the moment something happens that’s outside of their pre-programmed logic.

With an agentic AI workforce, you give your agents objectives, provide context about constraints and preferences, and they figure out how to get the job done. They adapt when circumstances and business needs change, escalate issues to human teams when they hit roadblocks, and learn from each interaction (good or bad). 

Legacy automation tools vs. an agentic AI workforce:

  • Flexibility: Rule-based, fragile tasks that break on edge cases vs. outcome-driven orchestration that plans, executes, and replans to hit targets.
  • Collaboration: Siloed bots tied to one tool or team vs. cross-functional swarms that coordinate across apps, data, and channels.
  • Upkeep: High upkeep with constant script fixes and change tickets vs. self-healing systems that adapt to UI/schema changes and retain learning.
  • Adaptability: Deterministic only, failing outside predefined paths, vs. ambiguity-ready reasoning that works through novel inputs and escalates with context.
  • Focus: Project mindset (outputs delivered, then parked) vs. KPI mindset (continuous execution against revenue, cost, risk, or CX goals).
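The "give agents objectives and let them figure out how to get the job done" pattern described above can be sketched as a minimal control loop. This is an illustrative skeleton only; the planner and executor passed in are hypothetical stand-ins for an LLM planner and a tool executor, not any vendor's actual API:

```python
# Minimal sketch of an objective-driven agent loop: plan, execute, replan,
# and escalate to a human when a step fails (illustrative only).

def run_agent(objective, plan_step, execute_step, max_steps=10):
    """Pursue an objective step by step, escalating on the first failure."""
    history = []
    for _ in range(max_steps):
        step = plan_step(objective, history)   # planner decides the next action
        if step is None:                       # planner signals the objective is met
            return {"status": "done", "history": history}
        result = execute_step(step)            # tool executor performs the action
        if not result["ok"]:
            # Roadblock: hand off to a human reviewer with full context
            return {"status": "escalated", "failed_step": step, "history": history}
        history.append((step, result))         # retain the interaction for replanning
    return {"status": "max_steps_reached", "history": history}

# Toy run with stubbed-out planner and executor:
steps = iter(["open_app", "edit_file", None])
outcome = run_agent("update report",
                    plan_step=lambda obj, hist: next(steps),
                    execute_step=lambda step: {"ok": True})
```

A real deployment would replace the stubs with model and tool calls and add logging and timeouts; the escalation branch is what keeps humans in the loop.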

But the real challenge isn’t defining a single agent — it’s scaling to a true workforce.

From one agent to a workforce

While individual agent capabilities can be impressive, the real value comes from orchestrating hundreds or thousands of these digital workers to transform entire business processes. But scaling from one agent to an entire workforce is complex, and that’s the point where most proofs-of-concept stall or fail.

The key is to treat agent development as a long-term infrastructure investment, not a “project.” Enterprises that get stuck in pilot purgatory are those that start with a plan to finish, not a plan to scale.

Scaling agents requires governance and oversight — similar to how HR manages a human workforce. Without the infrastructure to do so, everything gets harder: coordination, monitoring, and control all break down as you scale. 

One agent making decisions is manageable. Ten agents collaborating across a workflow need structure. A hundred agents working across different business units? That takes ironed-out, enterprise-grade governance, security, and monitoring.

An agent-first AI stack is what makes it possible to scale your digital workforce with clear standards and consistent oversight. That stack includes: 

  • Compute resources that scale as needed
  • Storage systems that handle multimodal data flows
  • Orchestration platforms that coordinate agent collaboration
  • Governance frameworks that keep performance consistent and sensitive data secure

Scaling AI apps and agents to deliver business-wide impact is an organizational redesign, and should be treated as such. Recognizing this early gives you the time to invest in platforms that can manage agent lifecycles from development through deployment, monitoring, and continuous improvement. Remember, the goal is scaling through iteration and improvement, not completion.

Business outcomes over chatbots

Many of the AI agents in use today are really just dressed-up chatbots with a handful of use cases: They can answer basic questions using natural language, maybe trigger a few API calls, but they can’t move the business forward without a human in the loop.

Real enterprise agents deliver end-to-end business outcomes, not answers. 

They don’t just regurgitate information. They act autonomously, make decisions within defined parameters, and measure success the same way your business does: speed, cost, accuracy, and uptime.

Think about banking. The traditional loan approval workflow looks something like:

Human reviews application -> human checks credit score -> human validates documentation -> human makes approval decision 

This process takes days or (more likely) weeks, is error-prone, creates bottlenecks if any single piece of information is missing, and scales poorly during high-demand periods.

With an agent workforce, banks can shift to “lights-out lending,” where agents handle the entire workflow from intake to approval and run 24/7 with humans only stepping in to focus on exceptions and escalations.

The results?

  • Loan turnaround times drop from days to minutes.
  • Operational costs fall sharply.
  • Compliance and accuracy improve through consistent logic and audit trails.
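To make the intake-to-approval flow concrete, here is a hedged sketch of the decision step. The field names and policy thresholds (credit score 680, income 40,000) are invented for illustration, not any bank's actual criteria:

```python
# Illustrative decision step for a "lights-out lending" pipeline.
# Thresholds and field names are invented; real systems would pull
# credit-bureau data, verify documents, and log an audit trail.

def process_application(app):
    """Approve, decline, or escalate a loan application to a human reviewer."""
    required = ("name", "income", "credit_score", "documents_complete")
    if not all(field in app for field in required):
        return {"decision": "escalate", "reason": "missing information"}
    if not app["documents_complete"]:
        return {"decision": "escalate", "reason": "documentation incomplete"}
    if app["credit_score"] >= 680 and app["income"] >= 40_000:  # assumed policy
        return {"decision": "approved", "reason": "meets policy"}
    if app["credit_score"] < 580:
        return {"decision": "declined", "reason": "below minimum score"}
    return {"decision": "escalate", "reason": "borderline, needs human judgment"}

decision = process_application({"name": "A. Borrower", "income": 52_000,
                                "credit_score": 710, "documents_complete": True})
```

Everything above the final branch runs unattended; only exceptions and borderline cases reach a human, which is where the turnaround-time gains come from.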

In manufacturing, the same transformation is happening in self-fulfilling supply chains. Instead of humans constantly monitoring inventory levels, predicting demand, and coordinating with suppliers, autonomous agents handle the entire process. They can analyze consumption patterns, predict shortages before they happen, automatically generate purchase orders, and coordinate delivery schedules with supplier systems.

The payoff here for enterprises is significant: fewer stockouts, lower carrying costs, and production uptime that isn’t tied to shift hours.
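The shortage-prediction step in that loop can be approximated with a classic reorder-point rule. The numbers below are invented, and a production agent would use a real demand forecast rather than a simple average:

```python
# Sketch of shortage prediction: reorder when projected stockout falls
# inside the supplier's lead time plus a safety buffer (toy numbers).

def days_until_stockout(on_hand, recent_daily_usage):
    """Project days of stock left from average recent consumption."""
    avg_daily = sum(recent_daily_usage) / len(recent_daily_usage)
    return on_hand / avg_daily

def should_reorder(on_hand, recent_daily_usage, lead_time_days, safety_days=3):
    """True when a purchase order should be generated now."""
    return days_until_stockout(on_hand, recent_daily_usage) <= lead_time_days + safety_days

# 120 units on hand, averaging 12/day, 7-day lead time: 10 days of stock
# left vs. a 10-day threshold, so the agent would raise a purchase order.
reorder = should_reorder(on_hand=120, recent_daily_usage=[10, 12, 14], lead_time_days=7)
```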

Security, compliance, and responsible AI

Trust in your AI systems will determine whether they help your organization accelerate or stall. Once AI agents start making decisions that impact customers, finances, and regulatory compliance, the question is no longer “Is this possible?” but “Is this safe at scale?”

Agent governance and trust are make-or-break for scaling a digital workforce. That’s why they deserve board-level visibility, not a footnote in the IT strategy.

As agents gain access to sensitive systems and act on regulated data, every decision they make traces back to the enterprise. There’s no delegating accountability: Regulators and customers will expect transparent evidence of what an agent did, why it did it, and which data informed its reasoning. Black-box decision-making introduces risks that most enterprises cannot tolerate.

Human oversight will never disappear completely, but it will change. Instead of humans doing the work, they’ll shift to supervising digital workers and stepping in when human judgment or ethical reasoning is needed. That layer of oversight is your safeguard for sustaining responsible AI as your enterprise scales.

Secure AI gateways and governance frameworks form the foundation of trust in your enterprise AI, unifying control, enforcing policies, and helping maintain full visibility across agent decisions. Crucially, design those governance frameworks before deploying agents: building in agent governance and lifecycle control from the start helps avoid the costly rework and compliance risks that come with retrofitting your digital workforce later.

Enterprises that design with control in mind from the start build a more durable system of trust that empowers them to scale AI safely and operate confidently — even under regulatory scrutiny.

Shaping the future of work with AI agents

So, what does this mean for your competitive strategy? Agent workforces aren’t just tweaking your existing processes. They’re creating entirely new ways to compete. The advantage isn’t about faster automation, but about building an organization where:

  • Work scales faster without adding headcount or sacrificing accuracy. 
  • Decision cycles go from weeks to minutes. 
  • Innovation isn’t limited by human bandwidth.

Traditional workflows are linear and human-dependent: Person A completes Task A and passes to Person B, who completes Task B, and so on. Agent workforces enable dynamic, parallel processing, where multiple agents collaborate in real time to optimize outcomes rather than just checking specific tasks off a list.
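The linear-versus-parallel contrast can be sketched in a few lines; the research and draft functions are stand-ins for agent calls, run here with a thread pool:

```python
# Sequential handoff vs. agents working in parallel (stand-in tasks;
# real agents would be model or service calls).
from concurrent.futures import ThreadPoolExecutor

def research(topic):
    return f"notes on {topic}"

def draft(notes):
    return f"draft from {notes}"

topics = ["pricing", "competitors", "regulation"]

# Linear: each handoff waits for the previous step to finish
linear = [draft(research(t)) for t in topics]

# Parallel: three agents run the same pipeline concurrently
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = list(pool.map(lambda t: draft(research(t)), topics))

assert linear == parallel  # same outputs, produced concurrently
```

The point is not the thread pool itself but the shape of the work: independent pipelines that no longer queue behind a single person.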

This is already leading to new roles that didn’t exist even five years ago:

  • Agent trainers specialize in teaching AI systems domain-specific knowledge. 
  • Agent supervisors monitor performance and jump in when situations require human judgment. 
  • Orchestration leads structure collaboration across different agents to achieve business objectives.

For early adopters, this creates an advantage that’s difficult for latecomer competitors to match. 

An agent workforce can process customer requests 10x faster than human-dependent competitors, respond to market changes in real time, and scale instantly during demand spikes. The longer enterprises wait to deploy their digital workforce, the harder it becomes to close that gap.

Looking ahead, enterprises are moving toward:

  • Reasoning engines that can handle even more complex decision-making 
  • Multimodal agents that process text, images, audio, and video simultaneously
  • Agent-to-agent collaboration for sophisticated workflow orchestration without human coordination

Enterprises that build on platforms designed for lifecycle governance and secure orchestration will define this next phase of intelligent operations. 

Leading the shift to an agent-powered enterprise

If you’re convinced that agent workforces offer a strategic opportunity, here’s how leaders move from pilot to production:

  1. Get executive sponsorship early. Agent workforce transformation starts at the top. Your CEO and board need to understand that this will fundamentally change how work gets done (for the better).
  2. Invest in infrastructure before you need it. Agent-first platforms and governance frameworks can take months to implement. If you start pilot projects on temporary foundations, you’ll create technical debt that’s more expensive to fix later.
  3. Build in governance frameworks from Day 1. Put security, compliance, and monitoring frameworks in place before your first agent goes live. These guardrails make scaling possible and safeguard your enterprise from risk as you add more agents to the mix.
  4. Partner with proven platforms that specialize in agent lifecycle management. Building agentic AI applications takes expertise that most teams haven’t developed internally yet. Partnering with platforms designed for this purpose shortens the learning curve and reduces execution risk.

Enterprises that lead with vision, invest in foundations, and operationalize governance from day one will define how the future of intelligent work takes shape.

Explore how enterprises are building, deploying, and governing secure, production-ready AI agents with the Agent Workforce Platform. 

The post The agent workforce: Redefining how work gets done  appeared first on DataRobot.

Quantum simulations that once needed supercomputers now run on laptops

A team at the University at Buffalo has made it possible to simulate complex quantum systems without needing a supercomputer. By expanding the truncated Wigner approximation, they’ve created an accessible, efficient way to model real-world quantum behavior. Their method translates dense equations into a ready-to-use format that runs on ordinary computers. It could transform how physicists explore quantum phenomena.

Figure AI shows latest robot – Figure 03

Figure AI has just introduced its latest android robot, Figure 03. The new robot comes with a lot of improvements over the previous one, such as redesigned hands including softer fingertips with improved grasp and new cameras in the hands, wireless charging through electromagnetic induction, faster actuators with improved torque density, an improved audio system, improved camera […]

Main Components of a Humanoid Robot

A humanoid robot has several main systems working together:

  • Sensors: Cameras, microphones, tactile sensors, motion sensors and more.
  • AI: Through artificial intelligence, decision making, computer vision, navigation, motion planning, interactions with surroundings and all other systems are coordinated and managed.
  • Power Supply and Management Systems: Optimization of energy usage and efficient usage of […]

Scientists create a magnetic lantern that moves like it’s alive

A team of engineers at North Carolina State University has designed a polymer “Chinese lantern” that can rapidly snap into multiple stable 3D shapes—including a lantern, a spinning top, and more—by compression or twisting. By adding a magnetic layer, they achieved remote control of the shape-shifting process, allowing the lanterns to act as grippers, filters, or expandable mechanisms.

Robot Talk Episode 128 – Making microrobots move, with Ali K. Hoshiar

Claire chatted to Ali K. Hoshiar from University of Essex about how microrobots move and work together.

Ali Hoshiar is a Senior Lecturer in Robotics at the University of Essex and Director of the Robotics for Under Millimetre Innovation (RUMI) Lab. He leads the EPSRC-funded ‘In-Target’ project and was awarded the university’s Best Interdisciplinary Research Award. His research focuses on microrobotics, soft robotics, and data-driven mechatronic systems for medical and agri-tech applications. He also holds an MBA, adding strategic and commercial insight to his technical work.

 

Robotic programming brings increased productivity and faster return on investment

"By optimizing robot programs, users can see a significant increase in productivity while generating more profit with their robot, using robots for short production runs, and delivering closest conformance to design." Carlos Marcovici, Robotmaster Authorized Partner, Brazil

Why GPS fails in cities. And how it was brilliantly fixed

Our everyday GPS struggles in “urban canyons,” where skyscrapers bounce satellite signals, confusing even advanced navigation systems. NTNU scientists created SmartNav, combining satellite corrections, wave analysis, and Google’s 3D building data for remarkable precision. Their method achieved accuracy within 10 centimeters during testing. The breakthrough could make reliable urban navigation accessible and affordable worldwide.

Scientists suggest the brain may work best with 7 senses, not just 5

Scientists at Skoltech developed a new mathematical model of memory that explores how information is encoded and stored. Their analysis suggests that memory works best in a seven-dimensional conceptual space — equivalent to having seven senses. The finding implies that both humans and AI might benefit from broader sensory inputs to optimize learning and recall.

Interview with Zahra Ghorrati: developing frameworks for human activity recognition using wearable sensors


In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Zahra Ghorrati is developing frameworks for human activity recognition using wearable sensors. We caught up with Zahra to find out more about this research, the aspects she has found most interesting, and her advice for prospective PhD students.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am pursuing my PhD at Purdue University, where my dissertation focuses on developing scalable and adaptive deep learning frameworks for human activity recognition (HAR) using wearable sensors. I was drawn to this topic because wearables have the potential to transform fields like healthcare, elderly care, and long-term activity tracking. Unlike video-based recognition, which can raise privacy concerns and requires fixed camera setups, wearables are portable, non-intrusive, and capable of continuous monitoring, making them ideal for capturing activity data in natural, real-world settings.

The central challenge my dissertation addresses is that wearable data is often noisy, inconsistent, and uncertain, depending on sensor placement, movement artifacts, and device limitations. My goal is to design deep learning models that are not only computationally efficient and interpretable but also robust to the variability of real-world data. In doing so, I aim to ensure that wearable HAR systems are both practical and trustworthy for deployment outside controlled lab environments.

This research has been supported by the Polytechnic Summer Research Grant at Purdue. Beyond my dissertation work, I contribute to the research community as a reviewer for conferences such as CoDIT, CTDIAC, and IRC, and I have been invited to review for AAAI 2026. I was also involved in community building, serving as Local Organizer and Safety Chair for the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), and continuing as Safety Chair for AAMAS 2026.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, my research has focused on developing a hierarchical fuzzy deep neural network that can adapt to diverse human activity recognition datasets. In my initial work, I explored a hierarchical recognition approach, where simpler activities are detected at earlier levels of the model and more complex activities are recognized at higher levels. To enhance both robustness and interpretability, I integrated fuzzy logic principles into deep learning, allowing the model to better handle uncertainty in real-world sensor data.

A key strength of this model is its simplicity and low computational cost, which makes it particularly well suited for real-time activity recognition on wearable devices. I have rigorously evaluated the framework on multiple benchmark datasets of multivariate time series and systematically compared its performance against state-of-the-art methods, where it has demonstrated both competitive accuracy and improved interpretability.
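For readers unfamiliar with the fuzzy-logic idea mentioned above, here is a generic illustration (not the interviewee's actual model): a triangular membership function turns a raw sensor feature into a graded degree of confidence instead of a hard class boundary. The feature and its ranges are invented:

```python
# Generic fuzzy-logic illustration: triangular membership functions give
# graded confidence (0..1) instead of hard classification boundaries.

def triangular_membership(x, a, b, c):
    """Degree to which x belongs to a fuzzy set rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Score an invented feature (mean acceleration magnitude) against two
# activity sets with made-up ranges:
feature = 1.4
walking = triangular_membership(feature, 0.8, 1.5, 2.2)  # high membership
running = triangular_membership(feature, 1.8, 2.6, 3.4)  # no membership
```

Downstream layers can then reason over these degrees of confidence, which is what makes fuzzy classifiers more interpretable than a single hard threshold.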

Is there an aspect of your research that has been particularly interesting?

Yes, what excites me most is discovering how different approaches can make human activity recognition both smarter and more practical. For instance, integrating fuzzy logic has been fascinating, because it allows the model to capture the natural uncertainty and variability of human movement. Instead of forcing rigid classifications, the system can reason in terms of degrees of confidence, making it more interpretable and closer to how humans actually think.

I also find the hierarchical design of my model particularly interesting. Recognizing simple activities first, and then building toward more complex behaviors, mirrors the way humans often understand actions in layers. This structure not only makes the model efficient but also provides insights into how different activities relate to one another.

Beyond methodology, what motivates me is the real-world potential. The fact that these models can run efficiently on wearables means they could eventually support personalized healthcare, elderly care, and long-term activity monitoring in people’s everyday lives. And since the techniques I’m developing apply broadly to time series data, their impact could extend well beyond HAR, into areas like medical diagnostics, IoT monitoring, or even audio recognition. That sense of both depth and versatility is what makes the research especially rewarding for me.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Moving forward, I plan to further enhance the scalability and adaptability of my framework so that it can effectively handle large-scale datasets and support real-time applications. A major focus will be on improving both the computational efficiency and interpretability of the model, ensuring it is not only powerful but also practical for deployment in real-world scenarios.

While my current research has focused on human activity recognition, I am excited to broaden the scope to the wider domain of time series classification. I see great potential in applying my framework to areas such as sound classification, physiological signal analysis, and other time-dependent domains. This will allow me to demonstrate the generalizability and robustness of my approach across diverse applications where time-based data is critical.

In the longer term, my goal is to develop a unified, scalable model for time series analysis that balances adaptability, interpretability, and efficiency. I hope such a framework can serve as a foundation for advancing not only HAR but also a broad range of healthcare, environmental, and AI-driven applications that require real-time, data-driven decision-making.

What made you want to study AI, and in particular the area of wearables?

My interest in wearables began during my time in Paris, where I was first introduced to the potential of sensor-based monitoring in healthcare. I was immediately drawn to how discreet and non-invasive wearables are compared to video-based methods, especially for applications like elderly care and patient monitoring.

More broadly, I have always been fascinated by AI’s ability to interpret complex data and uncover meaningful patterns that can enhance human well-being. Wearables offered the perfect intersection of my interests, combining cutting-edge AI techniques with practical, real-world impact, which naturally led me to focus my research on this area.

What advice would you give to someone thinking of doing a PhD in the field?

A PhD in AI demands both technical expertise and resilience. My advice would be:

  • Stay curious and adaptable, because research directions evolve quickly, and the ability to pivot or explore new ideas is invaluable.
  • Investigate combining disciplines. AI benefits greatly from insights in fields like psychology, healthcare, and human-computer interaction.
  • Most importantly, choose a problem you are truly passionate about. That passion will sustain you through the inevitable challenges and setbacks of the PhD journey.

Approaching your research with curiosity, openness, and genuine interest can make the PhD not just a challenge, but a deeply rewarding experience.

Could you tell us an interesting (non-AI related) fact about you?

Outside of research, I’m passionate about leadership and community building. As president of the Purdue Tango Club, I grew the group from just 2 students to over 40 active members, organized weekly classes, and hosted large events with internationally recognized instructors. More importantly, I focused on creating a welcoming community where students feel connected and supported. For me, tango is more than dance; it’s a way to bring people together, bridge cultures, and balance the intensity of research with creativity and joy.

I also apply these skills in academic leadership. For example, I serve as Local Organizer and Safety Chair for the AAMAS 2025 and 2026 conferences, which has given me hands-on experience managing events, coordinating teams, and creating inclusive spaces for researchers worldwide.

About Zahra

Zahra Ghorrati is a PhD candidate and teaching assistant at Purdue University, specializing in artificial intelligence and time series classification with applications in human activity recognition. She earned her undergraduate degree in Computer Software Engineering and her master’s degree in Artificial Intelligence. Her research focuses on developing scalable and interpretable fuzzy deep learning models for wearable sensor data. She has presented her work at leading international conferences and journals, including AAMAS, PAAMS, FUZZ-IEEE, IEEE Access, System and Applied Soft Computing. She has served as a reviewer for CoDIT, CTDIAC, and IRC, and has been invited to review for AAAI 2026. Zahra also contributes to community building as Local Organizer and Safety Chair for AAMAS 2025 and 2026.
