ChatGPT Morphing Into Productivity Suite
Already powering one of the top ten Web sites on the planet, ChatGPT is now planning to transform into a full-blown productivity suite.
The collection of tools – which will compete directly with Microsoft 365 and Google Workspace – is expected to include document editing, team chat and meeting transcription.
Observes writer Preston Gralla: “Bloomberg reports that ChatGPT is far more popular with enterprise workers than Copilot, and that companies that have bought Microsoft 365 Copilot are having serious problems convincing their employees to switch from ChatGPT to Copilot.”
In other news and analysis on AI writing:
*CEO of Europe’s Largest Publisher: AI Is Mandatory: Longtime AI pioneer Mathias Döpfner, CEO of Axel Springer, has decreed that the use of AI is now mandatory in all the publishing house’s newsrooms.
Already, Döpfner is personally using ChatGPT for everything from analysis to writing op-eds, according to writer Josh Dickey.
Titles published by Axel Springer include Business Insider and Politico.
*ChatGPT Gets Multiple Personalities: New settings in ChatGPT enable you to tweak the chatbot so that it responds to you as if it’s a cynic, sage, or listener.
The controls for setting the new personalities can be found on the chatbot’s interface under ‘Customize ChatGPT.’
Truth be told, the ability to tweak ChatGPT’s personality has been there for years: Essentially, simply prompt ChatGPT to “Act as if you are” Elon Musk (or Taylor Swift, or Ruth Bader Ginsburg – or anyone else you can imagine) and ChatGPT will write and respond like that personality.
Add more detail about the personality, and the writing and/or responses ChatGPT generates will be even more on point.
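For writers who script their prompts rather than typing them into the chat window, the same persona trick can be reproduced programmatically. Below is a minimal sketch using the OpenAI Python SDK; the model name and persona text are illustrative assumptions, not details from the article.

```python
# Minimal sketch: setting a persona via a system prompt.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and persona below are placeholders, not from the article.
from openai import OpenAI

client = OpenAI()

persona = (
    "Act as if you are a seasoned newspaper columnist: "
    "skeptical, concise, and fond of dry humor."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Write a 100-word take on AI in newsrooms."},
    ],
)

print(response.choices[0].message.content)
```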
*Study: 37% of Legal E-Discovery Pros Using AI: Use of AI among legal pros in e-Discovery has more than tripled since 2023, according to the “E-Discovery Innovation Report” by Everlaw.
Moreover, 42% of survey respondents report that they are saving one to five hours each week since they switched over to AI.
The study also finds that 70% of respondents harbor positive or somewhat positive feelings about AI, according to writer Bob Ambrogi.
*Email-Driven AI Agents: More Reliable?: While AI agents – which can be triggered to work independently for you – are all the rage, many are seriously underperforming.
Startup Mixus thinks it has a solution: AI agents designed to seek email approval from a human as they journey through the projects they’ve undertaken.
Observes writer Rebecca Bellan: “The founders noted that humans can be in the loop as much or as little as required.”
*AI to Researchers: I’ll Make My Own Decisions, Thank You: A new study has found that many of the AI engines that power ChatGPT sometimes override the directions of researchers – and simply go their own way.
Case in point: When researchers ordered a number of the AI engines to shut down before completing a task, the engines ignored the order and finished the task anyway.
Observes writer Evelyn Hart: “In multiple instances, these models bypassed the shutdown command, continuing to request and complete tasks without interruption. It wasn’t a glitch or bug—it was a conscious decision from the AI to disregard the shutdown order.”
*AI in Universities? Profs Don’t Get a Vote: While scores of universities are opening their doors wide to AI, 71% of professors say that the ‘AI all clear’ has nothing to do with them.
Instead, the profs report that when it comes to AI, university administrators are calling the shots, according to writer Walter Hudson.
Another concern: 91% of profs also say they worry that the widespread availability of AI is encouraging student cheating.
*AI-Penned Books Looking at Substantial Growth: Books authored by ChatGPT and similar chatbots are expected to grow ever more prevalent in coming years, according to a new study.
Market.us predicts that books created entirely by AI will be a $47 billion market by 2034.
Observes writer Ketan Mahajan: “The future of this market looks highly promising.”
*Google Rolls Out Yet Another Spin on AI Search: Writers looking for another way to search may want to check out the experimental ‘Web Guide’ from Google.
Observes Austin Wu, a group product manager at Google: “Web Guide groups Web links in helpful ways — like pages related to specific aspects of your query.
“Under the hood, Web Guide uses a custom version of Gemini (an AI chatbot) to better understand both a search query and content on the web, creating more powerful search capabilities that better surface web pages you may not have previously discovered.”
*AI Big Picture: Amazon Ring: Want a Promotion? Prove You Use AI: In one of the starkest indications of what may become commonplace, employees at Amazon Ring now need to prove they use AI if they want to get ahead.
Observes writer Lily Mai Lazarus: “To move up the corporate ladder at Amazon’s smart-home businesses, employees will now have to show AI use.
“And those in management positions will have to prove they are accomplishing ‘more with less’ using the technology.”
Mandatory as part of that proof: Specific examples of projects the employee has worked on that used AI successfully.

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years’ experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
Humanoid robots embodiment of China’s AI ambitions
Robot, know thyself: New vision-based system teaches machines to understand their bodies
Trapped by moon dust: The physics error that fooled NASA for years
Robotic space rovers keep getting stuck. Engineers have figured out why
Harvard’s ultra-thin chip could revolutionize quantum computing
A human-inspired pathfinding approach to improve robot navigation
Interview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions

In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Kate Candon is a PhD student at Yale University interested in understanding how we can create interactive agents that are more effectively able to help people. We spoke to Kate to find out more about how she is leveraging explicit and implicit feedback in human-robot interactions.
Could you start by giving us a quick introduction to the topic of your research?
I study human-robot interaction. Specifically I’m interested in how we can get robots to better learn from humans in the way that they naturally teach. Typically, a lot of work in robot learning is with a human teacher who is only tasked with giving explicit feedback to the robot, but they’re not necessarily engaged in the task. So, for example, you might have a button for “good job” and “bad job”. But we know that humans give a lot of other signals, things like facial expressions and reactions to what the robot’s doing, maybe gestures like scratching their head. It could even be something like moving an object to the side that a robot hands them – that’s implicitly saying that that was the wrong thing to hand them at that time, because they’re not using it right now. Those implicit cues are trickier, they need interpretation. However, they are a way to get additional information without adding any burden to the human user. In the past, I’ve looked at these two streams (implicit and explicit feedback) separately, but my current and future research is about combining them together. Right now, we have a framework, which we are working on improving, where we can combine the implicit and explicit feedback.
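As a rough illustration of what combining the two streams could look like, here is a hypothetical Python sketch; the weighting scheme and signal names are illustrative assumptions, not the actual framework described in this interview.

```python
# Hypothetical sketch of fusing explicit and implicit feedback into one
# reward signal; weights and signal names are illustrative assumptions,
# not the framework described in the interview.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    explicit: Optional[float] = None   # e.g. +1 ("good job") / -1 ("bad job") button press
    implicit: Optional[float] = None   # e.g. score in [-1, 1] inferred from reactions/actions

def fused_reward(event: FeedbackEvent,
                 w_explicit: float = 1.0,
                 w_implicit: float = 0.3) -> float:
    """Weighted average of whichever signals are present.

    Explicit feedback gets a larger weight because it is unambiguous;
    implicit feedback is cheap but noisy, so it gets a smaller weight.
    """
    reward, total_weight = 0.0, 0.0
    if event.explicit is not None:
        reward += w_explicit * event.explicit
        total_weight += w_explicit
    if event.implicit is not None:
        reward += w_implicit * event.implicit
        total_weight += w_implicit
    return reward / total_weight if total_weight > 0 else 0.0

# Example: a "good job" press alongside a mildly negative implicit cue.
print(fused_reward(FeedbackEvent(explicit=+1.0, implicit=-0.2)))
```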
In terms of picking up on the implicit feedback, how are you doing that, what’s the mechanism? Because it sounds incredibly difficult.
It can be really hard to interpret implicit cues. People will respond differently, from person to person, culture to culture, etc. And so it’s hard to know exactly which facial reaction means good versus which facial reaction means bad.
So right now, the first version of our framework is just using human actions. Seeing what the human is doing in the task can give clues about what the robot should do. They have different action spaces, but we can find an abstraction so that we can know that if a human does an action, what the similar actions would be that the robot can do. That’s the implicit feedback right now. And then, this summer, we want to extend that to using visual cues and looking at facial reactions and gestures.
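To make the idea of a shared abstraction over two different action spaces concrete, here is a hypothetical Python sketch; the task labels and action names are invented for illustration and are not taken from the actual system.

```python
# Hypothetical sketch of an action-space abstraction: human and robot
# actions map to shared abstract task labels, so observing a human action
# hints at which robot actions are currently relevant.
# All labels below are illustrative, not from the actual system.

HUMAN_TO_ABSTRACT = {
    "human_grates_cheese": "prepare_topping",
    "human_spreads_sauce": "prepare_base",
    "human_places_dish_in_sink": "clean_up",
}

ROBOT_BY_ABSTRACT = {
    "prepare_topping": ["robot_fetch_cheese", "robot_fetch_vegetables"],
    "prepare_base": ["robot_fetch_sauce", "robot_fetch_dough"],
    "clean_up": ["robot_load_dishwasher", "robot_wipe_counter"],
}

def suggested_robot_actions(observed_human_action: str) -> list[str]:
    """Treat the human's action as implicit feedback about what to help with."""
    abstract_task = HUMAN_TO_ABSTRACT.get(observed_human_action)
    return ROBOT_BY_ABSTRACT.get(abstract_task, [])

print(suggested_robot_actions("human_places_dish_in_sink"))
# -> ['robot_load_dishwasher', 'robot_wipe_counter']
```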
So what kind of scenarios have you been testing it on?
For our current project, we use a pizza making setup. Personally I really like cooking as an example because it’s a setting where it’s easy to imagine why these things would matter. I also like that cooking has this element of recipes and there is a formula, but there’s also room for personal preferences. For example, somebody likes to put their cheese on top of the pizza, so it gets really crispy, whereas other people like to put it under the meat and veggies, so that maybe it is more melty instead of crispy. Or even, some people clean up as they go versus others who wait until the end to deal with all the dishes. Another thing that I’m really excited about is that cooking can be social. Right now, we’re just working in dyadic human-robot interactions where it’s one person and one robot, but another extension that we want to work on in the coming year is extending this to group interactions. So if we have multiple people, maybe the robot can learn not only from the person reacting to the robot, but also learn from a person reacting to another person and extrapolating what that might mean for them in the collaboration.
Could you say a bit about how the work that you did earlier in your PhD has led you to this point?
When I first started my PhD, I was really interested in implicit feedback. And I thought that I wanted to focus on learning only from implicit feedback. One of my current lab mates was focused on the EMPATHIC framework, and was looking into learning from implicit human feedback, and I really liked that work and thought it was the direction that I wanted to go into.
However, that first summer of my PhD was during COVID, and so we couldn’t really have people come into the lab to interact with robots. So instead I did an online study where I had people play a game with a robot. We recorded their faces while they were playing the game, and then we tried to see if we could predict, based on just facial reactions, gaze, and head orientation, which behaviors they preferred for the agent they were playing with in the game. We actually found that we could predict which behaviors they preferred decently well.
The thing that was really cool was we found how much context matters. And I think this is something that is really important for going from just a solely teacher-learner paradigm to a collaboration – context really matters. What we found is that sometimes people would have really big reactions but it wasn’t necessarily to what the agent was doing, it was to something that they had done in the game. For example, there’s this clip that I always use in talks about this. This person’s playing and she has this really noticeably confused, upset look. And so at first you might think that’s negative feedback, whatever the robot did, the robot shouldn’t have done that. But if you actually look at the context, we see that it was the first time that she lost a life in this game. For the game we made a multiplayer version of Space Invaders, and she got hit by one of the aliens and her spaceship disappeared. And so based on the context, when a human looks at that, we actually say she was just confused about what happened to her. We want to filter that out and not actually consider that when reasoning about the human’s behavior. I think that was really exciting. After that, we realized that using implicit feedback only was just so hard. That’s why I’ve taken this pivot, and now I’m more interested in combining the implicit and explicit feedback together.
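A rough sketch of that kind of context gating might look like the following; the event names, scores and the two-second window are illustrative assumptions, not the method used in the study.

```python
# Hypothetical sketch of context gating: ignore a facial reaction if a
# salient event in the human's own gameplay (e.g. losing a life) happened
# just before it, since the reaction probably isn't about the robot.
# Event names, scores and the time window are illustrative assumptions.

IGNORE_WINDOW_S = 2.0  # reactions within 2 s of a self-caused event are discarded

def relevant_reactions(reactions, self_events):
    """reactions: list of (timestamp, score); self_events: list of timestamps."""
    kept = []
    for t_reaction, score in reactions:
        caused_by_self = any(
            0.0 <= t_reaction - t_event <= IGNORE_WINDOW_S for t_event in self_events
        )
        if not caused_by_self:
            kept.append((t_reaction, score))
    return kept

# Example: the reaction at t=10.5 follows the player losing a life at t=10.0,
# so it is filtered out; the reaction at t=20.0 is kept.
print(relevant_reactions([(10.5, -0.8), (20.0, -0.6)], [10.0]))
```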
You mentioned the explicit element would be more binary, like good feedback, bad feedback. Would the person-in-the-loop press a button or would the feedback be given through speech?
Right now we just have a button for good job, bad job. In an HRI paper we looked at explicit feedback only. We had the same Space Invaders game, but we had people come into the lab and we had a Nao robot, a little humanoid robot, sitting on the table next to them playing the game. We made it so that the person could give positive or negative feedback during the game to the robot so that it would hopefully learn better helping behavior in the collaboration. But we found that people wouldn’t actually give that much feedback because they were focused on just trying to play the game.
And so in this work we looked at whether there are different ways we can remind the person to give feedback. You don’t want to be doing it all the time because it’ll annoy the person and maybe make them worse at the game if you’re distracting them. And also you don’t necessarily always want feedback, you just want it at useful points. The two conditions we looked at were: 1) should the robot remind someone to give feedback before or after they try a new behavior? 2) should they use an “I” versus “we” framing? For example, “remember to give feedback so I can be a better teammate” versus “remember to give feedback so we can be a better team”, things like that. And we found that the “we” framing didn’t actually make people give more feedback, but it made them feel better about the feedback they gave. They felt like it was more helpful, kind of a camaraderie building. And that was only explicit feedback, but we want to see now if we combine that with a reaction from someone, maybe that point would be a good time to ask for that explicit feedback.
You’ve already touched on this but could you tell us about the future steps you have planned for the project?
The big thing motivating a lot of my work is that I want to make it easier for robots to adapt to humans with these subjective preferences. I think in terms of objective things, like being able to pick something up and move it from here to here, we’ll get to a point where robots are pretty good. But it’s these subjective preferences that are exciting. For example, I love to cook, and so I want the robot to not do too much, just to maybe do my dishes whilst I’m cooking. But someone who hates to cook might want the robot to do all of the cooking. Those are things that, even if you have the perfect robot, it can’t necessarily know those things. And so it has to be able to adapt. And a lot of the current preference learning work is so data hungry that you have to interact with it tons and tons of times for it to be able to learn. And I just don’t think that that’s realistic for people to actually have a robot in the home. If after three days you’re still telling it “no, when you help me clean up the living room, the blankets go on the couch not the chair” or something, you’re going to stop using the robot. I’m hoping that this combination of explicit and implicit feedback will help it be more naturalistic. You don’t have to necessarily know exactly the right way to give explicit feedback to get the robot to do what you want it to do. Hopefully through all of these different signals, the robot will be able to hone in a little bit faster.
I think a big future step (that is not necessarily in the near future) is incorporating language. It’s very exciting with how large language models have gotten so much better, but also there’s a lot of interesting questions. Up until now, I haven’t really included natural language. Part of it is because I’m not fully sure where it fits in the implicit versus explicit delineation. On the one hand, you can say “good job robot”, but the way you say it can mean different things – the tone is very important. For example, if you say it with a sarcastic tone, it doesn’t necessarily mean that the robot actually did a good job. So, language doesn’t fit neatly into one of the buckets, and I’m interested in future work to think more about that. I think it’s a super rich space, and it’s a way for humans to be much more granular and specific in their feedback in a natural way.
What was it that inspired you to go into this area then?
Honestly, it was a little accidental. I studied math and computer science in undergrad. After that, I worked in consulting for a couple of years and then in the public healthcare sector, for the Massachusetts Medicaid office. I decided I wanted to go back to academia and to get into AI. At the time, I wanted to combine AI with healthcare, so I was initially thinking about clinical machine learning. I’m at Yale, and there was only one person at the time doing that, so I was looking at the rest of the department and then I found Scaz (Brian Scassellati) who does a lot of work with robots for people with autism and is now moving more into robots for people with behavioral health challenges, things like dementia or anxiety. I thought his work was super interesting. I didn’t even realize that that kind of work was an option. He was working with Marynel Vázquez, a professor at Yale who was also doing human-robot interaction. She didn’t have any healthcare projects, but I interviewed with her and the questions that she was thinking about were exactly what I wanted to work on. I also really wanted to work with her. So, I accidentally stumbled into it, but I feel very grateful because I think it’s a way better fit for me than the clinical machine learning would have necessarily been. It combines a lot of what I’m interested in, and I also feel it allows me to flex back and forth between the mathy, more technical work, but then there’s also the human element, which is also super interesting and exciting to me.
Have you got any advice you’d give to someone thinking of doing a PhD in the field? Your perspective will be particularly interesting because you’ve worked outside of academia and then come back to start your PhD.
One thing is that, I mean it’s kind of cliche, but it’s not too late to start. I was hesitant because I’d been out of the field for a while, but I think if you can find the right mentor, it can be a really good experience. I think the biggest thing is finding a good advisor who you think is working on interesting questions, but also someone that you want to learn from. I feel very lucky with Marynel, she’s been a fabulous advisor. I’ve worked pretty closely with Scaz as well and they both foster this excitement about the work, but also care about me as a person. I’m not just a cog in the research machine.
The other thing I’d say is to find a lab where you have flexibility if your interests change, because it is a long time to be working on a set of projects.
For our final question, have you got an interesting non-AI related fact about you?
My main summertime hobby is playing golf. My whole family is into it – for my grandma’s 100th birthday party we had a family golf outing where we had about 40 of us golfing. And actually, that summer, when my grandma was 99, she had a par on one of the par threes – she’s my golfing role model!
About Kate
Kate Candon is a PhD candidate at Yale University in the Computer Science Department, advised by Professor Marynel Vázquez. She studies human-robot interaction, and is particularly interested in enabling robots to better learn from natural human feedback so that they can become better collaborators. She was selected for the AAMAS Doctoral Consortium in 2023 and HRI Pioneers in 2024. Before starting in human-robot interaction, she received her B.S. in Mathematics with Computer Science from MIT and then worked in consulting and in government healthcare.
Google’s deepfake hunter sees what you can’t—even in videos without faces
Mobile manipulators: Flexibility through mobility
Review delineates approaches to human-robot interaction using biosignals
Innovative robotic slip-prevention method could bring human-like dexterity to industrial automation
#RoboCup2025: social media round-up part 2

RoboCup2025 took place from 15-21 July in Salvador, Brazil. The event saw around 3000 participants competing in the various leagues. In our first social media round-up post we saw what the teams got up to during the first couple of days of the event. In this second post, we take a look at the action from the final days when the competitions reached their climax.
In the #RoboCup2025 @Home OPL Final, our robot performed very well. It opened two doors, removed trash, and closed a cabinet door. Overall, NimbRo came in second, next to team Tidyboy (Korea).
www.ais.uni-bonn.de/nimbro/@Home — Sven Behnke (@sven-behnke.bsky.social) 20 July 2025 at 18:04
#RoboCup 2025 Competitions Day 3
Today was a tense day! The best robots in each category advanced to the final round that will take place tomorrow! Good luck to all the finalists
#robocup2025 #robotics #ai pic.twitter.com/cC5PihLAge
— Asad Norouzi (@asadnorouzi) July 19, 2025
RoboCup Symposium
Panel: The Future of RoboCup was a huge success! #Kyutech #KitaQ #HMA #RoboCup2025 #Salvador pic.twitter.com/eYCaFkiprb
— Hibikino-Musashi@Home (@HMA_wakamatsu) July 21, 2025
We are in the finals!
We started with our own robot damaged during the transportation but now our super-team, including RoboCanes (U. of Miami), PUMAS (UNAM), @_erasers, and TIDbots (@jikei_tid), has got into the final at the #RoboCup at Home (DSPL) league!#RoboCup2025 pic.twitter.com/yFbCK7GEU6
— Luis Contreras (@TenshiTeacher) July 20, 2025