All posts by Artificial Intelligence News -- ScienceDaily

Page 7 of 7

AI headphones let wearer listen to a single person in a crowd, by looking at them just once

Engineers have developed an artificial intelligence system that lets someone wearing headphones look at a person speaking for three to five seconds to 'enroll' them. The system then plays just the enrolled speaker's voice in real time, even as the pair move around in noisy environments.
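The enroll-then-extract idea above can be sketched in miniature. This toy, assuming a zero-crossing rate as a crude stand-in for a real speaker-embedding model (the actual system would use a learned neural embedding), shows the shape of the pipeline: average a signature over the enrollment frames, then pass through only frames that match it.

```python
import math

def tone(freq_hz, sr, n):
    # Test signal: a pure tone standing in for a speaker's voice.
    return [math.sin(2 * math.pi * freq_hz * i / sr) for i in range(n)]

def zero_crossing_rate(frame):
    # Crude spectral proxy: fraction of adjacent samples that change sign.
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def enroll(frames):
    # Average the signature over a few seconds of the target's speech.
    return sum(zero_crossing_rate(f) for f in frames) / len(frames)

def extract(frames, target_sig, tol=0.05):
    # Pass through frames whose signature matches the enrolled speaker;
    # replace the rest with silence.
    return [f if abs(zero_crossing_rate(f) - target_sig) <= tol
            else [0.0] * len(f)
            for f in frames]
```

With two "speakers" simulated as 220 Hz and 880 Hz tones, enrolling on the first and extracting from a mixed sequence silences the second speaker's frames while leaving the enrolled speaker's frames untouched.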

Large language models can’t effectively recognize users’ motivation, but can support behavior change for those ready to act

Large language model-based chatbots can't effectively recognize users' motivation when they are hesitant about making healthy behavior changes, but they can support those who are committed to taking action, researchers say.

Building a better sarcasm detector

Sarcasm is notoriously tricky to convey through text, and the subtle changes in tone that convey sarcasm often confuse computer algorithms as well, limiting virtual assistants and content analysis tools. So researchers have now developed a multimodal algorithm for improved sarcasm detection that examines multiple aspects of audio recordings for increased accuracy. They used two complementary approaches -- sentiment analysis using text and emotion recognition using audio -- for a more complete picture.
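The two complementary signals can be combined with a simple late-fusion rule: when the text reads positive but the delivery sounds flat, flag a sarcasm cue. The lexicon, thresholds, and heuristic below are invented for illustration and are not the researchers' model, which uses learned sentiment and emotion classifiers.

```python
# Toy lexicons standing in for a real text-sentiment model.
POSITIVE = {"great", "love", "wonderful", "fantastic"}
NEGATIVE = {"terrible", "hate", "awful", "boring"}

def text_sentiment(text):
    # Lexicon score clamped to [-1, 1]; a real system would use a classifier.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, score / 2))

def audio_valence(pitch_var, energy):
    # Toy heuristic: lively pitch and energy read as positive delivery,
    # a flat deadpan reads as negative.
    return 1.0 if (pitch_var > 20 and energy > 0.5) else -1.0

def sarcasm_score(text, pitch_var, energy):
    # Sarcasm cue: the two modalities disagree
    # (e.g. positive words spoken in a deadpan).
    t = text_sentiment(text)
    a = audio_valence(pitch_var, energy)
    return max(0.0, t * -a)
```

Positive words with a flat, low-energy delivery score high; positive words with a cheerful delivery score zero.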

When consumers would prefer a chatbot over a person

Sometimes consumers don't want to talk to a real person when they're shopping online, a new study suggests. What they really want is a chatbot that makes it clear it is not human. The researchers found that people preferred interacting with chatbots when they felt embarrassed about what they were buying online, such as antidiarrheal medicine or, for some shoppers, skin care products.

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus a friend, family member, or stranger. The study compared speech rate and pitch variation across familiar-human, unfamiliar-human, and voice-assistant-directed conditions. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower speech rate and less pitch variation.
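The two measures compared in the study are straightforward to compute once an utterance has been segmented and pitch-tracked. A minimal sketch, assuming syllable counts and per-frame F0 estimates come from upstream tools (the function names here are illustrative, not from the study):

```python
import statistics

def speech_rate(n_syllables, duration_s):
    # Speech rate as syllables per second over the utterance.
    return n_syllables / duration_s

def pitch_variation(f0_hz):
    # Standard deviation of voiced-frame F0 estimates, a simple
    # stand-in for the study's pitch-variation measure.
    voiced = [f for f in f0_hz if f > 0]  # 0 marks unvoiced frames
    return statistics.pstdev(voiced)
```

Lower values of both measures in the voice-assistant condition would correspond to the slower, flatter speech the study reports.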

Computer vision researcher develops privacy software for surveillance videos

Computer vision can be a valuable tool for anyone tasked with analyzing hours of footage because it can speed up the process of identifying individuals. For example, law enforcement may use it to perform a search for individuals with a simple query, such as 'Locate anyone wearing a red scarf over the past 48 hours.'
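A query like the one above could run over per-frame detections once an upstream vision model has tagged each one with attribute labels. The `Detection` type, tags, and time-window logic below are invented for illustration; they show only the shape of such a search, not the researcher's software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp_s: float
    attributes: frozenset  # tags emitted by a hypothetical vision model

def search(detections, required, window_s):
    # Return timestamps of detections carrying all required attributes
    # within the last `window_s` seconds of footage.
    latest = max(d.timestamp_s for d in detections)
    return [d.timestamp_s for d in detections
            if d.timestamp_s >= latest - window_s
            and required <= d.attributes]
```

A query for a red scarf over the past 48 hours would then be `search(detections, {"red", "scarf"}, 48 * 3600)`.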