Researchers at Kobe University have developed an AI system that can detect acromegaly, a rare hormone disorder, by analyzing photos of the back of the hand and a clenched fist. The disease often develops slowly and can take years to diagnose, even though untreated cases may shorten life expectancy.
Choosing the right method for multimodal AI—systems that combine text, images, and more—has long been trial and error. Emory physicists created a unifying mathematical framework that shows many AI techniques rely on the same core idea: compress data while preserving what’s most predictive. Their “control knob” approach helps researchers design better algorithms, use less data, and avoid wasted computing power. The team believes it could pave the way for more accurate, efficient, and environmentally friendly AI.
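The "compress data while preserving what's most predictive" principle, tuned by a single "control knob," is commonly formalized as the information-bottleneck objective. Whether the Emory framework uses exactly this form is an assumption on our part, but it illustrates the kind of trade-off being described:

```latex
% Information bottleneck: find a compressed representation Z of input X
% that retains as much information about the target Y as possible.
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
% I(\cdot\,;\cdot) denotes mutual information. The multiplier \beta is the
% "control knob": small \beta favors aggressive compression of X,
% large \beta favors keeping predictive information about Y.
```

Many common training objectives, such as contrastive losses and variational-autoencoder bounds, can often be read as special cases of this compression-versus-prediction trade-off, which is the sort of unification the article describes.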
As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers uncovered 15 distinct ethical risks — from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering “deceptive empathy” that mimics care without real understanding.
Researchers tested whether generative AI could handle complex medical datasets as well as human experts. In some cases, the AI matched or outperformed teams that had spent months building prediction models. By generating usable analytical code from precise prompts, the systems dramatically reduced the time needed to process health data. The findings hint at a future where AI helps scientists move faster from data to discovery.
Scientists at the University of New Hampshire have unleashed artificial intelligence to dramatically speed up the hunt for next-generation magnetic materials. By building a massive, searchable database of 67,573 magnetic compounds — including 25 newly recognized materials that stay magnetic even at high temperatures — the team is opening the door to cheaper, more sustainable technologies.
Neuromorphic computers modeled after the human brain can now solve the complex equations behind physics simulations — something once thought possible only with energy-hungry supercomputers. The breakthrough could lead to powerful, low-energy supercomputers while revealing new secrets about how our brains process information.
Dinosaur footprints have always been mysterious, but a new AI app is cracking their secrets. DinoTracker analyzes photos of fossil tracks and predicts which dinosaur made them, with accuracy rivaling human experts. Along the way, it uncovered footprints that look strikingly bird-like—dating back more than 200 million years. That discovery could push the origin of birds much deeper into prehistory.
NASA’s Perseverance rover has just made history by driving across Mars using routes planned by artificial intelligence instead of human operators. A vision-capable AI analyzed the same images and terrain data normally used by rover planners, identified hazards like rocks and sand ripples, and charted a safe path across the Martian surface. After extensive testing in a virtual replica of the rover, Perseverance successfully followed the AI-generated routes, traveling hundreds of feet autonomously.
Researchers have turned artificial intelligence into a powerful new lens for understanding why cancer survival rates differ so dramatically around the world. By analyzing cancer data and health system information from 185 countries, the AI model identifies the factors most closely linked to better survival in each nation, such as access to radiotherapy, universal health coverage, and economic strength.

Humans pay enormous attention to lips during conversation, and robots have struggled badly to keep up. A new robot developed at Columbia Engineering learned realistic lip movements by watching its own reflection and studying human videos online. This allowed it to speak and sing with synchronized facial motion, without being explicitly programmed. Researchers believe this breakthrough could help robots finally cross the uncanny valley.
Foams were once thought to behave like glass, with bubbles frozen in place at the microscopic level. But new simulations reveal that foam bubbles are always shifting, even while the foam keeps its overall shape. Remarkably, this restless motion follows the same math used to train artificial intelligence. The finding hints that learning-like behavior may be a fundamental principle shared by materials, machines, and living cells.
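The "same math used to train artificial intelligence" most plausibly refers to noisy gradient descent, the workhorse update rule of machine learning. As an illustrative sketch (our assumption, not the paper's stated equations), bubble rearrangements that lower the foam's elastic energy resemble parameters descending a loss under random fluctuations:

```latex
% Noisy gradient descent (as in SGD / Langevin dynamics):
% parameters \theta_t descend a loss L at rate \eta, with noise \xi_t.
\theta_{t+1} \;=\; \theta_t \;-\; \eta \,\nabla L(\theta_t) \;+\; \xi_t
% In the foam analogy, \theta plays the role of bubble positions and
% L the foam's elastic energy: bubbles keep drifting downhill in energy
% while noise-like fluctuations prevent the motion from ever freezing out.
```

Read this way, the foam's restless-but-stable behavior mirrors how a learning system keeps adjusting internal parameters while its overall function stays put.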
A generative AI system can now analyze blood cells with greater accuracy and confidence than human experts, detecting subtle signs of diseases like leukemia. It not only spots rare abnormalities but also recognizes its own uncertainty, making it a powerful support tool for clinicians.
New research shows that AI doesn’t need endless training data to start acting more like a human brain. When researchers redesigned AI systems to better resemble biological brains, some models produced brain-like activity without any training at all. This challenges today’s data-hungry approach to AI development. The work suggests smarter design could dramatically speed up learning while slashing costs and energy use.
Spanish researchers have created a powerful new open-source tool that helps uncover the hidden genetic networks driving cancer. Called RNACOREX, the software can analyze thousands of molecular interactions at once, revealing how genes communicate inside tumors and how those signals relate to patient survival. Tested across 13 different cancer types using international data, the tool matches the predictive power of advanced AI systems—while offering something rare in modern analytics: clear, interpretable explanations that help scientists understand why tumors behave the way they do.
AI tools designed to diagnose cancer from tissue samples are quietly learning more than just disease patterns. New research shows these systems can infer patient demographics from pathology slides, leading to biased results for certain groups. The bias stems from how the models are trained and the data they see, not just from missing samples. Researchers also demonstrated a way to significantly reduce these disparities.