Large language models like ChatGPT are learning to play Dungeons & Dragons. The reason? Simulating and playing the popular tabletop role-playing game provides a good testing ground for AI agents that need to function independently for long stretches of time.
MIT researchers have identified significant failures of machine-learning models when those models are applied to data that differ from what they were trained on, highlighting the need to test models whenever they are deployed in a new setting.
Emotions are a fundamental part of human psychology: complex processes that have long distinguished us from machines. Even advanced artificial intelligence (AI) lacks the capacity to feel. However, researchers are now exploring whether the formation of emotions can be computationally modeled, giving machines a deeper, more human-like understanding of emotional states.
Are generative artificial intelligence systems such as ChatGPT truly creative? A research team led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal, and including AI pioneer Yoshua Bengio, also a professor at Université de Montréal, has just published the largest comparative study ever conducted on the creativity of large language models versus humans.
A study led by UC Riverside researchers offers a practical fix to one of artificial intelligence's toughest challenges by enabling AI systems to reason more like humans—without requiring new training data beyond test questions.
The vision of a fully connected world is rapidly becoming a reality through the Internet of Things (IoT)—a growing network of physical devices that collect and share data over the Internet, including everything from small sensors to autonomous vehicles and industrial equipment.
Large language models (LLMs), the computational models underpinning ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. Because these models are trained on large amounts of human-written text, they can exhibit human-like biases: inclinations toward particular stimuli, ideas or groups that deviate from objectivity.
Generative AI is reshaping software development, and fast. A new study published in Science shows that AI-assisted coding is spreading rapidly, though unevenly: in the U.S., the share of new code relying on AI rose from 5% in 2022 to 29% in early 2025, compared with just 12% in China. AI use is highest among less experienced programmers, yet the productivity gains go to seasoned developers.
The sheen of satin, the subtle glints of twill, the translucence of sheer silk: Fabric has long been difficult to render digitally because of the myriad ways different yarns can be woven or knitted together.
Artificial intelligence (AI) is increasingly used to analyze medical images, materials data and scientific measurements, but many systems struggle when real-world data do not match ideal conditions. Measurements collected from different instruments, experiments or simulations often vary widely in resolution, noise and reliability. Traditional machine-learning models typically assume those differences are negligible—an assumption that can limit accuracy and trustworthiness.
The data inputs that enable modern search and recommendation systems were thought to be secure, but an algorithm developed by Cornell Tech researchers successfully teased out names, medical diagnoses and financial information from encoded datasets.
Researchers at Los Alamos National Laboratory have developed a new approach that addresses key limitations of generative AI models. Unlike standard generative diffusion models, the team's Discrete Spatial Diffusion approach honors scientific and physical principles. The team validated the model on two challenging scientific applications, subsurface rock microstructures and lithium-ion battery electrodes, with promising results.
Large language models can now handle increasingly complex tasks, from writing intricate code to engaging in sophisticated reasoning. Yet when it comes to four-digit multiplication, a task taught in elementary school, even state-of-the-art systems fail. Why?
A research team affiliated with UNIST has unveiled a novel AI system capable of grading and providing detailed feedback on even the most untidy handwritten math answers—much like a human instructor.
For people, matching what they see on the ground to a map is second nature. For computers, it has been a major challenge. A Cornell research team has introduced a new method that helps machines make these connections—an advance that could improve robotics, navigation systems, and 3D modeling.