TechXplore Computer Science News Posts

Posts are copyright TechXplore.com

Compression technique makes AI models leaner and faster while they're still learning

Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance.

Smartwatch-like device could help detect plastic particles in the human body

Nano- and microplastics are increasingly being detected in the human body. However, their detection remains challenging, often relying on invasive techniques and specialized equipment. Researchers at the Institute of Computer Science at the University of Tartu are developing a device that can measure plastic in the human body. Their research is published in the Proceedings of the 27th International Workshop on Mobile Computing Systems and Applications.

AI maps science papers to predict research trends two to three years ahead

The number of scientific papers is growing so rapidly that scientists are no longer able to keep track of all of them, even in their own research area. Researchers from the Karlsruhe Institute of Technology (KIT), in collaboration with scientific partners, have shown how new research ideas can still be obtained from this wealth of information. Using artificial intelligence (AI), they systematically analyzed materials science publications to identify potential new avenues of research. Their results have been published in Nature Machine Intelligence.

Fair decisions, clear reasons: Creating fuzzy AI with fairness built in from the start

Although AI is not intentionally biased, it can inherit biases from the data fed into it, learning and repeating them until the system becomes inherently unfair. This is complicated by the difficulty of identifying where an AI system introduced the bias, as most AI systems display their final decision without showing the steps that produced it. Unfair patterns may go unnoticed simply because they are hard to identify.

New AI testing method flags fairness risks in autonomous systems

Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

Helping resolve quantum computers' memory problem

A major problem with quantum computers is memory: the information they hold can be quickly lost. Quantum computers are not yet fully reliable, as they are far too unstable. Researchers around the world, including teams in Norway, are working to improve them.

Can AI understand literature? Researchers put it to the test

Large language models (like ChatGPT) have made striking advances in helping us think, research, summarize, and learn from complex and technical texts. But how do they fare at understanding storytelling and literature? Questions about their grasp of interpretive nuance remain open.

Improving AI models' ability to explain their predictions

In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output. Concept bottleneck modeling is one method that enables artificial intelligence systems to explain their decision-making process. These methods force a deep-learning model to use a set of concepts, which can be understood by humans, to make a prediction. In new research, MIT computer scientists developed a method that coaxes the model to achieve better accuracy and clearer, more concise explanations.
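The concept-bottleneck idea described above can be sketched in a few lines: the model first maps its input to a set of human-readable concept scores, and the final prediction is computed only from those scores, so every decision can be explained in terms of the concepts. Everything in this sketch (dimensions, weights, the two-stage split) is hypothetical; real concept bottleneck models learn these parameters from data with labeled concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all hypothetical): 8 input features,
# 3 human-readable concepts, 2 output classes.
n_features, n_concepts, n_classes = 8, 3, 2

# Randomly initialized weights stand in for trained parameters.
W_concept = rng.normal(size=(n_features, n_concepts))
W_label = rng.normal(size=(n_concepts, n_classes))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Two-stage prediction: input -> concepts -> label.

    The label head sees ONLY the concept scores, so each
    prediction can be explained in terms of those concepts
    (e.g. "has wings: 0.9, has fur: 0.1").
    """
    concepts = sigmoid(x @ W_concept)   # bottleneck layer
    logits = concepts @ W_label         # label computed from concepts only
    label = int(np.argmax(logits))
    return label, concepts

x = rng.normal(size=n_features)
label, concepts = predict(x)
print(label, np.round(concepts, 2))
```

The bottleneck forces interpretability at the cost of capacity, which is why the MIT work on improving both accuracy and concision of explanations is notable.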

Early-warning model developed to predict toxic social media storms

Researchers at the University at Albany and Rutgers University have developed an early-warning framework that can predict harmful social media interactions before they erupt, paving the way for interventions that can minimize harm and make platforms safer for users. Using publicly available datasets from Reddit and Instagram, two social media platforms with distinct conversation dynamics, researchers trained models to predict from just the first 10 comments whether a thread would escalate into "concentrated waves of toxic interactions"—or what they have dubbed a "negative storm" or "neg storm."
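The core idea of predicting escalation from a thread's first 10 comments can be illustrated with a deliberately simple heuristic: score early comments for toxicity and flag threads whose early window runs hot or trends upward. The article does not describe the researchers' actual models or features, so everything below (the scoring rule, thresholds, and example data) is a hypothetical stand-in.

```python
# Hypothetical sketch: flag likely "neg storms" from the first
# 10 comments. Real systems would use trained models, not this rule.

def storm_risk(toxicity_scores, window=10, threshold=0.5):
    """Flag a thread if early toxicity runs high or trends upward.

    toxicity_scores: per-comment toxicity in [0, 1], from any scorer.
    """
    early = toxicity_scores[:window]
    if not early:
        return False
    mean = sum(early) / len(early)
    # Simple trend check: is the second half more toxic than the first?
    half = len(early) // 2
    rising = half > 0 and (
        sum(early[half:]) / (len(early) - half)
        > sum(early[:half]) / half
    )
    return mean > threshold or (rising and mean > threshold / 2)

calm = [0.1, 0.05, 0.1, 0.0, 0.1, 0.05, 0.1, 0.0, 0.05, 0.1]
heated = [0.2, 0.3, 0.4, 0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9]
print(storm_risk(calm), storm_risk(heated))  # False True
```

Acting on such an early signal, rather than after a storm erupts, is what makes intervention feasible.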

The AI that taught itself: How AI can learn what it never knew

For years, the guiding assumption of artificial intelligence has been simple: an AI is only as good as the data it has seen. Feed it more, train it longer, and it performs better. Feed it less, and it stumbles. A new study from the USC Viterbi School of Engineering, accepted at IEEE SoutheastCon 2026 (taking place March 12–15), suggests something far more surprising: with the right method in place, an AI model can dramatically improve its performance in territory it was barely trained on, pushing well past what its training data alone would ever allow.

Can AI read papers like a scientist? A new benchmark shows where LLMs fail

To stay current and push their fields forward, scientists must keep thousands of published studies at their fingertips. Large language models (LLMs) show promise as a tool for exploring the vast scientific literature, but are they trustworthy when it comes to providing full and scientifically accurate answers to complex questions in specialized fields?

AI is homogenizing human expression and thought, computer scientists and psychologists say

AI chatbots are standardizing how people speak, write, and think. If this homogenization continues unchecked, it risks reducing humanity's collective wisdom and ability to adapt, computer scientists and psychologists argue in an opinion paper published in Trends in Cognitive Sciences.

Human brain and AI speech recognition decode speech in similar step-by-step stages, study finds

Over the past decades, computer scientists have developed numerous artificial intelligence (AI) systems that can process human speech in different languages. The extent to which these models replicate the brain processes via which humans understand spoken language, however, has not yet been clearly determined.

Shortest paths research narrows a 25-year gap in graph algorithms

Most of you have used a navigation app like Google Maps for your travels at some point. These apps rely on algorithms that compute shortest paths through vast networks. Now imagine scaling that task to calculate distances between every pair of points in a massive system, for example, a transportation grid, a communication backbone, or even a biological network such as protein or neural interaction networks.
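The all-pairs task described above can be made concrete with the textbook baseline: run a single-source shortest-path search (here Dijkstra's algorithm) once from every node. The paper's actual algorithm is not described in this summary; this sketch only illustrates the problem whose cost the research attacks, and the toy graph is invented for the example.

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest-path distances on a weighted graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def all_pairs(graph):
    # Naive all-pairs: one Dijkstra per source. On massive networks
    # this repeated cost is exactly what faster algorithms target.
    return {u: dijkstra(graph, u) for u in graph}

g = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2)],
    "C": [],
}
print(all_pairs(g)["A"])  # {'A': 0, 'B': 1, 'C': 3}
```

With n nodes this baseline runs n searches, which is what makes all-pairs distances on transportation grids or neural interaction networks so expensive at scale.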

AI gets a D: ChatGPT struggles with scientific true-or-false, study shows

Again and again, Washington State University professor Mesut Cicek and his colleagues fed hypotheses from scientific papers into ChatGPT and asked it to determine whether the statements had been upheld by research—whether they were true or false. They did this with more than 700 hypotheses, repeating each query 10 times.

---- End of TechXplore Computer Science News Articles on this page (page 1 of 2) ----


GO SCIENCE!!
GO STEM STUDENTS!!
