Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model has meant either training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance.
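The "train big, then trim" route usually means pruning: removing the least important weights from a trained network. Below is a minimal sketch of magnitude pruning, offered only as a generic illustration of the traditional approach the article describes; the 50% sparsity level is an arbitrary assumption.

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a weight tensor.

    A crude illustration of the 'train big, then trim' route: after the
    large model is trained, the least important weights are removed.
    The 50% sparsity level is an arbitrary choice for this example.
    """
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

w = torch.randn(4, 4)
print((magnitude_prune(w) == 0).float().mean())  # ~0.5 of weights zeroed
```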
Nano- and microplastics are increasingly being detected in the human body, yet detecting them remains challenging, often relying on invasive techniques and specialized equipment. Researchers at the Institute of Computer Science at the University of Tartu are developing a device that can measure plastic in the human body. Their research is published in the Proceedings of the 27th International Workshop on Mobile Computing Systems and Applications.
The number of scientific papers is growing so rapidly that scientists are no longer able to keep track of all of them, even in their own research area. Researchers from the Karlsruhe Institute of Technology (KIT), in collaboration with scientific partners, have shown how new research ideas can still be obtained from this wealth of information. Using artificial intelligence (AI), they systematically analyzed materials science publications to identify potential new avenues of research. Their results have been published in Nature Machine Intelligence.
Although AI is not intentionally biased, it can inherit biases from the data it is fed, learning and repeating them until the system becomes inherently unfair. The problem is compounded by the difficulty of pinpointing where the bias entered the system, since most AI systems present their final decision without showing the steps that led to it. Unfair patterns may go unnoticed simply because they are hard to identify.
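One generic way such patterns are surfaced in audits is with simple group-fairness metrics computed over a model's outputs. Here is a minimal sketch of the demographic parity difference, with fabricated toy data; this is a standard audit technique, not a method from the article.

```python
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rate between groups.

    predictions: binary model outputs (0/1); groups: a group label per
    example. A gap near 0 suggests similar treatment on this axis.
    """
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy example with fabricated values, for illustration only.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
grp = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(preds, grp))  # 0.75 - 0.25 = 0.5
```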
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
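As a toy illustration of the kind of constrained optimization involved, here is a tiny linear program that picks generator outputs to minimize cost while keeping a crudely linearized voltage proxy inside bounds. All numbers and the linear voltage model are invented for the example and are not from the research described.

```python
from scipy.optimize import linprog

# Two generators with per-unit costs; choose outputs x1, x2 >= 0.
cost = [3.0, 5.0]                 # $/MW, fabricated values

# Meet at least 100 MW of demand: x1 + x2 >= 100  ->  -x1 - x2 <= -100
A_ub = [[-1.0, -1.0],
        [0.02, 0.01]]             # toy linearized "voltage rise" per MW
b_ub = [-100.0,
        3.0]                      # keep the voltage proxy under its limit

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 80), (0, 80)],
              method="highs")
print(res.x, res.fun)             # cheapest feasible dispatch and its cost
```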
A major problem with quantum computers is memory: the information they hold can be quickly lost. Quantum computers are not yet fully reliable; they are far too unstable. However, researchers all around the world, including some based in Norway, are trying to improve them.
Large language models (like ChatGPT) have made rapid advances in helping us think, research, summarize, and learn from complex and technical texts. But how do they fare at understanding storytelling and literature? Questions about their capacity for interpretive nuance remain open.
In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output. Concept bottleneck modeling is one approach that enables artificial intelligence systems to explain their decision-making process: it forces a deep-learning model to make its predictions using a set of concepts that humans can understand. In new research, MIT computer scientists developed a method that coaxes such models toward better accuracy and clearer, more concise explanations.
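The basic architecture is easy to sketch: the network first predicts a vector of human-interpretable concept scores, and the final label is computed only from those scores. Here is a minimal illustration in PyTorch; the layer sizes and two-head training scheme are generic assumptions, not the MIT team's design.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """x -> interpretable concept scores -> label.

    The label head sees *only* the concepts, so every prediction can be
    explained in terms of them (e.g. "wing shape", "beak length", ...).
    """
    def __init__(self, in_dim=512, n_concepts=16, n_classes=10):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_net(x))  # each in [0, 1]
        return self.label_net(concepts), concepts

model = ConceptBottleneck()
logits, concepts = model(torch.randn(4, 512))
# Training would supervise both heads: BCE on `concepts` against human
# concept annotations, cross-entropy on `logits` against class labels.
```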
Researchers at the University at Albany and Rutgers University have developed an early-warning framework that can predict harmful social media interactions before they erupt, paving the way for interventions that can minimize harm and make platforms safer for users. Using publicly available datasets from Reddit and Instagram, two social media platforms with distinct conversation dynamics, researchers trained models to predict from just the first 10 comments whether a thread would escalate into "concentrated waves of toxic interactions"—or what they have dubbed a "negative storm" or "neg storm."
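In outline, that setup is a standard supervised-learning pipeline: featurize the first 10 comments of a thread and train a classifier to flag threads likely to escalate. The sketch below uses fabricated toy data and an off-the-shelf model purely for illustration; it is not the authors' pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example: the first 10 comments of a thread joined into one
# string, labeled by whether the thread later escalated. Fabricated data.
threads = [
    "thanks for sharing | interesting point | agreed | good thread",
    "you're clueless | no YOU are | this is idiotic | reported",
]
escalated = [0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(threads, escalated)
print(model.predict_proba(["what a dumb take | learn to read | blocked"]))
```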
For years, the guiding assumption of artificial intelligence has been simple: an AI is only as good as the data it has seen. Feed it more, train it longer, and it performs better. Feed it less, and it stumbles. A new study from the USC Viterbi School of Engineering, accepted at IEEE SoutheastCon 2026 (taking place March 12–15), suggests something far more surprising: with the right method in place, an AI model can dramatically improve its performance in territory it was barely trained on, pushing well past what its training data alone would ever allow.
To stay up to date and push their fields forward, scientists must have thousands of published studies at their fingertips. Large language models (LLMs) show promise as a tool for exploring the vast scientific literature, but are they trustworthy when it comes to providing complete and scientifically accurate answers to complex questions in specialized fields?
AI chatbots are standardizing how people speak, write, and think. If this homogenization continues unchecked, it risks reducing humanity's collective wisdom and ability to adapt, computer scientists and psychologists argue in an opinion paper published in Trends in Cognitive Sciences.
Over the past decades, computer scientists have developed numerous artificial intelligence (AI) systems that can process human speech in different languages. The extent to which these models replicate the brain processes by which humans understand spoken language, however, has not yet been clearly determined.
Most of us have used a navigation app like Google Maps at some point. These apps rely on algorithms that compute shortest paths through vast networks. Now imagine scaling that task to computing distances between every pair of points in a massive system: a transportation grid, a communication backbone, or even a biological network such as a protein-interaction or neural network.
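For that all-pairs version of the task, the textbook baseline is the Floyd-Warshall algorithm, which runs in O(n^3) time and O(n^2) memory; at the scales described above, even that is prohibitive, which is what motivates faster methods. A minimal sketch, for illustration only:

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest paths on a weighted graph.

    edges: iterable of (u, v, w) with nodes 0..n-1. O(n^3) time and
    O(n^2) memory, which is exactly why it breaks down on huge networks.
    """
    dist = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)  # treat the graph as undirected
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

print(floyd_warshall(4, [(0, 1, 2), (1, 2, 2), (0, 3, 7), (2, 3, 1)]))
```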
Again and again, Washington State University professor Mesut Cicek and his colleagues fed hypotheses from scientific papers into ChatGPT and asked it to determine whether the statements had been upheld by research—whether they were true or false. They did this with more than 700 hypotheses, repeating each query 10 times.
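That repeated-query protocol is simple to reproduce in principle. Here is a sketch using the OpenAI Python client; the model name, prompt wording, and answer parsing are assumptions for illustration, and the authors' exact setup may differ.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_hypothesis(hypothesis, n_repeats=10, model="gpt-4o-mini"):
    """Ask the model n_repeats times whether a hypothesis is supported.

    Returns a tally of answers, which exposes run-to-run inconsistency.
    """
    votes = Counter()
    for _ in range(n_repeats):
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": ("Has the following hypothesis been upheld by "
                            "research? Answer TRUE or FALSE only.\n\n"
                            + hypothesis),
            }],
        )
        votes[resp.choices[0].message.content.strip().upper()] += 1
    return votes
```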