Scientists have achieved a breakthrough in analog computing, developing a programmable electronic circuit that harnesses the properties of high-frequency electromagnetic waves to perform complex parallel processing at the speed of light.
Blind and low-vision programmers have long been locked out of three-dimensional modeling software, which depends on sighted users dragging, rotating and inspecting shapes on screen.
In the world around us, many things exist in the context of time: a bird's path through the sky is understood as different positions over a period of time, and conversations as a series of words occurring one after another.
Researchers from EPFL, AMD, and the University of Novi Sad have uncovered a long-standing inefficiency in the algorithm that programs millions of reconfigurable chips used worldwide, a discovery that could reshape how future generations of these chips are designed and programmed.
Probabilistic Ising machines (PIMs) are advanced, specialized computing systems that could tackle computationally hard problems, such as optimization or integer factorization tasks, more efficiently than classical systems. To solve problems, PIMs rely on probabilistic bits (p-bits): networks of interacting units of digital information whose values fluctuate randomly between 0 and 1, but which can be biased to converge toward desired solutions.
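The convergence idea behind p-bits can be illustrated with a small software sketch: each bit fluctuates randomly, but its bias follows the field exerted by its neighbors, and gradually reducing the noise (annealing) steers the network toward a low-energy, i.e., optimal, configuration. The update rule, annealing schedule, and toy max-cut instance below are illustrative assumptions for this sketch, not the architecture of any particular PIM hardware.

```python
import math
import random

def pbit_ising(J, h, sweeps=2000, beta_max=3.0, seed=0):
    """Minimal p-bit Ising solver sketch: asynchronous stochastic updates
    with a linear annealing schedule. Spins take values -1 / +1."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for t in range(sweeps):
        # Anneal: start noisy (small beta), end nearly deterministic.
        beta = beta_max * (t + 1) / sweeps
        for i in range(n):
            field = h[i] + sum(J[i][j] * s[j] for j in range(n))
            # p-bit rule: random output biased by tanh of the local field.
            s[i] = 1 if rng.uniform(-1.0, 1.0) < math.tanh(beta * field) else -1
    return s

# Hypothetical toy problem: max-cut on a 4-cycle, encoded with
# antiferromagnetic couplings (J = -1 on each edge). The optimal cut
# alternates the two groups around the cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
J = [[0.0] * n for _ in range(n)]
for a, b in edges:
    J[a][b] = J[b][a] = -1.0
h = [0.0] * n

spins = pbit_ising(J, h)
cut = sum(1 for a, b in edges if spins[a] != spins[b])
print("spins:", spins, "cut size:", cut)
```

With enough sweeps, the random fluctuations settle into the alternating configuration that cuts all four edges; real PIMs realize the same biased-random dynamics in hardware rather than in a software loop.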
To train artificial intelligence (AI) models, researchers need good data, and lots of it. However, most real-world data has already been used, leading scientists to generate synthetic data. While generated data helps solve the problem of quantity, its quality can vary, and assessing that quality has largely been overlooked.
Everyone hates traffic. Big cities in particular are plagued by an overabundance of vehicles, turning a simple crosstown jaunt into an odyssey during rush hour. Part of the problem is that traffic is incredibly complex: a small change in one part of the system can have ripple effects that alter traffic patterns throughout a city. City planners attempting to improve local traffic grids often struggle to foresee all the effects their changes could have.
When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can cost millions of dollars, developers need to be judicious with cost-impacting decisions — about the model architecture, optimizers, and training datasets, for instance — before committing to a model.
Developments in autonomous robotics have the potential to revolutionize manufacturing processes, making them more flexible, customizable, and efficient. But coordinating fleets of autonomous, mobile robots in a shared space—and helping them work with each other and with human partners—is an extremely complicated task.
Artificial intelligence is getting smarter every day, but it still has its limits. One of the biggest challenges has been teaching advanced AI models to reason, meaning to solve problems step by step. In a new paper published in the journal Nature, however, the team from DeepSeek AI, a Chinese artificial intelligence company, reports that it was able to teach its R1 model to reason on its own, without human input.
Train delays can cascade into stalled commutes, economic losses, and vacation snags. Scheduling trains is computationally complex, though: it can take hours or days to solve scheduling problems for large transportation networks on traditional computers, when disruptions like train breakdowns or traffic accidents demand much quicker solutions.
Understanding how and when drivers change lanes is key to improving highway traffic flow, safety, and autonomous vehicle performance, and a new approach developed at the University of Michigan outperforms current methods while using only GPS data.
At the height of the COVID-19 pandemic, players flocked to Axie Infinity, a blockchain-based video game where users received cryptocurrency tokens for their time spent playing. In 2022, when the broader crypto market crashed and a massive hack erased players' earnings, most users fled. A new study by Cornell researchers has investigated why some players stuck around.
Researchers have developed a motion-compensation method that allows single-pixel imaging to capture sharp images of complex dynamic scenes. The new approach could expand the practical utility of this computational imaging method by enabling clearer images of moving targets and improving the quality of surveillance images.
Artists are always looking for new ways to create and express themselves. A growing trend is the use of multiple layers of see-through materials, such as Plexiglas, to create paintings that have real depth, transforming two-dimensional images into three-dimensional illusions that feel more realistic and lifelike. But can these layered works be made even more immersive?
---- End of list of Tech Xplore Computer Science News Articles on this page 1 of 2 total pages ----