Nvidia’s DLSS 4.5 uses a transformer-based upscaling model that can turn ultra-low-resolution renders into playable visuals: even a 240p Oblivion render becomes playable at 720p, and 720p titles like Hellblade II approach near-4K quality on RTX GPUs from the 2060 onward. Fine details such as thin lines and foliage may still smear under upscaling.
Nvidia announced DLSS 4.5 during CES, featuring an improved transformer model for better image quality and performance. The announcement also covered updates to frame generation technology, Pulsar monitors, RTX Remix enhancements, and AI tools like Nvidia ACE and Neural Texture Compression, a showing focused on graphics and AI advancements rather than new GPU hardware.
NVIDIA's upcoming DLSS Transformer Model is expected to reduce VRAM usage by around 20%, improving performance and image quality and particularly benefiting gamers on GPUs with 8 GB of VRAM or less. The new model, which replaces the previous CNN approach with a vision transformer, offers significant enhancements in upscaling and ray reconstruction, and is set for official release soon.
Researchers have discovered a striking similarity between memory processing in the Transformer model and the memory functions of the human brain's hippocampus. By applying principles of human brain learning, the team found that the Transformer employs a gatekeeping process similar to the brain's NMDA receptor, which is crucial for memory consolidation, and that mimicking the NMDA receptor's gating process in the Transformer led to enhanced memory, suggesting AI models can learn using established knowledge from neuroscience. The findings open possibilities for developing low-cost, brain-like AI systems that learn and remember information like humans, while also shedding light on the workings of human memory through AI models.
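The gating idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual formulation: it assumes an NMDAR-inspired activation in which, like the receptor's magnesium block, weak inputs are mostly suppressed and strong inputs pass nearly unchanged; the `alpha` sharpness parameter and the SiLU-style form `x * sigmoid(alpha * x)` are stand-in choices for illustration only.

```python
import math

def nmda_activation(x, alpha=1.0):
    """NMDAR-inspired gated activation (hypothetical stand-in).

    Analogy: like the NMDA receptor's Mg2+ block, small inputs are
    mostly blocked while large inputs pass almost unchanged.
    `alpha` controls how sharp the gate is.
    """
    return x / (1.0 + math.exp(-alpha * x))  # x * sigmoid(alpha * x)

# Weak input is mostly blocked; strong input passes nearly intact.
print(round(nmda_activation(-4.0), 3))  # -> -0.072
print(round(nmda_activation(4.0), 3))   # -> 3.928
```

Swapping such a gated nonlinearity into a Transformer's feed-forward layers is the kind of neuroscience-motivated tweak the study describes.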
Apple announced that iOS 17 will feature upgraded autocorrect powered by a "Transformer" language model that can more accurately predict the next words and phrases you might use. The model personalizes over time, learning your most frequently used words, including swear words.
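The personalization loop described above can be sketched as a toy reranker. Everything here is hypothetical, not Apple's implementation: a base model is assumed to supply candidate next words with scores, and a per-user frequency counter learned from typing history nudges those scores toward words the user actually writes.

```python
from collections import Counter

class PersonalizedPredictor:
    """Toy sketch of next-word personalization (hypothetical API)."""

    def __init__(self, boost=0.1):
        self.user_counts = Counter()  # words the user has typed
        self.boost = boost            # weight of personal history

    def observe(self, text):
        # Learn from what the user types, profanity included.
        self.user_counts.update(text.lower().split())

    def rerank(self, candidates):
        # candidates: {word: base score from the language model}
        return sorted(
            candidates,
            key=lambda w: candidates[w] + self.boost * self.user_counts[w],
            reverse=True,
        )

p = PersonalizedPredictor()
p.observe("ducking autocorrect")
p.observe("ducking keyboard")
# The user's habitual word overtakes the base model's top pick.
print(p.rerank({"ducking": 0.4, "duckling": 0.5}))  # -> ['ducking', 'duckling']
```

A real on-device model would fold this history into the network's weights or context rather than rerank externally, but the effect (frequently typed words winning out over the default suggestion) is the same.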