TurboQuant, meanwhile, could lead to efficiency gains and systems that require less memory during inference. But it wouldn’t necessarily solve the wider RAM shortages driven by AI, given that it only targets inference memory, not training — the latter of which continues to require massive amounts of RAM.
I didn’t realize the RAM shortage was mostly due to training—I would have thought inference was at least as big a factor.