XDA Developers on MSN
Stop obsessing over your GPU's core clock — memory clock matters more for local LLM inference
Your self-hosted LLMs care more about your memory performance ...
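The reasoning behind the headline: during autoregressive decode, each generated token must stream roughly the entire set of model weights from VRAM, so memory bandwidth, not core clock, sets the throughput ceiling. A back-of-the-envelope sketch (the bandwidth figures and the 4-bit model size below are illustrative assumptions, not benchmarks):

```python
# Rough decode throughput ceiling: every generated token re-reads (roughly)
# all model weights from VRAM, so bandwidth bounds tokens/sec.
# All numbers below are illustrative assumptions, not measurements.

def decode_tokens_per_sec(params_billions: float, bytes_per_param: float,
                          mem_bandwidth_gbps: float) -> float:
    """Upper bound on tokens/sec for memory-bound autoregressive decode."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return mem_bandwidth_gbps * 1e9 / model_bytes

# A 7B model quantized to 4 bits (~0.5 bytes/param):
for bw in (288, 360, 448):  # hypothetical GB/s figures for consumer GPUs
    print(f"{bw} GB/s -> ~{decode_tokens_per_sec(7, 0.5, bw):.0f} tok/s ceiling")
```

Under this memory-bound model, raising the core clock barely moves the number, while raising memory bandwidth shifts the ceiling almost proportionally.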
Red Hat is pushing Kubernetes inference into the mainstream by contributing llm-d to the CNCF, as enterprises race to run AI ...
An open standard for AI inference backed by Google Cloud, IBM, Red Hat, Nvidia and more was donated to the Linux Foundation for ...
Google has introduced TurboQuant, a compression algorithm that reduces large language model (LLM) memory usage by at least 6x ...
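TurboQuant's internals aren't described in the snippet, so the sketch below is a generic round-to-nearest 4-bit group quantizer, not Google's algorithm; it only shows where a 6x-plus reduction over fp32 comes from: 4-bit codes plus one fp16 scale per group of 64 weights is about 4.25 bits per weight.

```python
import numpy as np

# Generic symmetric 4-bit group quantization (illustrative, not TurboQuant).
# Codes are kept in int8 here for simplicity; a real kernel would pack two
# 4-bit codes per byte.

def quantize_int4(w: np.ndarray, group: int = 64):
    w = w.reshape(-1, group)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0   # int4 range: -8..7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).ravel()

w = np.random.randn(4096 * 64).astype(np.float32)
q, s = quantize_int4(w)
bits = 4 * q.size + 16 * s.size   # 4 bits/code + one fp16 scale per 64 weights
print(f"compression vs fp32: {32 * w.size / bits:.1f}x")
print(f"max abs error: {np.abs(w - dequantize(q, s)).max():.4f}")
```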
… CNCF Sandbox project. This places the project under the Linux Foundation’s management and establishes an open standard for AI inference across any accelerator and any cloud environment. The Cloud ...
A research article by Horace He and the Thinking Machines Lab (founded by ex-OpenAI CTO Mira Murati) addresses a long-standing issue in large language models (LLMs). Even with greedy decoding by setting ...
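The mechanism at issue, in a minimal illustration rather than the article's full argument: floating-point addition is not associative, so when a kernel's reduction order changes with batch size or tiling, the same logits come out slightly different, and even a temperature-0 argmax can flip.

```python
import numpy as np

# Floating-point addition is not associative: the same values grouped
# differently give different results. When reduction order depends on batch
# size or kernel tiling, logits shift and a greedy argmax can flip.
a, b = np.float32(0.1), np.float32(1e8)
print((a + b) - b)   # 0.0 -- the small addend is absorbed by the large one
print(a + (b - b))   # 0.1 -- identical inputs, different grouping

# The same effect at reduction scale: sequential vs. blocked accumulation.
x = np.random.default_rng(0).standard_normal(100_000).astype(np.float32)
seq = np.float32(0.0)
for v in x:
    seq = seq + v
blocked = x.reshape(-1, 100).sum(axis=1).sum()
print(seq, blocked)  # typically differ in the low-order bits
```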
“Large Language Model (LLM) inference is hard. The autoregressive Decode phase of the underlying Transformer model makes LLM inference fundamentally different from training. Exacerbated by recent AI ...
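What makes decode different: training scores a whole sequence in one parallel pass, while decode emits one token at a time, each step re-reading the weights plus a growing KV cache. A toy greedy-decode loop follows, with a hypothetical toy_step() standing in for a real transformer forward pass:

```python
import numpy as np

# Toy autoregressive greedy decode with a growing cache. toy_step() is a
# stand-in for a transformer forward pass (an illustrative assumption):
# it caches one state per token and "attends" by averaging the cache.

VOCAB, DIM = 100, 16
rng = np.random.default_rng(0)
W_emb = rng.standard_normal((VOCAB, DIM)).astype(np.float32)
W_out = rng.standard_normal((DIM, VOCAB)).astype(np.float32)

def toy_step(token: int, kv_cache: list) -> np.ndarray:
    h = W_emb[token]
    kv_cache.append(h)                    # cache grows by one entry per step
    ctx = np.mean(kv_cache, axis=0)       # stand-in for attention over cache
    return ctx @ W_out                    # logits over the vocabulary

def greedy_decode(prompt: list, max_new: int) -> list:
    kv_cache, out = [], list(prompt)
    for t in prompt[:-1]:                 # prefill: absorb the prompt in order
        toy_step(t, kv_cache)
    tok = prompt[-1]
    for _ in range(max_new):              # decode: strictly sequential
        tok = int(np.argmax(toy_step(tok, kv_cache)))
        out.append(tok)
    return out

print(greedy_decode([1, 2, 3], max_new=5))
```

The decode loop cannot be parallelized across output positions the way a training pass can, which is why it stresses memory systems rather than raw compute.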
A monthly overview of things you need to know as an architect or aspiring architect.
The company tackled inference on the Llama 3.1 405B foundation model and just crushed it. And for the crowds at SC24 this week in Atlanta, the company also announced it is 700 times faster than ...
A new technical paper titled “Pushing the Envelope of LLM Inference on AI-PC and Intel GPUs” was published by researchers at Intel. “The advent of ultra-low-bit LLM models (1/1.58/2-bit), which match ...
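For context on the "1.58-bit" figure: a ternary weight in {-1, 0, +1} carries log2(3) ≈ 1.585 bits, and five trits fit in one byte since 3^5 = 243 ≤ 256. A generic packing sketch, not the paper's implementation:

```python
import numpy as np

# Pack ternary weights {-1, 0, +1} at 1.6 bits each: five base-3 digits per
# byte. Illustrative only; real ultra-low-bit kernels use their own layouts.

PLACE = np.array([81, 27, 9, 3, 1], dtype=np.uint8)  # powers of 3

def pack_trits(w: np.ndarray) -> np.ndarray:
    """w in {-1,0,1}, length a multiple of 5 -> one byte per 5 weights."""
    digits = (w + 1).reshape(-1, 5).astype(np.uint8)  # map to {0,1,2}
    return (digits * PLACE).sum(axis=1).astype(np.uint8)

def unpack_trits(packed: np.ndarray) -> np.ndarray:
    digits = [packed // p % 3 for p in PLACE]         # recover base-3 digits
    return np.stack(digits, axis=1).ravel().astype(np.int8) - 1

w = np.random.default_rng(0).integers(-1, 2, size=40).astype(np.int8)
packed = pack_trits(w)
assert np.array_equal(unpack_trits(packed), w)
print(f"{w.size} weights in {packed.size} bytes = {8 * packed.size / w.size} bits/weight")
```

With ternary weights the matrix multiply itself degenerates into additions and subtractions, which is part of what makes these formats attractive on CPUs and integrated GPUs.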