Sunday, May 10, 2026
Tech

$200 ‘socketed’ Nvidia AI GPU for servers hacked into a PCIe card with custom PCB and 3D-printed cooling — modded Tesla V100 SXM2 data center GPU runs LLMs and beats many modern midrange offerings in AI inference efficiency

TH·1h ago·3 min read
Photograph via Toms Hardware
RSS SUMMARY · AGGREGATED FROM TH

Turns out, Nvidia's older Volta-era V100 AI GPU is still pretty capable today, even with just 16GB of VRAM. A YouTuber got his hands on the SXM2 variant for just $100, converted it to a PCIe x16 interface with another $100 adapter, and got some impressive results across AI inference and NVR benchmarks.
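The article doesn't spell out which models a 16GB card can hold, so here is a rough back-of-the-envelope sketch (not from the source): LLM weights take roughly `parameters × bytes-per-parameter`, plus some headroom for the KV cache and activations. The 20% overhead figure is an assumption for illustration, not a measured value.

```python
def model_vram_gb(params_billion, bytes_per_param, overhead_frac=0.2):
    """Rough VRAM estimate in GB: weight size plus an assumed ~20%
    overhead for KV cache and activations (illustrative, not measured)."""
    weights_gb = params_billion * bytes_per_param
    return weights_gb * (1 + overhead_frac)

VRAM_GB = 16  # V100 SXM2 as described in the article

# (model, billions of params, bytes per param for the quantization)
candidates = [
    ("7B FP16", 7, 2.0),
    ("13B 8-bit", 13, 1.0),
    ("13B 4-bit", 13, 0.5),
    ("34B 4-bit", 34, 0.5),
]
for name, params, bpp in candidates:
    need = model_vram_gb(params, bpp)
    verdict = "fits" if need <= VRAM_GB else "too big"
    print(f"{name}: ~{need:.1f} GB -> {verdict} in {VRAM_GB} GB")
```

Under these assumptions a 13B model at 4-bit quantization fits comfortably, while a 7B model at full FP16 is already borderline once cache overhead is counted — which is consistent with 16GB being described as a workable but modest amount of VRAM for local LLM inference.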

The full story continues on Toms Hardware.

Story Sentry shows a short summary aggregated via RSS. The complete article — original photography, charts, and reporting — lives with the publisher.
