Comments by "Gilad Barlev" (@GSBarlev) on "Will AMD Open Source Their Firmware!?!" video.
Pat "80386" Gelsinger recently made some noise about how CUDA's proprietary API needed to die. My hope is that Intel starts contributing to ROCm and HIP and picking up AMD's slack when it comes to documentation and compatibility. Throwing some developers at ZLUDA wouldn't hurt, either.
17
@Beryesa. Oh yeah! I should have mentioned oneAPI as a viable standard! It even looks like academics and others have gotten it running on Nvidia and AMD GPUs. Ultimately, I haven't touched the low-level APIs since my PhD, so I'm not educated enough to say which framework is better--all that matters to me is that whatever standard dominates is FOSS.
3
@trajectoryunown That's literally AMD's business plan with their Instinct line.
2
CoreCtrl? IIRC that was a one-hoop affair on my 6800U and worked OOTB with my 5900X+6700XT setup.
2
Navi 32 is absolutely supported by ROCm—you just might have to grab builds for ROCm 6.
2
@Roxor128 You actually don't need to limit yourself to VRAM—there's going to be a performance cost, but the HuggingFace API allows you to offload assets to system RAM (or, theoretically, swap).
2
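To put some code behind that: a minimal sketch of what weight offloading looks like with the HuggingFace transformers API, not necessarily how you'd set it up. The model ID and prompt are just examples, and it assumes the accelerate package is installed alongside a working PyTorch build.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # example model; swap in whatever you're running
tok = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" puts as many layers as fit into VRAM and offloads the
# rest to system RAM (and disk, if you allow it). Slower, but it runs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tok("ROCm on consumer cards is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))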
Counterpoint: AMD GPUs being difficult to use for AI is the only thing keeping them cheap and available for gaming
2
AMD hardware is more than capable when it comes to AI inference. I've run SDXL and Mistral 7B on my 6700XT, which these days you can pick up for like $200. And yes, getting the ROCm stack running (on Arch, btw) was a pain in the toucans—I still don't have tensorflow quite right, and my PyTorch venv is more unstable than Bing when she's calling herself "Sydney." Still, it's all worth it not to pay for Jensen's jacket tax.
2
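For anyone wondering what "getting the ROCm stack running" involves, here's the kind of sanity check I use, assuming a PyTorch-ROCm build. The HSA_OVERRIDE_GFX_VERSION line is the workaround I needed for the 6700XT (gfx1031, which ROCm doesn't officially list); treat it as an assumption for other cards.

import os

# Must be set before torch initializes the HIP runtime.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

import torch

# ROCm builds of PyTorch expose the GPU through the torch.cuda namespace.
print(torch.version.hip)             # a string like "6.x" on a ROCm 6 build
print(torch.cuda.is_available())     # True if the HIP runtime sees the card
print(torch.cuda.get_device_name(0))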
@bitterseeds Ah, yup. I see the discussion that this is a firmware limitation. That really blows. Or doesn't, I guess, depending on what the fans are(n't) doing.
1
Case in point: the best guide I've found for setting up PyTorch-ROCm is the Arch Wiki.
1
@Roxor128 pacman logs (and, more drastically, Timeshift) have really been coming in clutch for me in these situations.
1
I've been running SDXL directly through IPython and the HuggingFace API on a 6700XT. Takes about a minute per image, which I find perfectly reasonable. And, more importantly, power consumption is so low: using a kill-a-watt, I've never seen total wall power exceed ~450W.
1
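Roughly what that IPython session looks like, as a sketch with an example prompt; fp16 weights are what keep it inside the 6700XT's 12 GB of VRAM.

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")  # PyTorch-ROCm still calls the device "cuda"

image = pipe("a toucan in a leather jacket", num_inference_steps=30).images[0]
image.save("toucan.png")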
@malcaniscsm5184 Garuda ships with PyTorch-ROCm??
1