Discussion (120 Comments)

lrvick (about 10 hours ago)
Just spent the last week or so porting TheRock to stagex in an effort to get ROCm built with a native musl/mimalloc toolchain and get it deterministic, for high security/privacy workloads that cannot trust binaries built with only a single compiler.

It has been a bit of a nightmare, and I had to package 30+ deps plus their heavily customized LLVM, but I got the runtime to build this morning, finally.

Things are looking bright for high security workloads on AMD hardware, since they work fully in the open, however much of a mess it may be.

WhyNotHugo (about 8 hours ago)
I also attempted to package ROCm on musl. Specifically, packaging it for Alpine Linux.

It truly is a nightmare to build the whole thing. I got past the custom LLVM fork and a dozen other packages, but eventually decided it had been too much of a time sink.

I'm using llama.cpp with its Vulkan support and it's good enough for my uses. Vulkan is already there and just works. It's probably on your host too, since so many other things rely on it anyway.

That said, I’d be curious to look at your build recipes. Maybe it can help power through the last bits of the Alpine port.

lrvick (about 7 hours ago)
Keep an eye out for a stable rocm PR to stagex in the next week or so if all goes well.
jauntywundrkind (about 9 hours ago)
https://github.com/ROCm/TheRock/issues/3477 makes me quite sad for a variety of reasons. It shouldn't be like this. This work should be usable.
MrDrMcCoy (about 3 hours ago)
So much about this confuses me. What do Kitty and ncurses have to do with ROCm? Why is this being built with GCC instead of clang? Why even bother building it yourself when the tarballs are so good and easy to work with?
jeroenhd (about 1 hour ago)
The analysis was AI generated. This was Claude brute-forcing itself through building a library.
CamouflagedKiwi (about 2 hours ago)
On the last one: OP said they were trying to get it working for a musl toolchain, so the tarballs are probably not useful to them (I assume they're built for glibc).

Agreed on the others though. Why's it even installing ncurses, surely that's just expected to be on the system?

lrvick (about 9 hours ago)
Oh, I fully abandoned TheRock in my stagex ROCm build stack. It was not worth salvaging, but it was an incredibly useful reference for the rewrite.
999900000999 (about 7 hours ago)
Wait?

You don't trust Nvidia because the drivers are closed source?

I think Nvidia's pledged to work on the open source drivers to bring them closer to the proprietary ones.

I'm hoping Intel can catch up; at 32GB of VRAM for around $1000, it's very accessible.

jeroenhd (about 1 hour ago)
Nvidia is opening their source code because they moved most of it into the binary blob they're loading. That's why they never made an open source Nvidia driver for Pascal or earlier, where the hardware wasn't set up to use their giant binary blobs.

It's like running Windows in a VM and calling it an open source Windows system. The bootstrapping code is all open, but the code that's actually being executed is hidden away.

Intel has the same problem AMD has: everything is written for CUDA or other brand-specific APIs. Everything needs wrappers and workarounds to run before you can even start to compare performance.

lrvick (about 7 hours ago)
Nvidia has been pledging that for years. If it ever actually happens, I am here for it.
cmxch (about 6 hours ago)
> Intel

For some workloads, the Arc Pro B70 actually does reasonably well when cached.

With some reasonable bring-up, it also seems to be more usable than the 32GB R9700.

MrDrMcCoy (about 3 hours ago)
I have both of those cards. Llama.cpp with SYCL has thus far refused to work for me, and Vulkan is pretty slow. Hoping that some fixes come down the pipe for SYCL, because I have plenty of power for local models (on paper).
salawat (about 6 hours ago)
>Just spent the last week or so porting TheRock to stagex in an effort to get ROCm built with a native musl/mimalloc toolchain and get it deterministic for high security/privacy workloads that cannot trust binaries only built with a single compiler.

...I have a feeling you might not be at liberty to answer, but... Wat? The hell kind of "I must apparently resist Reflections on Trusting Trust" kind of workloads are you working on?

And what do you mean "binaries only built using a single compiler"? Like, how would that even work? Compile the .o's with compiler-specific suffixes, then do a tortured linker invocation to mix different .o's into a combined library/ELF? Are we talking mixing two different C compilers? Same compiler, two different bootstraps? Regular/cross mix?

I'm sorry if I'm pushing for too much detail, but as someone who's actually bootstrapped compilers/userspaces from source, your use case intrigues me just by the phrasing.

0xbadcafebee (about 8 hours ago)
AMD has years of catching up to do with ROCm just to get their devices to work well. They don't support all their own graphics cards that can do AI, and when it is supported, it's buggy. The AMDGPU graphics driver for Linux has had continued instability since 6.6. I don't understand why they can't hire better software engineers.
xethos (about 6 hours ago)
> I don't understand why they can't hire better software engineers.

Beyond the fact they're competing with the most valuable companies in the world for talent while being less than a decade past "Bet the company"-level financial distress?

onlyrealcuzzo (about 7 hours ago)
Because they aren't willing to pay for them?
oofbey (about 7 hours ago)
Years. They neglected ROCm for soooo long. I have friends who worked there 5+ years ago who tried desperately to convince execs to invest more in ROCm and failed. You had to have your head stuck pretty deep in the sand back then to not see that AI was becoming an important workload.

I would love AMD to be competitive. The entire industry would be better off if NVIDIA was less dominant. But AMD did this to themselves. One hundred percent.

tux1968 (about 7 hours ago)
It would be very helpful to deeply understand the truth behind this management failing. The actual players involved, and their thinking. Was it truly a blind spot? Or was it mistaken priorities? I mean, this situation has been so obvious and tragic, that I can't help feeling like there is some unknown story-behind-the-story. We'll probably never really know, but if we could, I wouldn't spend quite as much time wearing a tinfoil hat.
throwawayrgb (about 6 hours ago)
If you asked AMD execs, they'd probably say they never had the money to build out a software team like NVIDIA's. That might only be part of the answer. The rest would be things like lack of vision, "can't turn a tanker on a dime", etc.
oofbey (about 6 hours ago)
My guess is it’s just incompetence. Imagine you’re in charge of ROCm and your boss asks you how it’s going. Do you say good things about your team and progress? Do you highlight the successes and say how you can do all the major things CUDA can? I think many people would. Or do you say to your boss “the project I’m in charge of is a total disaster and we are a joke in the industry”? That’s a hard thing to say.
mstaoru (about 3 hours ago)
I'm team "taking on CUDA with OpenVINO" (and SYCL). Intel seems to have really upped their game on iGPU and dGPU lately, with sane prices and fairly good software support and APIs.

I'm not talking gaming CUDA, but CV and data science workloads seem to scale well on Arc and work well at the edge on Core Ultra 2/3.

rdevilla (about 8 hours ago)
ROCm is not supported on some very common consumer GPUs, e.g. the RX 580. Vulkan backends work just fine.
chao- (about 7 hours ago)
I purchased my RX 580 in early 2018 and used it through late 2024.

I am critical of AMD for not fully supporting all GPUs based on RDNA1 and RDNA2. While more backwards compatibility is always better for the consumer, the RX 580 was a lightly-updated RX 480, which came out in 2016. Yes, ROCm technically came out in 2016 as well, but I don't mind acknowledging that supporting the GCN architecture is a different beast than supporting the RDNA/CDNA generations that followed (Vega feels like it is off on an island of its own, and I don't even know what to say about it).

As cool as it would be to repurpose my RX 580, I am not at all surprised that GCN GPUs are not supported for new library versions in 2026.

I would be MUCH more annoyed if I had any RDNA1 GPU, or one of the poorly-supported RDNA2 GPUs.

daemonologist (about 6 hours ago)
ROCm usually only supports two generations of consumer GPUs, and sometimes the latest generation is slow to gain support. Currently only RDNA 3 and RDNA 4 (RX 7000 and 9000) are supported: https://rocm.docs.amd.com/projects/install-on-linux/en/lates...

It's not ideal. CUDA for comparison still supports Turing (two years older than RDNA 2) and if you drop down one version to CUDA 12 it has some support for Maxwell (~2014).

0xbadcafebee (about 4 hours ago)
Worse, RDNA3 and RDNA4 aren't fully supported, and probably won't be, as they only focus on the chips that make them more money. If we didn't have Vulkan, every nerd in the world would demand either a Mac or an Intel machine with an Nvidia chip. AMD keeps leaving money on the table.
lpcvoid (about 3 hours ago)
Up until recently they didn't even support their cash cow, the Ryzen AI MAX+ 395, properly. I don't know about the argument that they only care about certain chips.
terribleperson (about 4 hours ago)
It's pretty crazy that the 6900XT/6950XT aren't supported.
kombine (about 2 hours ago)
I have an RX 6700XT, damn. AMD is shooting themselves in the foot.
maxloh (about 7 hours ago)
I have the same experience with my RX 5700. The supported ROCm version is too old to get Ollama running.

The Vulkan backend of Ollama works fine for me, but it took a year or two for them to officially support it.

BobbyTables2 (about 8 hours ago)
Did it use to be different?

A few years ago I thought I had used the ROCm drivers/libraries with hashcat on an RX 580.

Now it's obsolete?

hurricanepootis (about 8 hours ago)
RX 580 is a GCN 4 GPU. I'm pretty sure the bare minimum for ROCm is GCN 5 (Vega) and up.
daemonologist (about 6 hours ago)
Among consumer cards, latest ROCm supports only RDNA 3 and RDNA 4 (RX 7000 and RX 9000 series). Most stuff will run on a slightly older version for now, so you can get away with RDNA 2 (6000 series).
adev_ (16 minutes ago)
A little feedback to AMD executives about the current status of ROCm here:

(1) - Supporting only server-grade hardware and ignoring laptop/consumer-grade GPUs/APUs for ROCm was a terrible strategic mistake.

A lot of developers experiment on their personal laptops first and scale to expensive, professional-grade hardware later. In addition, some developers simply do not have the money to buy server-grade hardware.

By locking ROCm to server-grade GPUs only, you restrict the potential list of contributors to your OSS ROCm ecosystem to a few large AI users and a few HPC centers... meaning virtually nobody.

A much more sensible strategy would be to provide degraded performance for ROCm on top of consumer GPUs, which is exactly what Nvidia does with CUDA.

This is changing, but you need to send a clear message there. EVERY newly released device should be properly supported by ROCm.

(2) - Supporting only the last two generations of architectures is not what customers want to see.

https://rocm.docs.amd.com/projects/install-on-linux/en/docs-...

People with existing GPU codebases invest a significant amount of effort to support ROCm.

Telling them two years later "sorry, you are out of updates now!" when the ecosystem is still unstable is unacceptable.

CUDA excels at backward compatibility. The fact that you ignore it entirely plays against you.

(3) - Focusing exclusively on Triton and making HIP a second-class citizen is nonsensical.

AI might get all the buzz and the money right now, we get it.

It might look sensible on the surface to focus on Python-based, AI-focused tools like Triton, and supporting them is definitely necessary.

But there is a tremendous amount of code relying on C++ and C to run on GPUs (HPC, simulation, scientific computing, imaging, ...) and that will remain there for decades to come.

Ignoring that means losing, again, customers to CUDA.

It is pretty ironic to see a move like that considering that AMD GPUs currently tend to be highly competitive on FP64, meaning good for exactly these kinds of applications. You are throwing away one of your own competitive advantages...

(4) - Last but not least: please focus a bit on the packaging of your software stack.

There have been complaints about this for the last 5 years and not much has changed.

Working with distribution packagers and integrating with them does not cost much... This would give you a competitive advantage over Nvidia.

jmward01 (about 6 hours ago)
I really want to get to the point where I'm looking online for a GPU and Nvidia isn't a requirement. I think we are really close. Maybe we are already there and my level of trust just needs to catch up.
m-schuetz (about 2 hours ago)
Problem is, NVIDIA has so many quality-of-life features for developers. It's not easy getting developers, especially smaller-scale ones and academia, to use other vendors that are 1) much more difficult to use while 2) also being slower and less rich in features.

Personally I opted in to being NVIDIA-vendor-locked a couple of years ago because I just couldn't stand the insanely bonkers and pointless complexity of APIs like Vulkan. I used OpenGL before which supported all vendors, but because newer features weren't added to OpenGL I eventually had to make the switch.

I tried both Vulkan and CUDA, and after not getting shit done in Vulkan for a week I tried CUDA, and got the same stuff done in less than a day that I could not do in a whole week in Vulkan. At that moment I thought, screw it, I'm going to go NV-only now.

taherchhabra (about 3 hours ago)
Genuine question: after Claude Code, Codex, etc., can't this be sped up?
Gasp0de (35 minutes ago)
I believe this is what the team lead in the article describes as the next steps?
p1esk (about 9 hours ago)
Someone from AMD posted this a few minutes ago, then deleted it:

"Anush's success is due to opting out of internal bureaucracy than anything else. most Claude use at AMD goes through internal infrastructure that can take hundreds of seconds per response due to throttling. Anush got us an exemption to use Anthropic directly. he is also exempt from normal policies on open source and so I can directly contribute to projects to add AMD support. He's an effective leader and has turned ROCm into a internal startup based in California. Definitely worth joining the team even if you've heard bad things about AMD as a whole."

This kind of bullshit is why I don't want to join AMD, even if this particular team is temporarily exempt from it.

nl (about 8 hours ago)
> he is also exempt from normal policies on open source and so I can directly contribute to projects to add AMD support.

It's crazy that this is a big deal.

I understand the need for some kind of governance around this but for it to require a special exemption just shows how far the AMD culture needs to shift.

0xbadcafebee (about 8 hours ago)
Liability is always a big deal.
nl (about 6 hours ago)
Sure, but it's not like other large companies don't have policies that address this.
noident (about 5 hours ago)
Policies like these are widespread in most companies with >1000 employees.
brcmthrowaway (about 8 hours ago)
So join NVIDIA instead
bruce343434 (about 8 hours ago)
In my experience fiddling with compute shaders a long time ago, CUDA and ROCm and OpenCL are way too much hassle to set up. Usually it takes a few hours to get the toolkits and SDKs up and running; that is, if you CAN get them up and running. The dependencies are way too big as well (CUDA is 11GB???). Either way, just use Vulkan. Vulkan "just works" and doesn't lock you into Nvidia/AMD.
Arech (about 1 hour ago)
Haha. People have already said what Vulkan is in practice: a very convoluted low-level API in which you have to write 200+ lines of pretty complicated code just to get the simplest stuff running. Also, doing compute on NVIDIA in Vulkan is fun if you believe the specs word for word. If you don't, you switch a purely compute pipeline into a graphical mode with a window and a swapchain, and instantly get roughly +20% performance out of that. I don't know if this was a bug or intended behavior (to protect CUDA), but this is how it was a couple of years ago.
cmovq (about 8 hours ago)
Vulkan is a pain for different reasons. Easier to install, sure, but you need a few hundred lines of code to set up shader compilation and resources, and you'll need extensions to deal with GPU addresses the way you can in CUDA.
rdevilla (about 7 hours ago)
Ah yes, but those hundred lines of code are basically free to produce now with LLMs...
cylemons (about 4 hours ago)
What about the extensions? Are they widely supported?
suprjami (about 5 hours ago)
Just in time for Vulkan tg (token generation) to be faster in almost all situations, and Vulkan pp (prompt processing) to be faster in many situations, with constant improvements on the way, making ROCm obsolete for inference.
kimixa (about 5 hours ago)
ROCm vs Vulkan has never been about performance - you should be able to represent the "same" shader code in either, and often they back onto the same compilers and optimizers anyway. If one is faster, that often means something has gone /wrong/.

The advantages for ROCm would be integration into existing codebases/engineer skillsets (e.g. porting an existing C++ implementation of something to the GPU with a few attributes and API calls rather than rewriting the core kernel in something like GLSL and all the management vulkan implies).
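
For illustration, a minimal sketch (mine, untested, no error checking) of what that kind of port looks like in HIP: the kernel is ordinary C++ with a __global__ attribute and a CUDA-style launch, rather than a separate GLSL shader plus descriptor sets, pipelines, and command buffers:

    #include <hip/hip_runtime.h>
    #include <vector>

    // Ordinary C++ function, marked for GPU execution; one thread per element.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
        float *dx, *dy;
        hipMalloc(&dx, n * sizeof(float));
        hipMalloc(&dy, n * sizeof(float));
        hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);  // CUDA-style triple-chevron launch
        hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        hipFree(dx);
        hipFree(dy);
    }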

m-schuetz (about 4 hours ago)
Vulkan has abysmal UX though. At one point I had to choose between Vulkan and CUDA for future projects, and I ended up with CUDA because a feasibility study that I couldn't get to work in Vulkan for an entire week worked easily in CUDA in less than a day.
roenxi (about 9 hours ago)
> Challenger AMD’s ability to take data center GPU share from market leader Nvidia will certainly depend on the success or failure of its AI software stack, ROCm.

I don't think this is true. CUDA is a huge advantage for Nvidia, but as far as I can tell it is more a set of R&D libraries than anything else, so all the Hot New Stuff keeps being Nvidia first and only (to start with), as the library ecosystem for the hotness doesn't exist yet. Then eventually new libraries are created that are CUDA-independent, and AMD turns out to make pretty good graphics cards.

I wouldn't be surprised if ROCm withered on the vine and AMD still did fine.

hurricanepootis (about 9 hours ago)
I've been using ROCm on my Radeon RX 6800 and my Ryzen AI 7 350 systems. I've only used it for GPU-accelerated rendering in Cycles, but I am glad that AMD has an option that isn't OpenCL now.
pjmlp (about 5 hours ago)
They need lots of pieces: hardware support, IDE and graphical debugging integrations, the polyglot ecosystem, a common bytecode used by several compiler backends (CUDA is not only C++), the libraries portfolio.
superkuh (about 10 hours ago)
AMD hasn't signaled in behavior or words that they're going to actually support ROCm on $specificdevice for more than 4-5 years after release. Sometimes it's as little as the high 3.x years for shrinks like the consumer AMD RX 580. And often the ROCm support for consumer devices isn't out until a year after release, further cutting into that window.

Meanwhile nvidia just dropped CUDA/driver support for 1xxx series cards from their most recent drivers this year.

For me ROCm's mayfly lifetime is a dealbreaker.

mindcrime (about 10 hours ago)
> Last year, AMD ran a GitHub poll for ROCm complaints and received more than 1,000 responses. Many were around supporting older hardware, which is today supported either by AMD or by the community, and one year on, all 1,000 complaints have been addressed, Elangovan said. AMD has a team going through GitHub complaints, but Elangovan continues to encourage developers to reach out on X where he's always happy to listen.

Seems like they're making some effort in that direction at least. If you have specific concerns, maybe try hitting up Anush Elangovan on Twitter?

djsjajah (about 3 hours ago)
> or by the community

Hmmm

SwellJoe (about 9 hours ago)
Is it really that short? This support matrix shows ROCm 7.2.1 supporting quite old generations of GPUs, going back at least five or six years. I consider longevity important too, but if they're actively supporting stuff released in 2020 (CDNA), I can't fault them too much. With open drivers on Linux, where all the real AI work is happening, I feel like this is a better longevity story than Nvidia's, where you're dependent on Nvidia for kernel drivers in addition to CUDA.

https://rocm.docs.amd.com/en/latest/compatibility/compatibil...

Karliss (about 3 hours ago)
You missed the note at the top: "GPUs listed in the following table support compute workloads (no display information or graphics)". It doesn't mean that all CDNA or RDNA2 cards are supported. That table is very misleading; it covers enterprise compute cards only - AMD Instinct and AMD Radeon Pro series. For actual consumer GPUs the list is much worse: https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/in... , more or less the 9000 series and select 7000-series cards. Not even all of the 7000 series.
SwellJoe (about 3 hours ago)
I think that speaks to them not understanding, at the time, the opportunity they were missing out on by not shipping a CUDA-like thing to everyone, including consumer tech. The question is what it'll look like in a few years, now that they do understand AI is the biggest part of the GPU industry.

I suspect, given AMD's relative openness vs. Nvidia, even consumer-level stuff released today will end up with a longer useful life than current Nvidia stuff.

I could be wrong, of course. I've taken the gamble...the last nvidia GPU I bought was a 3070 several years ago. Everything recent has been AMD. It's half the price for nearly competitive performance and VRAM. If that bet turns out wrong, I'll just upgrade a little sooner and still probably end up ahead. But, I think/hope openness will win.

Also, nvidia graphics drivers on Linux are a pain in the ass that I didn't want to keep dealing with. I decided it wasn't worth the hassle, even if they're better on some metrics. I've been able to run everything I've tried on an AMD Strix Halo and an old Radeon Pro V620 (not great, but cheap, compared to other 32GB GPUs and still supported by current ROCm).

lrvick (about 10 hours ago)
ROCm is open source and TheRock is community maintained, and any minute now the first Linux distro will have native in-tree builds. It will be supported for the foreseeable future due to AMD's open development approach.

It is Nvidia that has the track record of closed drivers and of insisting on doing all software dev without community improvements, to expected results.

KennyBlanken (about 9 hours ago)
> expected results

The de facto GPU compute platform? With the best feature set?

lrvick (about 9 hours ago)
And the worst privacy, transparency, and FOSS integration, due to their insistence on a heavily proprietary stack.

Also pretty hard to beat a Strix Halo right now in TPS for the money and power consumption.

Even that aside, there exist plenty like me who demand high freedom and transparency and will pay double for it if we have to.

canpan (about 10 hours ago)
I was thinking of getting 2x R9700 for a home workstation (mostly inference). It is much cheaper than a similar Nvidia build. But I'm still not sure if it's good value or more trouble.
stephlow (about 10 hours ago)
I own a single R9700 for the same reason you mentioned, and I'm looking into getting a second one. It was a lot of fiddling to get working on Arch, but RDNA4 and ROCm have come a long way. Every once in a while Arch package updates break things, but that's not exclusive to ROCm.

LLMs run great on it; it's happily running Gemma 4 31B at the moment and I'm quite impressed. For the amount of VRAM you get it's hard to beat, apart from the Intel cards maybe. But the driver support doesn't seem to be that great there either.

I had some trouble running ComfyUI, but it's not my main use case, so I have not spent a lot of time figuring that out yet.

canpan (about 9 hours ago)
Thanks for the answer. That raises my hopes. Looking in my local shops, I can get 3 cards for the price of one 5090.

May I ask what kind of tok/s you are getting with the R9700? I assume you have the model fully in VRAM?

chao- (about 10 hours ago)
Talking to friends who have fought more homelab battles than I ever will, my sense is that (1) AMD has done a better job with RDNA4 than the past generations, and (2) it seems very workload-dependent whether AMD consumer gear is "good value", "more trouble", or both at the same time.

Edit: I misread the "2x r9700" as "2 rx9700", which differs from the topic of this comment (about RDNA4 consumer SKUs). I'll keep my comment up, but anyone looking to get Radeon PRO cards can (should?) disregard.

KennyBlanken (about 9 hours ago)
Given RDNA3 was a pathetic joke, it wouldn't be hard for them to do a better job.
cyberax (about 10 hours ago)
I have this setup, with 2x 32GB cards. It's perfect for my needs, and cheaper than anything comparable from NV.
Shitty-kitty (about 4 hours ago)
The split CDNA/RDNA architecture is a problem for AMD. The upcoming unified UDNA will solve the issue.
hotstickyballs (about 10 hours ago)
Driver support eats directly into driver development
amelius (about 1 hour ago)
How long until we can use AI to simply translate all the CUDA stuff to another (more open) platform? I'm getting the feeling we're getting closer. AI won't be working to Nvidia's advantage in this case.
alecco (about 10 hours ago)
Apple got it right with unified memory on a wide bus. That's why Mac Minis are flying off the shelves for local models. But they are 10x less powerful in AI TOPS. And you can't upgrade the memory.

I really wish the AMD and Intel boards would get replaced by competent people. They could do it in a very short time. Both have integrated GPUs sharing main memory. AMD and Intel have (or at least used to have) serious know-how in data buses and interconnects, respectively. But I don't see any of that happening.

ROCm? It can't even support decent attention. It lacks a lot of features, and NVIDIA is adding more each year. Soon they will reach escape velocity and nobody will catch them for a decade. smh

KeplerBoy (about 1 hour ago)
Aren't Mac Minis flying for "local models" because people have no clue what they are doing?

All those people who bought them for openclaw just bought them because it was the trendy thing to do. None of those people are running local models on there.

caycep (about 10 hours ago)
Granted, I feel like NVIDIA GPU pricing is such that Mac Minis are way more than 10x cheaper, if they aren't already, so one might still get ahead purchasing a bulk order of Mac Minis....
KennyBlanken (about 10 hours ago)
A 5090 will cost you about the same amount of money as a Mac Studio M3 Ultra with eight times the RAM.

It's pretty insane how overpriced NVIDIA hardware is.

kimixa (about 4 hours ago)
The 256GB Mac Studio (the one with "eight times the RAM") is listed for ~$2000 more than current 5090 prices, and it's an additional $1500 for the 80-core GPU variant. Only the "base" model with 96GB is a remotely similar price, $3600-$4000.

And a 5090 has a little over 2x the memory bandwidth: ~1790GB/s vs ~820GB/s. And significantly higher peak FLOPS too.

Sure, if the goal is to get the "Cheapest single-device system with 256GB ram" it looks pretty good, but there's lots of other axes it falls down on. Great if you know you don't care about them, but not "Better In Every Way". Arguably, better in only a single way - but that single way may well be the one you need.

And the current 5090 price might be a transient peak: only three months ago they were closer to $2500, significantly less than half the $6000 base-spec 256GB Mac Studio, while the Mac Studio's price has been constant.

cjbgkagh (about 4 hours ago)
It seems like general improvements in RAM efficiency, such as those used in Gemma 4, mean it's back to memory bandwidth as the bottleneck and less about total available memory size. I'm also curious to see how much more agent autonomy will reduce the need for low latency and shift the focus to throughput, meaning it's easier to spread the model out over multiple smaller GPUs and use pipeline parallelism to keep them busy. This would also mean using RAM for market discrimination becomes less effective.
LoganDark (about 9 hours ago)
Yes but the 5090 can run games.

Running games on my loaded M4 Max is worse than on my 3090 despite the over-four-year generational gap.

Like, Pacific Drive will reach maybe 30fps at less than 1080p whereas the 3090 will run it better even in 4K.

That could just be CrossOver's issue with Unreal Engine games, but "just play different games" is not a solution I like.

corndoge (about 9 hours ago)
But the 5090 can run Crysis
bsder (about 9 hours ago)
> I really wish AMD and Intel boards get replaced by competent people.

Intel? Agreed. But AMD is making money hand over fist with enterprise AI stuff.

Right now, any effort that AMD or NVIDIA expend on the consumer sector is a waste of money that they could be spending making 10x more at the enterprise level on AI.

DeathArrow (about 2 hours ago)
Do we get better perf or tokens per second with AMD and its software stack than with Nvidia?
formerly_proven (about 3 hours ago)
We’ve been talking about this for a good ten years at least and AMD is still essentially in the “concepts of a plan” phase. The AMD GPGPU software org has to be one of the most inconsequential ones at this rate.
mmis1000 (about 3 hours ago)
At least they finally did something this time. Now torch and whatever transformer stuff runs normally on Windows/Linux, as long as you installed the correct wheel from AMD's own repository.

It's a huge step, though.

ycui1986 (about 9 hours ago)
For many LLM loads, it seems ROCm is slower than Vulkan. What's the point?
mmis1000 (about 2 hours ago)
Compatibility, so foundation packages like torch and onnxruntime can run on AMD GPUs without massive changes in architecture. It's the biggest reason behind all the stuff that "only works on Nvidia GPUs". It's not faster when a Vulkan alternative exists, but at least it runs.
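
For illustration, a rough sketch of that mechanism (mine, not actual PyTorch source): the HIP runtime mirrors the CUDA runtime API closely enough that CUDA code can be "hipified" largely by renaming, which is how big CUDA-first codebases gain a ROCm backend without an architectural rewrite:

    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        // Each call mirrors its CUDA counterpart 1:1.
        int count = 0;
        hipGetDeviceCount(&count);                 // cudaGetDeviceCount
        std::printf("devices: %d\n", count);
        float* buf = nullptr;
        hipMalloc(&buf, 1024 * sizeof(float));     // cudaMalloc
        hipMemset(buf, 0, 1024 * sizeof(float));   // cudaMemset
        hipFree(buf);                              // cudaFree
        return 0;
    }
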
neuroelectron (35 minutes ago)
Now that the AI bubble is starting to burst, it's a great time for AMD to reveal their AI ambitions. They've set the tone by hiring low-cost, outsourced labor.

Of course everybody knows what's really going on here. It's not an open discussion, however.

shmerl (about 10 hours ago)
Side question, but why not advance something like Rust GPU instead as a general approach to GPU programming? https://github.com/Rust-GPU/rust-gpu/

From all the existing examples, it really looks the most interesting.

What I'm surprised about is the lack of backing for it from someone like AMD. It doesn't have to immediately replace ROCm, but AMD would benefit from it advancing and replacing the likes of CUDA.

LegNeato (about 7 hours ago)
One of the rust-gpu maintainers here. Haven't officially heard from anyone at AMD but we've had chats with many others. Happy to talk with whomever! I would imagine AMD is focusing on ROCm over Vulkan for compute right now as their pure datacenter play, which makes sense.

We've started a company around Rust on the GPU btw (https://www.vectorware.com/), both CUDA and Vulkan (and ROCm eventually I guess?).

Note that most platform developers in the GPU space are C++ folks (lots of LLVM!) and there isn't as much demand from customers for Rust on the GPU vs something like Python or Typescript. So Rust naturally gets less attention and is lower on the list...for now.

MobiusHorizons (about 10 hours ago)
From the readme:

> Note: This project is still heavily in development and is at an early stage.

> Compiling and running simple shaders works, and a significant portion of the core library also compiles.

> However, many things aren't implemented yet. That means that while being technically usable, this project is not yet production-ready.

Also, projects like rust-gpu are built on top of projects like CUDA and ROCm; they aren't alternatives, they are abstractions on top.

shmerl (about 9 hours ago)
I think Rust GPU is built on top of Vulkan + SPIR-V as their main foundation, not on top of CUDA or ROCm.

What I meant is more the language for writing the GPU programs themselves, not necessarily the machinery right below it. Vulkan is a good thing to advance for that.

I.e. CUDA and ROCm focus on C++ dialect as GPU language. Rust GPU does that with Rust and also relies on Vulkan without tying it to any specific GPU type.

markisus (about 7 hours ago)
The article mentions Triton for this purpose. I don't think you will get maxed-out performance on the hardware, though, because abstraction layers won't let you access the fastest possible path.
HarHarVeryFunny (about 10 hours ago)
If you don't want/need to program at lowest level possible, then Pytorch seems the obvious option for AMD support, or maybe Mojo. The Triton compiler would be another option for kernel writing.
shmerl (about 9 hours ago)
I don't think that's something that can be pitched as a CUDA alternative. It's just a different level.
blovescoffee (about 10 hours ago)
Naive question, could agents help speed up building code for ROCm parity with CUDA? Outside of code, what are the bottlenecks for reaching parity?
WorldPeas (about 10 hours ago)
To be honest, outside of fullstack and basic MCU stuff, these agents aren't very good. Whenever a sufficiently interesting new model comes out, I test it on a couple of problems in Android app development and OS porting for novel CPU targets, and we still haven't gotten there yet. I'd be happy to see a day when it's possible, however.
catgary (about 7 hours ago)
I’ve found they’re quite good when you’re higher in the compiler stack, where it’s essentially a game of translating MLIR dialects.
m-schuetz (about 4 hours ago)
Agents work great for tasks that thousands of developers have done before. This isn't one of those tasks.
hypercube33 (about 7 hours ago)
Maybe this is dumb, but at the moment through Windows (and WSL?) you get: ROCm, DirectML, Vulkan, OpenML?
jiggawatts (about 10 hours ago)
Lack of focus from AMD management. See the sibling comment: https://news.ycombinator.com/item?id=47745611

They just don't care enough to compete.

nnevatie (about 7 hours ago)
Why is it called "ROCm" (with the strange capitalization) in the first place? This may sound silly, but in order to compete, every detail matters, including the name.
slongfield (about 6 hours ago)
It used to stand for "[R]adeon [O]pen [C]o[m]pute", but since it's not affiliated with the Open Compute Project, they dropped the meaning of it a little while ago, and now it doesn't stand for anything.
dnautics (about 6 hours ago)
presumably a reference to rocm/socm robots?
WanderPanda (about 7 hours ago)
This is so true! Shows a lack of care that usually doesn’t stop at just the naming