My 2 (inflated) cents on the matter: I never replace the thermal pads on these old cards. Now if this was a modern late-GDDR5 or GDDR6 card, where every fraction of a Watt counts because the manufacturer already stretched the legs of the card as far as they'd go to get the most performance out of it, then this might be a different matter. But for old cards, this is not the case.

Thermal pads offer rather poor thermal transfer to the GPU heatsink. But that's not an issue, because the RAM chips themselves don't dissipate that much heat (maybe a Watt or two tops), so the pads will slowly but surely move the heat from the RAM chips into the heatsink (or the other way around, when the heatsink gets hotter than the RAM chips.) In fact, if you read RAM datasheets, you will see that BGA RAM chips rarely call for active cooling via thermal pads. Instead, they are designed to cool through their BGA solder ground pads into the board. Any cooling through the top (be it via thermal pads or simply air passing over) is just a bonus. That's why some GPUs have thermal pads, while others just let air from the cooler pass over the chips. In any case, it all comes down to how cool the video card PCB runs. Thus, if you keep the GPU chip (and its VRM) cool, the card's PCB will remain cooler too, and that will allow the RAM chips to run cooler as well.
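To put rough numbers on the "a Watt or two" point - all figures below are illustrative assumptions, not from any datasheet - the temperature rise across a pad can be estimated from its thermal resistance, dT = P * t / (k * A):

```python
# Rough estimate of the temperature rise across a RAM thermal pad.
# All numbers below are illustrative assumptions, not datasheet values.

def delta_t(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature drop across a flat pad: dT = P * t / (k * A)."""
    r_th = thickness_m / (conductivity_w_mk * area_m2)  # thermal resistance, K/W
    return power_w * r_th

# Assumed: 1.5 W per RAM chip, 1 mm thick pad, 3 W/mK pad material,
# 12 x 14 mm chip top surface
rise = delta_t(1.5, 0.001, 3.0, 0.012 * 0.014)
print(f"~{rise:.1f} C rise across the pad")  # ~3.0 C
```

Even with a fairly mediocre pad, the drop at these power levels works out to only a few degrees, which is why exact pad quality barely matters on old, low-TDP RAM.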
In some cases, like the stock/reference Radeon HD4850 cooler, the RAM actually runs hotter because the GPU heatsink is undersized (and ATI/AMD made the "wise" decision to keep the fan speed low until the card hits 80 Celsius), so extra heat from the GPU is pumped into the RAM chips and dissipated through the PCB - a terrible, terrible design overall. Suffice it to say, the HD4850 is not the only card with this flaw. Actually, there are more cards from that era like this than ones that aren't. The exceptions are usually 3rd-party cards with bigger, non-reference coolers.
With all of that said, if the old pads still make contact when you put the cooler back on, then you have nothing to worry about or replace. And if it seems like they don't, add a tiny bit of thermal compound on one side. Yes, we all know a thick(-ish, in this case) layer of thermal compound is not good for thermal transfer... but again, these older RAM chips don't really need to cool through their top if the PCB is properly designed to sink heat away from their solder pads anyway. In any case, even a thick layer of compound will be OK to transfer the heat away from (or into) the RAM chips, because the overall TDP of each chip is low.
swaaye wrote on 2024-09-27, 15:14:
How about pads direct from an electronics supplier? Probably better pricing.
https://www.digikey.com/en/products/filter/pa … A4EMAmBnEBdAvkA
+1 on using Digikey or Mouser. Usually way cheaper than "gaming gear" brands. Same reason I usually don't consider buying Noctua fans - it's not that they are bad, but they are kind of overpriced for what they are. With some searching, I can find equivalently-performing quality fan brands on Digikey or Mouser (Panaflo, Nidec, Delta, etc.) I just have to do more digging through datasheets to look at noise figures, static pressure, airflow, etc... but overall it's worth it, because it allows me to get the right fan for the application. Some coolers with dense fins need higher static pressure and not necessarily lots of airflow. Others with more sparsely-spaced fins can get by with lower pressure and higher airflow.
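That datasheet-digging workflow can be sketched as a simple filter - note the part names and spec figures below are made-up placeholders for illustration, not real fans:

```python
# Pick a fan by matching its static pressure / airflow to the heatsink.
# All "models" and figures below are hypothetical placeholders.

fans = [
    {"model": "FAN-A", "airflow_cfm": 60, "pressure_mmh2o": 1.2, "noise_dba": 28},
    {"model": "FAN-B", "airflow_cfm": 40, "pressure_mmh2o": 3.5, "noise_dba": 30},
    {"model": "FAN-C", "airflow_cfm": 75, "pressure_mmh2o": 0.8, "noise_dba": 35},
]

def pick(fans, min_cfm, min_pressure, max_dba):
    """Return the quietest fan that meets the airflow/pressure requirements."""
    ok = [f for f in fans
          if f["airflow_cfm"] >= min_cfm
          and f["pressure_mmh2o"] >= min_pressure
          and f["noise_dba"] <= max_dba]
    return min(ok, key=lambda f: f["noise_dba"]) if ok else None

# Dense-fin heatsink: static pressure matters more than raw airflow
print(pick(fans, min_cfm=30, min_pressure=3.0, max_dba=32)["model"])  # FAN-B
# Sparse-fin heatsink: airflow matters more, pressure requirement relaxed
print(pick(fans, min_cfm=50, min_pressure=1.0, max_dba=40)["model"])  # FAN-A
```

The point is just that the "right" fan depends on which column of the datasheet your heatsink actually cares about.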
swaaye wrote on 2024-09-27, 15:14:
Also, when you pull cards apart and reassemble them, you are putting stress on the card and the solder. That's probably something to avoid.
Now when did this myth come about?
If this was true, then simply walking into the room more frequently would destroy GPUs quicker, as the floor vibrations from that far exceed any stress you put on the card when taking the cooler apart...
Seriously, solder is not that fragile, especially at room (cold) temperature. And with most lead-free solders, you could be sitting just 2 degrees below the melting point and the solder still won't budge.
StriderTR wrote on 2024-09-27, 18:53:
If they're good, I don't bother, if I don't like what I see, then I change them.
Replacing RAM pads on a hot-running card won't make a dent in the temperatures.
If it's running too hot, it's usually because:
a) the GPU thermal paste is dry or starting to dry
b) the cooler is clogged with dust
c) the cooler was improperly matched to the TDP of the GPU (i.e. it is undersized) - the most common issue, in fact, as manufacturers always try to cut down on costs, and this is one easy place to do it.
d) fan is going bad
e) a combo of any of the above
StriderTR wrote on 2024-09-27, 18:53:
For paste, if I remove the cooler, I always clean and replace the paste.
That's wasteful in some cases.
If the paste is not dry and not contaminated with dust or dirt where the GPU die contacts the cooler (rarely the case with old video cards), then I'll re-use it and/or simply add a small amount to the existing paste. This is especially the case when I apply fresh thermal paste to a cooler that I've cleaned and installed on a GPU, but then removed to install onto another GPU (typically for testing.) So long as the paste wasn't overheated (especially for a long period of time), I scoop up the "old" paste from the previous GPU and move it onto the next (clean) one. I've even run tests specifically for this, to see if reusing paste degrades cooling performance. The conclusion I've come to is that if you're not dealing with crazy-high TDP density (i.e. modern GPUs that pack a high TDP into a really small die area), the difference is close to none (less than 1 degree C) between all re-use applications, until the paste either gets contaminated with dust/dirt or starts to dry.
StriderTR wrote on 2024-09-27, 18:53:
It all depends on a per-situation basis.
Exactly, that's really the best way to treat every video card restoration.
Always check everything and never assume the manufacturer did everything right to get you the best cooling solution. In many cases, they did give you the best cooling solution... for the money they put in that cooler. But that's not the same as picking the proper cooler for the TDP of the card. In other words, they only care to give you what will last the warranty and then maybe a bit more to be on the safe side. Past that, it's a waste to them (and really to most people that regularly replace their computer hardware every 3-5 years.)
ratfink wrote on 2024-09-29, 13:47:
Isn't that 7900GTO from the days of failing solder BGAs on nvidia cards.
All GF7 and GF8 series with G8x GPU chips are from that "bumpgate" era. Even the GF6 series are. GF 6150 chipsets are some of the most notorious ones for failing, both in laptops and on desktop motherboards. Though part of their problem is they are typically only fitted with a passive cooler. Add a small 40 mm fan on top, and they can actually work a lot more reliably (even reflown ones.)