GeForce RTX 3080 users report crashes while gaming

https://forums.guru3d.com/data/avatars/m/238/238795.jpg
WhiteLightning:

I was eager to get one. Not anymore. I doubt AMD will have anything interesting, so I guess I'll wait for the refresh of this card.
Same here. Going to just play console stuff until a refresh comes out, even a 20GB version of the 3080, as long as this issue is worked out. I don't think it's a huge issue. But then again, first-gen bugs are always a question mark.
https://forums.guru3d.com/data/avatars/m/269/269781.jpg
Silva:

And this is based on your experience, or? I have an Asus Dual RX 580 4GB and the card has bad coil whine. By removing the factory OC, underclocking a bit, and undervolting, I have the card running cool, silent, and with minimal performance loss. Running it at 1330MHz (AMD reference is 1340, factory OC is 1360) and 1025mV (down from the factory 1150mV). You do have a point with OC: pushing components to the limit might sound fun, but it also pushes the life expectancy to oblivion. Ten years ago that would have been OK, hardware was made to last a decade. Now things just aren't made to last.
For sure it's based on my experience, and I still recommend buying a satisfactory factory-overclocked card and never touching the frequencies, but in your case with the 580 I can understand that you had no choice.
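As a rough aside on why that undervolt is such a good trade: treating dynamic power as roughly proportional to frequency times the square of voltage (a common first-order approximation, not a measurement), the clock and voltage figures quoted above work out to a large power drop for a tiny clock loss. A minimal sketch, using only those numbers:

```python
# Rough first-order estimate: dynamic power scales roughly with f * V^2.
# The clock/voltage numbers are the ones quoted for the Asus Dual RX 580 above;
# the scaling law itself is an approximation, not a measurement.

factory_clock_mhz, factory_mv = 1360, 1150   # factory OC operating point
tuned_clock_mhz, tuned_mv = 1330, 1025       # undervolted/underclocked point

def relative_dynamic_power(clock_mhz: float, millivolts: float) -> float:
    """Unitless f * V^2 figure, only useful for comparing two operating points."""
    return clock_mhz * millivolts ** 2

factory = relative_dynamic_power(factory_clock_mhz, factory_mv)
tuned = relative_dynamic_power(tuned_clock_mhz, tuned_mv)

print(f"Clock loss:       {1 - tuned_clock_mhz / factory_clock_mhz:.1%}")    # ~2.2%
print(f"Power reduction: ~{1 - tuned / factory:.0%} (first-order estimate)")  # ~22%
```

Under this approximation, giving up about 2% of clock for a 125mV undervolt cuts dynamic power by roughly a fifth, which is consistent with the "cool and silent with minimal performance loss" result described above.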
https://forums.guru3d.com/data/avatars/m/156/156348.jpg
Alex13:

My last two NVIDIA cards have had the same issue. 970 / 2070S: in some games/certain engines the factory OC wasn't stable. Sounds like this time, with the very limited headroom, the factory-OC cards have a high chance of crashing, as they are too close to the limit even without a manual OC. Either that or it's just driver issues. 🙂
Yep, happened to my 1070, but only after two years. It was stable for two years and then went unstable. Had to remove the factory overclock (which was super small).
https://forums.guru3d.com/data/avatars/m/269/269781.jpg
asturur:

Whoever has the card has two choices: RMA or underclock. Underclocking is also part of decent debugging of the problem; I'm not sure why you say it's dumb. You may want to wait for new firmware and not RMA yet, figure out whether your PSU is at its limit, whether it is or isn't a single-rail unit, and that kind of thing.
I can understand underclocking temporarily to solve the problem, but in general playing with a card's frequencies is not a good idea, especially for a long period of time.
https://forums.guru3d.com/data/avatars/m/225/225084.jpg
I wonder if the 20GB version will have any real-world benefits over the 10GB version. I mean, will the VRAM run cooler because there's much more of it, or will it cause more issues because it will require even more juice to run in the first place? I wonder if there's a way of running some sort of algorithm in the drivers to share the VRAM load out over all the 2GB chips instead of maxing those 1GB chips out.
https://forums.guru3d.com/data/avatars/m/224/224952.jpg
Reddoguk:

I wonder if the 20GB version will have any real-world benefits over the 10GB version. I mean, will the VRAM run cooler because there's much more of it, or will it cause more issues because it will require even more juice to run in the first place? I wonder if there's a way of running some sort of algorithm in the drivers to share the VRAM load out over all the 2GB chips instead of maxing those 1GB chips out.
I read a comment that the higher-memory versions will have higher GPU clocks, which seems counterintuitive. I'm struggling to see how this is possible unless they either increase the TGP or use the very highest-binned cores that don't need as much power to reach high speeds, i.e. the extra memory will consume power that needs to be accounted for somehow. Only if they increase the TGP will the overall heat produced increase; otherwise it will just shift from the core to the new memory chips. There might even be binning of memory chips, using ones that draw less power, and/or a slight reduction of the memory clock to keep overall power use in line. It makes more sense that the 20GB cards are better binned but do not run faster, keeping the same TGP.
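A minimal sketch of that fixed-TGP accounting: whatever the extra memory draws has to come out of the core's budget. The per-module and overhead wattages below are illustrative assumptions, not GDDR6X specifications; only the 320 W TGP is the card's published figure.

```python
# Illustrative power-budget split at a fixed TGP: whatever the memory draws
# comes out of the core's share. Per-module and overhead wattages below are
# placeholder assumptions chosen only to show the accounting, not GDDR6X specs.

TGP_W = 320.0               # RTX 3080 FE total board power
WATTS_PER_MODULE = 2.5      # ASSUMPTION: illustrative draw per memory module
NON_CORE_OVERHEAD_W = 50.0  # ASSUMPTION: VRM losses, fans, misc. board power

def core_budget_w(module_count: int) -> float:
    """Power left for the GPU core once memory and board overhead are paid for."""
    return TGP_W - NON_CORE_OVERHEAD_W - module_count * WATTS_PER_MODULE

configs = [
    ("10GB card, 10 x 1GB modules", 10),
    ("20GB card, 20 x 1GB modules", 20),
    ("20GB card, 10 x 2GB modules", 10),  # same module count, denser chips
]
for label, modules in configs:
    print(f"{label}: core budget ~{core_budget_w(modules):.0f} W")
```

Under this crude model, doubling the module count at the same TGP costs the core roughly the added modules' combined draw, which is why higher clocks on a 20GB card look counterintuitive unless the TGP goes up or the silicon (or memory) is better binned.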
data/avatar/default/avatar07.webp
TheDeeGee:

Is this the new GTX 480 generation?
No. Best gpu coolers ever, AND performance per watt is great.
[attached chart: RTX 3090 performance per watt]
data/avatar/default/avatar04.webp
DeskStar:

I remember buying my GTX 580 3GB and it needed more voltage right out of the gate. EVGA told me to install Afterburner and raise the voltage by 0.015V and I should be fine. And guess what..... I was. Just weird that a seven-hundred-plus-dollar piece of equipment needed tinkering right out of the gate to work properly. Crysis 2 was the game that brought it to its knees as well. Still have some YouTube videos up about that game. Ah, that nostalgic feeling.... Anyone scoop up a 3090 yet?!?!
Ordered 2x 3090 Strix OC, waterblocks, and NVLink from NVIDIA. Ready for Blender OptiX 😀
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
JonasBeckman:

Some of them were; my initial display with the 5700 XT Pulse didn't even have FreeSync or adaptive sync support.
"Display engine" only refers to NVIDIA; no idea what AMD calls their scanout block. Remember the early 2080 issues with some monitors? The display engine was the issue there too.
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
Astyanax:

"Display engine" only refers to NVIDIA; no idea what AMD calls their scanout block. Remember the early 2080 issues with some monitors? The display engine was the issue there too.
Yeah, I remember, though I think it was something about a defect rate and RMAs, but I might be thinking of the earlier 1080 GPUs here and a separate problem... memory module related? AMD is having similar issues too, so I'm probably confusing it with some of that as well: they still have a couple of outstanding variable refresh rate and FreeSync problems, plus multi-monitor issues and a black-screen issue recoverable by either turning the display off and on or using the Win+Ctrl+Shift+B hotkey to restart the OS display driver stack. (Kind of hard to track what's been fixed specifically here, since the descriptions are a bit ubiquitous, a catch-all for several separate problems and the fixes or workarounds for them.)

RDNA1 was a new step from GCN with all-new issues, and RDNA2 is a further step, so it's looking like the initial software and driver state might once again be a bit rough as a result, though I'd love to be wrong about that. NVIDIA at least has the engineers to work on these issues more quickly. Eventually, perhaps the shift to CDNA and RDNA and the gradual end of support for GCN will also benefit AMD, given the amount of code built up since the 7000 series and the cost of maintaining it. But that's AMD, and this here is about NVIDIA.

It will be interesting to see more info as to what this issue stems from; I guess if there are no system instabilities, it isn't a power spike or an issue with lower-quality or older PSUs. Maybe it's just heat related after all, memory sensitivity above certain temps leading to instability, perhaps, but still capable of sustained functionality after the application crashes. Well, some thorough testing will reveal in due time whether it's hardware or just a manufacturing issue with some of the GPU cooler designs and how they're fitted. (Plus user-specific conditions like ambient temperature and overall case cooling efficiency and airflow, or lack thereof.)
data/avatar/default/avatar30.webp
nizzen:

Ordered 2x 3090 Strix OC, waterblocks, and NVLink from NVIDIA. Ready for Blender OptiX
Hope you have the power and cooling ready. 2x 400-500W GPUs, and I think you had an overclocked 18-core Intel, so that will be at least 500W on its own; that would call for a 2000W PSU and at least 3x 480mm rads to tame the beast. I have a solar water heater system with 6m² of panels, but a computer like that would produce more heat than my solar panels are able to. You'd need dedicated room air conditioning or remote-mounted radiators to keep that room comfortable for more than an hour at a time.
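A quick sketch of the arithmetic behind that estimate. The only hard numbers are the ones quoted above (400-500 W per card, roughly 500 W for the overclocked 18-core CPU); the watts-per-120mm-of-radiator figure is a common rule of thumb and is an assumption here, not a measurement.

```python
# Back-of-the-envelope heat load and radiator sizing for the 2x 3090 build
# discussed above. The watts-per-120mm-of-radiator figure is a rough rule of
# thumb and an assumption here, not a measured value.

GPU_W = 450              # midpoint of the 400-500 W per-card estimate above
GPU_COUNT = 2
CPU_W = 500              # overclocked 18-core Intel, as estimated above
W_PER_120MM_RAD = 125    # ASSUMPTION: heat shed per 120 mm of radiator at moderate fan speed

total_heat_w = GPU_W * GPU_COUNT + CPU_W        # ~1400 W dumped into the loop and the room
rad_sections = total_heat_w / W_PER_120MM_RAD   # 120 mm radiator sections needed

print(f"Estimated heat load: {total_heat_w} W")
print(f"Radiator needed: ~{rad_sections:.1f} x 120 mm sections "
      f"(about {rad_sections / 4:.1f} x 480 mm rads)")
```

That lands at roughly three 480 mm radiators for ~1400 W of heat, which is where the "3x480 rads" figure above comes from; the same ~1400 W also ends up in the room regardless of how it is moved there.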
https://forums.guru3d.com/data/avatars/m/186/186805.jpg
AMD right now...
[reaction GIF]
data/avatar/default/avatar39.webp
I won't blame the cards just yet; we need more info. It's just like games: a lot of people with faulty computers always jump to blame the game. This may or may not be in the same class, a bad computer or bad computer habits, but the card gets blamed anyway.
data/avatar/default/avatar21.webp
TLD LARS:

Hope you have the power and cooling ready. 2x 400-500W GPUs, and I think you had an overclocked 18-core Intel, so that will be at least 500W on its own; that would call for a 2000W PSU and at least 3x 480mm rads to tame the beast. I have a solar water heater system with 6m² of panels, but a computer like that would produce more heat than my solar panels are able to. You'd need dedicated room air conditioning or remote-mounted radiators to keep that room comfortable for more than an hour at a time.
No problem 🙂 I had EVGA GTX 780 Ti Classifieds in SLI with an EVBot, and they drew 650W per card. The system drew a total of 1700W from the wall. Used 2x 1200W PSUs. I am ready 😀 Living in Norway, so it's cold enough outside. The house needs some heat 10 months of the year. At least!
https://forums.guru3d.com/data/avatars/m/225/225084.jpg
I think a 650W PSU is enough if you have a 3080 and a 95W-or-less CPU. Not ideal, mind, but it shouldn't be a cause of issues or crashing. Having only a single wire for both 8-pins is bad, I'd say, but that too shouldn't matter if it's a single rail with enough amps. Hopefully it's just driver issues.
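For what it's worth, a rough sustained-draw sketch of that 650 W scenario; apart from the 3080's 320 W TGP, the component figures below are assumptions for illustration, and short transient spikes are ignored.

```python
# Rough sustained-draw estimate for a 3080 plus a 95 W-class CPU, to show why
# a quality 650 W unit is plausible. Apart from the 320 W TGP, the component
# figures are assumptions, and short transient spikes are ignored.

GPU_W = 320             # RTX 3080 FE TGP
CPU_W = 120             # ASSUMPTION: gaming load on a 95 W-class CPU with boost
REST_OF_SYSTEM_W = 70   # ASSUMPTION: board, RAM, drives, fans, conversion losses

sustained_w = GPU_W + CPU_W + REST_OF_SYSTEM_W   # ~510 W sustained

for psu_w in (650, 750):
    headroom = 1 - sustained_w / psu_w
    print(f"{psu_w} W PSU: ~{headroom:.0%} headroom over ~{sustained_w} W sustained")
```

On those assumptions a good 650 W unit has around 20% headroom at sustained load; how well it rides out the 3080's brief power excursions is a separate question, which is where the quality of the unit matters more than the label.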
https://forums.guru3d.com/data/avatars/m/258/258688.jpg
https://www.overclockers.co.uk/forums/threads/msi-geforce-rtx-3080-ventus-3x-oc-problems.18899764/
https://linustechtips.com/main/topic/1248333-rtx-3080-crash/
https://www.nvidia.com/en-us/geforce/forums/geforce-graphics-cards/5/398910/new-rtx-3080-hitching-and-crashing-across-all-titl/

Traditionally, the cards nVidia sends out for review are cherry-picked and often cherry-configured, too. Their review cards are not supposed to crash, and of course these cards receive a lot of time and attention beforehand to make sure they don't. The best reviews of the hardware come from off-the-shelf retail cards by AIBs, or standard production FEs. Probably in a couple of weeks or so results from mass-production cards will be showing up. Looks like nVidia is pushing things. Also, it could be that the new architecture is going to require a lot of time, like RTX originally did, before the drivers become stable, etc.

@ Netherwind I agree with you--shouldn't ever daisy-chain 8-pin connectors. Even my AMD 5700XT uses a separate circuit connector for the 8-pin and the 6-pin connectors. I was surprised to find lots of people do this, as well as use *extensions* even! So many times people report problems with a new GPU which they blame on "drivers" when the culprit is a poor configuration of some kind in their systems.
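On the daisy-chaining point, a small sketch of the connector budget. The slot and 8-pin ratings are the standard PCIe figures (75 W and 150 W), the sustained draw is taken as the 320 W FE TGP, and the assumption that the slot supplies its full 75 W is optimistic; in practice it often supplies less, which pushes even more of the load onto the cable.

```python
# Why a daisy-chained (pigtail) PCIe cable is a tight fit for a 320 W card:
# both 8-pin plugs hang off one wire bundle, so that single cable carries
# everything the slot does not. Ratings are the standard PCIe ones; transient
# spikes above the sustained TGP are ignored in this sketch.

SLOT_RATING_W = 75         # PCIe x16 slot
EIGHT_PIN_RATING_W = 150   # per 8-pin PCIe power connector
CARD_SUSTAINED_W = 320     # RTX 3080 FE TGP

cable_load_w = CARD_SUSTAINED_W - SLOT_RATING_W   # what the PCIe cabling must deliver

print(f"Load on PCIe cabling: ~{cable_load_w} W")
print(f"Two separate cables:  ~{cable_load_w / 2:.0f} W each "
      f"(each bundle rated for {EIGHT_PIN_RATING_W} W)")
print(f"One pigtail cable:    ~{cable_load_w} W on a single bundle feeding "
      f"two plugs rated {2 * EIGHT_PIN_RATING_W} W combined")
```

A well-made pigtail is still nominally within its combined rating, but it concentrates roughly 245 W on one cable and one PSU socket, so separate cables leave far more margin for spikes, cheap wiring, and extensions.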
https://forums.guru3d.com/data/avatars/m/121/121558.jpg
I do have the feeling that, at least in part, this situation is caused by a portion of the 3080 buyers not having a good-enough power supply for it. But it could also be related to drivers; this card has been out for barely a single week, and the drivers might have to mature a bit (and I suspect that when it comes to drivers it'll be the same with AMD's cards for a week or two after release). It's really a case-by-case scenario.

Imagine NVIDIA trying to troubleshoot any potential problems internally when all they see are 'reports' of users saying "My games are crashing!". Hum... OK, where do we start... Is your system overclocked to start with? CPU? Memory? GPU? Enough power? Good enough PSU? Do you meet the PSU recommendation? Have you installed the latest GPU drivers? How many games did you test? Have they ALL crashed? Or are the crashes happening in one or two particular games out of your 300+ games library? Did you re-install your previous GPU to check if everything is suddenly back to normal (stable)? I mean, I could keep going. It's typical troubleshooting, a process of elimination of potential scenarios that could take an entire day for each individual person reporting the crashes.

The best thing to do right now is to wait, for sure. Come mid or late October the dust will settle and things will be much clearer as to potential known issues, and the 'right' cards to get based on performance-to-price ratio once both companies have cards covering all the ranges (low, medium, high-end) on the field. For now it's a baby, a new-born 3080 that is not even two weeks old, with people buying this thing in a rush just to get it, without necessarily making 100% sure that their PSU can even cope with it.
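That process of elimination can be written down as a plain checklist; a minimal sketch, using only the questions raised above, grouped informally by what they rule out:

```python
# The elimination checklist from the post above, expressed as data so it can
# be walked through one item at a time. Groupings are informal.

TRIAGE_CHECKLIST = {
    "power": [
        "Does the PSU meet the recommended wattage?",
        "Is it a quality unit able to sustain its rated output?",
        "Are separate PCIe cables used (no daisy-chained pigtail)?",
    ],
    "overclocks": [
        "Is the CPU overclocked?",
        "Is the memory (XMP or manual) overclocked?",
        "Is the GPU running above its stock or factory clocks?",
    ],
    "software": [
        "Are the latest GPU drivers installed?",
        "How many games were tested, and did they all crash?",
        "Do the crashes happen in only one or two titles?",
    ],
    "hardware swap": [
        "Is the system stable again with the previous GPU reinstalled?",
    ],
}

def run_triage() -> None:
    """Print the checklist in order; answering each item by hand is the point."""
    for group, questions in TRIAGE_CHECKLIST.items():
        print(f"\n[{group}]")
        for question in questions:
            print(f"  - {question}")

if __name__ == "__main__":
    run_triage()
```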
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
OG_Publicspanking:

Bad drivers don't often cause games to crash, though; it's more likely that the games run poorly on bad drivers. I also think that people seem to think it's OK to blame the PSU, which is ridiculous, as a well-branded 650W should be enough for it; nobody should be asked to buy a new PSU just for this card.
NVIDIA said 750W for a reason; whatever you think on the subject is irrelevant.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
Some idiots are claiming the Studio driver is more stable. It's the same driver code.
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
OG_Publicspanking:

Bad drivers don't often cause games to crash, though; it's more likely that the games run poorly on bad drivers. I also think that people seem to think it's OK to blame the PSU, which is ridiculous, as a well-branded 650W should be enough for it; nobody should be asked to buy a new PSU just for this card.
Generally agree re the PSU. A quality 650W is actually enough and has been tested with an i9-10900K OC'd to 5.1GHz (Corsair SF600). The reason NVIDIA states 750W is that some users do not have quality PSUs capable of running at their continuous rated power, so the extra margin for safety is understandable.