Download: Radeon Adrenalin Edition 18.5.2 Drivers

I'm hoping someone else has had this issue. OS: Windows 7. GPUs: 290X, currently an RX 580. With any driver past 18.3.4 I've been having issues clean installing. I've used the same install procedure for a very long time: DDU clean in safe mode, reboot, install the new driver. But anything past 18.3.4, so far, results in a "flickering" monitor (constantly losing signal). I can remote into my computer, and what seems to be happening is that on a clean install of these newer drivers, my memory clocks aren't being set properly. This was happening on my 290X, and I've since replaced it with an RX 580. I've tried resetting Wattman to defaults while remoted in; I can't even set the memory clock manually as it's greyed out. The only solution I have is to install 18.3.4 first (the last driver I never had issues with), then install the new driver over it. While that's not much of a hassle, and installing over what is essentially a clean driver install probably doesn't cause any issues, it is a bit irksome to have to do this every time. Has anyone else experienced this?

TLDR: Can't clean install >18.3.4 drivers because they don't set my GPU clocks properly, which causes a black/flickering/resetting screen. I can remote in, so it's just a display driver issue. I have to install 18.3.4 first, then install over it to get things working properly.
radeon-software-adrenalin-18.5.2-win10-64bit-may31 RELEASE DATE 6/6/2018
mystik:

I'm hoping someone else has had this issue. TLDR Can't clean install >18.3.4 drivers because it doesn't set my GPU clocks properly and causes black/flickering/resetting screen. I can remote in, so it's just a display driver issue. Have to install 18.3.4 first, then install over to get things working properly.
I have had the same problem for some time now and found a temporary fix for the black screen part when uninstalling a driver: switch to HDMI instead of DisplayPort, then uninstall the driver; after you run DDU and install the new driver, switch back to DisplayPort.
IMaysky:

radeon-software-adrenalin-18.5.2-win10-64bit-may31 RELEASE DATE 6/6/2018
Indeed...did they re-release 18.5.2?
WHQL'd I guess?
EDIT: No? Oh well, I can just compare. Maybe the known and fixed issues section got updated.
EDIT: 4th June now instead of 29th May for the packages.
EDIT: Different package on the driver too, but not WHQL? (Doesn't have any changes in the INF, for example, besides the package version difference.)
C0329361.inf and now it's C0329457.inf
1805291729-18.10.19.01-180529a-329361E to 1806041450-18.10.19.01-180529a-329457C
The OpenCL files differ in file size too, as do the amdave .dll files.
EDIT: Actually, several files differ by a few kilobytes. Wonder what was, ahem, screwed up in the initial 18.5.2 that resulted in this re-release.
-> https://pcgamingwiki.com/wiki/Vampyr
My CFG in -> C:\Users\*****\AppData\Local\AVGame\Saved\Config\WindowsNoEditor\engine.ini

[TextureStreaming]
PoolSize=4096
MemoryMargin=20
MemoryLoss=0
HysteresisLimit=30
DropMipLevelsLimit=20
StopIncreasingLimit=20
StopStreamingLimit=12
MinEvictSize=10
MinFudgeFactor=1
Interesting, so they actually stayed with UE3 instead of shifting to UE4 for this game. That would explain some of the issues and limitations, given the age of Unreal Engine 3, even if it's likely been customized somewhat.
JonasBeckman:

Interesting, so they actually remained with UE3 instead of shifting to UE4 for this game, explains some of the issues and limitations then due to the age of Unreal Engine 3 even if it's likely been customized somewhat.
If you are talking about Vampyr, it's UE4, not 3.
Ah, I see. I thought overrides were mainly used for the scalability and usersettings .ini files in Unreal Engine 4, but I guess it must be possible to override the other files too by copying over the same settings and specifying custom values in them. I had to look it up, but it seems the texture cache settings are also still used in Unreal Engine 4 instead of being managed by GPU VRAM amount or similar. I thought using a static value would have been deprecated, since things have changed a lot since Unreal Engine 3 was introduced. Arkham Knight is a good example: it really struggles with texture streaming and memory management on the default settings, and the patch that changed it to load different sections instead means you get hitching, especially when using the car and crossing through multiple areas. That's good to know; it's going to be useful for a few other UE4 games too then. πŸ™‚

EDIT: Still no info on what the re-release of 18.5.2 changed. Some speculate it might have fixed the version problem, but the INF wasn't updated and that's where this is stored, so it must have been something else. It does carry both the AMD and Microsoft signatures now, but the name and location of the driver didn't change, nor did the INF. WHQL drivers usually retain the on-die GPU install information whereas beta drivers do not, though this could just have been overlooked, I guess. 18.6.1 for next week perhaps; well, it could be any time, but AMD seems to usually stick to Wednesday driver updates unless it's some hotfix or support for an upcoming game they've worked with.
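On the texture streaming point above: in unmodified UE4 builds the streaming pool is more commonly capped through console variables in Engine.ini rather than the old [TextureStreaming] block. A minimal sketch of that pattern, assuming the game reads cvars from a [SystemSettings] section the way stock UE4 does (not verified for Vampyr specifically, and the value is only an example to tune to your card's VRAM):

[SystemSettings]
; texture streaming pool size in MB; pick a value that fits your GPU's VRAM
r.Streaming.PoolSize=3072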
Vampyr is working like a charm (64-70 fps with Chill πŸ˜‰). But the game is not some mind-blowing FX showcase IMO, just another good game πŸ˜€
JonasBeckman:

Afterburner would likely override Wattman's settings by forcing an absolute value for fan speed, depending on how you set that up. I am, however, surprised it affected the OSD, but it might have been querying Wattman and not the override that Afterburner can set for a custom fan curve or enforced minimum speed. If not set up carefully this can also end up as a constant speed, which can be a problem if the GPU starts heating up and the fan doesn't accelerate to max speed to cool it down. (As opposed to setting target GPU temps and max speeds, though this varies depending on GPU model for Wattman.) Newer GPUs should throttle after certain thresholds, but that might not be fast enough to bring temps down, so monitoring the various sensor points comes in real handy if you're trying to set a lower fan speed or a less aggressive curve. πŸ™‚

And yes, the values in Afterburner apply even after shutting the program down; they're retained in memory, to put it simply, so a reboot clears them, or disabling the option does. Unwinder has likely explained how this works in the program's forums in some of the threads for new releases, since I'd imagine it pops up as a question every now and again, same as how the injection works and the RivaTuner component, if installed, taking a bit of time to unload from processes. On the other hand, this allows controlling speeds and other values more completely and with fewer limits than Wattman, though BIOS limits would still apply. (Allowing, for example, setting a fan speed for Vega GPUs without it being automatic until it hits the power states the user can adjust.)

And the release notes for the latest drivers, plus what users are posting about AMD's OSD, make it clear it's not entirely bug free either; it's prone to errors and glitches, particularly for recording, but hopefully that's something AMD can fix up sooner rather than later in the next few driver releases, whenever those come out.
Thanks for the info JonasBeckman, I appreciate that. Everything seems to be fixed. I have to keep the Target Temp at 85; anything lower downclocks the GPU. That's in both AB and Wattman, whenever I just want to use Wattman. But I thought I could lower it in previous driver versions without affecting the GPU clock rate. Oh well. At least everything seems to be back to the way it was, in working order. I wonder if Radeon Settings needs HPET to be left on default? I know that AMD's Ryzen Master software needs HPET when it's used for the first time. I wonder if that's the case whenever we install new drivers? Just a thought πŸ™‚.
85 degrees? Yeah, the GPU and HBM would definitely start throttling once it hits those thermals; I think the HBM throttles even sooner actually, since it's more sensitive. I assume that's one key reason water cooling is the default option for the full Fury X model, while for the air-cooled versions and the Fury the custom models often extend the length of the card to dissipate heat better. Even those can get close in a few titles, particularly during the hotter summer months, with ambient room temperature affecting cooling capacity.

For the Tri-X I used 80 degrees as the max temperature, but I also had a pretty high default fan speed, since the Tri-X would sit around 20% until it hit the thermal target and then boost to near max, which gets annoying, rather than keeping a slightly louder constant speed and holding temps around 70-75 degrees. (Dust buildup affects things a bit, but a bit of cleaning usually removes the biggest offenders.) Long-term usage could also see the thermal paste degrade, but I'm not very good at tearing down the cooler and redoing that stuff, plus the GPU die is pretty sensitive too. πŸ™‚

Idle and load temps are definitely on the higher end of the scale now, but this is an older house, or apartment actually, and the insulation isn't the best, so while winters are awesome for temps, summers are pretty damn hot even with multiple fans and trying to funnel the hot air outside, ha ha. It's around 30 degrees Celsius at the moment, not the worst it's been, but that's going to affect temperatures a bit. (Metal and stone, lots of windows facing the sun as it rises and then sets, so that builds heat nicely. The AC definitely has its work cut out.) (I heard they even had to close a few older schools and other buildings for a few days due to the heat wave these last two weeks, but it's getting a bit windier, which I guess means we're hitting a thunderstorm sooner or later. Hooray. Or not, since the power cabling is pretty poor too, so it takes very little for an outage to occur, and that can last for a few hours at worst. πŸ˜€ )
@JonasBeckman Yeah, I can understand what you are going through. But IMO the Wattman temperature setting should be renamed to something like AMD's old Cool'n'Quiet. Because even if you have Power Efficiency off in Radeon Settings, you force it on when you change the target temp from the default 85 to whatever you want it to be. I was using a 40C target temp, so while playing BF1 the GPU clocks would constantly fluctuate but held a rock-solid 40C. That's great for games like Rocket League, etc., but for BF1 (and given I didn't know that's what it did until now) it's a setting I've decided I don't want to change. Strange too, because I've been lowering temps in Wattman since last year; it didn't do that until recently.
Hmm, so it actually tries to hold that temperature range instead of just scaling fan speed according to what the user has specified; that's good to know, and different from what I had assumed. πŸ™‚ Plus it's often based on a scale or curve from the BIOS, and that's why there are a lot of tweaks for adjusting how fast the fan ramps up in response to temperature changes. Wattman and its changes even broke a few custom setups, like the zero-fan function Sapphire uses at idle temps, where one of the fans can turn off entirely.

Fury and older GPUs were also moved from OverDrive6 to OverDriveN, and that has been a bit hit and miss, though this also holds true for Polaris and Vega; layout and naming are a bit confusing too, and overall a few bug fixes couldn't hurt here. For the Fury GPU the fan settings in Wattman can be off by a little bit (4-6% was it?), and this also holds true for other settings such as voltage, which can impact tuning a bit when you can't say for sure what the actual value will be compared to what's been set. Then you have bugs with certain power states, or spiking where it can drop to a lower state momentarily, which at worst can lead to instability if it drops voltage, unless you override a specific state as the minimum one allowed.

And come to think of it, I believe the Fury GPU only has a single target temp; later cards have a target and a max temperature as two separate sliders, allowing for some more fine-tuning. (Though with Vega you can also only adjust the last few power states for 3D clock speeds, which also affect fan speeds, so settings for those, and the minimum and target fan speeds, only take effect when the GPU is under enough load to hit those states.)
@JonasBeckman I found one little snag. Once you use MSI AB to set a fan curve, Wattman can no longer control fan speed. I haven't figured out how to get Wattman to control fan speed again, even after you exit MSI AB. MSI AB also only goes down to 25% fan speed; there's no option for 0%. The fan curve graph goes to 0, but on the main GUI the lowest the slider will go is 25%. Hmm, not sure what to do with that.

Edit: Important: keep the fan curve update period the same as the monitoring polling rate. When I used different values, the AB hardware monitor GUI stopped working; use the same value, e.g. 2000 for both, and everything is fine.
Cnstntgrdnr:

Would someone please be kind enough to let me know if the power efficiency randomly turning itself off bug (for Fury cards) has been fixed? *assuming anyone here runs their card with power efficiency ON πŸ˜›
Why not try it and see?
Cnstntgrdnr:

Would someone please be kind enough to let me know if the power efficiency randomly turning itself off bug (for Fury cards) has been fixed? *assuming anyone here runs their card with power efficiency ON πŸ˜›
I have this set to OFF, like the default (I tested it, and it can cause severe performance problems). I prefer an FPS cap + Chill (mostly I'm at 1008/560). I wonder when I'll need to give my Fury its default 1050/560 πŸ˜€ or my stable OC of 1120/560 @ only 1218 mV.
NvidiaFreak650:

I have had the same problem for some time now and found a temporary fix for the black screen part when uninstalling a driver: switch to HDMI instead of DisplayPort, then uninstall the driver; after you run DDU and install the new driver, switch back to DisplayPort.
It's a different issue that I'm having. It's not during the uninstall process that I "lose" video; it's after the new drivers are installed. The new driver isn't setting the memclock properly and is causing constant signal loss. The only way I can combat this is to install over the 18.3.4 drivers. In the same vein, I have tried using different outputs (different DP and HDMI ports) and it didn't fix anything for me.