DirectStorage 1.1 Coming Soon

schmidtbag:

While I think it's good to have DS as a means to fine-tune games, I think it's best if there's a more general approach that can work with any game, where it basically just acts as an on-AIB cache for recent game assets. Same kind of idea as Steam's pre-compiled shader cache - it improves game performance without depending on either the game or driver developers. Anything that can yield a performance boost that works automatically is a very worthwhile feature, even if it isn't optimized.
Yep. "it just works" always trumps "here is the huge list of requirements to use it". Imagine if SLI was significantly more integrated meaning that your OS, drivers and games had no way to differentiate between 2 GFX cards and 1 card with double the resources.
@mbk1969 I see, said the blind man. 🙂
nosirrahx:

Imagine if SLI had been significantly more integrated, meaning that your OS, drivers and games had no way to differentiate between 2 GFX cards and 1 card with double the resources.
That's where AFR really came in handy, because you could just force-enable it and get pretty good results so long as your GPUs could maintain the same clock speed consistently. At least I did back when Crossfire was still a thing. With modern PCIe specs and rBAR, mGPU setups should be a lot easier. The big exception is how GPUs vary so much in boost clocks, even if you buy 2 of the exact same model. Since AMD is supposedly releasing an MCM GPU, I wonder if that means there will now be the option to make modularly scalable GPU configurations again.
Any day now... any day...
asturur:

To make DirectStorage look better, they added a bug to the Windows 11 file copy so that normal copying looks slower.
you have no idea what you're on about.
ivanosky:

This is the traditional way that has been used by game developers. The problem is that decompressing on the CPU is slower and keeps the CPU busy decompressing assets when it could be doing other things, like physics, NPC AI, etc. This is why Spider-Man on PC has high CPU load when traversing the city and needs a significantly more powerful CPU than the PS5's to maintain 60 FPS: on PS5, the game heavily uses the console's bespoke decompression chip instead of the CPU. Decompressing on the GPU should be orders of magnitude faster thanks to its highly parallel architecture, and since it's so fast it should only keep the GPU busy for a very short time (on the order of milliseconds), thus not affecting graphics performance. It should also reduce games' heavy use of system RAM, since data can move directly into the GPU's VRAM instead of being loaded into system RAM and then copied to VRAM after decompression. (A minimal DirectStorage 1.1 sketch of this flow follows after this exchange.)
Horus-Anhur:

Because a GPU is much faster than a CPU at this task.
Yes, I can understand the "traditional" MS Xbox mainstream way of thinking, and I like these pretty neat and funny "synthetic" charts that no one can check. After all, you have to sell your products, and there's nothing wrong with that... But from a HEDT point of view, I want to add this quick, home-made diagram "just for fun":
RTXIO_NIC.jpg
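To make the GPU-decompression path described in the quoted explanation above a bit more concrete, here is a minimal sketch of a DirectStorage 1.1 request that reads a GDeflate-compressed asset from disk and has the runtime decompress it on the GPU straight into a VRAM buffer. It follows the shape of Microsoft's public samples, but the file name, the size parameters and the blocking wait are illustrative placeholders rather than production code.

// Minimal DirectStorage 1.1 GPU-decompression sketch (C++ / D3D12).
// Requires dstorage.h and dstorage.lib from the DirectStorage SDK plus its
// redistributable DLLs. The caller provides an existing device and an
// already-created destination buffer; sizes come from the asset's metadata.
#include <windows.h>
#include <d3d12.h>
#include <dstorage.h>
#include <winrt/base.h>   // winrt::com_ptr, winrt::check_hresult

using winrt::com_ptr;

void LoadCompressedAsset(ID3D12Device* device,
                         ID3D12Resource* destBuffer,
                         uint32_t compressedSize,
                         uint32_t uncompressedSize)
{
    com_ptr<IDStorageFactory> factory;
    winrt::check_hresult(DStorageGetFactory(IID_PPV_ARGS(factory.put())));

    // Open the (hypothetical) GDeflate-compressed asset file.
    com_ptr<IDStorageFile> file;
    winrt::check_hresult(factory->OpenFile(L"asset.gdeflate", IID_PPV_ARGS(file.put())));

    // A queue that reads from files and targets GPU memory on this device.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    com_ptr<IDStorageQueue> queue;
    winrt::check_hresult(factory->CreateQueue(&queueDesc, IID_PPV_ARGS(queue.put())));

    // One request: read the compressed bytes, decompress them on the GPU
    // (GDeflate is the format added in DirectStorage 1.1) and write the
    // result directly into the destination buffer in VRAM.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat   = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source          = file.get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = compressedSize;
    request.UncompressedSize            = uncompressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;
    queue->EnqueueRequest(&request);

    // A fence tells us when the decompressed data is resident in VRAM.
    com_ptr<ID3D12Fence> fence;
    winrt::check_hresult(device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(fence.put())));
    HANDLE done = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    winrt::check_hresult(fence->SetEventOnCompletion(1, done));
    queue->EnqueueSignal(fence.get(), 1);

    queue->Submit();                     // kick off the batch
    WaitForSingleObject(done, INFINITE); // a real engine would not block here
    CloseHandle(done);
}

The key field is Options.CompressionFormat: with GDeflate the runtime schedules the inflate work on the GPU instead of the CPU, which is exactly the difference the quoted post describes; the CPU only submits the request.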
I've been saying it since the next-gen (I guess current-gen now) consoles came out: PC is officially behind, and somehow that hasn't changed in 2 years. PS5 loading times are about as close to instant as I could want, and that just makes PC feel slow and sluggish, no matter if you use the best PCIe 4.0 NVMe drive. Microsoft should have been firmer with Windows 11 and allowed only SSDs to run it; that would have helped developers further down the line build games around it. But no, they still let in old, slow HDDs that should have died out years ago.
It would be interesting to see whether 7-Zip could take advantage of DirectStorage 1.1 capabilities somehow. GPU-accelerated 7-Zip sounds like an interesting feat.
Ricepudding:

I've been saying it since the next-gen (I guess current-gen now) consoles came out: PC is officially behind, and somehow that hasn't changed in 2 years. PS5 loading times are about as close to instant as I could want, and that just makes PC feel slow and sluggish, no matter if you use the best PCIe 4.0 NVMe drive. Microsoft should have been firmer with Windows 11 and allowed only SSDs to run it; that would have helped developers further down the line build games around it. But no, they still let in old, slow HDDs that should have died out years ago.
I'd still much rather wait that extra OMG 10 secs to play the better version of a game.
PapaJohn:

I'd still much rather wait that extra OMG 10 secs to play the better version of a game.
Depends on the game - some have far more loading screens than others, and those 10 seconds can add up really fast. I partly agree that it does tend to be the better version in general; it's just amusing that PC, which has pushed the boundary for so long, is now getting held up by poor software. We'll have to see whether all games can make it to PC. That said, I'm curious whether Ratchet & Clank: Rift Apart would even work on PC without something like DirectStorage, or whether you'd just get loading screens.
Astyanax:

you have no idea what you're on about.
To both of the dumbasses who took my post seriously: it's just funny that while they are trying to release the new frontier of storage, they are still able to cripple disk operations in the OS with bugs. That was all.
Ricepudding:

Depends on the game - some have far more loading screens than others, and those 10 seconds can add up really fast. I partly agree that it does tend to be the better version in general; it's just amusing that PC, which has pushed the boundary for so long, is now getting held up by poor software. We'll have to see whether all games can make it to PC. That said, I'm curious whether Ratchet & Clank: Rift Apart would even work on PC without something like DirectStorage, or whether you'd just get loading screens.
PC pushed the boundaries for so long because its hardware was advancing faster, while consoles ran their own custom hardware that was hard to keep up with or even compare against. Consoles then dropped the custom hardware and adopted PC hardware, but kept slightly better and leaner software. So now the poor software on PC is being shown up for what it is, but we always had it.
That and UE 4 shader compilation issues...
schmidtbag:

That's where AFR really came in handy, because you could just force-enable it and get pretty good results so long as your GPUs could maintain the same clock speed consistently. At least I did back when Crossfire was still a thing. With modern PCIe specs and rBAR, mGPU setups should be a lot easier. The big exception is how GPUs vary so much in boost clocks, even if you buy 2 of the exact same model. Since AMD is supposedly releasing an MCM GPU, I wonder if that means there will now be the option to make modularly scalable GPU configurations again.
Even if they could just come up with a way to leverage the iGPU to do post-processing image-quality improvements with no dGPU performance hit, that would be pretty cool.
nosirrahx:

Even if they could just come up with a way to leverage the iGPU to do post-processing image-quality improvements with no dGPU performance hit, that would be pretty cool.
I agree - I've always been annoyed by the wasted potential of an iGPU. In my perfect world, I would have an Nvidia iGPU and an AMD dGPU: Nvidia has a lot more game-focused technologies that require a finite amount of processing power, while AMD tends to offer better long-term performance for the price. iGPUs could have been great for pre- and post-processing tasks. For example, they would have been a perfect fit for PhysX, since you want to calculate what the bodies/particles are doing before rendering them. Or they could be used for things like anti-aliasing, since all the heavy lifting of the rendering has already been done. There are of course latency issues, but: 1. If you're using an iGPU as an accessory processor, chances are you're on a budget, and sometimes getting higher FPS at the price of worse latency is worth it, depending on the game. 2. Ideally, the iGPU will be operating in parallel with the dGPU, so the added latency ought to be minimal. 3. In the event the iGPU is doing post-processing, it should be the one driving the display, since that means the image doesn't have to be transferred back to the dGPU.
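On the iGPU-as-helper idea: D3D12 at least makes it straightforward to see both GPUs from one process, which is the prerequisite for any such split. Below is a small, speculative sketch that enumerates the adapters and uses the UMA (shared system memory) architecture query as a rough proxy for "integrated"; the heuristic is my own simplification, not an established pairing API.

// Enumerate GPUs and use D3D12's architecture query to tell the integrated
// (UMA) adapter from the discrete one. Link against dxgi.lib and d3d12.lib.
#include <windows.h>
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, adapter.ReleaseAndGetAddressOf()) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        // A device is needed to query the adapter's memory architecture.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            continue;

        D3D12_FEATURE_DATA_ARCHITECTURE arch{};
        device->CheckFeatureSupport(D3D12_FEATURE_ARCHITECTURE, &arch, sizeof(arch));

        // UMA is a reasonable, if imperfect, sign of an integrated GPU.
        wprintf(L"%s -> %s\n", desc.Description,
                arch.UMA ? L"integrated (UMA)" : L"discrete");
    }
    return 0;
}

Actually splitting work across the two (say, PhysX-style simulation on one and post-processing on the other) is the hard part; D3D12's explicit multi-adapter support allows it, but engines have to opt in per effect.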