BitFenix Launches Ronin PC Chassis

So I've been prepping a frame-pacing article for and with AMD's 13.8 B1 driver, and they asked me to release the article once the driver goes live on August 1st, to give other editors a chance to test the driver as well. I'm wrong how?
As Roy tweeted, they are releasing them shortly, unless he's wrong :P
No. You might see improvement with single-GPU setups once the GCN memory manager is updated. Funny, I remember AMD promising a new driver "soon" when it was an issue 6 months ago... Now I haven't seen a single comment about it.
What's with everybody talking about the GCN memory management? Did AMD even confirm that this is in the works?
Waiting for the nerd rage when people start shouting that they cannot notice a difference! (I will probably be one of them) 🙂
Do you use VSync or cap FPS?
FPS cap.
Then you won't notice a difference if you are maintaining your FPS at that FPS cap.
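Purely as an illustration of why (not anything from either vendor's driver): a minimal frame-limiter sketch, assuming the GPU always finishes well inside the cap interval, already presents frames on an even cadence, so a driver-side pacing fix has little visible work left to do.

```python
import time

def run_capped(render_frame, fps_cap=60.0, frames=120):
    """Toy frame limiter: render, then sleep until the next cap deadline."""
    interval = 1.0 / fps_cap
    next_deadline = time.perf_counter()
    for _ in range(frames):
        render_frame()                      # stand-in for the real render/present call
        next_deadline += interval
        sleep_for = next_deadline - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)           # wait out the remainder of the interval

# Example: a "render" that takes ~5 ms, well under the ~16.7 ms cap interval,
# so every frame goes out on the same even cadence regardless of driver pacing.
run_capped(lambda: time.sleep(0.005), fps_cap=60.0, frames=30)
```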
It would be nice if AMD could finally match nVidia and render frames smoothly without longer delays.
I honestly don't know who to believe on this subject. Hardware vs. software doesn't matter AFAIK; the method should largely be the same. You also have to question this "hardware" idea, as that hardware has existed since G80, according to Nvidia. Why then has microstutter only been almost solved with Kepler, rather than in the 3 generations (and a refresh) before Kepler?

Nvidia's Tom Petersen has stated that their frame metering decreases input lag compared to an unmetered setup in this video, at around the 43-minute mark AFAIK. He explains, with a drawing, that at 60 FPS an unmetered setup has between 1 and 2 frames of latency. A metered setup, since it moves the second frame to the middle of the interval, reduces this latency to 1-1.5 frames. If a metered frame starts rendering later than an unmetered frame would, which is what is supposed to happen, this means that input is still received even after the frame from the first card is displayed. This is what causes the reduction in latency. It makes perfect sense. He also goes on to state that they don't offer an option in the driver to turn frame metering off, since there is no point (advantage) in doing so.

AMD's Robert Hallock, on the other hand, has stated in this thread (I believe your username is Locky in that thread?) that frame pacing adds a slight delay (less than 10 ms), but only gives the explanation that:
It is always the case that modulating frame pacing introduces a small input delay at the user end. You smooth out frame pacing by establishing a consistent cadence between when the frame is rendered, and when the DirectX present() call puts that frame on the screen.
His argument is that you are adding a slight delay between when the frame is rendered and when it is displayed. I have an issue with this argument for the following reason: suppose, in the worst case of microstutter, that both cards are rendering the same scene at the same time, as fast as possible. This does indeed happen; you'll notice in several articles that (using Tom's Hardware's terminology here), in several games, "Practical" FPS is around half the "Hardware" FPS, implying the worst-case microstutter scenario. Establishing a cadence here between when the frame is rendered and when it is displayed will STILL give you a "Practical" FPS equal to around half the "Hardware" FPS, since you are now practically showing almost the same frame twice. So far, I believe Nvidia's explanation.
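To make that worst case concrete, here is a toy AFR timeline (my own simplification, not either vendor's actual pipeline). Two GPUs each take ~33.3 ms per frame; if the second one starts almost in lockstep with the first, the output collapses into pairs of near-duplicate frames (the half "Practical" FPS described above), whereas offsetting its render start by half a frame gives an even ~16.7 ms cadence of genuinely new frames.

```python
# Toy AFR timeline (my own simplification, not either vendor's pipeline):
# two GPUs, each needing `gpu_frame_ms` to render a frame, alternate frames.
gpu_frame_ms = 33.3   # per-GPU render time; two GPUs => nominal 60 FPS output

def present_times(offset_ms, frames=8):
    """Completion times when GPU 1 starts `offset_ms` after GPU 0."""
    times = []
    for i in range(frames):
        gpu = i % 2
        start = (i // 2) * gpu_frame_ms + (offset_ms if gpu == 1 else 0.0)
        times.append(start + gpu_frame_ms)
    return sorted(times)

def intervals(times):
    return [round(b - a, 1) for a, b in zip(times, times[1:])]

# Near-lockstep rendering: pairs of almost identical frames, ~1 ms apart.
print("unmetered:", intervals(present_times(offset_ms=1.0)))
# Render start offset by half a frame: an even ~16.7 ms cadence.
print("metered:  ", intervals(present_times(offset_ms=gpu_frame_ms / 2)))
```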
Until AFR is gotten rid of!
As Roy tweeted, they are releasing them shortly, unless he's wrong :P
Clearly Roy has a different definition of "shortly", as he said that 11 hours ago.
Then you won't notice a difference if you are maintaining your FPS at that FPS cap. I honestly don't know who to believe on this subject. *snip* So far, I believe Nvidia's explanation.
In your worst-case scenario for AMD's position, you are getting a bit mixed up. To keep the interval consistent, the latency of the frame must be altered.

http://images.anandtech.com/doci/6857/Microstutter_575px.png

In this chart, if we take the first frames (31 ms, 19 ms, 34 ms) and, hypothetically, want them presented at a consistent ~30 ms, we must increase the latency of the second frame, thus adding the "10 ms delay" that was recently quoted. You still get the same FPS; just now, instead of that third frame being "dropped" (or partially drawn because of the quick-to-display 4th frame at 19 ms) and creating the lower "Practical" FPS, the output is consistent and the "Hardware" FPS can be properly displayed in full.

Watching the video, here is a photo that will sum it up:

http://www.pcper.com/files/imagecache/article_max_width/review/2013-04-22/Vsync-3.png

AMD being the top, Nvidia the bottom. As a note, ignore the actual number of "frames" in the top row of each; it will seem that AMD's FPS would be artificially higher, but that is not the case, just a poor representation of what is actually happening. Refer above for why FPS will not change.

Now, having a look, AMD was clearly shooting for minimal input lag, just shoving frames into the pipeline and not caring when one or another was displayed. To combat this, the metered style "paces" these frames. To get these frames placed where you want in the pipe, you must play with the latency of when that second, third, fourth... frame is started/sent. If the user input came right after the first image is displayed but before the delayed start of the 2nd frame (since we are pacing it to begin at a controlled interval), you will get induced input lag.

Whereas Nvidia's video demonstration only specified latency in frames (0.5-1.5 frames of latency) and not as the time between user input and frame display, it's clear they (AMD and Nvidia) are talking about the same thing in different respects. AMD could easily say their latency is 0.5-1.5 frames as well (once the metering is put in effect), but chose to actually state the inherent problem with metering: the increase in input latency forced by the timing of frame display.

Hope any of this made sense. Both are talking about two different outcomes of the same process; AMD chose the bad side effect, Nvidia the good. Not surprising, since they are both justifying their previous position on the matter.
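A rough sketch of that bookkeeping, using the same 31/19/34 ms frame times. The ~30 ms target and the "hold each frame until its paced slot" rule are assumptions made for illustration, not AMD's actual algorithm.

```python
# Rough sketch of the pacing arithmetic described above. The ~30 ms target and
# the "hold until the next paced slot" rule are illustrative assumptions only.
frame_ms = [31, 19, 34]
target = 30.0                          # desired cadence between displayed frames

ready = []                             # time each frame finishes rendering
t = 0.0
for ft in frame_ms:
    t += ft
    ready.append(t)                    # -> [31, 50, 84]

display = [ready[0]]                   # first frame goes out as soon as it's done
for r in ready[1:]:
    # hold the frame until its paced slot, or show it immediately if it is late
    display.append(max(r, display[-1] + target))

for r, d in zip(ready, display):
    print(f"ready {r:5.1f} ms  shown {d:5.1f} ms  added delay {d - r:4.1f} ms")
# Frame 2 is held ~11 ms (the "10 ms delay" quoted above), frame 3 ~7 ms;
# the average frame rate over the run is essentially unchanged.
```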
Arghhhhhhhhhhhh the suspense is killing me arrghhhhhhhh
There are still other disadvantages to having a multi-card setup, as I'm sure you know: power draw, temps and noise, just to name a few. But... you never know.
It's performance per watt that should be taken as the metric. If you're getting 50% scaling, then your cards are drawing roughly 50% more power than a single GPU. With the following generations you'd maybe have a card that performs as fast at half the power draw, but then again, two of that card would perform twice as fast at the same power draw. It's when you have an ancient multi-GPU setup that this starts becoming a problem. Temps and noise, yeah, they go up. Not if you have a water loop, though.
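Putting round, made-up numbers on the perf-per-watt point (illustrative figures, not measurements):

```python
# Illustrative figures only: a hypothetical 200 W card doing 60 FPS, plus a
# second identical card giving 50% scaling (90 FPS total).
def perf_per_watt(fps, watts):
    return fps / watts

cases = [
    ("single card",       60.0, 200.0),
    ("dual, +50% power",  90.0, 300.0),   # the ~50% extra draw suggested above
    ("dual, +100% power", 90.0, 400.0),   # a full doubling of power draw
]
for label, fps, watts in cases:
    print(f"{label:18s} {fps:5.1f} FPS / {watts:5.0f} W = {perf_per_watt(fps, watts):.3f} FPS/W")
```

With the 50%-extra-draw assumption the FPS/W stays flat at 0.30; with a full doubling of draw it falls to 0.225, which is where the perf-per-watt argument bites.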
In your worst-case scenario for AMD's position, you are getting a bit mixed up. To keep the interval consistent, the latency of the frame must be altered. *snip* In this chart, if we take the first frames (31 ms, 19 ms, 34 ms) and, hypothetically, want them presented at a consistent ~30 ms, we must increase the latency of the second frame, thus adding the "10 ms delay" that was recently quoted. You still get the same FPS; just now, instead of that third frame being "dropped" (or partially drawn because of the quick-to-display 4th frame at 19 ms) and creating the lower "Practical" FPS, the output is consistent and the "Hardware" FPS can be properly displayed in full. Watching the video, here is a photo that will sum it up: *snip* AMD being the top, Nvidia the bottom. As a note, ignore the actual number of "frames" in the top row of each; it will seem that AMD's FPS would be artificially higher, but that is not the case, just a poor representation of what is actually happening. Refer above for why FPS will not change.
I know all of this; it's "what's being pushed" that's the question. AMD are implying that the frame is still being rendered at the same time but presented with a delay, which doesn't make sense, since the worst-case scenario I have described might mean that frames are being displayed at perfect intervals but still rendered as fast as possible. Take VSync as an example: with VSync on, the output is a perfect 60 FPS, smooth as silk. The second card is actually rendering with a delay now (a delay that would be present naturally if frames were perfectly paced). This even happens in cases where a game/benchmark microstutters so badly that 60 FPS looks like 30 FPS (Heaven Benchmark). If the frames were only delayed at display time to line up with VSync, the bench would still look like 30 FPS, since almost the exact same frame is being displayed twice. If the frames were rendered the way the frame-metering illustration shows, then it would look like a perfect 60 FPS, as expected.
Now, having a look, AMD was clearly shooting for minimal input lag, just shoving frames into the pipeline and not caring when one or another was displayed. To combat this, the metered style "paces" these frames. To get these frames placed where you want in the pipe, you must play with the latency of when that second, third, fourth... frame is started/sent. If the user input came right after the first image is displayed but before the delayed start of the 2nd frame (since we are pacing it to begin at a controlled interval), you will get induced input lag.
That's just AMD's excuse. If I wanted minimal input lag and had to face microstutter as a result, I'd just turn off the second GPU and get EXACTLY the same input lag and the same practical FPS. The second GPU here is useless when microstutter is at its worst case; it's just drawing additional power. Plus, a perfectly working multi-GPU setup is one where frames are expected to take the same time to render on each GPU, but they are displayed in succession. That delay is not one that could be cut out without throwing the whole multi-GPU AFR idea down the drain. Here's what Nvidia says about this:
One additional detail worth noting is that while input latency is the same on SLI AFR configurations as it is on single GPU configurations (each frame will take as long to be complete), inter-frame latency is reduced due to parallelism, so the application appears more responsive. For example, if a typical frame takes 30ms to render, in a 2-GPU AFR configuration with perfect scaling the latency between those frames is only 15ms. Thus, increasing the number of frames buffered in SLI does not increase input lag.
That's the whole point of multi-GPU: reducing inter-frame latency through parallelism (i.e., increasing FPS). Input latency is the same if you're simply doubling your FPS and reducing inter-frame latency. However, if you run a single card at 60 FPS vs. a dual-GPU setup at 60 FPS, the latter will have additional input latency, since the frame times per card are twice as long. The worst case for a single GPU is 1-2 frames; that becomes 1-1.5 frames that are each twice as long, i.e. 2-3 frames (measured in single-GPU frame times). That's the additional frame of input latency.
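Converting those figures to milliseconds at a 60 FPS output (treating "frames of latency" exactly as described above, which is itself a simplification):

```python
# Worked numbers for the latency comparison above, assuming a 60 FPS output
# and taking the "frames of latency" figures at face value.
ms_per_frame = 1000.0 / 60.0                       # ~16.67 ms at 60 FPS

# Single GPU at 60 FPS: worst case 1-2 frames of input latency.
single_lat = (1 * ms_per_frame, 2 * ms_per_frame)

# Dual-GPU AFR at the same 60 FPS output: each GPU takes twice as long per
# frame, and metering puts latency at 1-1.5 of those double-length frames,
# i.e. 2-3 single-GPU frames.
dual_lat = (1.0 * 2 * ms_per_frame, 1.5 * 2 * ms_per_frame)

print(f"single GPU @ 60 FPS: {single_lat[0]:.1f}-{single_lat[1]:.1f} ms")
print(f"AFR pair   @ 60 FPS: {dual_lat[0]:.1f}-{dual_lat[1]:.1f} ms")
```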
Whereas Nvidia's video demonstration only specified latency in frames (0.5-1.5 frames of latency) and not as the time between user input and frame display, it's clear they (AMD and Nvidia) are talking about the same thing in different respects. AMD could easily say their latency is 0.5-1.5 frames as well (once the metering is put in effect), but chose to actually state the inherent problem with metering: the increase in input latency forced by the timing of frame display.
They're definitely talking about the same thing. However, Nvidia have specified that they are talking about 60 FPS, so one could easily multiply frames by 16.67 ms/frame and get the input latency results in milliseconds. All I'm honestly confused about is that AMD are implying that frames are rendered at the same time as in an unmetered setup but only PRESENTED with a delay, while Nvidia are implying that they are RENDERED with a delay relative to the first GPU. As I understand it, that delay does not increase input lag, since it allows additional input to be taken in, while in the unmetered setup the second GPU is mirroring the first and practically doing nothing.
Hope any of this made sense. Both are talking about two different outcomes of the same process; AMD chose the bad side effect, Nvidia the good. Not surprising, since they are both justifying their previous position on the matter.
Definitely, that is what they are doing. Thank you for your input! Would be glad if you could chime in again.

EDIT: To clarify something, my question is this: how could you render now what's supposed to happen a few milliseconds later? How would you predict my input? Is the second GPU actually rendering the frame at exactly the same animation timestep it should be rendering at? How would that be possible if it has yet to receive my newest keyboard and mouse input, which happened JUST before that timestep was reached?
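One way to picture the question (a generic game-loop toy, not either vendor's driver behaviour): a frame can only reflect input that was sampled when its simulation step began, so a later render start naturally picks up later input.

```python
# Toy illustration of the question above: a frame can only reflect input that
# was sampled when its simulation/render step began. This is a generic
# game-loop simplification, not either vendor's driver behaviour.

input_events = []                 # (timestamp_ms, event) pairs from "the user"

def latest_input(now_ms):
    """Return the most recent event at or before now_ms (None if nothing yet)."""
    latest = None
    for t, event in input_events:
        if t <= now_ms:
            latest = event
    return latest

# The user presses a key 10 ms into the run.
input_events.append((10.0, "turn left"))

# Frame N+1 starts almost immediately in the unmetered lockstep case, or about
# half the per-GPU frame time later (~16.7 ms when an AFR pair outputs 60 FPS)
# when its render start is metered.
for label, start_ms in [("unmetered N+1 (starts at  1 ms)", 1.0),
                        ("metered   N+1 (starts at 17 ms)", 16.7)]:
    print(f"{label}: simulation sees -> {latest_input(start_ms)}")
```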
I'm just waiting for Hilbert's take on them, and an updated set for us single-card users. Not that I need any new drivers, as everything's running great; it's just been a while since I installed some 😀
They gotta be kidding, right? Lol
It's alright to cry 😀
Lmfao, I knew it 😀
The nerdrage will be strong with this one.
I'm surprised Asder hasn't got his grubby mitts on it already....