BitFenix Launches Ronin PC Chassis
Rich_Guy
yasamoka
Newbie2012
Waiting for the nerd rage when people start shouting that they cannot notice a difference! (I will probably be one of them) 🙂
yasamoka
Newbie2012
yasamoka
It's in the video, around the 43-minute mark, AFAIK.
He explains, through a drawing and assuming 60FPS, that an unmetered setup has between 1 and 2 frames of latency. A metered setup, since it moves the second frame to the middle of the interval, reduces this latency to 1-1.5 frames.
If a metered frame starts rendering later than an unmetered frame would, which is what is supposed to happen, then input received even after the frame from the first card is displayed can still make it into the metered frame. This is what causes the reduction in latency. It makes perfect sense.
He also goes on to state that they don't offer an option in the driver to turn frame-metering off since there is no point (advantage) in doing so.
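To put rough numbers on those figures (my own back-of-the-envelope arithmetic at 60FPS, not something shown in the video):

```python
# Converting the stated latency figures to milliseconds at 60FPS
# (my own arithmetic, not taken from the video).

fps = 60
frame_ms = 1000 / fps  # ~16.67 ms per displayed frame

unmetered = (1 * frame_ms, 2 * frame_ms)    # 1-2 frames of latency
metered   = (1 * frame_ms, 1.5 * frame_ms)  # 1-1.5 frames of latency

print(f"unmetered: {unmetered[0]:.1f}-{unmetered[1]:.1f} ms")  # 16.7-33.3 ms
print(f"metered:   {metered[0]:.1f}-{metered[1]:.1f} ms")      # 16.7-25.0 ms
```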
AMD's Robert Hallock, on the other hand, has stated in this thread (I believe your username is Locky in that thread?) that frame pacing adds a slight delay (less than 10ms). His argument is that you are adding a slight delay between when the frame is rendered and when it is displayed. I have an issue with this argument for the following reason:
Suppose, in the worst case of microstutter, that both cards are rendering the same scene at the same time, as fast as possible. This does happen: you'll notice in several articles (using Tom's Hardware's terminology here) that in several games the "Practical" FPS is around half the "Hardware" FPS, implying the worst-case microstutter scenario.
Establishing a cadence here between when the frame is rendered and when it is displayed will STILL give you a "Practical" FPS equal to around half the "Hardware" FPS since you are now practically showing almost the same frame twice.
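As a rough illustration of that worst case (a simplified sketch of my own, not Tom's Hardware's actual methodology): frames that arrive almost on top of each other add nothing visible, so counting only frames that arrive after a meaningful gap roughly halves the figure.

```python
# Simplified sketch (mine, not Tom's Hardware's method): in worst-case
# microstutter both GPUs present nearly identical frames back-to-back,
# so only every other frame adds visible content.

def hardware_fps(times_ms):
    """FPS counted from every presented frame."""
    return (len(times_ms) - 1) / ((times_ms[-1] - times_ms[0]) / 1000)

def practical_fps(times_ms, runt_threshold_ms=5.0):
    """FPS counting only frames that arrive a meaningful time after the previous one."""
    gaps = [b - a for a, b in zip(times_ms, times_ms[1:])]
    useful = [g for g in gaps if g >= runt_threshold_ms]
    return len(useful) / ((times_ms[-1] - times_ms[0]) / 1000)

# Worst case: frames arrive in pairs ~1 ms apart, then a ~32 ms gap.
times, t = [], 0.0
for _ in range(30):
    times += [t, t + 1.0]
    t += 33.3

print(f"Hardware FPS:  {hardware_fps(times):.0f}")   # ~60
print(f"Practical FPS: {practical_fps(times):.0f}")  # ~30
```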
So far, I believe Nvidia's explanation.
Then you won't notice a difference if you are maintaining your FPS at that FPS cap.
I honestly don't know who to believe on this subject. Hardware vs. software doesn't matter AFAIK; the method should largely be the same.
You also have to question this "hardware" idea, as that hardware has existed since G80, according to Nvidia. Why, then, has microstutter only been (almost) solved with Kepler rather than in the three generations (and a refresh) before it?
Nvidia's Tom Petersen has stated that their frame metering decreases input lag compared to an unmetered setup in this video.
yasamoka
Never go midrange multi-GPU. Please.
EDIT: I love how all eyes are on AMD now :P
http://imgur.com/d5I9ll7.png
yasamoka
Until AFR is gotten rid of!
The Mac
jomama22
http://images.anandtech.com/doci/6857/Microstutter_575px.png
In this chart, if we take the first frames (31ms, 19ms, 34ms) and, hypothetically, want them presented at a consistent ~30ms interval, we must increase the latency of the second frame, thus adding the "10ms delay" that was recently quoted. You still get the same FPS; now, instead of that third frame being "dropped" (or only partially drawn because of the quick-to-display 4th frame (19ms)) and creating the lower "Practical" FPS, the output is consistent and the "Hardware" FPS can be properly displayed in full.
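Here's a toy version of that pacing idea (my own simplification, not AMD's or Nvidia's actual algorithm): early frames are held back toward the average interval, which changes individual frame latencies but barely changes the overall FPS.

```python
# Toy frame pacer (my own simplification): delay frames that finish early
# so presentation intervals approach the average (~28-30 ms here), without
# changing the overall frame count or FPS.

raw_frame_times_ms = [31, 19, 34, 31, 19, 34]  # the chart's first three values, repeated
target_ms = sum(raw_frame_times_ms) / len(raw_frame_times_ms)  # ~28 ms

ready, t = [], 0.0
for ft in raw_frame_times_ms:
    t += ft
    ready.append(t)  # when each frame finishes rendering

paced, last = [], 0.0
for r in ready:
    present = max(r, last + target_ms)  # hold back frames that arrive too early
    paced.append(present)
    last = present

print("raw intervals:   ", [round(b - a, 1) for a, b in zip([0.0] + ready, ready)])
print("paced intervals: ", [round(b - a, 1) for a, b in zip([0.0] + paced, paced)])
print("delay added (ms):", [round(p - r, 1) for r, p in zip(ready, paced)])
```

The total time for the six frames barely changes (168 ms vs 171 ms here), which is why the FPS counter stays the same while the delivery becomes even.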
Having watched the video, here is an image that sums it up:
http://www.pcper.com/files/imagecache/article_max_width/review/2013-04-22/Vsync-3.png
AMD is the top row, Nvidia the bottom. As a note, ignore the actual number of "frames" in the top row of each; it will seem that AMD's FPS is artificially higher, but that is not the case, it's just a poor representation of what is actually happening. Refer above for why FPS will not change.
Now, having a look, AMD was clearly shooting for minimal input lag: just shoving frames into the pipeline and not caring when one or another was displayed. To combat this, the metered style "paces" these frames. To get these frames placed where you want in the pipe, you must play with the latency of when that second, third, fourth... frame is started/sent. If the user input came right after the first image is displayed but before the delayed start of the 2nd frame (since we are pacing it to begin at a controlled interval), you will get induced input lag.
Whereas Nvidia's video demonstration only specified latency in frames (0.5-1.5 frames of latency) and not the time between user input and frame display, it's clear they (AMD and Nvidia) are talking about the same thing in different respects. AMD could easily say their latency is 0.5-1.5 frames as well (once the metering is in effect), but chose to actually state the inherent problem with metering: the increase in input latency forced by the timing of frame display.
Hope any of this made sense. Both are talking about two different outcomes of the same process; AMD chose to highlight the bad side effect, Nvidia the good one. Not surprising, since they are both justifying their previous position on the matter.
In your worst-case scenario for AMD's position, you are getting a bit mixed up: to keep the interval consistent, the latency of the frame must be altered.
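To make that input-lag trade-off concrete, here are toy numbers of my own (not from either vendor's material), under the assumption that the frame is rendered at its original time and the pacer only holds the finished frame back before display:

```python
# Toy input-lag numbers (mine, not from AMD or Nvidia): if the frame is
# rendered at the original time but its display is held back by the pacer,
# input-to-display latency grows by the hold-back.

render_ms = 16.7      # assumed render time for the frame
pacer_hold_ms = 8.0   # hypothetical hold-back applied by the pacer

unpaced_input_to_display = render_ms                  # render, then show immediately
paced_input_to_display   = render_ms + pacer_hold_ms  # render, hold, then show

print(unpaced_input_to_display, paced_input_to_display)  # 16.7 vs 24.7 ms
```

Whether that extra delay really exists hinges on whether the driver delays only the presentation or the render start itself, which is exactly the disagreement in this thread.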
xodius80
Arghhhhhhhhhhhh the suspense is killing me arrghhhhhhhh
yasamoka
Here's what Nvidia says about this:
That's the whole point of multi-GPU. Reducing inter-frame latency due to parallelism (i.e. increasing FPS). Input latency is the same if you're simply doubling your FPS and reducing inter-frame latency. However, if you run a single card at 60FPS vs. a dual-GPU setup at 60FPS, the second will have additional input latency since the frametimes per card are twice as long.
Worst case for a single GPU is 1-2 frames; that becomes 1-1.5 frames that are each twice as long, i.e. 2-3 frames (measured in single-GPU frametimes). That's the additional frame of input latency.
They're definitely talking about the same thing. However, Nvidia have specified that they are talking about 60FPS, so that means one could easily multiply frames by 16.67ms / frame and get the input latency results in ms.
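Doing that multiplication for the single-GPU vs dual-GPU comparison (my own numbers, assuming 60FPS output and each card taking twice the output frametime in AFR):

```python
# Back-of-the-envelope comparison (mine, not Nvidia's slide): single GPU at
# 60FPS vs a metered dual-GPU setup also outputting 60FPS, where each card
# renders every other frame and so takes ~33.3 ms per frame.

output_ms = 1000 / 60        # ~16.67 ms between displayed frames
per_gpu_ms = 2 * output_ms   # each card's frametime in AFR

single_gpu = (1 * output_ms, 2 * output_ms)      # 1-2 output frames
dual_gpu   = (1 * per_gpu_ms, 1.5 * per_gpu_ms)  # 1-1.5 per-card frames

print(f"single GPU: {single_gpu[0]:.1f}-{single_gpu[1]:.1f} ms")  # 16.7-33.3 ms
print(f"dual GPU:   {dual_gpu[0]:.1f}-{dual_gpu[1]:.1f} ms")      # 33.3-50.0 ms
```

That works out to roughly one extra output frame (~16.7 ms) of input latency for the dual-GPU setup at the same FPS, matching the 2-3 frame figure above.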
All I'm honestly confused about is that AMD are implying the frames are rendered at the same time as in an unmetered setup but only PRESENTED with a delay, while Nvidia are implying that they are RENDERED with a delay relative to the first GPU. The way I understand it, that delay does not increase input lag, since it allows additional input to be taken into account, while in the unmetered setup the second GPU is mirroring the first and practically doing nothing.
Definitely, that is what they are doing.
Thank you for your input! Would be glad if you could chime in again.
EDIT: To clarify something, my question is this: how could you render now what's supposed to happen a few milliseconds later? How would you predict my input? Is the second GPU actually rendering the frame at exactly the animation timestep it should be rendering at? How would this be possible if it has yet to receive my newest keyboard and mouse input, which happened JUST before this timestep was reached?
I know all of this; it's "what's being pushed" that's the question. AMD are implying that the frame is still being rendered at the same time but presented with a delay, which doesn't make sense, since the worst-case scenario I have described could mean the frames are being displayed at perfect intervals but still rendered as fast as possible.

Take VSync as an example. With VSync on, the output is a perfect 60FPS, smooth as silk. The second card is actually rendering with a delay now (a delay that should be present naturally if frames were to be perfectly paced). This even happens in cases where a game / benchmark microstutters so badly that 60FPS looks like 30FPS (Heaven Benchmark). If the frames were only delayed so that they'd display in step with VSync, the bench would still look like 30FPS, since almost the exact same frame is being displayed. If the frames were rendered the way the frame-metering illustration shows, then it would look like a perfect 60FPS, as expected.
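Here's a small thought experiment of my own to illustrate that point: with presentation locked to perfectly even intervals (as with VSync), what you perceive still depends on the animation time baked into each frame.

```python
# Thought experiment (my own framing): evenly displayed frames only look
# smooth if their animation timesteps also advance evenly. The animation
# times below are hypothetical.

def perceived_rate(anim_times_ms, display_interval_ms=16.7):
    """Distinct animation steps per second, assuming evenly displayed frames."""
    shown_s = len(anim_times_ms) * display_interval_ms / 1000
    return len(set(anim_times_ms)) / shown_s

# Worst-case microstutter: both GPUs rendered the same timestep, so the
# evenly displayed frames come in near-duplicate pairs.
duplicated = [0, 0, 33, 33, 67, 67, 100, 100]

# Properly paced render: every displayed frame advances the animation.
paced = [0, 17, 33, 50, 67, 83, 100, 117]

print(f"duplicated content: ~{perceived_rate(duplicated):.0f} distinct frames/s")  # ~30
print(f"paced content:      ~{perceived_rate(paced):.0f} distinct frames/s")       # ~60
```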
That's just AMD's excuse. If I wanted minimal input lag and were willing to face microstutter as a result, I'd just turn off the second GPU and get EXACTLY the same input lag and the same practical FPS. The second GPU here is useless when microstutter is at its worst-case scenario; it's just drawing additional power.
Plus, a perfectly working multi-GPU setup is one where frames are expected to take the same time to be rendered on each GPU, but they are displayed in succession. That delay is not one that could be cut out without throwing the whole multi-GPU AFR idea down the drain.
Rich_Guy
I'm just waiting for Hilbert's take on them, and an updated set for us single-card users. Not that I need any new drivers, as everything's running great; it's just been a while since I installed some 😀
Faruk
They gotta be kidding, right? Lol
Octopuss
It's alright to cry 😀
Rich_Guy
Lmfao, I knew it 😀
The Mac
The nerdrage will be strong with this one.
Newbie2012
The Mac
I'm surprised Asder hasn't got his grubby mitts on it already....