Editorial: GeForce RTX 2080 and 2080 Ti - An Overview Thus Far

nz3777:

A 2080 Ti for 800 bucks sounds like a damn good deal! I still couldn't tell you what that ray tracing is if it slapped me in the face, haha. The graphics looked really nice in the video, of course, but again, in plain English, what am I looking for? lol
I'll pass, thanks. I wouldn't worry too much about "Real-Time Ray Tracing," as it is an advertising gimmick hauled out every year or two to generate interest in vaporware and in capabilities that aren't even relevant to 3D games. Nvidia isn't the first by any means. Intel actually had it done to them royally by Internet pundits at the time (HH excepted, IIRC) who kept telling everyone that Larrabee was a "real-time ray tracing" GPU, even though Intel stated at every opportunity that Larrabee was *not* a real-time ray tracer at all! I can recall some pretty lengthy, ridiculous articles written about Larrabee the "real-time ray tracer" that you could not talk them out of. Anandtech was especially stubborn on the subject, as I recall, right up until the day Intel cancelled the Larrabee project for good. Of course they cancelled it; nothing on Earth could live up to the witless hype the know-nothings had generated around Larrabee! It was both amusing and sad to see the brainless ranting that went on about Larrabee just for the sake of page hits. And note that it's always the same sort of reflective chrome surfaces used over and over again to "demonstrate" the "RTRT" capabilities that aren't actually there.

Ray tracing is an extremely computationally demanding method of rendering that simulates actual light bouncing off objects in a scene, exactly as real light behaves in a real environment in relation to your peepers. I used to render with a farm of Amiga/Toaster machines running Lightwave back in the '80s and early '90s, and a single frame could take many hours, if not days, to render, and at very low resolutions compared to today (a 10000x10000 trace might take weeks for a single frame). Even with today's behemoth CPUs, ray tracing a single frame can take seconds to minutes, depending on the scene itself and the resolution desired. So you will not be seeing 60 to 144 fps in *any* fully real-time ray traced game. Not gonna happen; fuggedaboutit... ;)

Bah, humbug, and all of that. 3D APIs and games use rasterization for rendering, which is basically *simulated* ray tracing: it's designed to produce effects as close to ray tracing as possible, but in tiny, tiny fractions of the time it takes to actually ray trace those effects in a scene. That's what GPUs and APIs do. Every so often new possibilities emerge for rasterization hardware and 3D gaming APIs, the simulation gets better and better as time moves on, and playing 3D games becomes steadily more realistic graphically. This is very simplified, of course, but the bottom line is that for 3D gaming at 60-144 fps and up, better rasterization is clearly the way to go; that's why it exists in the first place. So right now, any claims for RTRT you can chalk up to *marketing* and total hyperbole. I've seen RTRT dragged out of the closet and dusted off so many times since Larrabee that when I see marketing around the term today, all I can do is hold my nose.
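[Editor's note: for a rough sense of the arithmetic behind the cost argument above, here is a minimal sketch. The frame size, sample counts, and triangle count are assumed toy numbers, not figures from the post; it simply compares how many ray-triangle tests a brute-force trace would need per frame versus one guided by a bounding volume hierarchy (BVH).]

[code]
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t width     = 3840;     // 4K frame (assumed)
    const std::uint64_t height    = 2160;
    const std::uint64_t samples   = 4;        // rays per pixel (assumed)
    const std::uint64_t bounces   = 3;        // secondary rays per sample (assumed)
    const std::uint64_t triangles = 1000000;  // scene size (assumed)

    // Brute force: every ray is tested against every triangle in the scene.
    const std::uint64_t naiveTests = width * height * samples * bounces * triangles;

    // With a BVH, the per-ray cost is roughly logarithmic in the triangle count.
    std::uint64_t bvhDepth = 0;
    for (std::uint64_t n = triangles; n > 1; n /= 2) ++bvhDepth;  // ~log2(triangles)
    const std::uint64_t bvhTests = width * height * samples * bounces * bvhDepth;

    std::printf("naive intersection tests per frame : %llu\n",
                static_cast<unsigned long long>(naiveTests));
    std::printf("BVH-guided tests per frame (approx): %llu\n",
                static_cast<unsigned long long>(bvhTests));
    return 0;
}
[/code]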
waltc3:

I'll pass, thanks. I wouldn't worry too much about "Real-Time Ray Tracing" [...]
By far the best answer so far. Thank you, sir. I just hope I didn't start an internal war here in the threads, but I was really curious about the RT thing because of all the hype lately.
waltc3:

I'll pass, thanks. I wouldn't worry too much about "Real-Time Ray Tracing" [...]
[youtube=IyUgHPs86XM] (See time index 1h7m16s.) John Carmack's prediction at QuakeCon 2013 was basically that ambient occlusion would be the key to ray tracing as a 'solver', and he was right. The technique Nvidia et al. have used to give us RTRT is a branch of the mathematics used for ambient occlusion, which (as I understand it) is primarily concerned with the intersections of geometry, using physically based materials. Again, John Carmack said this was really the key. Nvidia have the GDC 2018 videos linked here from June. RTRT doesn't have to be true RT to give us RT, because that is insane computation; instead, we got a hack, but it gives us the same result. Genius. Absolute god-damn genius.
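[Editor's note: to make the AO connection concrete, here is a minimal, self-contained sketch. The scene (one occluding sphere above a ground plane) and all numbers are assumptions for illustration. It computes ambient occlusion the "geometry intersection" way: cast rays over the hemisphere above a surface point and measure what fraction escape without hitting anything. The same ray-casting machinery, pointed at lights or reflections instead of the sky, is what the ray-traced effects under discussion build on.]

[code]
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does a ray from 'o' along unit direction 'd' hit a sphere centred at 'c' with radius r?
static bool hitsSphere(Vec3 o, Vec3 d, Vec3 c, double r) {
    Vec3   oc   = sub(o, c);
    double b    = dot(oc, d);
    double cc   = dot(oc, oc) - r * r;
    double disc = b * b - cc;
    if (disc < 0.0) return false;
    return (-b - std::sqrt(disc)) > 1e-6;  // nearest hit must lie in front of the origin
}

int main() {
    const double kPi = 3.14159265358979323846;

    // Shading point on a ground plane (normal +Y), with one occluding sphere nearby (assumed scene).
    const Vec3   point{0.0, 0.0, 0.0};
    const Vec3   sphereCentre{0.0, 1.0, 0.5};
    const double sphereRadius = 0.75;

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    const int samples = 4096;
    int unoccluded = 0;
    for (int i = 0; i < samples; ++i) {
        // Uniformly sample a direction on the hemisphere above the surface normal (+Y).
        double y   = uni(rng);
        double r   = std::sqrt(1.0 - y * y);
        double phi = 2.0 * kPi * uni(rng);
        Vec3 dir{r * std::cos(phi), y, r * std::sin(phi)};
        if (!hitsSphere(point, dir, sphereCentre, sphereRadius)) ++unoccluded;
    }

    // Ambient occlusion = fraction of the hemisphere that reaches open "sky".
    std::printf("AO at point: %.3f\n", static_cast<double>(unoccluded) / samples);
    return 0;
}
[/code]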
Loobyluggs:

Nvidia have the GDC 2018 videos linked here from June. RTRT doesn't have to be true RT to give us RT, because that is insane computation; instead, we got a hack, but it gives us the same result.
Not exactly a hack, because the RT cores really are solving the rendering equation, but of course we shouldn't expect movie-production quality either. As far as we know, Nvidia's in-game ray tracing will consist of several packages, like RT shadows, RT AO, and RT reflections, i.e., partial pieces of a full-scene RT solution. So instead of a full-scene RT solution, developers will pick and choose ray-traced parts depending on how many resources they can fit within the RTX hardware and still get away with decent frame rates. How good this is going to look at playable frame rates is anyone's guess. What makes this possible are the RT cores:
The RT core essentially adds a dedicated pipeline (ASIC) to the SM to calculate ray-triangle intersections. It can access the BVH and configure some L0 buffers to reduce the latency of BVH and triangle data access. The request is made by the SM: the instruction is issued, and the result is returned to the SM's local registers. The interleaved instruction can run concurrently with other arithmetic or memory I/O instructions. Because it is ASIC-specific circuit logic, performance/mm² can be increased by an order of magnitude compared to using shader code for the intersection calculation. Although I have since left NV, I was involved in the design of the Turing architecture; I was responsible for variable-rate shading. I am excited to see the release now.
https://www.zhihu.com/question/290167656/answer/470311731
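[Editor's note: for readers unfamiliar with the "ray and triangle intersection" mentioned in the quote, below is a minimal sketch of the standard Möller–Trumbore test. This is the sort of per-ray arithmetic the quote describes the RT core running as dedicated fixed-function logic instead of shader code; it is a generic textbook routine, not NVIDIA's actual hardware algorithm.]

[code]
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3   cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller–Trumbore ray/triangle test: returns the distance t along the ray if it hits.
std::optional<double> intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray is parallel to the triangle
    double inv = 1.0 / det;
    Vec3 t = sub(orig, v0);
    double u = dot(t, p) * inv;                      // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(t, e1);
    double v = dot(dir, q) * inv;                    // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double dist = dot(e2, q) * inv;                  // distance along the ray
    return dist > eps ? std::optional<double>(dist) : std::nullopt;
}

int main() {
    Vec3 v0{-1, -1, 5}, v1{1, -1, 5}, v2{0, 1, 5};
    if (auto t = intersect({0, 0, 0}, {0, 0, 1}, v0, v1, v2))
        std::printf("hit at t = %f\n", *t);          // expected: 5.0
    return 0;
}
[/code]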
Luc:

The final price will depend on chip sizes... if they don't cut them down, they will be expensive, at least based on the Quadro price list. And Nvidia loves software- and hardware-crippled performance based on segmentation, so I expect the usual anti-consumer tactics (PhysX, GameWorks, GTX 970, ...). But it won't be a problem for them; they will keep their throne because two games use the magic ray-traced lighting effects, and they will sell 2080 Tis in pairs. But nice technology indeed 😛
As a Quadro user, I can tell you that the price of a Quadro is not based on how complex the GPU is (most of the time they are slower than the "gamer" one). The main cost drivers are the memory and the PCB design: a Quadro should run 24/7 its whole life without issue, and the driver should have zero faults from day one... it's like any professional equipment, and this is why they cost so much.
Looking good, but one problem: the 2080 Ti is only 11 GB? Ugh, I was hoping for around 16 GB, because I do 8K testing as well. I can tell you now, one card will be more than plenty for 4K gaming with maxed in-game settings. If you're using sparse-grid supersampling on top of 4K (yes, some games desperately need AA, even at 4K), or 8K with custom in-game settings, two of these cards will benefit greatly; 8K is a totally different animal altogether. Otherwise, stick with one card.

Looking forward to the 2080 Ti Hybrid edition from EVGA; I wish it were 16 GB, though. Hopefully there will be a 16 GB edition, yum yum. For the most part, my 1080 Tis run 8K at a very playable frame rate. Not silky smooth, of course, but very playable, as in smooth enough to actually play the game. 8K is something else altogether anyway: some games need their in-game settings set to custom, otherwise you will have problems, and you also need a good amount of system RAM as a buffer, otherwise the game grinds to a halt once the VRAM is all used up. So if you're doing 8K testing like me, fair warning: you will have problems with the 2080 Ti being only 11 GB, and a good amount of system RAM will help greatly, so just a heads up if you're going crazy on resolution and settings. I might hold off a bit to see if a 16 GB version gets released, which I doubt. Holding back for a few months. The cards are looking good, though. Looking forward to the full review.
DW75:

There is no way you will get a 16 GB version. The card would then also need a 512-bit memory bus. Nvidia will never offer that in a top Ti model of this new generation. It will be at least two more years until that happens.
Something hit me just recently concerning the NVLink technology. Correct me if I am wrong here, but if you use NVLink, do the two cards behave as one? So would two 2080 Tis yield 22 GB of VRAM, and would the CUDA cores be doubled as well?
It's looking like I'll skip a generation, and I really should. My 1070 is an ox: strong and reliable. I also fear pricing is going to suck here in the States. Perhaps the trade wars will be over by the 21 series, and there will be games needing more graphical horsepower by then (Cyberpunk, etc.).
Sadly, the 2070 does not seem like the powerhouse other x70 cards were. No Ti killer this gen. Plus, it may not be a TU104 card but a TU106 card, which looks like Nvidia 'tiering it down'. And since AMD will be out with 7nm cards (not interested unless they pull off a perf/power miracle), Nvidia will likely be out with a response within the next 12 months (a 7nm Turing refresh, etc.). Above certain price points, gaming matters less to me, so I'm not averse to skipping a generation for the first time in 15 years.
jura11:

Hi there. AO, or ambient occlusion, is a way of faking indirect lighting or global illumination, thereby speeding up render times. For still renders I prefer not to use it, because it looks fake, which in games is not so important, and I think a proper implementation of GI would kill the framerates. The AI denoiser looks good on paper; I've seen the video, and it should be awesome for me and for rendering, but the quality of the renders hasn't been the best: great if you are looking from a distance, but up close you can see there is missing bump or displacement mapping. Hope this helps. Thanks, Jura
I know. Using an offshoot of the math of AO to get RTRT in gaming was JC's point.
RooiKreef:

Nice write-up. Very interesting-looking cards from Nvidia. I'm not putting AMD down, but hell, they have their work cut out for them if they still want to be relevant in the high-end market. I personally think they had a good idea on paper with HBM and HBM2, but in the end it never worked out as well as it looked on paper. Interesting times, though.
I would think AMD has a reply for this somewhere down the line. From the MS announcement, ALL current DX12-compliant GPUs will be DXR capable; it's just that they have deliberately left the door open for hardware implementation/acceleration of all or parts of it in the future.

"You may have noticed that DXR does not introduce a new GPU engine to go alongside DX12's existing Graphics and Compute engines. This is intentional – DXR workloads can be run on either of DX12's existing engines. The primary reason for this is that, fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. A secondary reason, however, is that representing DXR as a compute-like workload is aligned to what we see as the future of graphics, namely that hardware will be increasingly general-purpose, and eventually most fixed-function units will be replaced by HLSL code. The design of the raytracing pipeline state exemplifies this shift through its name and design in the API. With DX12, the traditional approach would have been to create a new CreateRaytracingPipelineState method. Instead, we decided to go with a much more generic and flexible CreateStateObject method. It is designed to be adaptable so that in addition to Raytracing, it can eventually be used to create Graphics and Compute pipeline states, as well as any future pipeline designs."

https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/

I'm just waiting for AMD's next driver. One good thing is that this tech could well be in the next console refresh.

Edit: it also states in their brief announcement that, for games at least, we aren't going to see the DXR tech used for complete visual rendering, but mostly for lighting techniques and other supplements to the scene that can conveniently be accelerated by rays (maybe 3D audio is back in play).
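[Editor's note: as a concrete illustration of the CreateStateObject point in the quoted blog post, here is a hedged sketch of the API shape only. It assumes a Windows 10 SDK with DXR support, an existing DXR-capable ID3D12Device5, and an already-built subobject list (shaders, root signatures, pipeline config); the helper name CreateRaytracingPipeline is illustrative, not part of the D3D12 API, and this is nowhere near a complete renderer.]

[code]
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: build a DXR raytracing pipeline through the generic
// CreateStateObject entry point the Microsoft blog post describes, rather than
// a dedicated "CreateRaytracingPipelineState" method.
HRESULT CreateRaytracingPipeline(ID3D12Device5* device,
                                 const D3D12_STATE_SUBOBJECT* subobjects,
                                 UINT subobjectCount,
                                 ComPtr<ID3D12StateObject>& outPipeline)
{
    D3D12_STATE_OBJECT_DESC desc = {};
    desc.Type          = D3D12_STATE_OBJECT_TYPE_RAYTRACING_PIPELINE;
    desc.NumSubobjects = subobjectCount;
    desc.pSubobjects   = subobjects;   // shader libraries, hit groups, root signatures, etc.
    return device->CreateStateObject(&desc, IID_PPV_ARGS(&outPipeline));
}
[/code]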
As for the new cards, I think the core-count increase with respect to the current cards is pretty small, and the difference in some games will be just as small. The memory bandwidth has gone up, which will help greatly in the games that can use it. Going from a 1080 to a 2080 doesn't seem like a good idea yet; neither does 1080 Ti to 2080 Ti, unless those Tensor/RT cores can be used meaningfully. As with all new tech, it's great e-peen, but it's generally best to wait for v2 of the tech so they can keep the good bits and iron out the bad. Can't wait for all the new benchmarks this is going to bring, though 😉
Evildead666:

As for the new cards, I think the core-count increase with respect to the current cards is pretty small [...]
Yes, indeed. IMO(!), this is the clumsy aftermath of getting caught behind in the process shrink, as I've said extensively. We all know Nvidia did a few things: 1) were late to crypto, 2) depended on crypto too long, 3) possess the superior architecture (atm), and 4) became complacent. In the meantime, TSMC, in partnership with Qualcomm, Apple, and AMD, is making 7nm ICs right now. Pity the Intel i5: not only has Ryzen+ made it irrelevant, but Qualcomm's new 7nm ARM CPU will have all-day laptop battery life and easily run any application an i5 would be tasked with. And of course, in September comes the new 7nm iPhone, and in 2019 Ryzen 2 and Navi. So Nvidia and Intel are the odd men out and (mixed metaphor) caught behind the eight ball on process.
Evildead666:

As for the new cards, I think the core-count increase with respect to the current cards is pretty small [...]
I generally think it's better to go for v1 of a new GPU architecture, e.g. getting the GTX 680 rather than the GTX 780 and then upgrading to the GTX 980, in other words skipping the v2. But in this particular case, where the architecture seems very new to me with all the Tensor and ray tracing cores, that fundamental difference means it's all exceedingly new, untested, and therefore probably unoptimised, so I think in this architecture's case it would be better to go with v2 (if they ever do a v2, that is). So for this specific architecture I certainly agree with your point.
The Twitch livestream already seems to be active, with a countdown (currently at 21 hours 30 minutes, with over 2,400 people watching a clock count down o_O). https://www.twitch.tv/nvidia
Nice summary, I'm definitely more curious now. Seems it's more than a refresh after all.
rl66:

As a Quadro user, I can tell you that the price of a Quadro is not based on how complex the GPU is [...]
Sure, you're completely right (and funny "Risitas" quotation, alanm), but I also understand that in the end everything depends on marketing and financials. Nvidia pushed up launch prices on Quadro cards when they introduced tensor cores: the P6000 was $1,000 more expensive than the M6000, but the GV100 was $3,300 more expensive than the GP100, and the RTX 8000 will be $1,000 more expensive than the GV100. In gaming, between Maxwell and Pascal there was an increase of $50, but the Titan V (semi-pro) cost three times more than the last-generation Titan Xp. The introduction of RT and tensor cores will be expensive for many of the reasons explained here and there. Rumors talk about $1,000 for the RTX 2080 Ti, but Hilbert predicts it will be about $100 less, which is $200 more expensive than a 1080 Ti, and that looks plausible. But what really interests me is the possibility of offloading work from the older cores to the new ones, maybe in the future. Thanks, and excuse my bad, repetitive English.
I've got this gnawing feeling that the 2080 Ti's joint release with the 2080 is meant to deflect from the lesser cards' (2080/2070) lower-than-expected gains when reviewed by tech sites. That way no one will call Turing a fail while the 2080 Ti gets all the attention and praise. If the 2080/2070 were released without the Ti, the Turing line might lose much of its excitement once we see the reviews. I mean, we all know we will not see Pascal-sized gains with the 2080/2070, only modest gains versus their predecessors. Just as AMD compared the Vega 64 with the Fury X on release (+25%?), Nvidia will pitch 2080/2070 vs 1080/1070 with the Ti out of the picture to make them more appealing. Hope I'm wrong, but let's see how Nvidia's marketing magic deals with this.
BangTail:

I concur, and I suspect this is why there is no Titan this time round: they are going to try to drive all the gamers who would normally buy a Titan (because there was no Ti until later on) to buy the Ti right away, and I suspect the Ti will be the only card that offers any real performance advantage over the current Ti.
https://www.nvidia.com/en-us/titan/titan-v/ Granted, it's not Turing, but it's also not Pascal, since it's Volta. Who's to say whether there will be a Turing Titan, but realistically speaking, since there's a Volta one, there's not much of a reason for a Turing Titan currently. My bet is that a Turing Titan will come out eventually, and I'm not sure why you say there is no Titan this time around: the Titan X (Pascal) came out a couple of months after the 1080, the Titan Xp came out a month after the 1080 Ti, and the Titan X (Maxwell) came out many months after the 980. To say there's no Titan this time around based on what we know or suspect is going to release first, when the previous Titans have come out months after the initial launch of the product, just doesn't make sense.
£1400 on Ebuyer for an Asus 2080 Ti dual-fan pre-order... £850 for an EVGA 2080 SC, so the 2070 is going to be around 650-700. Not worth the upgrade IMO; I might just keep my eye out for a second 1080.