GeForce RTX 2070 in November with 2304 shader cores

It would seem to me Nvidia would kill the chances of studios really bothering to put the HW ray tracing to much use if they removed the support from the 2070. Aren't that and the 2060 going to be the most popular GPUs among gamers at large, unlike the more expensive and more high-end 2080 and 2080 Ti? I don't know how easy or difficult the ray tracing is to implement in the engine, but considering games are typically released with glaring bugs, it's quite obvious few studios want to spend a single second of extra time on software. Would the game makers go through the trouble if only a small percentage of players could benefit from it, yet all game buyers would still need to pay for the feature, regardless of whether they can actually use it?
Kaarme:

I don't know how easy or difficult the ray tracing is to implement in the engine, but considering games are typically released with glaring bugs, it's quite obvious few studios want to spend a single second of extra time on software.
It will be part of Nvidia's GameWorks, which is designed to be easily implemented. In games, you will have an RT technology toggle you can turn on/off, the same way you can turn PhysX on/off. Will we see widespread implementation of it? Potentially yes, as it's the next logical advancement of PC graphics; the main question is in what form. I'm sure AMD will catch up; personally, what I don't want to see is stupid "manufacturer only" type of technology where you will have RayTracing for Nvidia and some form of BobTracing for AMD, resulting in incompatibilities and one-sided titles.
Knowing Nvidia, most likely a 2070 Ti will follow, and perhaps a 2060 Ti will see the light of day this time around.
Lucifer:

Knowing Nvidia, most likely a 2070 Ti will follow, and perhaps a 2060 Ti will see the light of day this time around.
Wasn't the 1070 Ti only released because the Vega 56 outperformed the 1070? Considering AMD hasn't apparently got anything to release in the foreseeable future, Nvidia wouldn't necessarily see the need to complicate their GPU portfolio. They might, of course, do it a year from now just to have something fresh to offer to the market.
cryohellinc:

I'm sure AMD will catch up; personally, what I don't want to see is stupid "manufacturer only" type of technology where you will have RayTracing for Nvidia and some form of BobTracing for AMD, resulting in incompatibilities and one-sided titles.
The raytracing is implemented via Microsoft's DXR - RTX accelerates DXR. AMD will have its own acceleration method.
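For what it's worth, here's a minimal sketch of what "implemented via DXR" looks like from the game's side (my own illustration, assuming a D3D12 device has already been created and a Windows 10 SDK new enough to ship the DXR headers): the engine just asks the runtime for the raytracing feature tier, so whichever vendor accelerates it underneath doesn't change the query.

```cpp
// Minimal sketch (illustrative): detecting DXR support through plain D3D12,
// without caring which vendor accelerates it. Assumes an ID3D12Device* that
// was created elsewhere and an SDK recent enough to define OPTIONS5.
#include <windows.h>
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;  // older runtime/SDK: no DXR support reported at all

    // Tier 1.0 or better means the driver exposes DXR; on RTX cards the RT
    // cores accelerate it, while other hardware can expose it its own way.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

A game would typically gate its ray-tracing toggle behind a check like this rather than behind a vendor ID, which is why the DXR route avoids a "Nvidia-only vs AMD-only" split.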
Does anyone know if the RTX 2070 and especially the RTX 2060 will have lower power consumption than the GTX 1070 and GTX 1060 respectively?
Not even sure if that could be considered an upgrade from my 1070...
If you consider a 1080 an upgrade, maybe. Looks to be right around the level of a 1080... maybe a bit more.
NiColaoS:

Does anyone know if the RTX 2070 and especially the RTX 2060 will have lower power consumption than the GTX 1070 and GTX 1060 respectively?
They won't. We're fked when it comes to power consumption because all perf/Watt savings will go toward higher perf. Luckily Nvidia knows their job, and since they are in a good place atm, instead of the MOAR CORES approach, which is bound to run into the power and scaling wall, they chose to use this release to revolutionize their entire GPU rendering strategy.
Kaarme:

Wasn't the 1070 Ti only released because the Vega 56 outperformed the 1070? Considering AMD hasn't apparently got anything to release in the foreseeable future, Nvidia wouldn't necessarily see the need to complicate their GPU portfolio. They might, of course, do it a year from now just to have something fresh to offer to the market.
yes, the 1070ti was a stopgap, but one that became popular despite cannibalizing the 1080 sales. it is extremely doubtful they will put anything between the 2070 and the 2080, as they started w/ the 2080ti at launch
Noisiv:

They won't. We're fked when it comes to power consumption because all perf/Watt savings will go toward higher perf. Luckily Nvidia knows their job, and since they are in a good place atm, instead of the MOAR CORES approach, which is bound to run into the power and scaling wall, they chose to use this release to revolutionize their entire GPU rendering strategy.
lolz if you think Nvidia doesn't have "moar cores" in your future. there is no problem with scaling and power is reduced by the process. Nvidia has been working on this since AMD's original design with Vega... which is fully scalable. and don't get all fanboy on the last statement, it is fact.
tunejunky:

lolz if you think Nvidia doesn't have "moar cores" in your future. there is no problem with scaling and power is reduced by the process. Nvidia has been working on this since AMD's original design with Vega... which is fully scalable. and don't get all fanboy on the last statement, it is fact.
Actually I know for a fact that Nvidia already has fewer traditional shader cores on their RTX lineup than it could have. And that is because they chose to add a ray-tracing-specific ASIC, which eats into both power and area, although it's an order of magnitude more efficient in RT-specific collision calculations than a general-purpose CUDA core. So yeah, a forward-looking, IQ-bettering, RT-specialized architecture combining GP shader + tensor cores, instead of brute-forcing MOAR CORES, at a time when they can afford to dabble in new techniques. I call that smart. And it's exactly the opposite of what AMD did with Vega, investing in future techniques that either do not work or are not supported, at a time when they had been lagging to begin with.
Noisiv:

Actually I know for a fact that Nvidia already has fewer traditional shader cores on their RTX lineup than it could have. And that is because they chose to add a ray-tracing-specific ASIC, which eats into both power and area, although it's an order of magnitude more efficient in RT-specific collision calculations than a general-purpose CUDA core. So yeah, a forward-looking, IQ-bettering, RT-specialized architecture combining GP shader + tensor cores, instead of brute-forcing MOAR CORES, at a time when they can afford to dabble in new techniques. I call that smart. And it's exactly the opposite of what AMD did with Vega, investing in future techniques that either do not work or are not supported, at a time when they had been lagging to begin with.
all of which is true, but misses my point entirely. the name of the game is Yield. simply because that is directly related to profit, cost, and ability to sell. larger chips have lower yields - that is just the way manufacturing works as the silicon wafers are all the same size to start with. and it's why jumping to a smaller process always increases yield. Nvidia is "tick-tocking" atm, RT (2080,2080ti turing at least) is a new architecture on the most refined process (but not the smallest). and they're jumping the gun a little bit to quiet the waters and have their next gen available (in the market) by the time AMD releases Navi/whatever the hell they want to call it. and btw... Nvidia IS going for scalable modules, they're just far behind in that area.
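To put rough numbers on the die-size point, here's a back-of-the-envelope sketch (the defect density and die areas are assumed purely for illustration, not real TSMC/Nvidia figures), using a crude dies-per-wafer estimate and the simple Poisson yield model Y = e^(-D*A):

```cpp
// Back-of-the-envelope sketch: why a larger die yields fewer good chips per wafer.
// Defect density and die areas are assumed purely for illustration.
#include <cmath>
#include <cstdio>

int main()
{
    const double pi              = 3.14159265358979;
    const double wafer_diam_mm   = 300.0;             // standard 300 mm wafer
    const double defects_per_cm2 = 0.1;               // assumed defect density
    const double die_areas_mm2[] = { 314.0, 754.0 };  // mid-size vs. large die (illustrative)

    for (double area : die_areas_mm2) {
        const double r = wafer_diam_mm / 2.0;
        // Crude dies-per-wafer estimate (ignores scribe lines and edge exclusion).
        const double dies  = (pi * r * r) / area - (pi * wafer_diam_mm) / std::sqrt(2.0 * area);
        // Simple Poisson yield model: Y = exp(-defect_density * die_area).
        const double yield = std::exp(-defects_per_cm2 * (area / 100.0));  // mm^2 -> cm^2
        std::printf("%4.0f mm^2 die: ~%3.0f candidates/wafer, ~%2.0f%% yield, ~%3.0f good dies\n",
                    area, dies, yield * 100.0, dies * yield);
    }
    return 0;
}
```

With those made-up numbers the large die ends up with roughly a quarter of the good dies per wafer of the mid-size one, which is the cost pressure being described, even before binning and process maturity come into it.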
tunejunky:

and btw... Nvidia IS going for scalable modules, they're just far behind in that area.
How are they far behind?
@tunejunky The name of the game is not Yield. At all. Case in point: Volta's V100, a GPU which is barely producible, yet has been raking in profits and lifting the company's value ever since its creation. If the name of the game were Yield, AMD would be AT THE VERY LEAST equal to Nvidia. The name of the game is Profit = addressing as wide a market as possible, with products as competitive as possible, at margins as high as possible. And margins are only partially influenced by yield, because the BoM is only a tiny fraction of Nvidia's spending, being dwarfed by R&D and salaries.
tunejunky:

yes, the 1070ti was a stopgap, but one that became popular despite cannibalizing the 1080 sales. it is extremely doubtful they will put anything between the 2070 and the 2080, as they started w/ the 2080ti at launch
If you think about it, they didn't actually change much. They couldn't launch a new Titan because the current Titan V would be more powerful and expensive still. So they launched the Ti at the OG Titan price point at launch. We will probably see a 2090 or some-such launch in the slot where the Ti currently sits. I'll be very interested in the performance of these cards. Various reviewers were saying that the 2080 Ti was having issues hitting smooth frame rates at 1080p in the hands-on demos at the event, with noticeable dropped frames. Likely things will improve because of drivers, but it's concerning. I'm also disappointed that they are relying on studios to implement Tensor assisted anti-aliasing (the feature that's being implemented on the vast majority of "RTX Ready" titles) rather than putting it into the drivers. It seems like that could be low hanging fruit that could have given a universal boost to performance and image quality, but apparently it's not as easy to do as it seems.
Well, we don't know... maybe it's not as bad as people think. If it's faster than a 1080 and simply misses out on RTX, it's still a valid package. Remember, RTX is in its infancy, and at this point nobody needs it... it helps sell the biggest chips, but not much more in its current state. If they can position the 2070 right compared to Pascal at a reasonable price, this will again be the most popular card of their lineup, and probably the most profitable as well.
Fox2232:

Right, nobody needs it today. But you are paying a premium for it in the 2070. And by the time owners of the 2070 have games which use RTX, the 2070 will likely not be enough.
I agree. That's why I meant it needs to be compared to Pascal, to a card that doesn't have or care about RTX at all. If the price is right there, who cares, nobody forces people to run DX12. If it's expensive, people won't buy it anyway; every buyer should just get his mind straight and think about a purchase that costs a couple hundred $/€/pounds. If it's priced right, just don't use DX12 and still enjoy a card that performs well. If it's expensive and you would indeed pay too much, don't buy a 2070 but a 1080 / 1080 Ti instead. Nobody holds a gun to people's heads forcing them to buy a 2070 for the same money they could get a 1080 Ti for (or even more).
Really depends on how many 1070, 1080 and 1080 Ti cards NV has left to try and sell, as far as pricing the 20## cards goes. If you really do not "need" an upgrade, I would skip this batch personally. RT is coming, I do not doubt it... but it seems a good 2-3 years away before it's used much at all.