GeForce GTX 2070 and 2080 Could Launch Summer 2018

Moderator
Hm. If it's Pascal, then it means no Tensor cores, so perhaps no raytracing capabilities? Unless that DX12 feature doesn't need Tensor or Tensor-like units. I would expect them to have a card with that feature, since they showed it working and engines are trying to get ready for it. AMD said they are also working on that, if I remember correctly.
Administrator
DX12 Raytracing is supported, the RTX Library as well. However, some features could be accelerated by Tensor cores. So no, Tensor cores are not mandatory for DX RT.
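(For anyone curious what "supported" means in practice: below is a minimal, hedged C++ sketch of how an application could ask D3D12 whether a DXR raytracing tier is exposed at all. It only illustrates that raytracing support is reported through the regular feature-support API, independent of which hardware units, Tensor cores or otherwise, the driver uses to accelerate it. It assumes a Windows 10 SDK recent enough to ship the DXR headers, and error handling is kept to a bare minimum.)

```cpp
// Minimal sketch (not production code): query whether the D3D12 runtime/driver
// exposes a DXR raytracing tier on the default adapter.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // ID3D12Device5 is the device interface that adds the DXR entry points.
    ID3D12Device5* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device5),
                                 reinterpret_cast<void**>(&device)))) {
        std::printf("No D3D12 device (or no DXR-capable runtime) available.\n");
        return 1;
    }

    // OPTIONS5 carries the RaytracingTier value reported by the driver.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))) {
        std::printf(opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0
                        ? "DXR tier 1.0 (or better) is exposed.\n"
                        : "DXR is not exposed on this device/driver.\n");
    }

    device->Release();
    return 0;
}
```

What tier (if any) comes back is entirely up to the driver; the query itself doesn't know or care whether Tensor cores exist.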
I was under the impression that the ability to use AI to approximate a large chunk of the raytracing was the only thing that would make it possible to do in real time at all. Without the Tensor cores and FP64, I didn't think Volta was really very different from Pascal, apart from being on a newer fab process.
Volta's SMs are ~50% more efficient than Pascal's. And Volta itself is almost a year old(!) So even if we assume they have been doing nothing but twiddling their thumbs, the new GTX series will be at least as efficient as Volta. And this (50-60%) is pretty much the amount of efficiency improvement necessary for the little-big core (GTX 1180) to win against the last-gen BIG core (1080 Ti).
1180Ti / 2080Ti / 1185Ti - 70-80% performance gain over a 1080Ti? Where's my wallet...
Missed the /s there on Gb/s
Noisiv:

Volta's SMs are ~50% more efficient than Pascal's. And Volta itself is almost a year old(!) So even if we assume they have been doing nothing but twiddling their thumbs, the new GTX series will be at least as efficient as Volta. And this (50-60%) is pretty much the amount of efficiency improvement necessary for the little-big core (GTX 1180) to win against the last-gen BIG core (1080 Ti).
I'm pretty sure that the majority of the efficiency gains made by Volta were due to a newer fab process and lower clock speeds.
I wish they would stop making bigger dies and make GPUs more affordable. I know it won't happen unless someone releases a more powerful card for less money. What about AMD? They should be working on their own GDDR6 controller; maybe a Polaris refresh again?
ttnuagmada:

I'm pretty sure that the majority of the efficiency gains made by Volta were due to a newer fab process and lower clock speeds.
Nah... 12nm is built on the same node as 16FF+, and TSMC calls it 16/12; 12nm is just an improvement of 16FF+. You won't get anywhere near 50% better efficiency from improvements made on the same node; ~20-25% tops. Same as from slightly lower clocks (only for Titan vs. Titan; the mezzanine cards are similarly clocked). And even then, who cares about clocks if the performance and efficiency are there? It could run at 1 MHz for all I care. Oh, and... almost forgot... scratch that clock advantage completely, because the Titan V comes with 1/2-rate FP64 and with Tensor cores, which certainly weigh more in terms of efficiency than a measly 120 MHz.
Nvidia could probably afford to charge more for the new architecture. Ethereum ASICs are apparently appearing, so miner GPU demand should be losing steam somewhat. If the market is flooded with cheap second-hand cards, it seems like selling new stuff would be harder. So, if it's harder, they might as well sell less for more profit. It should be possible if the new generation beats the old handily in performance, at least towards the upper end. Plus, a new architecture is always more exciting. Who knows what's going to happen to AMD. If they haven't got anything new to offer, it's hard to see how they'd sell much of anything. HBM2 probably still isn't making things easy for them. Perhaps they will just weather a near extinction of their GPU side until they are ready to make a sudden comeback, like they did with Ryzen in the CPU market.
sammarbella:

AMD? HBM1 and 2 were not enough of a failure for them in the consumer market. They are probably working hard to use HBM3 in their next mining GPU, I mean "Gaming" GPU. LOL
It's going to be super tough for AMD. More so because Vega and Pascal are not even contemporaries; it's Vega and Volta. Uff... I dunno. Find a way to differentiate themselves from Nvidia. Work with monitor/panel manufacturers to bring a range of high-quality FreeSync 2 products. Bring back Avivo 🙂 Put 2x 200W APUs on the same motherboard. Fuk do I know... 😀
Hilbert AngerDoom keeps us all up to date with PC info; the least we can do is spell his name correctly. Is that Irish, Scottish?
Idk, everyone said 'GTX 1080' wouldn't exist either because consumers might get confused with the 1080p resolution, but it does, so...
Umm, sorry, but "Ampere" is where we get the word "amp"; it comes from André-Marie Ampère, the father of electrodynamics, not some obscure website. I suppose you think Tesla comes from the car name? LOL
Irenicus:

Umm, sorry, but "Ampere" is where we get the word "amp"; it comes from André-Marie Ampère, the father of electrodynamics, not some obscure website. I suppose you think Tesla comes from the car name? LOL
Uh, I think it's saying the first use of Ampere as an Nvidia architecture name comes from that website... not why the architecture is called Ampere.
sammarbella:

AMD's reply to the next Nvidia gaming GPUs and raytracing tech was already announced. It's just another OpenGPU/Vulkan gimmick no gaming dev will use for Windows (98.5% of the PC market): https://gpuopen.com/announcing-real-time-ray-tracing/ It looks like AMD "rebadged" an (already) revamped gimmick from two years ago: https://gpuopen.com/firerays-2-0-open-sourcing-and-customizing-ray-tracing/
To be fair, RTX is just a continuation of Nvidia's work on OptiX. The idea behind these new raytracing systems is that they use traditional GPU-optimized raytracing, but they don't require as many rays for good quality because they denoise the final image.
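To make that last point concrete, here is a tiny, self-contained C++ sketch, purely illustrative and nothing like Nvidia's actual AI denoiser: it "renders" a smooth radiance function with only a couple of noisy samples per pixel, then runs a plain 5x5 box filter over the result and prints the error before and after filtering. The real systems feed a trained network with auxiliary buffers (normals, albedo, depth), which is where the Tensor cores come in, but the few-rays-plus-reconstruction structure is the same.

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int W = 64, H = 64, SPP = 2;                   // tiny image, 2 samples per pixel
    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.0f, 0.25f);  // stand-in for Monte Carlo variance

    // Smooth "ground truth" radiance; a real renderer would trace rays here.
    auto radiance = [](int x, int y) {
        return 0.5f + 0.5f * std::sin(x * 0.2f) * std::cos(y * 0.2f);
    };

    std::vector<float> truth(W * H), noisy(W * H), filtered(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            truth[y * W + x] = radiance(x, y);
            float sum = 0.0f;                            // low-sample-count estimate
            for (int s = 0; s < SPP; ++s) sum += radiance(x, y) + noise(rng);
            noisy[y * W + x] = sum / SPP;
        }

    // 5x5 box filter as a crude stand-in for a real spatio-temporal / AI denoiser.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < W && yy >= 0 && yy < H) { sum += noisy[yy * W + xx]; ++n; }
                }
            filtered[y * W + x] = sum / n;
        }

    // Compare error against the ground truth before and after the denoising pass.
    auto rmse = [&](const std::vector<float>& img) {
        double err = 0.0;
        for (int i = 0; i < W * H; ++i) { double d = img[i] - truth[i]; err += d * d; }
        return std::sqrt(err / (W * H));
    };
    std::printf("RMSE at %d spp: %.4f raw, %.4f after filtering\n", SPP, rmse(noisy), rmse(filtered));
    return 0;
}
```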
It's because we want a card that finally gives unbeatable bang for the buck. An 1160 with 8 GB of GDDR6 beating a 1080 for 1/3 of the price, third-party 1160s factory OC'd closing in on a 1080 Ti, Xmas release, gg thanks nVidia. The 1170 will come in two flavors, an 8 GB and a 16 GB version; the 1180 will be 16 GB only.
nz3777:

Hilbert AngerDoom keeps us all up to date with PC info; the least we can do is spell his name correctly. Is that Irish, Scottish?
Almost sounds like some fictional incredibly powerful fantasy wizard. I like the sound of Hilbert AngerDoom, stresser of the hardware and writer of the compendia, keeper of the famae 😀
Waiting on GDDR6 for new graphics cards to release? Yawn~ If GDDR6 weren't such a disappointment I could see waiting, but currently it's just so boring and barely even better (and in some cases not even better).
Administrator
blake86:

Hilbert, I think it is time for you to know: for so long I was sure your name was Hilbert ANGERDOOM!!! I should stop playing WoW or at least buy a new pair of eyeglasses LOL
So, like, you've been around since 2013, my name is in every post in the forums, on every news item and pretty much 80% of the articles, and now you notice it is Hagedoorn... Mate, you need sum glasses 🙂 Although Angerdoom is quite catchy... 😀
That cook picture would be ideal then 😀 Edit: nvm, I read the article again and saw it at the end 🙂