Nvidia Talks About Higher OC clocks on the Founder 2080 cards - also PCB Photo
Denial
Turing has double the L1/L2 cache of Pascal, which is going to alleviate hits to memory, along with Variable Rate Shading and Texture-Space Shading, both of which should make the overall process more efficient in terms of memory bandwidth. And then yeah, whatever changes they made to delta compression.
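To see why delta compression saves bandwidth: within a tile, neighboring pixels are usually similar, so storing one anchor value plus small per-pixel deltas takes fewer bits than storing every pixel in full. A toy Python sketch of the idea (purely illustrative; Nvidia's actual delta color compression is lossless per-tile hardware whose bit-packing details are proprietary):

```python
import numpy as np

def delta_encode(tile):
    """Toy delta compression: one anchor pixel + per-pixel deltas.
    Small deltas would pack into far fewer bits than raw 8-bit pixels."""
    anchor = tile.flat[0]
    deltas = tile.astype(np.int16) - np.int16(anchor)
    return anchor, deltas

def delta_decode(anchor, deltas):
    """Lossless reconstruction of the original tile."""
    return (deltas + np.int16(anchor)).astype(np.uint8)

# A typical smooth-gradient tile: deltas are tiny (-1..2)
tile = np.array([[100, 101], [99, 102]], dtype=np.uint8)
anchor, deltas = delta_encode(tile)
assert np.array_equal(delta_decode(anchor, deltas), tile)
```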
Denial
https://pbs.twimg.com/media/DkqLJVjUcAAjJOj.jpg
Titan V's numbers are 9.1 / 9.7 / 18.8. Turing's theoretical speed should be faster than what's shown here, but according to Morgan McGuire (an engineer at Nvidia), some shaders see less of a speedup from RT cores compared to Volta due to HBM vs. GDDR6. So it seems like raytracing is bottlenecked by memory bandwidth more so than traditional workloads are.
Also I found the other slide I was talking about:
https://pbs.twimg.com/media/DkqK-93UYAEGq8B.jpg:large
So out of the box, Nvidia is claiming the RTX 6000 is 1.5x faster than a Titan V in raster workloads. So you figure the 2080 Ti is slightly cut down but will have faster clocks; I guess 50% is what we should expect for regular workloads. Idk, I expect less, but WE'LL SEE.
There are other things in the architecture that can be leveraged for more performance. For example, Vega shipped with Rapid Packed Math (RPM) for FP16 calcs, which, AFAIK, was only utilized in one game (Far Cry 5), but Nvidia has a similar feature now (they actually had it in GP100's SM, but not in consumer Pascal), so hopefully more games will utilize it now that both vendors support it.
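The idea behind packed FP16 math is that one 32-bit register lane holds two half-precision values, and one instruction operates on both at once, doubling FP16 throughput. A minimal NumPy sketch of the concept (illustrative only; the real thing happens in hardware via instructions like AMD's packed-FP16 ops or CUDA's `half2` intrinsics):

```python
import numpy as np

def pack2(a, b):
    """Pack two FP16 values into one 32-bit word (like a half2 register)."""
    return np.array([a, b], dtype=np.float16).view(np.uint32)[0]

def unpack2(w):
    """Split a 32-bit word back into its two FP16 lanes."""
    return np.array([w], dtype=np.uint32).view(np.float16)

def half2_add(x, y):
    """One 'instruction' that adds both FP16 lanes simultaneously."""
    return pack2(*(unpack2(x) + unpack2(y)))

x = pack2(1.5, 2.5)
y = pack2(0.5, 0.25)
lo, hi = unpack2(half2_add(x, y))
# lo == 2.0, hi == 2.75 -- two adds for the cost of one 32-bit op
```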
Well, didn't the RTX Quadro slides say 1.5x from the architecture itself? I can't find the slide now. I know someone posted it here showing ATAA vs. whatever for UE, but I'm pretty sure it showed a base 1.5x increase in performance with AA disabled entirely.
Well, this release may be different due to the RTX stuff, though, which AFAIK is extremely bandwidth dependent. For example, this slide:
wavetrex
http://www.felixcloutier.com/x86/FMUL:FMULP:FIMUL.html - That's not exactly what they are doing, but pretty close.
The CUDA cores, on the other hand, are quite close to a CPU's floating-point unit, running all kinds of operations: addition, multiplication, inverse (1/x), square root, trigonometry, and of course memory access, decisions (IF, CASE), jumps, and so on; I won't get too much into detail here.
It is also the reason why bitcoin mining moved from CPUs to GPUs to specialized ASICs: simpler electronics can do the same few operations MUCH faster than more complex electronics, which need to adapt to the incoming code.
Nvidia could probably increase the number of "Tensor Cores" tenfold while only adding 10% to the total transistor budget, but that's not very useful if the other parts of the chip can't feed those cores.
It's all about balance (which makes this advanced micro-engineering so hard)
Fixed-function binary electronics require very few transistors to implement, compared to general-purpose, logic-running electronics, which need to adapt to whatever code is pushed through them.
Those "tensor cores" are array addition-plus-multiplication circuits, basically running the same operation over and over again, ad infinitum.
They are, in a sense, very similar to the old days when the first "hardware accelerated graphics" chips were implemented: the 3D chip was basically just doing lots of identical calculations very fast (for that time), leaving the CPU to push the stream of numbers in and interpret the results.
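As a rough model of that "same operation over and over": one tensor-core operation is a fused matrix multiply-accumulate, D = A*B + C, on small (4x4 on Volta/Turing) tiles, with FP16 inputs and FP32 accumulation. A NumPy sketch of the math only (the hardware does this as one fixed-function warp-level operation, not a loop):

```python
import numpy as np

def tensor_core_mma(a, b, c):
    """Sketch of one tensor-core op: D = A*B + C on a 4x4 tile.
    Inputs are rounded to FP16, accumulation happens in FP32."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    return a16.astype(np.float32) @ b16.astype(np.float32) + c.astype(np.float32)

A = np.eye(4)                           # identity, so D should equal B + C
B = np.arange(16.0).reshape(4, 4)
C = np.ones((4, 4))
D = tensor_core_mma(A, B, C)
# D == B + 1 because A is the identity matrix
```

Chaining many of these tile ops is exactly how a large matrix multiply (the core of neural-network inference) gets built, which is why the circuit is worth dedicating silicon to.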
If you want a bit of brain explosion, look at this:
TheDeeGee
First the 10 series price bumping and now this...
I liked Nvidia, you know.
tsunami231
Whoa, wait, a reference card that isn't blower style??
I'm sure they will say that it's due to the tariffs on things coming from China now. Actually, I think it's more than that, because prices went up way more than $50, more like $100 easy.
pimpineasy
https://www.amd.com/en-us/products/graphics/desktop/r9 both are just refreshed R5 garbage.
Vega 56 should have been the 14nm 570, but I got scammed by AMD marketing again. Now this 2080 is out and some 1080 Tis are gonna hit the used market. AMD has no answer to their entire lineup being trash compared to legacy second-hand used products out of China, nor can they release a proper mid-range card that would be worth buying vs. the lowest-end 1050 Ti toaster with CUDA cores and PhysX.
My $40 860K hit 4.7GHz in a 3DMark-verified run. It is basically an HT dual core and runs 4.5 stable; is that more CPU than a 1050 Ti EVGA SSC can push? Even switching to a true HT quad core i7-2600 did not improve scores, and my system was a lot smoother with new features like MS NVMe drivers, USB-C 3.1, and 2400 DDR3. Even the RX 570 is trash with the 2600 in the titles I play. I also have like 6 APUs, and my 570 isn't much of an improvement vs. a 1050 Ti or the APUs; the heat and power, lmao. Buggy Wattman grey-screening, not even applying stable voltages. Next I will try it with a Coffee Lake 8400 and probably another AM4 APU; what I'll get will probably be better frame times and only like 5-10 higher min FPS.
And yes, the RX was a rebrand, lol. The RX 570 4GB Polaris is nothing but a die-shrunk R9 with some newer API gimmicks.