Gigabyte confirms GeForce RTX 4070 Ti graphics cards
geogan
MS Flight Sim - How can the DLSS 3 number be more than twice the rendered frame count? I thought DLSS 3 can only insert one generated frame per real frame, so how does 66 frames become 147 DLSS 3 frames?
Actually, now that I think about it, that figure also includes the lower-resolution trick from DLSS 2, doesn't it... so the 4K frame is only rendered at 1440p, meaning it must be rendering about 74 frames at 1440p and then doubling that to 147.
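To sanity-check that reasoning, here's a tiny arithmetic sketch of how the quoted numbers could line up. The ~74 fps internal rate is an assumption back-calculated from the figures above, not a measured value:

```python
# Rough arithmetic for how DLSS 3 can more than double a "native" frame
# rate. All numbers are illustrative, taken from the figures quoted above.

native_fps = 66          # 4K rendered natively, no DLSS

# DLSS 2 (super resolution) renders at a lower internal resolution
# (e.g. 1440p for 4K output), so the GPU finishes each frame faster.
# Assume that lifts the rendered rate to roughly 74 fps.
upscaled_fps = 74

# DLSS 3 frame generation then inserts one generated frame between every
# pair of rendered frames, roughly doubling the displayed rate.
displayed_fps = upscaled_fps * 2

print(displayed_fps)  # 148, close to the 147 reported
```

So the headline number is upscaling and frame generation stacked, not frame generation alone, which is why it can exceed 2x the native figure.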
Speaking of DLSS, I was playing Cyberpunk last night and found horrible "flickering" with DLSS 2 on compared to it off. It seems to do some sort of smoothing/anti-aliasing while the image is static, but as soon as the frame moves by one pixel a horribly aliased frame appears and then disappears, every time. Looks terrible in night scenes. I had to turn DLSS 2 off.
cucaulay malkin
https://www.purepc.pl/akceleracja-sprzetowa-w-programach-do-renderingu-i-obrobki-materialow-video-test-wydajnosci-kart-graficznych?page=0,6
https://www.purepc.pl/akceleracja-sprzetowa-w-programach-do-renderingu-i-obrobki-materialow-video-test-wydajnosci-kart-graficznych?page=0,1
https://www.purepc.pl/akceleracja-sprzetowa-w-programach-do-renderingu-i-obrobki-materialow-video-test-wydajnosci-kart-graficznych?page=0,3
Even a 3060 Ti smashes AMD GPUs or CPUs there.
That is a good point if you think about it.
The RTX line is pretty strong for compute acceleration, and that can't come at no extra cost.
mikeysg
And thus, the RTX 4080 12GB FE (Fake Edition) is reintroduced with a new, and not unexpected, moniker: the RTX 4070 Ti.
cucaulay malkin
https://www.pcgamingwiki.com/wiki/List_of_games_that_support_ray_tracing
Imagine 150 games in the last 4 years required more than 10 GB of VRAM to run at ultra; you'd still be fine buying a 3080, right? The majority still runs fine, including the most popular Steam games, and for those 150 you can just lower the GI, AO, reflection and shadow quality.
Well, if you start counting in the 2000s then maybe, but not now or in the near future. I can't imagine buying a $1000 GPU that lags behind in RT. I can't even imagine buying a $650 one that is too weak to run it, to be honest, since $1000 is a stupidly gouged price to pay for a GPU in my opinion and I'm not even remotely interested in such products from any brand. Thing is, if I were, it had better run everything at ultra, including RT; I don't need it to win in Overwatch.
150 games with RT since fall 2018, not enough eh?
winning.exe
CDNA is a re-hash of GCN; it's lacking many of the things that big data et al. look for these days. While CDNA supports things like INT8 and FP16, Nvidia has much better mixed-precision and matrix performance (i.e. FP8, INT8 and below, plus "sparse formats" to accelerate AI workloads). Then you add niceties like CUDA-X with a massive breadth of software support, and the story becomes similar to the desktop: if you have the money, you buy Nvidia. ROCm exists on the AMD side (for better or for worse), but it really isn't a serious competitor. Hence Nvidia ships 9 in 10 accelerators in this space.
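For anyone unfamiliar with the "sparse formats" mentioned above: Nvidia's recent tensor cores accelerate 2:4 structured sparsity, where two of every four weights are zero, roughly halving the multiplies needed. A minimal illustrative sketch of that pruning pattern (not Nvidia's actual implementation, just the idea):

```python
# Illustrative 2:4 structured-sparsity pruning: in every group of four
# weights, zero the two smallest-magnitude values so hardware can skip them.

def prune_2_of_4(weights):
    """Zero the two smallest-magnitude values in each group of four."""
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # indices of the two largest-magnitude entries in this group
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]))[-2:]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

print(prune_2_of_4([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.01, 0.8]))
# [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, 0.0, 0.8]
```

The regular 2-in-4 pattern is what makes this hardware-friendly: the GPU knows exactly how many non-zeros to expect per group, unlike unstructured sparsity.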
There is no paradox: when you buy Nvidia, you get a much better product and software stack which you'll be glad to lock yourself into, because the alternative is AMD's ongoing compute catastrophe 😀
At this point, even legacy OpenCL software runs much better on Nvidia hardware.
This is coming from a company that invents entire instruction sets to lock competitors out (see: SSE1-4, AVX, AVX2, AVX-512). Intel is interested in making software that runs well on their products; if it also happens to run well on someone else's, it's only by coincidence 😀
I have personally seen this time and time again in the open-source space, with things like Intel ISPC, Embree, Clear Linux, OneAPI and so on. OneAPI is a hilarious example, because it masquerades as an open standard, but it's very clear that Intel's intention is for you to use OneAPI with Intel CPUs and GPUs (i.e. Sapphire Rapids and Ponte Vecchio). If it works on AMD hardware, that will be purely coincidental, and Intel is certainly not investing any time or effort on that front 😛
That claim really has no legs to stand on.