Gigabyte confirms GeForce RTX 4070 Ti graphics cards

MS Flight Sim - How can the DLSS 3 numbers be more than twice the rendered frame count? I thought DLSS 3 can only insert one frame per real frame, so how does 66 frames become 147 DLSS 3 frames? Actually, now that I think about it, that also includes the lower-resolution trick of DLSS 2, doesn't it... so the 4K frame is only rendered at 1440p, meaning it must be rendering about 74 frames at 1440p and then doubling to 147. Speaking of DLSS, I was playing Cyberpunk last night and found horrible "flickering" with DLSS 2 on compared to it off. It seems to do some sort of smoothing/anti-aliasing when you don't move, but as soon as the frame moves by one pixel you get a horrible aliased frame appear and then disappear every time. Looks terrible in night scenes. Had to turn DLSS 2 off.
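The arithmetic in that post checks out if you work it backwards; here is a minimal sketch of it, assuming (as the post does) that DLSS 3 frame generation inserts exactly one generated frame per rendered frame and that the 66 fps figure is the native 4K render rate:

```python
# Rough sketch of the frame-rate arithmetic discussed above.
# Assumption: DLSS 3 frame generation adds one generated frame per rendered frame,
# so the displayed rate is ~2x whatever DLSS 2 actually renders internally.

native_4k_fps = 66      # quoted native 4K render rate
dlss3_output_fps = 147  # quoted DLSS 3 (upscaling + frame generation) rate

# Frames actually rendered by the GPU before frame generation doubles them:
rendered_fps = dlss3_output_fps / 2
print(f"Internally rendered (pre-generation) rate: {rendered_fps:.1f} fps")  # ~73.5

# The gap between 66 and ~73.5 fps is what DLSS 2's lower internal render
# resolution (e.g. 1440p upscaled to 4K) has to account for:
upscaling_speedup = rendered_fps / native_4k_fps
print(f"Implied DLSS 2 upscaling speedup: {upscaling_speedup:.2f}x")  # ~1.11x
```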
winning.exe:

Something that can't be casually ignored is the software stack that comes with buying an Nvidia card. For gaming, you've got RTX, DLAA, DLSS, PhysX, G-Sync (the last two are less relevant) and so on. For compute, you've got OptiX, CUDA, and TensorRT, just to name a small fraction. Add to that things like historically MUCH better DX11 drivers, beating AMD to market with CUDA, and many people's personal experience with AMD's launch-day drivers, and plenty of folks won't consider an AMD card regardless of price 😀 AMD has released some competitive options like FSR, FreeSync, and so on, but Nvidia has beaten them to market at virtually every point. Compute on AMD is still a mess since OpenCL was largely abandoned, and neither HIP, nor ROCm, nor SYCL can approach CUDA in terms of flexibility, support, or performance. Nvidia has a more mature software stack for every market segment (gaming, compute, deep learning, etc.), and an ecosystem of software tied into it. This is why Jensen can say "we don't have to compete on price, we compete on quality" 😎 That's all to say, Nvidia is able to price gouge because consumers perceive a superior product. And when you look at the software, there isn't an argument against that 😛
That is a good point if you think about it. The RTX line is pretty strong for compute acceleration, and that can't come at no extra cost: https://www.purepc.pl/akceleracja-sprzetowa-w-programach-do-renderingu-i-obrobki-materialow-video-test-wydajnosci-kart-graficznych?page=0,6 https://www.purepc.pl/akceleracja-sprzetowa-w-programach-do-renderingu-i-obrobki-materialow-video-test-wydajnosci-kart-graficznych?page=0,1 https://www.purepc.pl/akceleracja-sprzetowa-w-programach-do-renderingu-i-obrobki-materialow-video-test-wydajnosci-kart-graficznych?page=0,3 Even a 3060 Ti smashes AMD GPUs or CPUs.
In all programs that allow you to pit the top AMD Ryzen 9 5950X processor against the NVIDIA GeForce RTX 3090 Ti card, I noted a several-fold GPU advantage. The record holder is the popular Blender, where the execution time in the test scene was 22 vs. 320 seconds (!), and it didn't even make sense to put the CPU on the charts next to the GPUs.
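For scale, the quoted Blender times work out to roughly a 14-15x advantage; a quick back-of-the-envelope check using only the numbers given above:

```python
# Back-of-the-envelope speedup from the quoted Blender scene times.
gpu_seconds = 22    # RTX 3090 Ti, as quoted
cpu_seconds = 320   # Ryzen 9 5950X, as quoted

speedup = cpu_seconds / gpu_seconds
print(f"GPU vs CPU speedup: {speedup:.1f}x")  # ~14.5x
```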
Babel-17:

The form factor of the RTX 4080 is daunting to think of dealing with. I'm puzzled at the decisions to not offer anything except jumbo size for them.
Yes, huge sizes (length) for the 4080/4090. I had a full-sized Cooler Master HAF 932 tower which has served me well for over ten years, and even that would not fit the new 4080 cards, so I also had to go and buy an entire new HAF 700 EVO case when I upgraded, which cost me another €661 (£571) on top of the price of the 4080. Total for just the case and a 4080 came to €2,404 (2,489 USD) 😱
geogan:

Yes, huge sizes. I had a full-sized Cooler Master HAF 932 tower, and even that would not fit the new 4080 cards, so I also had to go and buy an entire new HAF 700 EVO case when I upgraded, which cost me another €661 (£571) on top of the price of the 4080. Total for just the case and a 4080 came to €2,404 (2,489 USD) 😱
Damn, that's harsh... I had a similar situation with my GTX 580 back in the day, but at least back then it was only about the length of the graphics card. And my case was small before I bought a Define 5. But the depth... that needs a big case, and big cases still cost about 5-10x as much as normal ones, depending on quality. That emoji's more than justified.
geogan:

Yes, huge sizes (length) for the 4080/4090. I had a full-sized Cooler Master HAF 932 tower which has served me well for over ten years, and even that would not fit the new 4080 cards, so I also had to go and buy an entire new HAF 700 EVO case when I upgraded, which cost me another €661 (£571) on top of the price of the 4080. Total for just the case and a 4080 came to €2,404 (2,489 USD) 😱
The absurd size of the coolers on the 4090/4080 is something I really hate, especially in the case of the 4080. The cards are so big that they don't fit my (compact) Lancool 205 case...
Kaarme:

As far as I know, Intel still hasn't delivered the first pure Intel supercomputer (CPU+GPU). I could be wrong, though, if they did it very recently. AMD has been shipping them "all the time" these days, so I doubt AMD's reputation has dropped in that market. Nvidia is still the market leader, I believe, and now that it relies on its own ARM-based CPUs, it no longer benefits AMD or Intel on the side, either.
Yeah I don't think they've released it yet either, and even when they do, it'll still take some time before they're widely adopted. My point is that when they do, I predict it'll be a big blow to Nvidia's revenue.

AMD is absolutely killing it in the server CPU market but as far as I understand they're not all that successful in the server GPU market. It makes sense - AMD has arguably better hardware and price tags than Nvidia but who cares when their driver support is lacking and when the most popular compute platform (CUDA) doesn't work? AMD is only a sensible choice if you intend to develop something from the ground up (which to be fair, is common for such servers) or if you know for a fact your software runs well for them. With Nvidia, you don't really have to question such things. The "paradox" of the matter is, when you use Nvidia's technologies, you're locking yourself into their ecosystem, yet, to use Nvidia in general means you are not limiting your options.

That's why I think Intel will do particularly well, because I believe they will provide the support, documentation, and libraries that Nvidia did with CUDA, except with an open standard. CUDA was successful because Nvidia kept dumping money and developers at it to ensure it would not fail. This does imply AMD will benefit from Intel's work, but Intel will be the one with their name plastered all over the place, and I wouldn't be surprised if they make a whole bunch of API extensions with their name attached to it (granted, both Nvidia and AMD do this too).
And thus, the RTX 4080 12GB FE (Fake Edition) is reintroduced with a new, and not unexpected, moniker: the RTX 4070 Ti.
cucaulay malkin:

Overwatch-type games run on a potato laptop GPU. That's not what is demanded from a strong value GPU, which is supposed to be both well priced and able to handle far more demanding scenarios than competitive shooters. You think people buy 6900 XTs for that? Is that better logic? That they play low-requirements games on high-end cards? Is that a relevant point to make in a thread on the 4070 Ti, which is probably going to beat the 3090 in RT, easily? Or should it maybe be reserved for 4050-class GPUs that will indeed struggle with heavy games. And btw, pretty much the majority of triple-A games have RT these days, even AMD-sponsored ones. Ray tracing is how you do max quality GI, AO, shadows and reflections these days. I don't think this can be casually ignored.
You talk right past the point made and reference different ones. I'm not feeling argumentative - if the conversation was strictly about performance (it is not) you might have a leg to stand on. The conversation is about pricing and value delivered.
schmidtbag:

Yeah I don't think they've released it yet either, and even when they do, it'll still take some time before they're widely adopted. My point is that when they do, I predict it'll be a big blow to Nvidia's revenue.

AMD is absolutely killing it in the server CPU market but as far as I understand they're not all that successful in the server GPU market. It makes sense - AMD has arguably better hardware and price tags than Nvidia but who cares when their driver support is lacking and when the most popular compute platform (CUDA) doesn't work? AMD is only a sensible choice if you intend to develop something from the ground up (which to be fair, is common for such servers) or if you know for a fact your software runs well for them. With Nvidia, you don't really have to question such things. The "paradox" of the matter is, when you use Nvidia's technologies, you're locking yourself into their ecosystem, yet, to use Nvidia in general means you are not limiting your options.

That's why I think Intel will do particularly well, because I believe they will provide the support, documentation, and libraries that Nvidia did with CUDA, except with an open standard. CUDA was successful because Nvidia kept dumping money and developers at it to ensure it would not fail. This does imply AMD will benefit from Intel's work, but Intel will be the one with their name plastered all over the place, and I wouldn't be surprised if they make a whole bunch of API extensions with their name attached to it (granted, both Nvidia and AMD do this too).
99.99% agree; the 0.01% differential is Intel doing open source.
fantaskarsef:

That emoji's more than justified.
Sure is... and if you want another one... the delivery charge of that case from UPS cost me £123 (€142/$147) 😱 😱
tunejunky:

99.99% agree; the 0.01% differential is Intel doing open source.
Intel does quite a lot with open source, it's just not so obvious if Windows is all you use, since hardly anything they do for Windows is open. Intel either hugely contributes toward or is the primary developer of the following open-source projects:

- The Linux kernel
- Mesa, Wayland, and X11 (the open source graphics stack)
- DAOS (an open-source filesystem)
- Audio and network drivers for Linux and BSD OSes
- iwd (Linux's primary wireless network daemon)
- Clear Linux (a demo exemplifying how much more performance you can squeeze out of software if you bother to optimize it - offers pretty impressive results)
- Various open source CPU schedulers
- mediasdk (hardware transcoder)

I'm not just talking a couple of commits here and there - many open source projects would have failed or been a whole decade behind without Intel's contributions. And this is just what comes to the top of my head. The list grows much quicker when you account for things that Intel has made small contributions toward, such as Blender, libva, vaapi, Vulkan, OpenCL, Kubernetes, etc.
geogan:

Sure is... and if you want another one... the delivery charge of that case from UPS cost me £123 (€142/$147) 😱 😱
Soooo, getting a water block for your GPU + rad + pump + reservoir would have meant you could keep the HAF 932, and it'd be a bit cheaper 😛
tunejunky:

You talk right past the point made and reference different ones. I'm not feeling argumentative - if the conversation was strictly about performance (it is not) you might have a leg to stand on. The conversation is about pricing and value delivered.
What constitutes a good choice for value in the ~$1000 GPU range, in your opinion? Are you seriously going to apply the same criteria as for entry-level GPUs here, just with 300 fps in Overwatch instead of 60?
schmidtbag:

Intel does quite a lot with open source, it's just not so obvious if Windows is all you use, since hardly anything they do for Windows is open. Intel either hugely contributes toward or is the primary developer of the following open-source projects:

- The Linux kernel
- Mesa, Wayland, and X11 (the open source graphics stack)
- DAOS (an open-source filesystem)
- Audio and network drivers for Linux and BSD OSes
- iwd (Linux's primary wireless network daemon)
- Clear Linux (a demo exemplifying how much more performance you can squeeze out of software if you bother to optimize it - offers pretty impressive results)
- Various open source CPU schedulers
- mediasdk (hardware transcoder)

I'm not just talking a couple of commits here and there - many open source projects would have failed or been a whole decade behind without Intel's contributions. And this is just what comes to the top of my head. The list grows much quicker when you account for things that Intel has made small contributions toward, such as Blender, libva, vaapi, Vulkan, OpenCL, Kubernetes, etc.

Well, it seems I was living in the past.
cucaulay malkin:

What constitutes a good choice for value in the ~$1000 GPU range, in your opinion? Are you seriously going to apply the same criteria as for entry-level GPUs here?
A 6900 XT @ $700 or a 6950 XT @ $800, until the 13th when we all have the information to decide on the $1k standard. And the criteria you mention wasn't for the GPU but for the games that are most popular.
tunejunky:

A 6900 XT @ $700 or a 6950 XT @ $800, until the 13th when we all have the information to decide on the $1k standard. And the criteria you mention wasn't for the GPU but for the games that are most popular.
You're being intentionally hesitant to answer a simple question here. I think it's not necessary to wait for the AMD cards to launch to determine what one should expect from a $1K GPU. Do you think anyone buying such a card looks at freaking Overwatch performance? lol. I think both will suck for the conscious consumer, the 4080 being too expensive and the 7900 XTX lacking RT performance against Lovelace.
cucaulay malkin:

You're being intentionally hesitant to answer a simple question here. I think it's not necessary to wait for the AMD cards to launch to determine what one should expect from a $1K GPU. Do you think anyone buying such a card looks at freaking Overwatch performance? lol. I think both will suck for the conscious consumer, the 4080 being too expensive and the 7900 XTX lacking RT performance against Lovelace.
>sigh< You continue to posit things that have no evidence. While I agree it's unlikely for RDNA 3 to outperform Lovelace in RT, that doesn't mean it's true until the new cards are benched. Furthermore, you posit RT performance as the standard when there are still far too few games with RT. This is a fallacy that you love. $1k cards used to be halos for everyone; now it's just an AMD halo, and that halo on paper makes the RTX 4080 weak sauce. Not only is it far more advanced from a manufacturing perspective, it's also less expensive to make - categorically. But I'm not here trumpeting it and acting as if it's the mother-of-all-GPUs (clearly it's not).
tunejunky:

You posit RT performance as the standard when there are still far too few games with RT. This is a fallacy that you love.
Well, if you start counting in the 2000s then maybe, but not now and not in the near future. Can't imagine buying a $1000 GPU that lags behind in RT. Can't even imagine buying a $650 one that is too weak to run it, tbh, since $1000 is a stupidly gouged price to pay for a GPU imo and I'm not even remotely interested in such products from any brand. Thing is, if I was, it had better run everything ultra, including RT; I don't need it to win in Overwatch. 150 games with RT starting from fall 2018 - not enough, eh? https://www.pcgamingwiki.com/wiki/List_of_games_that_support_ray_tracing Imagine 150 games in the last 4 years that required more than 10GB of VRAM to run ultra; you'd be fine with buying a 3080 then, right? Cause the majority still runs fine, incl. the most popular Steam games, and for the 150 you can just lower the GI, AO, reflection and shadow quality.
Espionage724:

Encoder performance, until AMD can at least meet a minimum to run a Quest 2. A 6900 XT still needs almost 1440p encode resolution at around 200 Mbps to try to do 120 Hz, whereas an RTX 3060 does 120 Hz at near-4K with 300 Mbps, no problem. That matters as soon as you increase the refresh rate from the default 72 Hz; on any AMD GPU that currently results in a worse overall image, but it isn't a concern on NVIDIA. The game can render at 4K with all the graphical effects and DLSS or whatever all it wants, but the end result displayed to your eyes relies on the encoder. Paying $1K for a GPU that (potentially) can't handle a soon-outdated VR headset is silly.
don't know about any of that.
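For a rough sense of the gap those quoted encode settings imply, here is a back-of-the-envelope pixel-throughput comparison; the resolutions are approximations of the post's "almost 1440p" and "near-4K" figures, not measured values:

```python
# Rough pixel-throughput comparison from the encode settings quoted above.
# Resolutions are approximations ("almost 1440p" / "near-4K"), not measurements.

def encode_load(width, height, fps, bitrate_mbps):
    pixels_per_second = width * height * fps
    bits_per_pixel = (bitrate_mbps * 1e6) / pixels_per_second
    return pixels_per_second, bits_per_pixel

amd_pixels, amd_bpp = encode_load(2560, 1440, 120, 200)  # 6900 XT case, as quoted
nv_pixels, nv_bpp = encode_load(3840, 2160, 120, 300)    # RTX 3060 case, as quoted

print(f"6900 XT:  {amd_pixels/1e6:.0f} Mpixels/s encoded at {amd_bpp:.2f} bits/pixel")
print(f"RTX 3060: {nv_pixels/1e6:.0f} Mpixels/s encoded at {nv_bpp:.2f} bits/pixel")
print(f"Pixel-rate ratio: {nv_pixels/amd_pixels:.2f}x")  # ~2.25x more pixels per second
```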
schmidtbag:

AMD has arguably better hardware
That claim really has no legs to stand on. CDNA is a rehash of GCN; it's lacking many of the things that big data et al. look for these days. While CDNA supports things like INT8 and FP16, Nvidia has much better mixed-precision and matrix performance (i.e. FP8, INT8 and below, plus "sparse formats" to accelerate AI workloads). Then you add niceties like CUDA-X with a massive breadth of software support, and the story becomes similar to the desktop: if you have the money, you buy Nvidia. ROCm exists on the AMD side (for better or for worse), but it really isn't a serious competitor. Hence Nvidia ships 9 in 10 accelerators in this space.
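As an illustration of what a "sparse format" means here, below is a minimal sketch of the 2:4 structured-sparsity pattern that Nvidia's sparse tensor cores accelerate (keep the two largest-magnitude weights in every group of four); it uses plain NumPy for illustration only, not any vendor library:

```python
import numpy as np

def prune_2_to_4(weights: np.ndarray) -> np.ndarray:
    """Apply 2:4 structured sparsity: in every group of 4 consecutive
    weights, keep the 2 largest-magnitude values and zero the other 2."""
    flat = weights.reshape(-1, 4)
    pruned = np.zeros_like(flat)
    keep = np.argsort(np.abs(flat), axis=1)[:, 2:]   # two largest per group of four
    rows = np.arange(flat.shape[0])[:, None]
    pruned[rows, keep] = flat[rows, keep]
    return pruned.reshape(weights.shape)

w = np.random.randn(2, 8).astype(np.float32)   # toy weight matrix
w_sparse = prune_2_to_4(w)
print(w_sparse)
print("Fraction of zeros:", np.mean(w_sparse == 0))  # 0.5 by construction
# Half the weights are zero in a fixed pattern the hardware can skip,
# which is where the claimed ~2x sparse matrix throughput comes from.
```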
schmidtbag:

The "paradox" of the matter is, when you use Nvidia's technologies, you're locking yourself into their ecosystem, yet, to use Nvidia in general means you are not limiting your options.
There is no paradox: when you buy Nvidia, you get a much better product and software stack which you'll be glad to lock yourself into, because the alternative is AMD's ongoing compute catastrophe 😀 At this point, even legacy OpenCL software runs much better on Nvidia hardware.
schmidtbag:

That's why I think Intel will do particularly well, because I believe they will provide the support, documentation, and libraries that Nvidia did with CUDA, except with an open standard.
This is coming from a company that invents entire instruction sets to lock competitors out (see: SSE1-4, AVX, AVX2, AVX-512). Intel is interested in making software that runs well on their products, and if it also happens to run well on someone else's, that's only by coincidence 😀 I have personally seen this time and time again in the open source space, with things like Intel ISPC, Embree, Clear Linux, oneAPI and so on. oneAPI is a hilarious example of this, because it masquerades as an open standard, but it's very clear that their intention is for you to use oneAPI with Intel CPUs and GPUs (i.e. Sapphire Rapids and Ponte Vecchio). If it works on AMD hardware, that will be purely by coincidence, and Intel is certainly not investing any time or effort on that front 😛