NVIDIA Posts teaser on YouTube, Talks GeForce RTX 30 design, PCB and 12-pin connector

https://forums.guru3d.com/data/avatars/m/270/270008.jpg
So they need 20 chokes for power delivery, a new 12-pin connector, and a better cooling method, and they call this revolutionary? That's one way to look at it.
https://forums.guru3d.com/data/avatars/m/250/250418.jpg
Who thinks there will be a lot of RMAs from broken power connectors?
data/avatar/default/avatar04.webp
Silva:

Who thinks there will be a lot of RMAs from broken power connectors?
I would say that's a very good possibility, given where the connector is located on the card and how it is designed. I could see users of all kinds breaking this thing off trying to plug in their cards. I'd expect the highest percentage of RMAs to come from users who swap out cards a lot for benchmarking.
data/avatar/default/avatar14.webp
Any design that exhausts the air back into the environment it's using to cool itself is kind of idiotic. It's like trying to dry yourself off while the shower is running.
https://forums.guru3d.com/data/avatars/m/280/280231.jpg
Do we have a clear view of which GPUs require this new 12-pin? Is it only for the 3090, or also for the 3070/3080? (Except the 3060, of course.)
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
Martin5000:

Any design that exhausts the air back into the environment it's using to cool itself is kind of idiotic. It's like trying to dry yourself off while the shower is running.
It's called: case fans. You may want to install some.
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
Fender178:

I would say that's a very good possibility, given where the connector is located on the card and how it is designed. I could see users of all kinds breaking this thing off trying to plug in their cards. I'd expect the highest percentage of RMAs to come from users who swap out cards a lot for benchmarking.
I doubt very much that would happen. A multi-billion dollar company like Nvidia would be aware of every aspect of usage and would have tested its strength and durability beyond reasonable expectations.
data/avatar/default/avatar37.webp
Another weirdness of the internet comments world: Dodge Challenger, 492 hp = okay. Dodge Challenger Hellcat, 727 hp = OMG! Want! Got to do what you've got to do! We don't have that "power efficiency isn't good enough" attitude in the performance car world; we want power and torque, and everything else is irrelevant (handling counts too... just saying... you know what I mean).
https://forums.guru3d.com/data/avatars/m/224/224796.jpg
Is the only purpose of the 12-pin connector that it takes a bit less space than two 8-pin connectors? I'm honestly confused here.
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
alanm:

I doubt very much that would happen. A multi-billion dollar company like Nvidia would be aware of every aspect of usage and would have tested its strength and durability beyond reasonable expectations.
Pretty much what I've been wanting to post. Anyone can look at that photo and think it's an easy break point. But how it is connected to the board, and whether the GPU cooler is bracing it (which I suspect), isn't known yet to my knowledge.
https://forums.guru3d.com/data/avatars/m/174/174772.jpg
I wonder if the whole lineup is going to take 3 slots :S
https://forums.guru3d.com/data/avatars/m/240/240541.jpg
Elder III:

Is the only purpose of the 12-pin connector that it takes a bit less space than two 8-pin connectors? I'm honestly confused here.
Yes, and the connector is about the same size as one 8-pin connector, but it delivers up to 600W, at least per what the spec is asking for, so that's the same as four 8-pin connectors. It can also simplify the power delivery and balancing on the board. AIBs have used three 8-pins in the past just to separate out the power between, say, GPU, memory, and PLL, even though the card can't possibly pull past two 8-pins, but that's the same brute-force mentality of using doublers everywhere because it makes marketing happy.

I think the main point, though, was to leave the door open for designs with larger power targets. Nvidia's reference 2080 Ti was limited for extreme overclocking by its two 8-pins, not the VRM. Even on water cooling you could easily OC into the 375W power limit on the card; AIB cards made for higher OC all used more 8-pins. I think Nvidia is making sure their halo product can hit those targets. A 2080 Ti on LN2 could hit peak power above 500W on some of those boards; the chip had a lot of room.

In reality, one 8-pin connector could do the same job. Mini-Fit Jr type connectors are not the limit for power delivery; you can do 13A per contact in the connector. Keeping the same pinout, you can do 360W on an 8-pin connector with 16AWG wire, which is within spec for the connector but not within spec for the standard.
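The arithmetic in this post can be sanity-checked with a quick sketch. The per-contact current values below are illustrative assumptions chosen to reproduce the post's figures, not official connector ratings:

```python
# Rough connector power-capacity arithmetic, as a sanity check on the
# numbers in the post above. Current ratings here are assumptions for
# illustration only, not official Molex or PCI-SIG specs.

def connector_watts(power_contacts: int, amps_per_contact: float,
                    volts: float = 12.0) -> float:
    """Deliverable power = live 12V contacts x current per contact x voltage."""
    return power_contacts * amps_per_contact * volts

# 8-pin PCIe: 3 x 12V power contacts. The PCIe standard only allows 150W,
# but at an assumed ~10A per contact (16AWG wire) the connector itself
# could carry 360W -- the figure the post mentions:
print(connector_watts(3, 10))    # -> 360.0

# 12-pin: 6 x 12V power contacts at an assumed ~8.5A per contact lands
# around the ~600W the post says the spec is asking for:
print(connector_watts(6, 8.5))   # -> 612.0
```

This makes the post's point concrete: the standard's 150W-per-8-pin limit is far more conservative than what the physical contacts and wire gauge can handle.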
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
A 2080 Ti on LN2 could hit peak power above 500W on some of those boards; the chip had a lot of room.
Pfft, a Vega 64 at stock could hit upwards of 600W due to high transient peak power draw. Right, that's not a good thing. 😛 A target of ~300W, but the high peak draws could be almost double that. Navi has the same issue, but saner power targets lessen the max peak draw from these transients, other than on some of the extreme third-party designs. (Which is a bit of an extreme, brute-force approach to getting the card to scale just a bit higher before capping out anyway, so it's a bit of a waste, but only one or two GPU models do it, so at least it's nothing too common.)

It will be interesting to see what NVIDIA (and AMD with something too, at some point?) could do, particularly if third-party designs are less restricted or overclocking is less locked. I suppose the boost design could also be used: as long as voltage, temperature, and overall power draw aren't too high, the card could have a target clock speed and a much higher potential max boost, unless locked through the BIOS, up to wherever the wall is where power draw and GPU voltage requirements start ramping up exponentially together with the heat output.

Two of these twelvers instead of four eights plus the PCIe slot's 75W, or perhaps a combination depending on the stock setup, like one 12-pin and one 8-pin connector. I suppose that's primarily for this high-end, or rather enthusiast-segment, 3090 card, and we'll see what the 3080 and 3070 will be like; maybe standard 8-pin or 6-pin only. (But there could be room for a 3080 Ti in-between model at some later date, possibly as a response to AMD's RDNA2, however that performs when it's actually out.)

EDIT: I suppose AMD's not quite in a position to add new standards to their GPUs, though, and patents are a thing too, so it'd be the AMD 10- or 14-pin, unless they want to go and really show NVIDIA what power draw should look like and make a 16-pin. (Well, it's a single cable on the GPU side; it's just these 4x 8's on the other end into the PSU. 😛 ) Guessing amps won't be too problematic either when it's multiple cables into the PSU, or later PSUs having some almost ATX-like GPU connector directly, ha ha. (Doubt that's going to happen anytime soon, though, when this should work just fine over two single 8-pin cables.)

EDIT: Anyway... well, I can't afford one, but I'm really curious about the reviews for the 3090, and even the 3080 as a maybe more affordable alternative while still a solid-performing card. 🙂 September for the reveal, at least from what everything is hinting at; going to be fun.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
Kaarme:

It depends solely on how much power the card is going to require. If it's a new connector but still within the specs of two old connectors, it's a situation of nothing changing since cards have been using two old connectors for many years.
It doesn't say 2 connectors, it says 8-pin PCIe, which doesn't tell us how many inputs will be required to adapt to 12-pin. Seasonic's is 2 connectors on the PSU side, which is 4 total when swapping the 2 cables out for the one.
data/avatar/default/avatar02.webp
The power connector really pisses me off. No one wants ugly adapters.
data/avatar/default/avatar06.webp
What I'm really interested in, and didn't see any leaks about, is what the architecture offers. Is this Turing with more shaders and more transistors, or is there something really new?
https://forums.guru3d.com/data/avatars/m/235/235224.jpg
asturur:

What I'm really interested in, and didn't see any leaks about, is what the architecture offers. Is this Turing with more shaders and more transistors, or is there something really new?
5 days, 7 hours, and 4 minutes until we find out 🙂
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
asturur:

The power connector really pisses me off. No one wants ugly adapters.
I hope this was sarcastic..... like if not, wow.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
Astyanax:

It doesn't say 2 connectors, it says 8pin pcie, which doesn't tell us how many inputs will be required to adapt to 12p. Seasonics is 2 connectors on the PSU side which is 4 total when swapping the 2 cables out for the one.
Well, yeah. I made some assumptions. If the connector is 300W capable, it matches exactly two 8-pin PCIe power connectors (2 x 150W). It can't be any less than two 8-pin connectors in that sense. Of course it could be more, but that might be a bit pointless, plus some experts in the other thread called such a solution dubious.
https://forums.guru3d.com/data/avatars/m/267/267641.jpg
Fox2232:

While the card is bigger, its cooling ability is very nice due to the design.
You are right that bigger cards mean bigger heatsinks, and that means better cooling, but I'm skeptical because, in the past, bigger cards were usually a disaster. And it's a reference design; if I were to buy, I would prefer a smaller, 2-slot, non-reference card, because I need to actually use my PCIe slots, not cover them, unless big towers and motherboards bigger than ATX become more common. I personally wouldn't mind buying a motherboard with 10 or so full PCIe slots, because I constantly have not-enough-slots problems, and it would need more PCIe lanes too; so far, on the Intel side, only workstation-level motherboards have enough PCIe lanes.