PCI Express gen 4.0 - fuggedaboutit, here's 5.0

https://forums.guru3d.com/data/avatars/m/227/227994.jpg
I bet that will require at least 5x 40 mm fans to cool the chipset then, great stuff!
https://forums.guru3d.com/data/avatars/m/69/69564.jpg
Hey, I can wait until 2021 when it gets implemented on new motherboards. What do you mean I can't? Yes I can. OK, maybe? Yes I could? Mentally preparing for 2021: DDR5 and PCIe 4.0.
data/avatar/default/avatar40.webp
This was known to be coming this year, which is why many people expect PCIe 4.0 to be skipped by various vendors, who will go straight to 5.0 instead, since 5.0 will have a far longer lifetime than 4.0 ever can. Of course it's all backwards compatible, so a PCIe 4.0 device will still be able to benefit from it in the future. Nevertheless, it's funny how long we've been on 3.0, and that 4.0 basically gets its replacement released at the same time as it's rolling out to the mainstream for the first time.
data/avatar/default/avatar16.webp
I mean, they completed the specification; that does not mean it is already cheap to build or easy to get onto a motherboard + CPU + chipset, or that a CPU won't need too much power to use it. PCIe 4.0 lets us double the bandwidth of disks and GPUs (which we have never saturated so far), so I think it is good for the next 2 years.
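The doubling claim above can be sanity-checked against the published line rates. A rough back-of-the-envelope sketch (figures are approximate, per direction, using the well-known encoding overheads):

```python
# Approximate per-direction PCIe x16 bandwidth from published line rates
# and encoding overhead (8b/10b for gens 1-2, 128b/130b for gens 3-5).
GENS = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10),
        3: (8.0, 128 / 130), 4: (16.0, 128 / 130), 5: (32.0, 128 / 130)}

def x16_bandwidth_gbs(gen):
    """Usable GB/s for an x16 link (1 GT/s ~ 1 Gb/s per lane)."""
    rate_gt, encoding = GENS[gen]
    return rate_gt * encoding * 16 / 8  # bits -> bytes

for gen in GENS:
    print(f"PCIe {gen}.0 x16: ~{x16_bandwidth_gbs(gen):.1f} GB/s")
```

That prints roughly 4.0, 8.0, 15.8, 31.5, and 63.0 GB/s: each generation (from 3.0 onward, with matching encoding) is exactly twice the one before.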
https://forums.guru3d.com/data/avatars/m/263/263507.jpg
I wonder how many months, semesters, or years we will have to wait to have a PCIe 5.0 motherboard at home. And more importantly, how much time until we start taking advantage of it for gaming purposes! I hope this happens fast!
data/avatar/default/avatar10.webp
We are barely starting to see a small performance difference between PCIe 2.x x16 and 3.x x16 on high-end GPUs. PCIe 4.x is just a fancy addition on upcoming systems, and will remain one for the following years for most users. I would prefer to see two orders of magnitude less latency than useless extra bandwidth. Oh yes, a better and proper GPU page-faulting system would also be much appreciated. Of course, latency and page faulting aren't important on the marketing side...
https://forums.guru3d.com/data/avatars/m/63/63170.jpg
They could have open-ended x8 slots at most for consumer boards, and keep the full x16 slots (with the extra hardware required) for the workstation boards and servers. I hope for a full complement of x8 slots some day soon. It would save some board real estate too, by not needing the long x16 slot. As said before, we gamers don't really tax the PCIe bus that much, and could probably get away with x8 cards.
data/avatar/default/avatar14.webp
We still use physical x16 slots instead of x8 because graphics cards are heavy. And for backward compatibility too.
https://forums.guru3d.com/data/avatars/m/63/63170.jpg
Alessio1989:

We still use physical x16 slots instead of x8 because graphics cards are heavy. And for backward compatibility too.
I suspect it's more for backward compatibility than for physical support. I don't know of any PCIe spec concerning weight on the socket, and I'm sure there are ways of using a brace on the slot, as has been done before on some very heavy cards. I suppose we could have an x16 slot up top, and x8 or x4 for all below? No more x1 slots would be lovely.
data/avatar/default/avatar21.webp
x1 slots are cheap, do not take much space, and are more than enough for most expansion cards sold (USB/Ethernet/RAID/audio/TV/acquisition, etc.). Most graphics cards need the x16 slot for weight distribution, and no, putting some aluminium foil around the slots doesn't make them stronger; in the best case it's just a useless EMI shield. While x4 slots are useful for some expensive SSD cards, they are pretty useless for most devices. Keeping x16 slots (or x8 slots in a physical x16) is also needed for all those people who do not change the entire system every 3-4 years but simply upgrade the graphics card. If there is something 202x motherboards need to get rid of, it's the tons of useless '90s/2000s I/O like PS/2, DVI, COM, etc.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
I think it's worth pointing out that a PCIe 5.0 x1 slot has the same bandwidth as a 1.0 x16 slot. To me, the x1 devices are the ones that'll be of any real interest. I suspect 5.0 is going to live a very long time, like 3.0 did; I don't think we're going to need to replace it for a very long time, especially if x16 slots continue to exist at that point (and I don't think they should).
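That x1-vs-x16 comparison is easy to check. A quick sketch using approximate per-lane rates after encoding overhead (the small gap comes from 1.0's 8b/10b encoding vs. 5.0's leaner 128b/130b):

```python
# Approximate usable GB/s per lane, per direction, after encoding overhead:
# gen 1: 2.5 GT/s * 8b/10b, gen 5: 32 GT/s * 128b/130b, etc.
PER_LANE_GBS = {1: 0.25, 2: 0.50, 3: 0.985, 4: 1.969, 5: 3.938}

def link_gbs(gen, lanes):
    """Total usable bandwidth of a link, in GB/s."""
    return PER_LANE_GBS[gen] * lanes

print(f"PCIe 5.0 x1:  ~{link_gbs(5, 1):.2f} GB/s")
print(f"PCIe 1.0 x16: ~{link_gbs(1, 16):.2f} GB/s")
```

Roughly 3.94 GB/s vs. 4.00 GB/s: a single 5.0 lane really does match a full first-generation x16 slot.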
https://forums.guru3d.com/data/avatars/m/56/56686.jpg
Um, 4.0 is not even out to the masses yet, and 5.0 is now released?
data/avatar/default/avatar20.webp
PCIe 4.0 was announced (the same milestone as here) in Oct 2017, and PCIe 5.0 in Jan 2018. Expect it to take at least another couple of years for someone to implement it, and for it to be SUPER expensive. PCIe 4.0 necessitated going from a 6-layer to an 8-layer motherboard, and draws up to 15 W to use (the chipset on X570 is essentially a low-end CPU itself). PCIe 5.0 is twice as fast, so expect a suitably higher implementation difficulty and a suitably higher power draw.
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
Alessio1989:

We still use physical x16 slots instead of x8 because graphics cards are heavy. And for retro-compatibility too..
If I am not mistaken, you can literally chop an x16 card's connector down to fit in an x1 slot without problems, although it will only run at x1... or better yet, cut a slit in the back of the slot so the rest of the connector can hang out, and it will still work. I am like 95% sure this works; if I am wrong, feel free to correct me, everyone! P.S. Obviously not recommended!
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Venix:

If I am not mistaken, you can literally chop an x16 card's connector down to fit in an x1 slot without problems, although it will only run at x1... or better yet, cut a slit in the back of the slot so the rest of the connector can hang out, and it will still work. I am like 95% sure this works; if I am wrong, feel free to correct me, everyone! P.S. Obviously not recommended!
You are 100% correct, on both counts. Keep in mind this is how crypto miners get their work done: they don't need the bandwidth, so they use PCIe multiplexers and split the lanes into multiple x1 slots. I actually wouldn't be surprised if you could chop off only pin 17 of the PCIe connector and suddenly get an x1 slot, with the rest of your lanes becoming useless; then again, pins 31, 48, and 81 might also have to be chopped off. But like you said - not recommended lol.

The only reason I can think of why a GPU would have an x16 connector nowadays is backward compatibility. Take a GTX 1070, for example: you should be able to run it on PCIe 3.0 at x8 and hardly ever see any performance loss, but run it at x4 and you most certainly will. Remember that bandwidth roughly doubles every PCIe generation. So if you stick that GPU in a PCIe 2.0 x16 slot, it should pretty much run at full speed - you're probably saturating almost all of the available bandwidth, but it will run at full speed. Stick it in a 2.0 x8 slot and you will see a performance loss, since that's equivalent to 3.0 @ x4.

However, we're at a very good point, where very few products can saturate PCIe 3.0 @ x16. Since such motherboards already exist, to me, now is the chance to phase out x16 slots for next-gen PCIe. 4.0+ GPUs could still be made to support x16 lanes (the excess lanes don't have to be used), but motherboards should start looking to downsize.
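The gen-for-lanes trade in that post can be made concrete. Under the "bandwidth doubles every generation" approximation it uses, stepping up one generation while halving the lane count keeps total throughput the same (a sketch, not exact figures):

```python
def approx_gbs(gen, lanes):
    """Total GB/s under the doubling rule of thumb (gen 1 ~ 0.25 GB/s per lane)."""
    return 0.25 * 2 ** (gen - 1) * lanes

# 2.0 x16, 3.0 x8, and 4.0 x4 all land on the same ~8 GB/s,
# which is why a card in a 2.0 x8 slot behaves like 3.0 @ x4.
for gen, lanes in [(2, 16), (3, 8), (4, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{approx_gbs(gen, lanes):.1f} GB/s")
```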
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
@schmidtbag yay for me ! :P
data/avatar/default/avatar12.webp
schmidtbag:

The only reason I can think of why a GPU would have an x16 connector nowadays is backward compatibility. Take a GTX 1070, for example: you should be able to run it on PCIe 3.0 at x8 and hardly ever see any performance loss, but run it at x4 and you most certainly will. Remember that bandwidth roughly doubles every PCIe generation. So if you stick that GPU in a PCIe 2.0 x16 slot, it should pretty much run at full speed - you're probably saturating almost all of the available bandwidth, but it will run at full speed. Stick it in a 2.0 x8 slot and you will see a performance loss, since that's equivalent to 3.0 @ x4.
Compute workloads can also benefit from the extra bandwidth in certain situations.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
yasamoka:

Compute workloads can also benefit from the extra bandwidth in certain situations.
Most of the time it's the exact opposite: heavy compute loads tend to need fewer lanes, not more. So yes, I take your point that certain situations would call for that, but in such situations you're probably not strapped for 32 GB/s of throughput.