NVIDIA: Rainbow Six Siege Players Test NVIDIA Reflex and Two New DLSS Titles

itpro:

So neural networks will take our jobs. AI is the future worker, unless you work for corporations or government. Native images lose to calculated ones. That's the moral of today's lesson.
The Alexa I have at home can barely turn on the correct light at times, so I wouldn't be too worried. I think they will take away a lot of the specialized, repeatable work, though.
Fox2232:

@PrMinisterGR An imbecile could realize that I am pretty aware of how DLSS works, and that when you are corrected you adjust the narrative as if you were correcting me. I feel we are repeating the console discussions here (multiple times). And the only honest reply you gave is that, yet again, you learn nothing technical at all from it.
Excellent, I am an imbecile. One request only: what are you actually trying to say in this whole thread?
Fox2232:

Isn't it great to bring a static scene as validation of temporal processing?
Both anti-aliasing methods compared in that MB2:B screenshot have a temporal component, but DLSS is the more stable one, while also being better at resolving/reconstructing distant detail. It is also sharper with transparent props. Compared to these two methods, no-AA is a flicker-fest. DLSS suffers from a silly bug in which the tip of the spear sometimes gets blurred (when the spear is carried on my back).
Fox2232:

You made a false statement. I corrected it (#18). You got triggered into sandbagging. The rest followed from that.
If the correction was that DLSS doesn't provide better image quality than native, then it wasn't a correction. I get triggered because you are a combination of stubbornness and ignorance that is hard to avoid when we try to have a normal conversation around here. Go and actually watch the Digital Foundry video I posted twice already. Since you don't have the hardware yourself, you can at least see what it does.
Noisiv:

Both anti-aliasing methods compared in that MB2:B screenshot have a temporal component, but DLSS is the more stable one, while also being better at resolving/reconstructing distant detail. It is also sharper with transparent props. Compared to these two methods, no-AA is a flicker-fest. DLSS suffers from a silly bug in which the tip of the spear sometimes gets blurred (when the spear is carried on my back).
This is interesting; it sounds like it's probably z-aware or something. Does the spear on your back get any different treatment if normal TAA is applied?
PrMinisterGR:

The Alexa I have at home can barely turn on the correct light at times, so I wouldn't be too worried. I think they will take away a lot of the specialized, repeatable work, though.
I had that conversation at university with a colleague. He insisted that AI is fundamentally unable to overtake scientists. I tend to believe otherwise; in my opinion it's inevitable.
Fox2232:

Well, at this point it is fair to say that I should correct the meaning of my post. My assumption that you made a false statement was based on the expectation that you had made an honest mistake. Now I know that you are lying. [...] I personally do not run TAA anywhere, because I have self-respect.
Why would any non-crazy person lie? And self-respect goes with the use of TAA? You're arguing against every single person who has actually seen, tested, or used this technology. You're not just arguing, you're attacking people personally. There's something wrong with this attitude and you should fix it, and I'm not saying this in any kind of mean way. I can't imagine what it must be like to have someone with this attitude around, and I bet neither can you.
Noisiv:

It works fine with "Temporal SMAAx2"; it's bugged only with DLSS.
Temporal SMAAx2: https://drive.google.com/file/d/1Twlu4iWUwc3_RZa6X8ZDjokdGJp2jD5X/view?usp=sharing
DLSS Quality: https://drive.google.com/file/d/1BPsCWTAt8pkVpGVIcpD9Q285R4HZ_m3H/view?usp=sharing
(4K images)
Interesting. Does the game use near-depth focus blur at all?
itpro:

I had that conversation at university with a colleague. He insisted that AI is fundamentally unable to overtake scientists. I tend to believe otherwise; in my opinion it's inevitable.
I'm rewatching Star Trek: Voyager, and I think we will end up with neural networks doing repeatable jobs using simple commands. In the series they all talk to the computer in simple language, and the computer interprets the meaning and the actual metrics to provide a result or service. I think the things NVIDIA has already shown (people sketching a simple scene in Paint, and an AI turning it into a "real" picture) will be the future. It should be more like extrapolation, with more and more nuance. On another note, you might be interested in this free sci-fi ebook: Blindsight by Peter Watts (rifters.com). It basically argues that consciousness is just dead weight, and that you can have something that is terrifyingly smart and efficient yet isn't conscious at all. It's free to read; I'd recommend it for the thought experiments alone.
PrMinisterGR:

Interesting. Does the game use near-depth focus blur at all?
It has DOF and motion blur, but turning them off doesn't fix this. It's a "trail effect", and as far as I can see it happens only with the spear carried on my back while rotating the character's point of view.
Noisiv:

It has DOF and motion blur, but turning them off doesn't fix this. It's a "trail effect", and as far as I can see it happens only with the spear carried on my back while rotating the character's point of view.
In theory DLSS isn't trained on individual titles anymore; I was just wondering whether it was misinterpreting this object as something out of the focal plane.
Fox2232:

Yeah, let's throw the whole field of information theory into the garbage. The entire temporal part of DLSS is subject to the same rules as video compression when it comes to motion in space and the density of information over time. Not that I expect you to understand why, in the scenarios described, you need a much higher bitrate to achieve the same resulting image quality in a video stream.
It's not. I don't know how many times we have to tell you this on these forums, but you clearly have no idea how these neural networks work. Regarding the juicy "information theory" bomb you dropped, I'll leave you a hint: these neural networks store, in their weights and biases, the information that you think has been "irreversibly lost". Don't bother replying; this reply is for others, so they are not misled by your statements. Not you. You're hopeless.
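To make the "stored in weights and biases" point concrete, here is a minimal toy sketch, entirely my own construction rather than anything from DLSS: a tiny network is trained on a family of signals and then reconstructs detail from sparse samples that, taken alone, would be ambiguous. The signal family, network size, and hyperparameters are arbitrary choices for the demo.

```python
# Toy sketch: a learned prior recovers detail that sparse samples alone
# cannot pin down. This illustrates the idea, not NVIDIA's implementation.
import numpy as np

rng = np.random.default_rng(0)

t_hi = np.linspace(0, 1, 64, endpoint=False)   # dense "native" grid
t_lo = t_hi[::4]                               # sparse grid (16 samples)

def make_signal(t, p1, p2):
    # The 3-cycle component is below the sparse grid's Nyquist limit
    # (8 cycles); the 9-cycle component is above it and therefore aliases.
    return np.sin(2*np.pi*3*t + p1) + 0.5*np.sin(2*np.pi*9*t + p2)

# Training pairs: sparse input -> dense target, over many random "scenes".
P1, P2 = rng.uniform(0, 2*np.pi, (2, 500))
X = np.stack([make_signal(t_lo, a, b) for a, b in zip(P1, P2)])
Y = np.stack([make_signal(t_hi, a, b) for a, b in zip(P1, P2)])

# One hidden layer trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.3, (16, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.3, (64, 64)); b2 = np.zeros(64)
for _ in range(6000):
    H = np.tanh(X @ W1 + b1)
    G = 2 * (H @ W2 + b2 - Y) / len(X)   # gradient of mean squared error
    GH = (G @ W2.T) * (1 - H**2)         # backprop through tanh
    W2 -= 0.01 * (H.T @ G);  b2 -= 0.01 * G.sum(0)
    W1 -= 0.01 * (X.T @ GH); b1 -= 0.01 * GH.sum(0)

# On a new "scene", classical band-limited reconstruction would misplace
# the aliased 9-cycle component; the trained network maps it back, because
# the training distribution (the prior in the weights) says where it is.
a, b = rng.uniform(0, 2*np.pi, 2)
pred = np.tanh(make_signal(t_lo, a, b) @ W1 + b1) @ W2 + b2
err = np.sqrt(np.mean((pred - make_signal(t_hi, a, b))**2))
print(f"reconstruction RMSE: {err:.3f}")  # well below the ~0.8 of guessing zero
```

None of this settles whether DLSS beats native rendering; it only shows that "information lost from this frame" and "information unavailable to the system" are not the same thing once a trained prior is involved.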
Fox2232:

If you do not want a technical reply, fine; I'll not tell you which technical part of your statement is wrong. But do not ever expect that throwing dirt while asking for no reply means you will get none. And we both know why you have a problem with me, why your interactions always go this way, and that it is not for real-world reasons. I avoid your posts as much as I can and limit my interaction to the occasional like when I agree, if possible.
You're delusional, and the reason you think I have a problem with you has nothing to do with the actual reason. The actual reason is that you insist on your own faulty understanding and berate others for not agreeing with it. Others who have formal education in this field constantly attempt to correct you, but you behave exactly as a brick wall does. You do not have the required knowledge to discuss some of the topics you barge into, and it's cringe-worthy for those of us who have spent the last few years of our lives literally working with these technologies, doing academic-level research on them, or working with them in industry. You have wrong intuition, and you often have no idea of the sort of garbage you spew on these forums. You're basically at the bottom of the knowledge pyramid in these topics, and you need to pass from wrong intuition to wrong analysis, then to right analysis, then to right intuition, before you get to rely on intuition as much as you think you should.

The confusion you create here comes from sprinkling your posts with technical words and irrelevant mathematical analogies, so that someone who doesn't actually know much about the subject ends up believing you and the misinformation you present. Then they see you doubling down on your statements. You derail threads, insult people who do not agree with you, act like you know better than everyone (you're not even close), and, once cornered, expect others to put up with your refusal to provide any meaningful statements to back up your argument.

On top of that, you have an obvious axe to grind with Nvidia. Literally, no matter what they do, you're always complaining and acting as if their engineers are morons and you know much better than they do about their *own* technologies (which is cringe-worthy and only demonstrates a narcissistic personality, which is unhealthy for everyone). I'm not even sure anyone on these forums likes Nvidia as a company; most people frequently complain about many of their obnoxious practices. Mostly, people here who like Nvidia products like them because they deliver what they're looking for, not because they have some love affair with a company that, like any other company, seeks its own bottom line to appease its investors.

You evade my posts as much as you can because I'm one of the only people here who constantly exposes you when you're being fraudulent, and I'm not going to stop until you either change, get banned, or I get banned. I fight ignorant statements, and therefore ignorant people, in a battle to the death. So either you actually start learning from others here when they correct you, or you would do everyone a service by just leaving these forums and taking your haughty, narcissistic, know-it-all toxicity with you.

Case in point.
Fox2232:

If you do not want a technical reply, fine; I'll not tell you which technical part of your statement is wrong.
Like the time you fought the Nyquist theorem and lost, and then, instead of simply saying "sorry, I had a brain fart", you doubled down by linking a video from a guy who later admitted he was wrong? https://forums.guru3d.com/threads/forza-horizon-4-demo-available.422968/page-2 That kind of technical reply? Because there is rarely anything technical about your replies. Instead it's a whole lot of hand-waving and pretending you're about to drop some technical bomb, while always stopping short of it. See, now I feel dirty for mentioning this, and because of the whole tone of the exchange. That's why, bro, you often get away with your "mistakes": people don't want to get down in the dirt, because if they do, they get dirty. But when left unchallenged, you take it as approval and run with it to the other end of the galaxy. Which is not a big deal at all; I've seen Nobel laureates talking nonsense on Twitter. People talk nonsense all the time. But why not resist the temptation to make a big statement that you are not sure you can back up? How about that, dog?
2121:

Forum member FUA: "DLSS SXSP is trash, it only upscales from one quadrillion pixels to 4 quadrillion pixels. True 4 quadrillion pixels looks so much better." Rest of the world: ZZZZZzzzzzzzZZZZZZzzzzzzzz.......

A better title would be "NVIDIA: Rainbow Six Siege Players Test futuristic technologies expected to be available to common folk within the next 20 years".
PrMinisterGR:

Speaking of throwing entire sections of science into the garbage, it's clear you don't understand how this type of NN works. The temporal information is only part of the puzzle; in fact, the more frames it has, the better it works. Video streams are also lossy; this processing is not. You are not seeing the result of a video stream; in fact, you are seeing something closer to a shader than anything else. Bitrate is completely irrelevant in this scenario. I actually wonder how you can participate in this conversation at all, and why we take you seriously when you talk about "bitrate" in this situation (in any context). You are also ignoring reports from people who have actually seen how DLSS does what it does. Bitrate would only be relevant in video comparisons. You are basically disputing every person who has seen this, and expert reviewers on top of that. I will post this video in case someone else following this thread wants to learn anything, as it is 100% certain you will not watch it, yet you will keep talking as if you had. [youtube=YWIKzRhYZm4] Check around 7:18.
DLSS 2.0 is awesome, but honestly the TAA implementation in Control is awful and should not be called native.
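For anyone following the thread for the technology rather than the fight, the "closer to a shader" point is easiest to see in the basic temporal-accumulation loop that TAA-style techniques build on, and whose hand-tuned parts DLSS 2.0 is publicly described as replacing with a network. Below is a deliberately simplified toy sketch, not NVIDIA's code; the checkerboard scene, uniform motion, and blend factor are all invented for the demo.

```python
# Toy temporal accumulation: blend each new, noisy frame into a history
# buffer reprojected by motion vectors, so detail builds up across frames.
import numpy as np

def reproject(history, motion):
    # Warp the history toward the current frame (toy version: one uniform
    # shift for the whole image instead of per-pixel motion vectors).
    return np.roll(history, shift=motion, axis=(0, 1))

def accumulate(history, frame, motion, alpha=0.1):
    # Exponential blend: a small weight for the new frame, the rest from
    # reprojected history. Real implementations also validate and reject
    # stale history; that part is omitted here.
    return alpha * frame + (1 - alpha) * reproject(history, motion)

rng = np.random.default_rng(0)
scene = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)  # checkerboard
history = np.zeros_like(scene)
for _ in range(60):                           # 60 frames, static camera
    noisy = scene + rng.normal(0.0, 0.5, scene.shape)
    history = accumulate(history, noisy, motion=(0, 0))

print("single frame error:", np.abs(noisy - scene).mean())    # ~0.4
print("accumulated error :", np.abs(history - scene).mean())  # ~0.09
```

The output is a function of the current frame, a buffer, and motion vectors, which is why it behaves more like a shader pass than like decoding a lossy stream, and it also shows why more frames of history help, as PrMinisterGR says above.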
Fox2232:

That paper did two things. First, it took 1 kHz, 2 kHz, and 3 kHz repetitive, stable sine-like signals in a 22.05 kHz space. That demonstrates what you wanted to demonstrate, except it does not touch the bandwidth problem posed by non-sine (random) sound and the sampling rate. With simple sine signals you can get away with frequencies up to half the sampling rate. But from samples obtained near or at half the sampling rate, you need to know that the original was a sine wave. Do you know why? Because a triangle wave would, in that situation, produce practically the same sample values. Then, on page 9 of your document, they show you a sawtooth signal, not white noise, not speech, not multiple sine-like frequencies changing over time. (What the text says is that to get a perfect sawtooth, you need infinite sampling frequency.) Then the study goes into aliasing, where they essentially say: remove any frequency above half the sampling rate to prevent aliasing. And that was my entire point, shown by taking two identical signals and shifting them in time by less than one period. If the signals already have a frequency equal to half the sampling rate, you are no longer capable of capturing them properly, because the signal is no longer a single sine wave: it has multiple peaks, and the actual frequency of the peaks doubles within one period. And the article is about capturing audio, not about mixing multiple sampled digital sounds. Their conclusion is not bad. Or good. It tells you this:
"The danger here is that people who hear something they like may associate better sound with faster sampling, wider bandwidth, and higher accuracy. This indirectly implies that lower rates are inferior. Whatever one hears on a 192KHz system can be introduced into a 96KHz system, and much of it into lower sampling rates. That includes any distortions associated with 192KHz gear, much of which is due to insufficient time to achieve the level of accuracy of slower sampling."
It does not tell you that sampling at 44 kHz is fine because you can't hear frequencies above 22 kHz. It tells you that 96 kHz sampling is better than 192 kHz, because 192 kHz sampling devices introduce their own errors into the sampled data. (Mind the state of available sampling devices in 2004.) What does it really say? That given the state of the technology, the optimal sampling rate would be somewhere around 64 kHz. But it does not say anywhere that this is because there is no need for (or usefulness in) more; it says so because the devices themselves did not handle such signals properly in their analog-to-digital converters.
Aaaaand off you go... in whatever direction you felt like, following any random thought that popped into your head. Course: unknown. Goal: none. Nothing, not even a hint of what you're arguing against. Zero discipline, just sheer will to persevere.
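As a footnote to the sampling tangent above: the specific near-Nyquist ambiguity claimed in the quoted post can be checked numerically. Here is a toy sketch (my own, not from the paper under discussion; the sample rate and phase offset are arbitrary choices) showing that exactly at half the sample rate, with sample instants landing on the peaks, a sine and a triangle wave of the same frequency yield identical samples.

```python
# Toy check: exactly at the Nyquist frequency, a sine and a triangle
# wave of the same frequency can produce identical sample values.
import numpy as np

fs = 22050                 # sample rate, Hz (arbitrary for the demo)
f = fs / 2                 # tone exactly at the Nyquist frequency
k = np.arange(16)          # sample indices
phase = np.pi / 2          # sample instants land on the waveform peaks

x = 2 * np.pi * f * (k / fs) + phase
sine = np.sin(x)
tri = (2 / np.pi) * np.arcsin(np.sin(x))   # unit-amplitude triangle wave

print(np.allclose(sine, tri))   # True: these samples cannot tell them apart
```

This is the degenerate case the sampling theorem explicitly excludes: it requires the signal's bandwidth to stay strictly below half the sample rate, and an unfiltered triangle wave is not band-limited in the first place, so its higher harmonics would either be filtered out or alias before sampling, which is the part the paper's anti-aliasing section covers.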