
FidelityFX Super Resolution 3 (FSR3) - AMD Stage Presentation | gamescom 2023

TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, to around 2-3x.
  • FSR 3 can run on any GPU, including consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.
  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.
  • Games will start using it by early fall; the public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR3 will be, but if it actually runs well I think we can expect tons of games (especially on console) to make use of it.

Blackmist ,

Anybody tried frame generation for VR? Does it work well there, or are the generated frames just out enough to break the illusion?

cordlesslamp ,

Guys, what would be a better purchase?

  1. Used 6700xt for $200
  2. Used 3060 12GB for $220
  3. Neither of the used cards; get a new $300 card for the 2-year warranty.
  4. Another recommendation.
simple OP ,

$200 for the 6700XT is a pretty good deal. It’s up to you if you’d prefer getting used or getting something with warranty.

twistedtxb ,
@twistedtxb@lemmy.ca avatar

This is huge. DLSS3 was miles ahead of FSR2. So glad for AMD

brawleryukon ,
@brawleryukon@lemmy.world avatar

DLSS3 and FSR2 do completely different things. DLSS2 is miles ahead of FSR2 in the upscaling space.

AMD currently doesn’t have anything that can even be compared to DLSS3. Not until FSR3 releases (next quarter, apparently?) and we can compare AMD’s framegen solution to Nvidia’s.

DarkThoughts ,

Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

So, no Vulkan?

Ranvier ,

I’m not sure; I’ve been trying to find the answer. They’ve stated FSR3 will continue to be open source, and prior versions have supported Vulkan on the developer end. It sounds like this is a solution for using it in games that didn’t necessarily integrate it, though? So it might be separate. Unclear.

kadu , (edited )
@kadu@lemmy.world avatar

deleted_by_author

    Edgelord_Of_Tomorrow ,

    You’re getting downvoted, but this will prove correct. DLSS FG looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.

    dudewitbow ,

    I wouldn’t say compete, as the whole concept of frame generation is that it generates more frames when GPU resources are idle or underused because another part of the chain is holding the GPU back from rendering more frames. It’s sort of like how I view hyperthreads on a CPU: they aren’t a full core, but they’re a thread that gets utilized when there are points in a CPU calculation that leave a resource unused (e.g. if a core is using the AVX2 accelerator to do some math, a hyperthread can, for example, use the ALU that isn’t busy to do something else because it’s free).

    It would only compete if the time it takes to generate one additional frame is longer than the idle time the GPU has due to some bottleneck in the chain.
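
To put that timing argument in concrete terms, here is a minimal sketch (the numbers and the function are illustrative assumptions, not AMD's implementation): interpolation only comes "for free" when the per-frame idle window left by the upstream bottleneck is at least as long as the cost of generating the extra frame.

```python
# Toy model of when frame generation competes with rendering. The GPU only has
# idle time each frame if something upstream (CPU, engine logic, present queue)
# is the bottleneck; the numbers below are assumptions for illustration.

def framegen_fits(frame_time_ms: float, gpu_busy_ms: float, gen_cost_ms: float) -> bool:
    """True if an interpolated frame fits into the GPU's idle window."""
    idle_ms = frame_time_ms - gpu_busy_ms  # time the GPU spends waiting per frame
    return idle_ms >= gen_cost_ms

# CPU-limited case: 16.7 ms frames with the GPU busy only 10 ms leaves ~6.7 ms
# of slack, plenty for a hypothetical 3 ms interpolation pass.
print(framegen_fits(16.7, 10.0, 3.0))   # True
# GPU-bound case: almost no slack, so the generated frame steals shader time.
print(framegen_fits(16.7, 16.0, 3.0))   # False
```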

    echo64 ,

    You guys are talking about this as if it’s some new, super expensive tech. It’s not. The chips they throw inside TVs, which are massively cost-reduced, do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation, and it works just fine even on super old GPUs with terrible compute.

    It’s really not that expensive.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    echo64 , (edited )

    Yeah, it does, which is something TV tech has to derive itself. TVs have to figure that stuff out, which is actually less complicated in a fun kind of way. But please do continue to explain how it’s more compute-heavy.

    Also, just to be very clear, TV tech also incorporates motion vectors into the interpolation; that’s the whole point. It just has to compute them by comparing frames. Games have that information encoded into various g-buffers, so it’s already available.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    echo64 ,

    No. TVs do not quite literally blend two frames. They use the same techniques as video codecs to extract rudimentary motion vectors by comparing frames, then do motion interpolation with them.

    Please, if you want to talk about this, we can talk about this, but you have to understand that you are wrong here. The Samsung TV I had a decade ago did this; it’s been standard for a very long time.

    Again, TVs do not “literally blend two frames”, and if they did, they wouldn’t have the input lag problems they do with this feature, as they need a few frames of derived motion vectors to make anything look good.
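
For the curious, here is a toy version of the block-matching idea described above (a deliberately crude sketch of mine, not any TV vendor's or codec's actual code): estimate a motion vector for one block by searching nearby offsets in the previous frame for the best match.

```python
import numpy as np

def block_motion_vector(prev: np.ndarray, curr: np.ndarray,
                        y: int, x: int, block: int = 8, search: int = 4):
    """Estimate (dy, dx) for one block by minimizing sum-of-absolute-differences."""
    target = curr[y:y + block, x:x + block].astype(int)
    best_err, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[yy:yy + block, xx:xx + block].astype(int)
            err = np.abs(cand - target).sum()
            if err < best_err:
                best_err, best_mv = err, (dy, dx)
    return best_mv

# Bright 8x8 square that moves 2 px to the right between frames.
prev = np.zeros((32, 32), dtype=np.uint8); prev[8:16, 8:16] = 255
curr = np.zeros((32, 32), dtype=np.uint8); curr[8:16, 10:18] = 255
print(block_motion_vector(prev, curr, 8, 10))  # (0, -2): content came from 2 px to the left
```

An interpolated frame then places each block roughly halfway along its vector, which is what separates motion interpolation from a plain 50/50 frame blend.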

    kadu ,
    @kadu@lemmy.world avatar

    My man, there’s quite literally no depth information in video - and there’s no actual motion.

    You can calculate how much a given block changes from frame to frame, and that’s it. You can then try to be clever and detect if this is a person, a ball, a network logo. But that’s it.

    This is absolutely a universe away from DLSS (and now presumably FSR) frame generation, and to even suggest they’re the same is such a ridiculous statement that I’m not even going to bother anymore.

    Just the mere attempt at comparing a feature modern GPUs are finally able to achieve with a simple algorithm running in the media decoder of a little 4-core ARM chip on a TV is laughable.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    echo64 ,

    Rolling my eyes so hard at this entire thread.

    You: doing this on shader units is bad! Not possible! Uses too much compute!
    Me: this tech has existed for over a decade on TVs, and there is motion interpolation software you can get today that will do the same thing TVs do on compute, and it works fine even on bad cards.
    You: TVs just blend frames. This is different, it uses motion vectors!
    Me: TVs use motion vectors. They compute them, whereas if you hook it up via AMD’s thing, you don’t need to compute them.
    You: no, this is different, because if you hook it up via AMD’s thing you don’t need to compute them and it can look better.

    <— We are here. You’ve absolutely lost the thread of what you are mad about; you’re now agreeing with me, but you want to fixate on this as a marker of how it’s not the same thing as TVs, even though it’s the same thing as TVs minus the motion estimation, exactly like I’ve been saying this entire time. You’re desperate to find some way to say “no, I was right” and win, even though you’ve lost track of what you were originally arguing.

    Maybe we need to reframe this: how is this not possible, or a bad idea, to do on shader units? That’s what you were mad about. How is this totally different from TV tech, but also the same as and less compute-heavy than TV tech, yet bad to run on shader units?

    hark ,
    @hark@lemmy.world avatar

    The hit will be less than the hit of trying to run native 4k.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    hark ,
    @hark@lemmy.world avatar

    Either way, it pays for itself.

    Hypx ,
    @Hypx@kbin.social avatar

    People made the same claim about DLSS 3. But those generated frames are barely perceptible and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.

    And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.

    kadu , (edited )
    @kadu@lemmy.world avatar

    deleted_by_author

    Hypx ,
    @Hypx@kbin.social avatar

    You aren't going to use these features on extremely old GPUs anyways. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.

    Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling & frame generation to get back to the same resolution and FPS, than it is to render natively at the intended resolution and FPS. This is often a better use of existing resources even if you don't have extra power to spare.
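
As a rough back-of-the-envelope illustration of that trade-off (standard resolution figures only; the actual speedup varies per game and doesn't scale perfectly with pixel count):

```python
# 4K has 2.25x the pixels of 1440p, so an internal 1440p render shades well under
# half the pixels per frame; upscaling and frame generation then spend a smaller
# slice of that saved budget to get back to 4K output and a higher displayed FPS.

native_4k = 3840 * 2160        # 8,294,400 pixels
internal_1440p = 2560 * 1440   # 3,686,400 pixels

print(native_4k / internal_1440p)   # 2.25  -> pixel-count ratio
print(internal_1440p / native_4k)   # ~0.44 -> fraction of 4K pixels actually shaded
```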

    dudewitbow ,

    Because I think the post assumes that the GPU is always using all of its resources during computation, when it isn’t. There’s a reason benchmarks can make a GPU hotter than a game can, and not all games pin GPU utilization at 100%. If a GPU is not pinned at 100%, there is a bottleneck in the presentation chain somewhere (which means unused resources on the GPU).

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    I still think it’s a matter of waiting for the results to show up later. RDNA3 does have an AI engine on it, and the gains FSR3 might see from it could differ, in the same way XeSS behaves differently with its branching logic. It’s too early to tell, given that all the test-suite results so far are on RDNA3 and that it doesn’t officially launch until two weeks from now.

    CheeseNoodle ,

    Frame generation is limited to 40-series GPUs because Nvidia’s solution is dependent on their latest hardware. The improvements to DLSS itself and the new raytracing stuff work on 20/30-series GPUs. That said, FSR 3 is fantastic news; competition benefits us all, and I’d love to see it compete with DLSS itself on Nvidia GPUs.

    Hypx ,
    @Hypx@kbin.social avatar

    If FSR 3 supports frame generation on 20/30-series GPUs, you have to wonder if they'll port it to older GPUs anyway.

    CheeseNoodle ,

    If they did, I’m pretty sure it would just be worse than FSR, given the hardware requirements.

    csolisr ,

    Given that it will eventually be open source, I hope somebody hooks this up to a capture card to get relatively lag-less motion smoothing for console games locked to 30 fps.

    Soulyezer ,

    “Even on consoles” you have my attention. Now to see how good it is

    echo64 ,

    For anyone confused about what this is: it’s your TV’s motion smoothing feature, but less laggy. It may let 60 fps fans on console get their 60 fps with only a small drop in resolution or graphical features. But that’s yet to be seen.

    NewNewAccount ,

    Looks like there are two versions. One is built into the game itself and is far more advanced than what your TV can do. The other, supporting all DX11 and DX12 games, is like the soap-opera effect from your TV.

    echo64 ,

    I don’t think so; there’s nothing I can see that suggests that. The only real differences are likely to be to do with lag. There’s nothing suggesting a quality difference between a game having it built in and you forcing it onto a game.

    NewNewAccount ,

    Then why would any developer ever build it into a game?

    echo64 , (edited )

    It’s part of their suite of tools, which includes other things like lag-reduction tech. In addition, if your game isn’t DX11 or DX12, you can still provide it to the user; the generic version only works with DX11/12.

    Also, just like Nvidia, they pay developers to add these things to games.

    simple OP ,

    Eurogamer confirmed there is a difference:

    The principles are similar to DLSS 3, but the execution is obviously different as unlike the Nvidia solution, there are no AI or bespoke hardware components in the mix. A combination of motion vector input from FSR 2 and optical flow analysis is used.

    AMD wanted to show us something new and very interesting. Prefaced with the caveat that there will be obvious image quality issues in some scenarios, we saw an early demo of AMD Fluid Motion Frames (AFMF), which is a driver-level frame generation option for all DirectX 11 and DirectX 12 titles. […] This is using optical flow only. No motion vector input from FSR 2 means that the best AFMF can do is interpolate a new frame between two standard rendered frames similar to the way a TV does it - albeit with far less latency. The generated frames will be ‘coarser’ without the motion vector data.
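
To make that split concrete, here is a schematic sketch (my own framing of what the quote describes; the crude global-shift estimator is a stand-in for real optical flow, and none of this is AMD's code): the in-game FSR 3 path is handed motion data by the engine, while driver-level AFMF only sees the two finished frames and must estimate motion itself.

```python
import numpy as np

def estimate_global_shift(prev: np.ndarray, curr: np.ndarray, search: int = 3):
    """Crude stand-in for optical flow: brute-force the best whole-image shift."""
    best_err, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.abs(np.roll(prev, (dy, dx), axis=(0, 1)).astype(int) - curr.astype(int)).sum()
            if err < best_err:
                best_err, best_mv = err, (dy, dx)
    return best_mv

def midpoint_frame(prev: np.ndarray, curr: np.ndarray, motion) -> np.ndarray:
    """Warp both frames halfway along the motion vector and average them."""
    dy, dx = motion
    a = np.roll(prev, (dy // 2, dx // 2), axis=(0, 1)).astype(int)
    b = np.roll(curr, (-dy // 2, -dx // 2), axis=(0, 1)).astype(int)
    return ((a + b) // 2).astype(prev.dtype)

def fsr3_in_game_style(prev, curr, engine_motion):
    # In-game path: the renderer already supplies motion (the FSR 2 inputs).
    return midpoint_frame(prev, curr, engine_motion)

def afmf_driver_style(prev, curr):
    # Driver-level path: no engine data, so motion must be inferred from the images.
    return midpoint_frame(prev, curr, estimate_global_shift(prev, curr))
```

With engine-supplied vectors the motion is known exactly; the flow-only path has to infer it from the rendered pixels, which is where the quoted "coarser" generated frames come from.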

    AProfessional ,

    I hate that AMD copied the same terrible branding.

    Edgelord_Of_Tomorrow ,

    They’re just trying to barely hang on to relevance, they’re not interested in actually innovating.

    dudewitbow ,

    AMD has had features in years past before Nvidia did; it’s just that fewer people paid attention to them until they became a hot topic after Nvidia implemented them.

    An example is Anti-Lag, which AMD and Intel implemented before Nvidia:

    pcgamesn.com/…/geforce-driver-low-latency-integer…

    But people didn’t care about it until ULL mode turned into Reflex.

    AMD also still has Radeon Chill, which basically keeps the GPU running slower when you’re idling in game and not a lot is happening on the screen. The end result is lower power consumption when AFK, as well as relatively lower fan speeds and better acoustics, because the GPU doesn’t constantly work as hard.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    What makes the other options “theoretical”?

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    I’m not saying Reflex is bad or that it isn’t used by esports pros. It’s just that “theoretical” isn’t the best choice of word for the situation, because it does make a difference; it’s just much harder to detect, similar to the latency difference between framerates that are close but not identical, or the experience of refresh rates that are close to each other. Especially at the high end, you stop being limited by framerate input properties and become bottlenecked by screen characteristics (which is why OLEDs are better than traditional IPS, but can be beaten by high-refresh-rate IPS/TN with BFI).

    Regardless, the point is less about the tech and more about the idea that AMD doesn’t innovate. It does, but it takes longer for people to see it, because they either choose not to use a specific feature or are completely unaware of it, whether because they don’t use AMD or because they get their news from a fixed set of channels.

    Let’s not forget that over a decade ago, AMD’s Mantle was what brought Vulkan/DX12-style performance to PC.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    Because AMD’s GPU division is a much smaller division in an overall larger company. They physically can’t push out as many features because of that, and when they do make a drastic change to their hardware, it’s rarely noticed until it’s considered old news. Take Maxwell and Pascal, for example. You don’t see a performance loss at the start, because games are designed for the hardware of the time, in particular whatever is most popular.

    Maxwell and Pascal had a notable trait that allowed their lower power consumption: the lack of a hardware scheduler, as Nvidia moved scheduling into the driver. This gave Nvidia more manual control of the GPU pipeline, letting their GPUs handle smaller pipelines better, compared to AMD, whose hardware scheduler had multiple pipelines that an application needed to use properly to maximize performance. It led to Maxwell/Pascal cards having better performance… until it didn’t, as devs started to thread games better, and what used to be a good trade-off for power consumption evolved into a CPU overhead problem (something Nvidia still has to this day relative to AMD). AMD’s innovations tend to be more on the hardware side of things, which makes them pretty hard to market.

    It was like AMD’s marketing for Smart Access Memory (again, a feature AMD got to first, and to this day it works slightly better on AMD systems than on others). It was a feature that was hard to market because there isn’t much of a wow factor to it, but it is an innovation.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    Which then comes down to the question of price/performance. It’s not that “DLSS is better than FSR” is wrong, but when you factor in price, some price tiers start to get funny, especially at the low end.

    For the LONGEST time, the RX 6600, which out of the box was about 15% faster than the 3050 and significantly cheaper, was still outsold by the 3050. Using DLSS to reach the performance another GPU delivers natively (meaning objectively better: no artifacts, no added latency) is where the argument of never buying a GPU without DLSS becomes weak, because in some price brackets what you could get at the same or a similar price might be significantly better.

    In terms of modern GPUs, the 4060 Ti is the one card everyone, for the most part, should avoid (unless you’re a business in China that needs GPUs for AI due to the U.S. government limiting chip sales).

    It’s sort of the same idea with RT performance too. Some people make it sound like AMD can’t do RT at all. Usually their performance is a generation behind, so in matchups like the 7900 XTX vs the 4080, value could swing towards the 4080, but in cases like the 7900 XT, which was at one point being sold for $700, its value, RT included, was significantly better than the 4070 Ti as an overall package.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    Which is what I’m saying, with the condition of course that the GPUs are priced close enough (e.g. 4060 vs 7600). But when there’s a deficiency in a card’s spec (e.g. 8GB GPUs) or a large discrepancy in price, it would usually favor the AMD card.

    It’s why the 3050 was a terribly priced GPU for the longest time, and currently the 4060 Ti is the butt of the joke; someone shouldn’t pick those over the AMD card in that price range, due to both performance and hardware deficiency (VRAM, in the case of the cheaper 4060 Ti).

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    In the case of the 8GB 4060 Ti, turning on RT pushes games past the 8GB threshold, killing performance, hence hardware deficiency does matter in some cases.

    kadu ,
    @kadu@lemmy.world avatar

    deleted_by_author

    dudewitbow ,

    Many games fixed that by adjusting asset loading on the fly, and how well that works can vary depending on how often the game has to swap. A game that still has odd issues with 8GB of VRAM is Halo Infinite, mainly because it’s hard to test: the problem arises when you reach the open-world part of the game, which takes about 30 minutes to get to. It was discussed in a HUB video a month or two ago. Models and textures like bushes start to look worse from that point on.

    Games are adjusting assets on the fly, so even though the framerate may seem “normal”, the visual quality nowadays might not be.

    vox ,
    @vox@sopuli.xyz avatar

    Yeah, if you’re severely GPU-bottlenecked the difference is IMMEDIATELY OBVIOUS, especially in menus with custom cursors (mouse smoothness while navigating menus is a night-and-day difference). In-game it’s barely noticeable until you start dropping to ~30 fps; then, again, it’s a huge difference.
