Gotta love MS' choice regarding ESRAM and overall wanting to go with the 'broad entertainment play' eh?
I think it will take a while (mid 2014) until X1 games, and developers more so, come to grips with and figure out how to develop around the limitations of 32MB of ESRAM.
Decent/basic video if anyone is interested in hearing the issues around the ESRAM.
Article with video from here.
Pretty much all the info we've been hearing/reading over the past few weeks. We'll see how it plays out in real games in a few weeks, and how developers learn to deal with it from there on.
The crux of it is that deferred rendering uses too much RAM.
Of course, most games don't use deferred rendering, and there are numerous ways of repacking your data into far less space. 4 buffers plus a depth buffer is possible (Killzone used 4).
Any game with a single light source/global illumination isn't really going to benefit from deferred rendering.
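To make the repacking point concrete, here's a rough sketch of what a four-target G-buffer (plus depth) can look like. The channel assignments below are purely illustrative, not Killzone's actual packing:

```cpp
// Hypothetical 4-target G-buffer, each target 32 bits per pixel.
// At 720p that's 1280 * 720 * 4 bytes = ~3.5 MB per target.
#include <cstdint>

struct GBufferPixel {
    uint32_t rt0;  // albedo RGB + specular intensity (8:8:8:8)
    uint32_t rt1;  // view-space normal XY + roughness (normal Z reconstructed)
    uint32_t rt2;  // motion vector XY (16:16) for motion blur / reprojection
    uint32_t rt3;  // light accumulation, e.g. R11G11B10 float
};
// ...plus a separate 32-bit depth/stencil target (D24S8 or D32F).
```

Repacking mostly means squeezing attributes into fewer, narrower targets like this and reconstructing whatever you can (normal Z, view-space position from depth) instead of storing it.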
It's fine to say that most games this gen didn't use deferred rendering, but pretty much all major third-party engines are starting to use it for lighting, if not for full shading as well. Frostbite 3 (Frostbite 2 on PS3 used 4 buffers and a depth buffer, like Killzone 2), CryEngine 4 (lighting at the least), Unreal 4 (removed prebaked GI as an option), Dunia 2 (potentially full shading) and AnvilNext, just to name the large ones, all use deferred techniques in one form or another. It is becoming the norm rather than the exception rather quickly.
It's funny how Killzone comes up so much when deferred rendering gets discussed :P, I'd say mainly because they go into so much detail on how the whole process works. Killzone 2 did use 4 buffers with a depth buffer, but at 720p the RAM requirements were around 36 MB. So even at 720p there would be optimisations, repacking and workarounds needed to fit Killzone 2 in the X1 ESRAM. From a technical standpoint they would probably be able to run it all off the DDR3 pool... anyway.
Shadow Fall, from that EVER-referenced pdf from the start of the year, used 5 buffers at 16-bit plus a 32-bit depth buffer. That alone pushes the buffers into the 100 MB range, while the entire render target pool is around the 800 MB range with all the shadow and smaller buffers for post effects... Hmm, really starting to see why a single large pool of fast RAM is so appealing to developers now...
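For anyone who wants to check those figures, the arithmetic is simple enough. Two assumptions on my part: the ~36 MB Killzone 2 number only falls out if the G-buffer is stored with 2x MSAA, and "5 buffers at 16-bit" means four FP16 channels per target (8 bytes per pixel):

```cpp
// Back-of-the-envelope render target sizes for the figures quoted above.
#include <cstdio>

double mb(long long w, long long h, long long bytesPerPx,
          long long targets, long long samples = 1) {
    return double(w * h * bytesPerPx * targets * samples) / (1024.0 * 1024.0);
}

int main() {
    // Killzone 2: 720p, four 32bpp colour targets + a 32-bit depth buffer.
    printf("KZ2, no MSAA: %.1f MB\n", mb(1280, 720, 4, 5));    // ~17.6 MB
    printf("KZ2, 2x MSAA: %.1f MB\n", mb(1280, 720, 4, 5, 2)); // ~35.2 MB, near the quoted ~36 MB
    // Shadow Fall: 1080p, five FP16 targets (8 bytes/px) + a 32-bit depth buffer.
    printf("Shadow Fall: %.1f MB\n",
           mb(1920, 1080, 8, 5) + mb(1920, 1080, 4, 1));       // ~87 MB before any extra buffers
}
```

Either way you slice it, the core G-buffer alone is a multiple of the 32 MB of ESRAM.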
Yes, a game with a single light source probably won't gain much from deferred rendering, though...
a) It would look like utter crap anyway, or would just be a pre-baked system with no dynamic lighting whatsoever
b) Can you name a game that has only one light?
Just a sidenote: UE4 still has pre-baked GI as an option.
Also, many games bake a lot of things for lighting even though they have deferred dynamic lighting.
E.g. all games using Frostbite.
I would bet that we will see some sort of comeback of forward rendering next generation; methods like clustered forward shading have been researched a lot.
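For reference, a minimal CPU-side sketch of the clustered idea: carve the view volume into a coarse 3D grid, bucket each light into the cells its bounding sphere overlaps, and have the forward pass shade each pixel against only its cell's light list. The grid resolution, Light struct and view volume here are all made-up illustration values:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Light { float pos[3]; float radius; };

constexpr int GX = 16, GY = 9, GZ = 24;        // cluster grid resolution (assumed)
const float kMin[3] = {-50.f, -50.f, 0.f};     // assumed view-space volume
const float kMax[3] = { 50.f,  50.f, 100.f};

std::vector<int> cells[GX * GY * GZ];          // light indices per cluster

void assignLights(const std::vector<Light>& lights) {
    const int dims[3] = {GX, GY, GZ};
    for (int i = 0; i < (int)lights.size(); ++i) {
        int lo[3], hi[3];
        for (int a = 0; a < 3; ++a) {          // range of cells the sphere can touch
            float cellSize = (kMax[a] - kMin[a]) / dims[a];
            lo[a] = std::clamp(int((lights[i].pos[a] - lights[i].radius - kMin[a]) / cellSize), 0, dims[a] - 1);
            hi[a] = std::clamp(int((lights[i].pos[a] + lights[i].radius - kMin[a]) / cellSize), 0, dims[a] - 1);
        }
        for (int z = lo[2]; z <= hi[2]; ++z)
            for (int y = lo[1]; y <= hi[1]; ++y)
                for (int x = lo[0]; x <= hi[0]; ++x)
                    cells[(z * GY + y) * GX + x].push_back(i);
    }
}

int main() {
    std::vector<Light> lights = {{{0.f, 0.f, 10.f}, 5.f}, {{20.f, 5.f, 40.f}, 8.f}};
    assignLights(lights);
    size_t total = 0;
    for (const auto& c : cells) total += c.size();
    printf("light-to-cluster assignments: %zu\n", total);
    // A forward-shaded pixel would index its cluster and loop over that list only.
}
```

The appeal for something like the X1 is that you keep forward rendering's slim framebuffer footprint while still scaling to lots of lights.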
Take the FP16 buffers in KZ2: MS can natively reproduce that in exponent + mantissa, to much greater values, in only 10 bits.
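For illustration, assuming a generic unsigned mini-float with 5 exponent and 5 mantissa bits (the layout of the 10-bit blue channel in R11G11B10_FLOAT; whether that's the exact format meant here is my assumption), the conversion from a 32-bit float looks roughly like this:

```cpp
// Rough sketch: pack a non-negative float into a 10-bit 5e5 mini-float.
// Real hardware also handles denormals, NaN and rounding properly.
#include <cstdint>
#include <cstdio>
#include <cstring>

uint16_t floatTo10(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    if (f <= 0.0f) return 0;                      // unsigned format: clamp negatives
    int exp = int((bits >> 23) & 0xFF) - 127;     // unbias the float32 exponent
    uint32_t man = (bits >> 18) & 0x1F;           // keep the top 5 mantissa bits
    exp += 15;                                    // re-bias for a 5-bit exponent
    if (exp <= 0)  return 0;                      // flush tiny values to zero
    if (exp >= 31) return uint16_t(31u << 5);     // overflow -> infinity
    return uint16_t((exp << 5) | man);
}

int main() {
    printf("1.0f -> 0x%03x\n", floatTo10(1.0f));  // exponent 15, mantissa 0
    printf("0.5f -> 0x%03x\n", floatTo10(0.5f));
}
```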
A fully DR engine's ONLY advantage is when you have multiple light sources; the more lights, the better it performs in comparison to forward rendering.
However, multiple passes via either DR or FR aren't going anywhere anytime soon. I really doubt anyone just went "well, we have no spare cycles to copy the data back to the DDR3, we're screwed, we just filled that 32MB".
Worst case scenario is you'd tile your framebuffer; it still shouldn't be an issue.
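A quick sketch of what that tiling fallback amounts to: pick a strip height whose render targets fit in the 32 MB pool, render the scene once per strip, and copy each finished strip out to DDR3. The 44 bytes/pixel G-buffer cost below is an assumed figure, roughly the Shadow Fall layout discussed earlier:

```cpp
// Hypothetical numbers: passes needed for a fat 1080p G-buffer
// when each strip must fit in a 32 MB scratch pool.
#include <cstdio>

int main() {
    const long long W = 1920, H = 1080;
    const long long bytesPerPx = 44;               // assumed: 5 FP16 targets + depth
    const long long budget = 32LL * 1024 * 1024;   // ESRAM-sized pool
    long long stripH = budget / (W * bytesPerPx);  // tallest strip that fits (397 px)
    long long strips = (H + stripH - 1) / stripH;  // ceil(1080 / 397) = 3 passes
    printf("strips of %lld px -> scene submitted %lld times\n", stripH, strips);
    // The cost is re-submitting geometry per strip, not running out of space.
}
```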
It's probably moot anyway; IW have basically stated it's the O/S-reserved GPU slices that are causing them grief.
Last edited by mynd; 11-05-2013 at 06:55.