Haha
#12322 posted by ericw on 2015/05/11 21:18:11
I have a folder full of crappy box maps like that for testing compiler features.
#12323 posted by necros on 2015/05/12 01:12:07
so that's pretty damn amazing??! so you basically end up with a .bsp, .lit and .lit2 and your engine will just ignore the .lit2 if it doesn't support it? that's awesome!
Necros
#12324 posted by ericw on 2015/05/12 01:48:04
yep, that's how it'll work! pretty excited about this.
There are some downsides though. The more surfaces use higher resolutions, the longer light.exe will take, the .lit2 file will be bigger, and rendering dynamic lights will take more CPU.
It'll work best when used selectively, but I understand the same is true of the Source engine, which has pretty much the same feature?
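To put rough numbers on those downsides, here is a back-of-envelope sketch in Python. The rounding rule mirrors Quake's extents math (one extra sample at the far edge of each axis), but the function name and exact rule are illustrative, not taken from the actual light tool:

```python
# Rough luxel-count arithmetic for a single face, illustrating how
# lightmap cost grows as the luxel size shrinks. Illustrative only.

def luxel_count(face_w, face_h, units_per_luxel):
    """Luxels needed for a face of face_w x face_h world units.
    Each axis rounds down, then adds one sample at the far edge."""
    return (face_w // units_per_luxel + 1) * (face_h // units_per_luxel + 1)

# A 128x128-unit face at Quake's default 16 units per luxel:
base  = luxel_count(128, 128, 16)   # 9 x 9  = 81 luxels
hires = luxel_count(128, 128, 4)    # 33 x 33 = 1089 luxels
print(base, hires, hires / base)    # ~13x the storage and trace work
```

Quartering the luxel size per axis multiplies both the file size and the per-luxel trace work by roughly 13x for this face, which is why ericw suggests using it selectively.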
#12325 posted by necros on 2015/05/12 04:49:46
i see you mentioned there will be a _lightmapscale setting for func_groups?
so you will be setting lightmap scale based on faces; however, would it not make more sense to set lightmap scale per light instead?
i think 90% of lights would be fine with their old resolution, but for key lights (eg: those shining through cool transparent textures) a higher resolution would be desired.
so could you not just output low res shadow onto a high res lightmap by default and then when a light specifies it, do the extra traces for the high res shadow?
#12326 posted by THERAILMCCOY on 2015/05/12 05:37:33
If you do it per light, what about cases where you want the sunlight casting strong shadows in just one particular area? You set your sun as a high res light and you end up having an enormously inflated lightmap size just for that one transparent chainlink fence texture casting shadows into the courtyard, whereas with per face you can just set the courtyard faces to be high res. As ericw said, the method he's proposing is basically the same as the Source engine, and per face works very well there in my experience.
@necros
#12327 posted by ericw on 2015/05/12 06:27:13
The way it's currently set up in the light tool, that'd be a bit awkward because the tool needs to know each face's lightmap resolution before starting the light tracing. I think you would need a whole lighting pass just to figure out what faces a "high res" light is hitting.
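For illustration, a hypothetical sketch of the pre-pass that per-light resolution would force: every face a "high res" light can reach has to be found, via visibility traces, before any luxels can be allocated for the real lighting pass. All names here are made up for the example; this is not the actual tool's code:

```python
# Hypothetical pre-pass (pass 1 of 2): fix each face's lightmap scale
# before luxel allocation and the real light tracing can begin.

def assign_resolutions(faces, lights, visible, base_scale, hires_scale):
    """Mark every face a 'high res' light can reach."""
    scales = {f: base_scale for f in faces}
    for light in lights:
        if not light.get("hires"):
            continue
        for face in faces:
            if visible(light, face):   # a full visibility trace per pair
                scales[face] = hires_scale
    return scales

faces = ["floor", "ceiling"]
lights = [{"name": "torch", "hires": True}]
# Stub visibility function: the torch only reaches the floor.
scales = assign_resolutions(faces, lights,
                            lambda l, f: f == "floor", 16, 4)
print(scales)  # {'floor': 4, 'ceiling': 16}
```

The pre-pass costs roughly as much tracing as lighting itself, which is the awkwardness ericw describes.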
Global Setting
#12328 posted by DaZ on 2015/05/12 10:16:24
A global lightmap scale setting would be very nice also.
Lol @Daz
"i basically want the global lightmap to be the highest"
#12330 posted by JneeraZ on 2015/05/12 11:35:40
I have to ask ... why ever opt for standard lightmaps? It isn't like machines today can't handle it or that memory is scarce. Jack that bastard up and let's roll! :)
What Warren Said
#12331 posted by DaZ on 2015/05/12 11:44:29
I Guess
ericw is doing .lit2 as an external file to allow other engines that don't support it to play the map, albeit at lower res. I think this is a good thing.
Also,
I find it amusing that you can have real-time lighting and shadows etc. in DarkPlaces (and I guess you could make some really nice maps using this) and yet it seems the community is still in love with baked lighting :P
#12334 posted by JneeraZ on 2015/05/12 12:51:13
Real time lighting without bounce isn't all that useful. Stencil shadows only go so far...
#12335 posted by Lunaran on 2015/05/12 18:14:27
"would it not make more sense to set lightmap scale per light instead?"
you're right, this would, in fact, not make more sense. you're picturing only the use case where you put a torch in a little lantern cage thing to throw those neat lines, and you'd rather just tag the light to upres every surface it touches. this sounds like a recipe for winding up with tons of faces with far higher resolution than they need.
I'd be just as excited about selectively turning lightmap resolution DOWN, in areas where high ceilings are shrouded in darkness or on the backs of pillars or what have you where there isn't a lot of variation or contrast in the lighting that actually hits that face. I'm a pretty obsessive optimizer though.
Having control only per brush and not per face is a bit of a shame, but I understand it's a .map format limitation. Maybe we need a new, extensible .map format first :)
#12336 posted by JneeraZ on 2015/05/12 18:29:12
"this sounds like a recipe for winding up with tons of faces with far higher resolution than they need. "
But again, who does this affect? What machine is having a problem loading Quake levels because of texture/lightmap memory?
Turn that dial to 11, let's go! :)
"What machine is having a problem loading Quake levels because of texture/lightmap memory?"
Lots of them were until the recent version of QS.
There's a fallacy here that a 1996 engine can have no bottlenecks in the year 2015 - it absolutely can.
#12338 posted by JneeraZ on 2015/05/12 19:00:47
Because of texture memory? Not other limitations?
Some maps slow my Surface down
#12340 posted by JneeraZ on 2015/05/12 19:10:09
:-|
To Be Honest
#12341 posted by Kinn on 2015/05/12 19:16:55
I'm kinda with WarrenM here.
If the whole map runs fine with increased lightmap resolution, I think I'd just go with that, rather than spending a ton of time setting up funcs all over the place where detailed shadows appear.
Hmmmm
#12342 posted by Kinn on 2015/05/12 19:23:03
Brainfart really
let's say the compiler does a hi-res lighting pass. After this pass, could you then do something a bit clever like automatically detect surfaces that have uniform or near uniform lighting and automatically downscale the lightmap resolution there as appropriate?
#12343 posted by Spike on 2015/05/12 19:31:23
@WarrenM, if you compile a box room and whack the lightmap scaling to 0.0625 (1:1), then fire a rocket, your framerate WILL plummet (you can use rtlight-style lighting to avoid any slowdown, or just use gl-style flashblends instead, in any current engine including vanilla... assuming you were getting those slowdowns from crazy lightmap resolutions in vanilla anyway).
throw in a load of flickering lightstyles and your framerate will stutter 10 times a second (this can be worked around like rmqe already does, although this is not without its own cost).
high lightmap resolutions are NOT without their cost, and using a scale lower than 0.25 (4 texels to each luxel) is probably going to be abusive unless constrained to specific surfaces. 0.25 should generally be okay though.
For reference, 0.0625 gives 176 MB for just vanilla start.bsp. There is no way around the load times, if nothing else. Using that resolution on every single surface a light might hit is just impractical, but you can get away with it if it's just one or two surfaces.
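That 176 MB figure is consistent with square-law scaling: shrinking the luxel by 1/scale per axis multiplies the luxel count, and hence the memory, by 1/scale². A quick check, taking 176 MB at scale 0.0625 as the given data point (the function and variable names are illustrative, not from any tool):

```python
# Back-of-envelope check of the square-law scaling described above.
# Scale 1.0 = Quake's default 16 units per luxel; 0.0625 = 1:1.

def lightmap_bytes(base_bytes, scale):
    """Approximate lightmap size when every surface uses `scale`."""
    return base_bytes / (scale * scale)

# Infer the default-scale size from the 176 MB @ 0.0625 data point:
base = 176 * 1024 * 1024 * (0.0625 ** 2)
print(base / 1024)                           # ~704 KB at default scale
print(lightmap_bytes(base, 0.25) / 2**20)    # ~11 MB at 0.25: usable
print(lightmap_bytes(base, 0.0625) / 2**20)  # 176 MB at 1:1: impractical
```

A 256x blowup from default to 1:1 is why the 0.25 floor Spike suggests matters.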
@necros, typically you want the lightmap resolution concentrated behind the fence texture (usually the ground). spreading it around the entire light is going to be wasteful. see my remark about <0.25 being abusive.
it would be nice to auto-detect the surfaces that got light applied to them through a fence texture, or areas with harsh shadows, possibly by just calculating it at a high res and then reducing the resolution if it doesn't have many harsh boundaries on it, but this is likely to result in either every surface being high-res or significant glitches between high-res and low-res surfaces.
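The detect-and-downscale idea Kinn and Spike describe can be sketched simply: light at high resolution first, then flag faces whose lightmap has no harsh steps between neighbouring luxels as candidates for a lower resolution. This is a toy smoothness test on an assumed row-major grayscale layout, not anything from the actual tools:

```python
# Toy smoothness test for the auto-downscale idea: a face whose lightmap
# has no harsh luxel-to-luxel steps can be stored at lower resolution.

def max_neighbor_delta(lightmap, w, h):
    """Largest brightness step between adjacent luxels (row-major)."""
    delta = 0
    for y in range(h):
        for x in range(w):
            v = lightmap[y * w + x]
            if x + 1 < w:
                delta = max(delta, abs(v - lightmap[y * w + x + 1]))
            if y + 1 < h:
                delta = max(delta, abs(v - lightmap[(y + 1) * w + x]))
    return delta

def can_downscale(lightmap, w, h, threshold=8):
    """True when the face is smooth enough to store at lower res."""
    return max_neighbor_delta(lightmap, w, h) <= threshold

flat  = [100, 102, 101, 103]        # 2x2, near-uniform: downscale
fence = [0, 255, 255, 0]            # 2x2, hard shadow edge: keep hi-res
print(can_downscale(flat, 2, 2))    # True
print(can_downscale(fence, 2, 2))   # False
```

The threshold is exactly where Spike's worry bites: set it low and everything stays high-res; set it high and you get visible seams between high-res and low-res neighbours.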
@ericw, presumably the "black" texture should have its lightmap scale set to a low resolution by default, as it shouldn't normally be significant anyway (really this should be surfaces that have all texels set to 0).
#12344 posted by JneeraZ on 2015/05/12 19:36:09
Maybe it could use the standard res lightmaps for dynamic lighting effects ... like, overlay them on the high res lightmaps. Odds are you wouldn't even notice.
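A sketch of that overlay idea: keep the dynamic-light contribution at standard resolution, nearest-neighbour upsample it, and add it on top of the high-res static lightmap. The data layout and names are assumed for the example, not taken from any engine:

```python
# Overlay a standard-res dynamic light block onto a high-res static
# lightmap via nearest-neighbour upsampling. Illustrative sketch only.

def upsample(lowres, w, h, factor):
    """Nearest-neighbour upsample of a row-major grayscale block."""
    out = []
    for y in range(h * factor):
        for x in range(w * factor):
            out.append(lowres[(y // factor) * w + (x // factor)])
    return out

def composite(static_hi, dynamic_lo, w, h, factor):
    """Add the upsampled dynamic contribution, clamped to 255."""
    dyn = upsample(dynamic_lo, w, h, factor)
    return [min(255, s + d) for s, d in zip(static_hi, dyn)]

static_hi  = [10] * 16         # 4x4 high-res static lightmap
dynamic_lo = [0, 100, 0, 0]    # 2x2 rocket flash at standard res
print(composite(static_hi, dynamic_lo, 2, 2, 2))
```

The dynamic update then only touches the small standard-res block each frame, which sidesteps the relight cost Spike describes, at the price of blocky dynamic shadows.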
#12345 posted by ericw on 2015/05/12 21:30:23
Kinn, I was thinking the same thing, automatically lowering the resolution when it won't be visible would be cool. Just need to try coding it.