Real-Time Vis = How?
Interpret byte-code. No problem. Unwind brushes. Ok. Point-polygon collision? At the end of the day, that one isn't so hard. Physics, manageable.

Real-time vis? I don't even know how to begin to develop a plan for that.

Yet somebody has to know how this can be done; Sauerbraten did it, right? What are the fundamental concepts in developing a strategy for visibility?
(I've long wondered if vis doesn't intentionally show a little bit behind walls for WinQuake's dynamic lighting. Like a lavaball on the other side showing through. Because I'm not aware of anywhere that a dynamic light won't show through a wall, which means the server sent the rocket or lavaball to the client or a player with Quad.) 
That's a good suggestion.

One other advantage I can think of to this approach is that it should no longer be necessary to seal a map.

@Baker: start.bsp in ID1 - if you go to the normal skill hall, stand near the left-hand wall and look towards the right-hand wall: no shine-through. It's easy enough to add an r_lockpvs cvar for testing purposes. Otherwise the answer is in SV_FatPVS. 
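For reference, the heart of SV_FatPVS is just OR-ing together the PVS rows of every leaf near the view origin, so entities right at a portal edge stay visible. A minimal sketch of the merge step (names and sizes illustrative, not the exact engine code):

```c
#include <assert.h>
#include <string.h>

#define MAX_LEAFS 8192
#define PVS_BYTES ((MAX_LEAFS + 7) / 8)

/* OR one decompressed PVS row into an accumulated "fat" PVS.
   The real SV_FatPVS walks the BSP tree collecting every leaf
   within a small radius of the origin and merges their rows
   exactly like this. */
static void AddToFatPVS(unsigned char *fatpvs, const unsigned char *pvs, int numbytes)
{
    for (int i = 0; i < numbytes; i++)
        fatpvs[i] |= pvs[i];
}
```

The fattened result is what the server uses when deciding which entities to send, which is why a rocket just around a corner can still light up your wall.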
I think you'd still need a seal BSP tho. Otherwise you'd have no way of knowing what is void and what is valid game space. 
Technically you don't; you'll just have tons more leafs, faces, marksurfaces, lightmaps, clipnodes, etc. I think vis can be modified to accept leaky maps as well; it will just take a lot longer due to all the extra leafs it needs to process. 
I guess that's true ... there WOULD be void on the inside of brushes. But you'd have to basically place those vis sampler nodes he's talking about throughout the entire Quake world grid/cube since it would all be playable game space in a leaking map. 
yeah true. With this technique, the processing time scales with the total volume of non-solid space, rather than the total number of leafs. 
The point about the GPU-accelerated approach is that this shouldn't matter. It will still take longer because a lot of formerly CONTENTS_SOLID leafs will now be CONTENTS_EMPTY, but it should still be significantly faster than software vis because you're just rendering 6 views and reading back occlusion query results. The resulting visdata may also be much higher quality.

For release-quality maps sealing is of course a must, but as a development aid, faster vis times and a higher quality result should be a win. 
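The readback step of such a GPU approach could be as simple as turning per-leaf occlusion query results into PVS bits. A sketch of the CPU side (hypothetical helper; assumes one query was issued per leaf bounding box across the 6 views, with sample counts already summed):

```c
#include <string.h>

/* After rendering leaf bounding boxes with occlusion queries from a
   sample point, mark a leaf visible if any of its queries returned
   a nonzero sample count. */
static void QueriesToPVS(const unsigned samples[], int numleafs, unsigned char *pvs)
{
    memset(pvs, 0, (numleafs + 7) / 8);
    for (int i = 0; i < numleafs; i++)
        if (samples[i])
            pvs[i >> 3] |= 1 << (i & 7);
}
```

Everything hard (the actual occlusion testing) happens on the GPU; the tool only has to assemble and compress the rows.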
Why not just make a realtime raytracing engine? It's been done before (albeit with hardcore CPUs). 
If I Have Time 
I want to try making a version of light.exe that runs on the GPU (OpenCL).
I'm pretty sure it's possible; the only question is whether it will be much faster than the CPU version. 
Source Code 
Where can I find source code for modern vis tools? I'd like to learn how some of this stuff is implemented. 
You probably don't. :) Modern tools are stapled on top of the old code. And the old code will drive you to drink, trust me. 
There's no hope for this project if the code is THAT bad :-) 
He's talking about a whole new methodology ... something done at runtime. I haven't seen the game code itself so I don't know how hard it is to mod but I suspect it's been improved in the various engine code bases.

The tools ... not so much. :) 
It's A Hefty Task 
and shit, if you're re-writing engine code to vis during runtime, then you might as well make a whole new map format to boot. 
The Problem With Vis... 
...is that it really operates on too fine-grained a level for a modern renderer. Like a lot of things in Quake, it made sense for a software renderer on a lower-specced PC, where every polygon you could avoid drawing was a performance win, but with even halfway decent hardware acceleration that just goes out the window.

Some relevant notes about the Xbox 360 port of Quake 2: it just didn't bother using vis at all and still managed 60 fps with 4x MSAA at HD resolutions.

Culling of unseen polygons is also eliminated in the Xbox 360 version, deemed unnecessary due to the paltry number of triangles used per map - meaning that the entire world is drawn each and every frame.

That's fine for original content but is obviously going to fall down (badly) on some of the more brutal modern maps. But it does highlight that the really fine-grained per-leaf visibility is essentially disposable when dealing with more modern PCs than the original engines targeted.

if you're re-writing engine code to vis during runtime, then you might as well make a whole new map format to boot

This can seem to make sense on the surface, but you need to dig a little deeper. One of the reasons why BSP2 was successful is that it changed as little as possible in the format. There were discussions about what features it should have while it was being specced (and I did the original spec and implementation so I can be 100% certain about this) and it kept on coming back to making it as easy for other engine authors to implement as possible. So while it could have had features like built-in RGB lightmaps, 32-bit textures (or even a separate palette per-texture), or others, it didn't. It didn't even change the .map format so that mappers could continue using their favourite editors, and all that was required in the engine and tools was a few #defines, some new structs and a bunch of copy-and-paste code.
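To illustrate just how small those changes were: BSP2 essentially widens 16-bit indices to 32 bits, lump by lump. A sketch using the clipnode lump (struct layout after the released Quake tools source; treat it as illustrative rather than a complete spec):

```c
#include <assert.h>
#include <stdint.h>

/* Original Quake BSP clipnode: 16-bit children indices cap the
   number of clipnodes a map can have. */
typedef struct {
    int32_t planenum;
    int16_t children[2];   /* negative values are contents */
} dclipnode_t;

/* The BSP2-style equivalent simply widens the indices to 32 bits;
   structurally, that is the whole change for this lump. */
typedef struct {
    int32_t planenum;
    int32_t children[2];
} dclipnode2_t;
```

An engine mostly just needs a second set of structs like this plus a branch on the header ident when loading, which is why it was practical for so many engines to adopt.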

What's really required to make vis more efficient is to change its granularity from per-leaf to something like per-room. I have no idea what that would entail in terms of tool-work, but engine-side it could lead to better efficiency: less BSP tracing while drawing, and the ability to build static batches from world geometry. 
we have per-room vis already, in a way, if the mapper makes heavy use of func_detail.

crazy idea, maybe you can recover the "leaf clusters" in the engine if you want coarser granularity vis data for rendering. Just group all leafs together that have the same visdata? 
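That cluster recovery could be as simple as grouping leafs whose visdata rows compare byte-identical. A naive O(n²) sketch over decompressed rows (names hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Assign each leaf a cluster id such that leafs with byte-identical
   (decompressed) PVS rows share a cluster. Quadratic, but fine as a
   load-time pass for Quake-sized leaf counts. */
static int BuildClusters(const unsigned char *visdata, int numleafs,
                         int rowbytes, int *clusterof)
{
    int numclusters = 0;
    for (int i = 0; i < numleafs; i++) {
        clusterof[i] = -1;
        for (int j = 0; j < i; j++) {
            if (!memcmp(visdata + i * rowbytes,
                        visdata + j * rowbytes, rowbytes)) {
                clusterof[i] = clusterof[j];  /* same row: join cluster */
                break;
            }
        }
        if (clusterof[i] < 0)
            clusterof[i] = numclusters++;    /* new row: new cluster */
    }
    return numclusters;
}
```

The renderer could then cull and batch per cluster instead of per leaf, without any change to the map format.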
BSP2 was limited by the requirement that it needed to work with Worldcraft and a Fitzquake-derived engine, because switching editors proved to be an unpopular idea, and dropping our sympathetic newborn engine for Darkplaces or FTE seemed too heartless even to me at the time, although it would have been the right thing to do in retrospect (and it was the first thing I did afterwards).

In the big picture, BSP2 is a foul compromise but a nice thing if you want to keep the Q1BSP pipeline.

About reducing the VIS detail:

Add a compiler switch that lets the mapper disable automatic vising. Then add a new custom texture (like "trigger") that lets the mapper create portals manually.

I did a similar thing in my single-player maps (which are both very large and very detailed) when I still used FBSP and it resulted in a HUGE performance boost. Despite already using detail brushes. I inserted just enough portals to cull far away areas of the map, instead of going overboard with it like the Quake compilers do by default. Performance is then mostly limited by batching.

The performance increase was comparable to the improvement in Vis time after using func_detail.

I got the idea from looking at how Call of Duty (1) does it, since that's a Quake 3 based game with relatively large outdoor maps. Turns out they changed it completely and yes, the mapper has to manually portal the map in that game.

Quake 1 (and Quake 3) vising was developed for corridor shooters running on 90s consumer hardware. No wonder they tried to cull every little bit whenever possible. But hardware and Quake mapping have changed so much that this formerly very effective method has turned into an obstacle, and a massive obstacle at that.

It is probably less noticeable with deathmatch maps, and thus Quake 3 maps. But single player maps are bogged down by this massive amount of unnecessary info. 
Naive Musings... 
So just for laughs I flew around jam6_ericwtronyn - which is about the heaviest thing I can throw at the quake engine right now (I think) - and noted that in the most epic view I could find, I was getting around 30,000 wpoly and 70,000 epoly. I think this map is unvised, but looking at the structure of it, I can't imagine vising it would bring those polycounts down much.

Now, unless I'm missing something, those kinds of polycounts shouldn't trouble any sort of even vaguely modern hardware (didn't Doom 3 have like 150,000 polys in a typical scene in 2004?)


I get a solid 60fps with jam6_ericwtronyn on a reasonably modern laptop running Quakespasm. Does anyone here get bad performance in this map, and if so, what hardware/engine are you running it on?

Are there other factors at work that cause unvised quake maps to perform slowly that are not to do with polycount? Things like 400 monsters running LOS checks? 
I got poor performance on some maps with Mark V on my Surface Pro. My Pro can run Dark Souls, Burnout Paradise and some other decent games, so it's no slouch.

DarkPlaces and RMQ tend to give better performance on big maps 
I think it's just the old rendering code. It probably spends more time figuring out what not to draw than it would take to just draw it... 
What Warren Said 
TB just renders everything in every frame and it's fairly smooth. I'm sure with proper optimization it could be faster by a factor of two. 
Most engines were focused on features, and that was many years ago. Modern engines that drop some old-hardware compatibility can do it. IIRC it's VBOs, as mh did. 
I think it's just the old rendering code. It probably spends more time figuring out what not to draw than it would take to just draw it...

So in an engine that uses some reasonably modern rendering code, would an unvised map run quicker than the same vised map? Is this the case with Quakespasm? 
Potentially. A lot of modern games are throwing around props in scenes that have more triangles than entire Quake levels. 
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.