Coding Help
This is a counterpart to the "Mapping Help" thread. If you need help with QuakeC coding, or questions about how to do some engine modification, this is the place for you! We've got a few coders here on the forum and hopefully someone knows the answer.
It's Time For Q1 Support To Begin! 
Alrighty, Quake 3 lightmap support is done. The colors aren't as vibrant as they should be because Unity's RGB lightmap shaders are lame, but the data is ripped and applied correctly. I also replace shader-modified textures with non-shader versions at runtime. Like on the strange flesh-spire and lava here.

http://imgur.com/tO3SSUP

But like the title says, it's time for Quake 1 support. The most comprehensive guide to the Q1 .bsp specs is here:

http://www.gamers.org/dEngine/quake/spec/quake-spec34/qkspec_4.htm#CBSPG

Is this still current? It's really old, and says it matches the .bsp version used in Quake shareware. Is there a more detailed or updated guide on the .bsps produced by modern compilers? 
Q3 Lightmaps. 
q3 has overbrighting. the lightmap scales between logical rgb values of 0 to 4 rather than 0 to 1. The lazy way to deal with that is to just multiply the values by 4 and clamp to 255. The real way to deal with it is to scale your vertex colours by 4 instead, or to put the same scaling in glsl.
This nonsense allows bright areas to oversaturate textures, so textures that are grey can brighten up towards white in well-lit areas.
Software vanilla Quake has a similar feature. vanilla glquake just clamps.
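e.g. the lazy clamp version, assuming raw 0-255 bytes straight out of the lightmap lump (the helper name is made up):

unsigned char overbright_clamp(unsigned char c)
{
    int v = c * 4;                              /* scale into the 0..4 logical range */
    return (unsigned char)(v > 255 ? 255 : v);  /* clamp back to a displayable 0..255 */
}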

Check some engine's bspfile.h, like that markv_bsp2 zip I hacked together recently if you're after bsp2 support. Otherwise bsp29 is still the same bsp29 that's documented in your link. 
Triangles? 
Was studying the BSP29 document some, and I realize that nowhere is there a list of triangles for any of the faces. Is there something I'm not seeing, or does it fall to me to figure that out using sorcery? 
I Don't Think Bsp29 
Works like that. It's the .map file that has that info and that is then compiled into a scrambled file? 
Process 
When I was reading the Q3 .bsp into Unity my process was like this.

Take a face. A face has a list of vertices, and a list of triangle indexes into that face's vertex list. So to make a mesh out of a face, all I had to do was make a mesh and set its vertex and triangle arrays to the data I pulled out of the face. Each vertex object has data about its position, texture coords, color, and lightmap coords. The whole process was actually pretty slick.
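Roughly like this (field names follow the unofficial Q3 BSP docs; this is just a sketch of the idea, not my actual Unity code):

typedef struct { int vertex, n_vertexes, meshvert, n_meshverts; } q3face_t;

/* meshvert entries are offsets relative to the face's first vertex, so the
   triangle index list for one face is just that slice of the meshvert lump */
void face_triangles(const q3face_t *f, const int *meshverts, int *out_indices)
{
    for (int i = 0; i < f->n_meshverts; i++)
        out_indices[i] = meshverts[f->meshvert + i];
    /* the matching vertices are verts[f->vertex .. f->vertex + f->n_vertexes - 1] */
}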

In Quake 1 it seems like every surface has a list of edges, and each edge has two vertices. Using some simple rules and maths, getting a list of verts and the texture mapping info for those verts doesn't seem too hard. So I make a mesh and add the verts and texture coords. But how do I form triangles? In Quake 3 that data was provided for me, but here it's not.

Is each face only going to be three verts? That doesn't seem right. Are the faces always going to specify their verts in the order needed to form triangles out of every three verts? 
 
Noesis can load bsp files, so you might be able to find some readable code in there. http://oasis.xentax.com 
ALLCAPS: 
you have to triangulate it yourself, I think. All the faces are convex polygons, so it shouldn't be too hard to triangulate them. The only downside is there may be some degenerate triangles due to extra verts added for t-junctions. I guess you should keep these because otherwise you may get visible cracks. 
 
Dang, really? The renderer back in the day mashed out the tris for the whole map each time it loaded? That's unreal. 
 
actually the software renderer rasterized polygons directly, so it never needed triangles. Later OpenGL versions would use the GL_POLYGON primitive type, so they didn't need triangles either. 
Well... 
I guess since every poly is going to be convex I can just ham-hand it and make tris using a pivot point and casting to each other vertex. 0-1-2, 0-2-3, 0-3-4, etc. It's the least efficient way possible, but I guess it's sure to cover the full face. Come to think of it, when I use r_showtris in some engines it looks like that's what it's doing. 
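Something like this, I'm guessing (just a sketch, names are mine):

/* fan-triangulate a convex polygon: pivot on vertex 0 and emit
   0-1-2, 0-2-3, 0-3-4, ... for a total of nverts - 2 triangles */
int fan_triangulate(int nverts, int *out_indices)
{
    int n = 0;
    for (int i = 1; i + 1 < nverts; i++) {
        out_indices[n++] = 0;
        out_indices[n++] = i;
        out_indices[n++] = i + 1;
    }
    return n / 3;   /* number of triangles written */
}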
Efficiency 
How is this "least efficient"? I don't see a more efficient way to generate triangles from convex polygons. 
 
Honestly I just assumed it was the least efficient/elegant way since it was the first solution I thought of. Probably because it's simple. I just figured there'd be a more complex, "better" way. 
 
each edge has two sides. pick one vertex from each edge based upon which side of the edge you're using. the side is determined by whether the edge index is negative or not. this will get you a convex polygon (aka a triangle fan). more modern renderers can trivially generate triangles from that.
the whole thing is just triangle fans. software kept it like that because it's easier to clip+frustum cull, which is part of how it managed to avoid all overdraw from bsps.
remember, these face polygons are never concave. there are no holes or anything. it's pretty trivial because of that. 
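something like this, roughly (names follow the usual bspfile.h convention, the helper itself is just a sketch):

typedef struct { unsigned short v[2]; } dedge_t;

/* surfedge entries are signed indices into the edge lump; a negative value
   means the edge runs backwards for this face, so take its second vertex */
void face_winding(int firstedge, int numedges,
                  const int *surfedges, const dedge_t *edges, int *out_verts)
{
    for (int i = 0; i < numedges; i++) {
        int se = surfedges[firstedge + i];
        out_verts[i] = (se >= 0) ? edges[se].v[0] : edges[-se].v[1];
    }
}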
Efficiency Of Triangle Fans 
Strictly speaking there may be an inefficiency there in rendering those triangle fans. According to a half-remembered article I read a while back, long, thin triangles are slightly less efficient to render than evenly proportioned ones. I'd hazard a guess that's due to better cache-coherence properties of the latter.

I wouldn't worry about it though, primarily because it's a tiny difference. Also because almost all the polygons in quake maps will be 6 sides or less, so there's not a great deal of difference between the best and worst choices.
 
 
This is where I've had the nagging thought that modern engines could probably just create large, texture-sorted buckets of triangles representing the entire level and throw them at the video card. Odds are that on even a mid-level machine, it would run fine.

Keep the BSP for collision checks and line-of-sight stuff, but in terms of raw rendering I wonder if parsing through the VIS data is actually a detriment these days. 
You Know... 
Brute force winning over the elegance of a BSP traversal algorithm always felt like an aesthetic unfairness to me.

Not that I mind scrapping the vis times. 
Almost Reads Like A Carmack Tweet 
I need to get some sleep 
BSP 
Remember that visibility tests aren't used exclusively by rendering - they're also used by the AI as the first step of deciding if the player can be seen or not. So you will take a double hit on performance as every monster in the level starts doing traceline visibility tests every frame.

Of course, it's pretty easy to test the hypothesis that vis is unnecessary - just build a map, saving a copy of the un-vised BSP file as you go, then compare performance across the two files. You could even run vis at all the different levels and have multiple points of data. I don't know if there's any hard data on how much of an fps gain you get from making vis more accurate; it might be that the highest levels don't provide a good return on investment. 
TrenchBroom Does The Brute Force Thing 
It just throws texture-sorted triangles at the GPU, and it's pretty fast. The only drawback is that you have to reupload a lot of data to the GPU when the user changes the geometry, but since usually only a few brushes are selected at a time, just uploading the selected geometry is fast enough. 
 
Preach - You'd need an engine change to test it properly. If it's traversing the BSP tree to gather up the triangles every frame then it's still eating overhead that a bulk renderer wouldn't have.

I wonder about the hit there though, on your monster example. Line traces in a BSP are extremely fast. But pre-computed VIS data is always going to be faster there. The problem is that it would take a lot of work and engine coding to come up with a definitive test. :P

Sleep - That's what ToeTag did as well. I think there was a basic cull for stuff entirely behind you, but that was about it. Everything else got chucked at the card, sorted by texture. I never really saw it slow down at all... 
UQuake Performance 
uQuake leaves all that up to Unity to handle; it discards the bsp tree itself, the vis data, nodes, planes, all that jazz. Unity decides per GameObject what should be rendered and what shouldn't, and with each face/patch its own GameObject it can cull invisible regions and polys very well. In essence I'm throwing the entire level at the engine at once, but the engine is good at throwing only the parts that can be seen right now at the video card. It works well even on an Ouya/HP Touchpad, which are Android devices with less than amazing GPUs. The Ouya version of uQuake even renders an empty level faster than the proper port of OpenArena, which I'm pretty sure uses the vis data and related lumps to do traditional bsp stuff, as it's a source port.

I am hitting some issues with parsing Quake 1 .bsp files, though. Perhaps my understanding of the variable types and sizes is not correct, but the file specs I linked above say that a face is thus:

u_short plane_id;
u_short side;
long ledge_id;
u_short ledge_num;
etc...
u_char light[2];
long lightmap;

Is an unsigned short int not two bytes? Is a long not eight bytes, and a char one? I try to read the data out using that assumption and I get garbage. Looking at the offset where faces start, it looks like either the specs are not right or a short is one byte. I did notice that the BSP version in the file is 29 while the spec here is for version 28. 
.h 
Comparing with Quakespasm's structs in bspfile.h, it looks like they are different. The doc at gamers.org shows a different number of fields than the struct in Quakespasm, with different types as well. 
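For reference, here is roughly what a 20-byte v29 face entry looks like with fixed-width types (names follow Quakespasm's bspfile.h; the spec's "long" is a 32-bit int here, not eight bytes):

#include <stdint.h>

typedef struct {
    int16_t  planenum;
    int16_t  side;
    int32_t  firstedge;   /* index into the surfedge lump */
    int16_t  numedges;
    int16_t  texinfo;
    uint8_t  styles[4];   /* the spec's typelight/baselight/light[2] */
    int32_t  lightofs;    /* byte offset into the lighting lump, -1 = no lightmap */
} dface_t;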
Unity 
and with each face/patch its own GameObject

Seriously? o_O And Unity handles that ok?

I've been doing similar stuff (parsing a doom3 .proc file into unity meshes) - the great thing about .proc files is that the surfaces are already grouped into the areas that are created by doom 3's visportals. I typically create just a single combined mesh for each of these areas. If I want more granularity to the meshes I'll just stick in more visportals. 
Oh Yeah 
Unity can handle a ton (10k+) of simple gameobjects without issue. I'm not sure how well it'd work if each gameobject had logic/tags/scripts attached, but just using them to render meshes has almost no overhead. I'm pretty sure Unity groups batches of meshes into drawcalls anyway. I've loaded some Return to Castle Wolfenstein SP levels that ended up being about 10k gameobjects, and only got slowdown if I selected worldspawn (so, every single face in the level) in the editor with the game unpaused. Rendering the level was still working great. 
Hmmm 
I guess unity has ways of optimising static gameobjects - I'll have to find out more about what it's actually doing.

On another note - how are you handling the map collision? 