how many entities is the engine able to handle at the same time? would an extreme horde mode (1000+) be possible with the engine?
(I also had the idea of 4 light styles = 4 channels in an RGBA texture, but this breaks down if you want colored lighting.)
In my top secret guake-not-guake project I do exactly this, and the light colour is implicit in the lightstyle. So, colour is tied to style, which is less freedom, but if you're designing a game from scratch around the idea, it's not a problem.
It works with 3 textures though, one for each "real" colour channel.
#24 - Absolutely
I was just trying to be economical. For my purposes it works well.
Looks like r_shadows does not work in this port, and that's intended; it looks too "hacky" to the author. Future builds will have the cvar removed.
it would be really cool if this port had dynamic shadows like those from the remastered quake :)
I have to admit that I don't miss the QS shadows as much as I thought I would. Though I'd love to see the dynamic shadows from KexQuake. I've no idea how possible that'd be but I'm still crossing my fingers as Makro seems to be open to suggestions and is obviously very talented.
Since I first discovered this engine, it's become my default. Finally, I can justify my stupidly expensive GPU!
I also thought: "Crap, no shadows? Now I have to switch back to other Quakespasm builds..."
However, it really turns out it's a rather minor visual feature you can easily live without.
Vanilla Q1 didn't have shadows, either (IIRC). More often than not, shadows also get glitched, so in the end, it's probably not the worst idea to ignore them entirely.
Yes, there were no shadows in the original DOS/WinQuake version. They were added only in GLQuake, most likely because they were a quick thing to implement.
Even Quake 2, which was designed with graphics accelerator cards in mind, had no shadows (I'm sure they can be enabled, but by default they aren't).
And Quake 3 Arena just had a dark flat blob underneath the character.
Q3A *did* have volumetric shadows, but rather as an experimental feature. They would appear in totally wrong places, and it wasn't until Doom 3 that id made them work as intended.
I hacked together a Q1 version of the Q3A shadows, and Baker used it in MarkV. It looks good enough a lot of the time, certainly better than stock GLQuake shadows. A different set of tradeoffs regarding the areas where it breaks down, though.
Ironwail 0.4.0 Is Out
I think this
"changed r_softemu 3 mode to replicate the alias model UV distortion from the original software renderer"
would be nice as an option. Also, if you're going to do UV distortion for the weapons, then do it for the textures on the walls too, as I'm sure this would be closer to the software renderer.
The software renderer did perspective correction on wall and other surface textures, IIRC, every 16 texels by default, with the number of texels controllable by a cvar. Just naively disabling perspective correction for these is actually further from the software renderer.
These surfaces are also quite large compared to MDL triangles, so disabling perspective correction would look like utter shite. MDLs don't look so bad because their triangles are small.
You're misremembering here, MH, so I have 2 images below: one from software Quake, which demonstrates the lack of accuracy, and one in IW, which is very accurate.
As you can see, the texture projection on the model isn't accurate in the IW one either (and it's at 2K res, which is my monitor's native res).
I purposely chose the outdoor area in DM3, as the trim texture shows the reduced accuracy quite well. It's not as bad as PS1-style graphics, but it's certainly not accurate. My point is that if it's going to be less accurate on the model, then why not do it on the bsp textures too?
I'm actually remembering very well; like mipmaps, perspective correction is something that later Quake "lore" has forgotten was actually in the original software engine.
Here's the relevant section from TECHINFO.TXT, sourced from https://github.com/id-Software/Quake/blob/bf4ac424ce754894ac8f1dae6a3981954bc9852d/WinQuake/data/TECHINFO.TXT#L412
Higher-quality perspective texture mapping
For maximum speed, perspective correction is performed only every 16
pixels. This is normally fine, but it is possible to see texture ripples
in surfaces that are viewed at sharp angles. For more precise texture
mapping, set the console variable d_subdiv16 to 0. Doing this will result
in somewhat slower performance, however, and the difference in visual
quality will not normally be noticeable.
This is what brush surfaces look like with perspective correct texturing completely disabled, which is the point: it's not a tolerable small inaccuracy, like on MDLs (because they have small triangles), it's huge.
Start map: https://www.quaketastic.com/files/screen_shots/noperspective_1.jpg
(These may be a little dark so boost your brightness if required.)
That's why software Quake did do perspective correct texturing: because it looks shite without. But it didn't do it every texel, because that would have been too slow, so it did it every 16 (or 8, with d_subdiv16 0) texels and linearly interpolated between. That's why you can still see small inaccuracies.
I'm not saying that would be impossible with a hardware accelerated renderer, but we're talking about something that's still in fixed-function hardware, so you'd need to emulate it. One way might be to divide all surfaces into 16x16 blocks.
Just to demonstrate that I'm not cherry-picking views that make it look bad, here's the same from DM3 again:
The other point, which is not obvious from screenshots, is that this distortion moves as the player moves. That makes sense, as the player's perspective changes with movement. It's not a cool "trippy" effect that might be suitable for a powerup, either; it's really, really bad, with angles, hard lines, blurs and smears.
Some late-coming additional information on this, from Mike Abrash's Graphics Programming Black Book, specifically from the chapters covering Quake's development:
Early on, we decided to allow lower drawing quality for triangle models than for the world, in the interests of speed. For example, the triangles in the models are small, and usually distant - and generally part of a quickly moving monster that’s trying its best to do you in - so the quality benefits of perspective texture mapping would add little value. Consequently, we chose to draw the triangles with affine texture mapping, avoiding the work required for perspective. Mind you, the models are perspective-correct at the vertices; it’s just the pixels between the vertices that suffer slight warping.
That all confirms that it was a performance/quality tradeoff rather than a deliberate aesthetic choice. It also confirms the point I made above about the smaller triangles in MDLs.
"That all confirms that it was a performance/quality tradeoff rather than a deliberate aesthetic choice."
Obviously. I doubt anyone would argue against that.
mh is right on this one. The only detail he got wrong is that it's every 16 pixels (screen space), not texels (texture space).
Additional info: the perspective correction in Quake's software rasterizer skips columns, but not rows. When the depth remains the same across a line (e.g. on floor polygons), the rendering is 100% perspective correct.
Also, because it skips a fixed number of columns, the perspective interpolation distortion looks worse at low resolutions. Skipping 16 pixels on a 320x240 screen means the perspective is only corrected about 20 times horizontally. In full HD, considering the same visible frustum (1440x1080), the perspective correction is performed 90 times horizontally, resulting in a more accurate image.
The huge distortions that happen when disabling perspective correction on BSP surfaces are a fact. I've tried skipping perspective correction in many different ways to optimize the rasterizer, but it can't live without it.
The big question is whether it's possible to make a filter that accurately reproduces the limitations of Quake's BSP polygon perspective correction. In my opinion, it's not worth it.
So I was right then. Quake was wobbly lol
It'd be neat if it could be simulated in that mode that tries to simulate software.
I'd say that BSP surfaces are wavy, not wobbly, because their texture mapping distortion is conditioned by a frequency unrelated to the polygon, instead of being conditioned by the polygon's edges.
While MDL textures get distorted in a very uniform way, BSP textures get distorted in a series of ripples.
MDL textures get increasingly distorted the closer they are to the screen, because their triangles' edges get farther apart. But BSP textures get less distorted the closer they are to the screen, because the amplitude of the ripples is more diluted across higher frequencies. Both kinds of polygons can be distorted, but their distortions behave in pretty opposite ways.