Quakespasm Engine
This engine needs its own thread.

Feedback: I like the OS X version, but I have to start it from the terminal for it to work; I can't just double-click it like a traditional OS X app. I'm sure you guys already know this; either way, great engine.

http://quakespasm.sourceforge.net/
@esrael 
Yeah, most people don't know (nor care) about the implications of music playback on all platforms. I did some engine modding on a PlayStation Portable Quake port. It had a hardware MP3 decoder and also a software decoder, libmad. The hardware decoder ran like lightning. Libmad software decoding ran like total foobar at 19 fps.

I try to study the proper way to do things and know exactly why I am doing them.

I have literally watched Quake engines crash and burn that did not know why they were doing what they were doing.

And what's funny is that the common factor tended to be doing "what most people want" (at the time!) against long-term "health", without understanding that "what most people want" is fickle and subject to change at a moment's notice. 
Palette Wank Incoming... 
Poorchop, big crunky piskels is one thing, and I'm somewhat of a fan of that, but for me by far the #1 thing that gets the "WinQuake look" is if the lightmapped textures are drawn via the quake colormap, to get the proper 8-bit palette colours.

Now, FTE does this accurately with its r_softwarebanding option. I don't know of any other GL engine that does it properly, but FTE does. Of course, software engines all do it, but I also quite like having double digit framerates in modern maps, so a GL engine is the only option for me.
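
(For anyone who hasn't poked at the software renderer's guts, the whole effect boils down to one table lookup. A rough sketch of the idea, not code lifted from any actual engine:

#include <stdint.h>

typedef uint8_t byte;

/* colormap.lmp: 64 light levels x 256 entries, each entry a palette index */
extern const byte colormap[64 * 256];

/* Shade an 8-bit texel at a given light level (0 = brightest, 63 = darkest).
   The result is itself a palette index, so every lit pixel lands exactly on
   one of the 256 Quake colours - that's where the banding comes from. */
static byte ShadeTexel (byte texel, int lightlevel)
{
	return colormap[(lightlevel << 8) + texel];
}

Because the output can only ever be one of the 256 palette colours, smooth lighting gradients quantise into discrete bands instead of blending away.)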

With r_softwarebanding on, just firing up start.bsp and looking up at those wooden boarded ceilings, letting myself melt into those rich chocolatey shadows, is enough to give me a proper Quake boner.

A lot of people instinctively dismiss the idea of Quake palettisation in the modern age of coloured lighting and fog, and indeed, if you just did the palettisation as a post-process effect, then yes, it looks like total arse because of that...

However, FTE neatly avoids this by just doing it on the Quakey bits, and then the coloured lighting and fog just tints the colours "on top of" all that, and it just works and looks absolutely great. Best of both worlds.

(As well as a reply to Poorchop, this post is also a thinly-disguised plea to ericw to do FTE-style r_softwarebanding in QS, but I think I've been fairly subtle about it, mwahahah). 
 
It's not difficult to do, just that it absolutely needs fragment shaders, which might be a step too far for some. 
 
I used to prefer WinQuake's distorted colors too, until playing enough maps with custom textures to realize that it looks like ass in way too many cases.

It's only aesthetically safe with vanilla Quake textures, because they were carefully designed for it.

Try playing maps such as SEWAGE.bsp, nar_cat.bsp or dwmtf.bsp in a standard software renderer. Instant vomit. 
#3344 
Yeah, custom textures that rely too heavily on the terrible grey line will look like toss in software mode.

But I think it really makes well-designed textures come alive. The default GL-style lighting makes it all look a bit flat and sterile in comparison. 
 
Yeah, when designing textures for Quake's software renderer, there needs to be a lot of noise and roughness; grimy/crusty materials look best. Wide areas of a single color look really bandy and bad (but are fine in OpenGL) -- so you have a generation of custom textures made for GLQuake where people never even looked at them in software mode.

Also, the Voodoo cards which everyone had in 1997/98 to play GLQuake had their own dithering/grittiness/muddiness that actually enhanced the look of Quake. 
 
To come back to the pixel-scaling thing for a sec, in Quakespasm you can try different values for r_scale. I don't remember off the top of my head if this is also reflected in one of the GUI menus. 
 
@Poorchop:
I tried setting r_scale to a higher number, but that just makes everything look really blurry instead of crispy and chunky, and I can't find any other scaling settings apart from scaling the HUD.
This should make QS render to a half-sized framebuffer (for r_scale 2) and scale it up with nearest interpolation, so each original pixel becomes a 2x2 square. There shouldn't be blurring in the upscaling. If it is actually blurring the pixels together, mind posting your system info / screenshot?
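
(If it helps picture it: one way to do that upscale is an FBO blit - a sketch, not necessarily the exact calls QS makes internally; scene_fbo / width / height are stand-in names:

glBindFramebuffer (GL_READ_FRAMEBUFFER, scene_fbo);	/* half-res scene */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);	/* the window's framebuffer */
glBlitFramebuffer (0, 0, width / 2, height / 2,	/* source rect */
                   0, 0, width, height,	/* dest rect */
                   GL_COLOR_BUFFER_BIT, GL_NEAREST);	/* nearest = hard 2x2 blocks */

The GL_NEAREST filter in the blit is what should keep each pixel a crisp square rather than smearing neighbours together.)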

It's not difficult to do, just that it absolutely needs fragment shaders, which might be a step too far for some.
QS 0.93's world faces and alias models go through fragment shaders, so just water, sprites, particles, sky, and anything else I'm forgetting use fixed-function at the moment.

I do want to implement this at some point! (I tried a quick hack a year or so ago that postprocesses the final 32-bit rendered frame: I made a lookup table from rgb565 to the nearest Quake palette color, then fed in each 32-bit pixel, decimated it to 16-bit (565), looked up the nearest Quake color and output that.) As Kinn was saying, palettizing a 32-bit rendered image that has fog already baked in tends to look like mud, so this needs to go into the fragment shader before fog is added.

The faster / more natural / software-faithful way of doing it is to ignore colored lighting and use colormap.lmp just like software does, having the fragment shader read the textures as 8-bit. Another option is to support colored lighting by blending with the lightmap in 32-bit, then using the rgb565 lookup table to convert that to paletted. 
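
(To make the lookup-table part concrete, building it could look something like this - just a sketch with made-up names; quake_palette stands in for the 256 RGB triples from palette.lmp:

#include <limits.h>
#include <stdint.h>

typedef uint8_t byte;

extern const byte quake_palette[256][3];	/* palette.lmp as RGB triples */

static byte rgb565_to_palette[65536];	/* 64k entries, built once at startup */

static void BuildPaletteLUT (void)
{
	int c, i;
	for (c = 0; c < 65536; c++)
	{
		/* expand 5:6:5 back to 8 bits per channel */
		int r = ((c >> 11) & 31) * 255 / 31;
		int g = ((c >> 5) & 63) * 255 / 63;
		int b = (c & 31) * 255 / 31;
		int best = 0, bestdist = INT_MAX;

		/* nearest palette entry by squared RGB distance */
		for (i = 0; i < 256; i++)
		{
			int dr = r - quake_palette[i][0];
			int dg = g - quake_palette[i][1];
			int db = b - quake_palette[i][2];
			int dist = dr*dr + dg*dg + db*db;
			if (dist < bestdist)
			{
				bestdist = dist;
				best = i;
			}
		}
		rgb565_to_palette[c] = (byte)best;
	}
}

Decimating to 565 first is what keeps the table at 64k entries instead of 16M for full 24-bit color.)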
 
Oops you actually mentioned r_scale!

Hmm, I haven't had any blurriness issues with that, assuming gl_texturemode is set appropriately. 
Wrong Word Choice 
Maybe saying that it looks blurry was the wrong choice of words, because it does appear to be doing exactly what you mentioned. It just doesn't look like upscaled WinQuake, which is what I was kind of hoping for. I guess I went with blurry because when moving around, it kind of looks like what you would see if you were playing in biting cold wind and your eyes were watering.

Here is r_scale 1:
https://i.imgur.com/jJBlmWr.jpg

and here is r_scale 5 (went with a much higher value to better demonstrate what I'm talking about):
https://i.imgur.com/UUCZLV0.png

The few rows of pixels going across the screen around where the shotgun model ends exemplify what I mean when I say it looks blurry. This is especially true when moving.

The only pertinent visual setting that I'm using is gl_texturemode GL_NEAREST_MIPMAP_LINEAR but I think that I also tried this with gl_texturemode 1 with pretty similar results. This is on Windows 10.

Also, interesting stuff, Kinn. I haven't really put my finger on what exactly it is that captures the original look, but it probably has more to do with what you said than with just chunky pixels. I haven't tried FTE yet, but I'll give it a look to get an idea of what you're talking about. Granted, newer maps that take advantage of the fog and colored lighting still manage to look really beautiful in Quakespasm - trying to preserve the original look would probably be more of a detriment in this case. I'll have to see how FTE's fusion of new and old holds up. To be fair, I'm not too hung up on retaining all of the old visuals, because I do like the frame interpolation with animations in modern engines, and I like the feel of modern engines, especially the more comfortable mouse look. 
 
The few rows of pixels going across the screen around where the shotgun model ends exemplify what I mean when I say it looks blurry.

That is a problem with the way GPUs choose mipmaps. WinQuake uses the nearest vertex distance to determine which mipmap should be displayed on the polygon, but GPUs use the farthest vertex distance, therefore choosing lower-res mipmaps.

I recall someone explaining this in another thread here, and that there's no solution because there's no API to fine-tune this behavior on GPUs. The only way to minimize the problem is through anisotropic filtering, but afaik it only works with GL_LINEAR_MIPMAP_LINEAR. 
 
I am pretty sure that in Quakespasm (or GL Mark V) the gl_texture_anisotropy setting does affect even gl_nearest_mipmap_linear texturing in some way. Don't have the details handy though. 
 
GPUs use the farthest vertex distance

That's not true.

With a GPU, mipmap level selection is (1) per fragment, NOT per vertex or per polygon, and (2) based on the screen space derivatives of each fragment.

For fixed-pipeline hardware, selection is described in the GL spec, e.g. for GL 1.4 on page 145: https://www.khronos.org/registry/OpenGL/specs/gl/glspec14.pdf 
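
(Boiled down, the spec's rule is: take the screen-space derivatives of the texel coordinates, compute a scale factor rho from them, and the LOD is log2(rho). A simplified C sketch of that formula - the spec allows implementation-defined approximations:

#include <math.h>

/* u,v are in texel units; the arguments are their rates of change
   per pixel of screen x and y */
static float MipLevel (float dudx, float dvdx, float dudy, float dvdy)
{
	float rho_x = sqrtf (dudx*dudx + dvdx*dvdx);
	float rho_y = sqrtf (dudy*dudy + dvdy*dvdy);
	float rho = rho_x > rho_y ? rho_x : rho_y;
	return log2f (rho);	/* hardware clamps/rounds this to pick the level */
}

Note there are no vertex positions or distances anywhere in it.)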
 
By "per fragment" you mean trilinear filtering?

I'm on mobile, can't easily search for the specific post where I saw the info.

(2) based on the screen space derivatives of each fragment.

Which means that for polygons that aren't parallel to the camera plane, the texels farther away from the camera get smaller, so the texel on the vertex farthest away determines which mipmap to use. That's how I understood it.

What I'm talking about is a comment about the differences between trilinear filtering and anisotropic filtering, and why the walls in hardware-accelerated engines get blurry when seen from an angle. I don't remember the exact thread, but the comment was somewhat recent. 
 
Okay, here's the exact quotes:

winquake's mipmapping was based purely upon the distance (and texinfo+screensize+d_mipcap). 

on the other hand, opengl decides which mipmap to use based upon the rate of change of the texcoords - or in other words surfaces that slope away from the view are considered more 'distant' and thus get significantly worse mipmaps. 

glquake and winquake have _very_ different mipmapping rules, and glquake just looks blurry in about every way possible...

Also Worth Adding...
#3143 posted by mh [137.191.242.106] on 2017/11/22 18:29:07
...that in hardware accelerated versions of Quake, mipmap level selection is not normally something that the programmer has control over; this is done by fixed function components in the graphics hardware.
 
 
The first quote is #3140, posted by Spike [86.151.115.97] on 2017/11/22 17:55:03. 
 
No, forget about vertexes, they're not relevant. Likewise distances: Not relevant.

Per fragment is not trilinear.

In OpenGL a "fragment" is a collection of values used to produce a final output; please read https://www.khronos.org/opengl/wiki/Fragment for more info.

Texture coords (NOT texture lookups) are linearly interpolated between the per-vertex and per-fragment stages. Those interpolated coords are then used for mipmap level selection and texture lookup, which may have linear or nearest filtering applied.

In shader speak:

uniform sampler2D tex;
in vec2 texcoords;

vec2 ddx = dFdx (texcoords);
vec2 ddy = dFdy (texcoords);

vec4 color = textureGrad (tex, texcoords, ddx, ddy);

And that will give you the same result as the GPU's automatic miplevel selection.

The important message, however, is that miplevel selection in hardware is NOT per-vertex or per-surface, because the fragment shader stage just doesn't have access to that info. It's per-fragment; per-pixel, if that's easier (even if not 100% correct) terminology.

This is all public information; it shouldn't need to be explained. 
 
mh: Yes, I didn't try to learn exactly how it works, but it was enough to give an answer that, despite not being technically correct, points to the right place: the GPU's mipmap selection algorithm. This way Poorchop knows that it's not the fault of the texture mode or the screen-scaling algorithm used by QuakeSpasm.

I was just trying to help so the guy wouldn't waste his time messing around with every cvar trying to fix it. In that sense, my answer should give the proper results.

And yes, I recognize I'm way more dumb than I should be when it comes to hardware rendering, but it's not my field. There's no future for me in it. I only learn what I need to know about it from a user perspective. 
Render Gun In Linear Or Nearest 
Problem solved! 
Yet Another Controller Question! 
Is there a way to designate a specific controller in QS?

I just got a lovely Hori Fighting Controller that I use for old d-pad based games and emulation.

Problem is that it's plugged in whereas my analogue controller is wireless. It seems that QS only reads the first controller recognised by Windows or something.

When I boot up QS it only responds to my plugged in controller and I can't find a way to get QS to read what has now become my 'second' controller.

PS: Just an FYI that the Nightlies page has a 502. :) 
 
At the moment it's hardcoded to use the "first controller in the list" returned by SDL... agreed, this would be a good / easy thing to improve.

I don't know what would be a good way to store it in a cvar - just the controller index would work I guess, as long as the OS returns them in a consistent order across reboots.
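
(Purely a sketch of the cvar idea - every name here is hypothetical, and it assumes the usual QS cvar machinery:

static cvar_t joy_device = {"joy_device", "0", CVAR_ARCHIVE};

static SDL_GameController *IN_OpenConfiguredController (void)
{
	int i, want = (int)joy_device.value, seen = 0;
	for (i = 0; i < SDL_NumJoysticks(); i++)
	{
		if (!SDL_IsGameController(i))
			continue;	/* plain joystick, no controller mapping */
		if (seen++ == want)
			return SDL_GameControllerOpen(i);	/* NULL if open fails */
	}
	return NULL;	/* requested index not present */
}

The caveat is the one above: if the OS reorders devices between reboots, the saved index could point at a different pad.)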

re: Nightlies, thanks.. rebooted. 
 
Do you think that sometime in the future you'll create a menu for controller options? Maybe if players can easily change things in a menu it wouldn't be such an issue if they did have to occasionally change the CVAR.

Of course, I've really no idea how much work it is to add menu stuff in Quake... Just curious. :) 
Potential Code Snippet Donation 
in_sdl.c -> IN_StartupJoystick ...

Replace "for (i = 0; i < SDL_NumJoysticks(); i++) { ... }" block with following.

{
	qboolean do_second = COM_CheckParm ("-controller2") != 0;
	qboolean found_first = false;
	for (i = 0; i < SDL_NumJoysticks(); i++)
	{
		const char *joyname = SDL_JoystickNameForIndex(i);
		if ( SDL_IsGameController(i) )
		{
			const char *controllername = SDL_GameControllerNameForIndex(i);
			gamecontroller = SDL_GameControllerOpen(i);
			if (gamecontroller)
			{
				if (do_second && !found_first)
				{
					found_first = true;
					SDL_GameControllerClose(gamecontroller);	/* don't hold the skipped pad open */
					gamecontroller = NULL;
					continue;	/* skip to the next one */
				}
				Con_Printf("detected controller: %s\n", controllername != NULL ? controllername : "NULL");

				joy_active_instaceid = SDL_JoystickInstanceID(SDL_GameControllerGetJoystick(gamecontroller));
				joy_active_controller = gamecontroller;
				break;
			}
			else
			{
				Con_Warning("failed to open controller: %s\n", controllername != NULL ? controllername : "NULL");
			}
		}
		else
		{
			Con_Warning("joystick missing controller mappings: %s\n", joyname != NULL ? joyname : "NULL" );
		}
	}
}


Hipnotic Rogue could just add -controller2 to the command line. 
... 
Someone in the QuakeDroid thread asked for controller support for Android ...

I was about to implement controller support from an SDL2 development blog, then realized that if it didn't use the same key names, behaviors, and such as Quakespasm, it would be a missed opportunity.

Short version is that I spent some time researching the differences between the initialization in Quakespasm vs. what I read on the SDL2 development blog, and ended up reading this section of code several times. 
 
Cool, I hope the QS controller code is useful. I'm biased, but I really like how it turned out (pretty clean integration with the engine, playable out of the box, and defaults that went through a lot of player testing - e.g. cubic easing and the deadzone implementation).

Here is the initial commit that shows the changes outside of in_sdl.c: (mostly just adding K_ABUTTON and K_BBUTTON to the menu code.)
https://sourceforge.net/p/quakespasm/code/1293/

I made some further tweaks which I think were restricted to in_sdl.c. Let me know if anything is unclear.

(Thanks for the -controller2 code snippet, that seems like a good starting point for multi-controller support) 