#3334 posted by mankrip on 2018/03/31 14:49:29
On lifts? Dead bodies could use MOVETYPE_FOLLOW.
On demo1.dem, where a dead scrag falls in the GK water pool? I'm not sure. It seems to be more of a general problem with far-away entity movement not being interpolated correctly, since in some engines it also happens with the living fiends walking at the bottom of the water pit at the beginning of E2M2.
Scaling Up
#3335 posted by Poorchop on 2018/04/02 07:55:40
Is there a way to scale up lower resolutions similar to Mark V WinQuake in order to get the classic chunky pixel look? Mark V WinQuake allows you to play at modern resolutions but scale up from 320x240 or 640x480, and it looks pretty true to the original even when playing at something like 1080p. It also preserves the original HUD scale.
WinQuake is great for closely approximating the look of the original game while still playing on modern monitors but I can't get the music to work and a bunch of new maps/mods utilize Quakespasm features. I feel like at least part of the original look could be recreated with upscaling. I tried setting r_scale to a higher number but that just makes everything look really blurry instead of crispy and chunky, and I can't find any other scaling settings apart from scaling the HUD.
#3336 posted by Baker on 2018/04/02 08:06:55
gl_texturemode gl_nearest_mipmap_linear helps Quakespasm look a bit more WinQuakey.
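If you want that setting to stick between sessions, a minimal config sketch (gl_texturemode is the real cvar; putting it in autoexec.cfg is just one way to apply it):

```
// autoexec.cfg — nearest filtering between texels, linear blending
// between mip levels, for a chunkier WinQuake-ish look
gl_texturemode GL_NEAREST_MIPMAP_LINEAR
```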
Music In Mark V
#3337 posted by Esrael on 2018/04/02 08:14:30
I'm assuming you have your music in OGG format. Mark V (infamously) doesn't support it (or at least I think it still doesn't). MP3 works, at least.
#3338 posted by Baker on 2018/04/02 08:31:33
Mark V has native Mac and iPhone ports. The Apple API does not do OGGs.
The Apple API decodes MP3 via hardware acceleration and can stop and start music at the drop of a pin, including on Windows. Other engines cannot do this and have to do things like restart the map and such.
Mark V also does not need any DLLs to do what it does.
Also, MP3 decoding via hardware uses almost no CPU, which is especially important on a portable device like an iPhone due to battery.
MP3 > OGG simply because all Intel and ARM chips have MPEG-accelerated decoding built into the chip.
This is why your DVDs play fast. This is how music on your iPhone or Android phone plays.
Try playing an ogg on an Android phone and watch what it does (hint --- it says "converting to MP3").
#3339 posted by Poorchop on 2018/04/02 08:56:32
Disabling texture filtering certainly helps but it still looks quite far from WinQuake without being able to scale up while preserving the aspect ratio. Also thanks for the advice regarding the music. I'll try the mp3 files instead so that I can have some music alongside the chunkiness of WinQuake when playing through vanilla stuff.
Mark V Winquake FTW!
#3340 posted by Esrael on 2018/04/02 09:20:19
I also like to play the id1 episodes on Mark V Winquake. :) Same goes for the early custom maps that don't have lits and stuff.
Interesting stuff, Baker. I didn't know MP3s were supported so well, all the way down to the hardware level. I'm starting to see why you're defending it over OGG so vehemently. c:
@esrael
#3341 posted by Baker on 2018/04/02 10:09:30
Yeah, most people don't know (nor care) about the implications of music playback on all platforms. I did some engine modding on a PlayStation Portable Quake. It had a hardware MP3 decoder and also a software decoder, libmad. The hardware decoder ran like lightning. Libmad software decoding ran like total foobar at 19 fps.
I try to study the proper way to do things and know exactly why I am doing them.
I have literally watched Quake engines crash and burn that did not know why they were doing what they were doing.
And what's funny is the common factor tended to be doing "what most people want" (at the time!) against long-term "health", without understanding that "what most people want" is fickle and subject to change at a moment's notice.
Palette Wank Incoming...
#3342 posted by Kinn on 2018/04/02 12:19:58
Poorchop, big crunky piskels is one thing, and I'm somewhat of a fan of that, but for me by far the #1 thing that gets the "WinQuake look" is if the lightmapped textures are drawn via the quake colormap, to get the proper 8-bit palette colours.
Now, FTE does this accurately with its r_softwarebanding option. I don't know of any other GL engine that does it properly, but FTE does. Of course, software engines all do it, but I also quite like having double digit framerates in modern maps, so a GL engine is the only option for me.
With r_softwarebanding on, just firing up start.bsp and looking up at those wooden boarded ceilings, letting myself melt into those rich chocolatey shadows, is enough to give me a proper Quake boner.
A lot of people instinctively dismiss the idea of Quake palettisation in the modern age of coloured lighting and fog, and indeed if you just did the palettisation as a post-process effect, then yes, it looks like total arse because of that...
However, FTE neatly avoids this by just doing it on the Quakey bits, and then the coloured lighting and fog just tints the colours "on top of" all that, and it just works and looks absolutely great. Best of both worlds.
(As well as a reply to Poorchop, this post is also a thinly-disguised plea to ericw to do FTE-style r_softwarebanding in QS, but I think I've been fairly subtle about it, mwahahah).
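For anyone curious what "drawn via the quake colormap" means mechanically: software Quake never multiplies texel RGB by a light value. The 8-bit texel index and a quantized light level index into colormap.lmp, which maps straight back to one of the 256 palette colors — so every lit pixel is still an exact palette color, which is where the banded shadows come from. A rough sketch with placeholder tables (the real gfx/palette.lmp and gfx/colormap.lmp are data files shipped in the pak; the grayscale palette and the linear darkening below are stand-ins):

```python
NUM_COLORS = 256
NUM_SHADES = 64  # colormap.lmp holds 64 light levels

# Placeholder grayscale palette; the real one holds Quake's 256 colors.
palette = [(i, i, i) for i in range(NUM_COLORS)]

# Placeholder colormap: row 0 is full-bright, rows darken toward 63.
# The real colormap.lmp is a hand-built table, not a linear ramp.
colormap = [
    [(c * (NUM_SHADES - 1 - s)) // (NUM_SHADES - 1) for c in range(NUM_COLORS)]
    for s in range(NUM_SHADES)
]

def shade_texel(texel_index, light):
    """light in [0.0, 1.0] (1.0 = full bright); returns a palette RGB triple."""
    shade = min(NUM_SHADES - 1, int((1.0 - light) * (NUM_SHADES - 1)))
    return palette[colormap[shade][texel_index]]
```

Because the result is always a palette entry, shading quantizes into visible bands instead of smooth gradients — exactly the look r_softwarebanding reproduces in FTE's fragment shader.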
#3343 posted by mh on 2018/04/02 12:55:05
It's not difficult to do, just that it absolutely needs fragment shaders, which might be a step too far for some.
#3344 posted by mankrip on 2018/04/02 14:18:33
I used to prefer WinQuake's distorted colors too, until playing enough maps with custom textures to realize that it looks like ass in way too many cases.
It's only aesthetically safe to use in vanilla Quake textures, because they were carefully designed for it.
Try playing maps such as SEWAGE.bsp, nar_cat.bsp or dwmtf.bsp in a standard software renderer. Instant vomit.
#3344
#3345 posted by Kinn on 2018/04/02 14:42:59
Yeah, custom textures that rely too heavily on the terrible grey line will look like toss in software mode.
But I think it really makes well-designed textures come alive. The default GL-style lighting makes it all look a bit flat and sterile in comparison.
#3346 posted by metlslime on 2018/04/02 19:20:02
Yeah, when designing textures for Quake's software renderer, there needs to be a lot of noise and roughness; grimy/crusty materials look best. Wide areas of a single color look really bandy and bad (but are fine in OpenGL) -- so you have a generation of custom textures made for GLQuake where people never even looked at them in software mode.
Also, the Voodoo cards which everyone had in 1997/8 to play GLQuake had their own dithering/grittiness/muddiness that actually enhanced the look of Quake.
#3347 posted by Joel B on 2018/04/02 21:01:34
To come back to the pixel-scaling thing for a sec, in Quakespasm you can try different values for r_scale. I don't remember off the top of my head if this is also reflected in one of the GUI menus.
#3348 posted by ericw on 2018/04/02 21:02:29
@Poorchop:
setting r_scale to a higher number but that just makes everything look really blurry instead of crispy and chunky, and I can't find any other scaling settings apart from scaling the HUD.
This should make QS render to a half-sized framebuffer (for r_scale 2) and scale it up with nearest interpolation, so each original pixel becomes a 2x2 square. There shouldn't be blurring in the upscaling. If it is actually blurring the pixels together, mind posting your system info / screenshot?
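To be concrete about what "nearest interpolation" means here, this is the whole operation — each low-res pixel is replicated into an N x N block with no blending between neighbors (a sketch of the scaling step only, not of Quakespasm's actual code):

```python
def upscale_nearest(frame, scale):
    """Nearest-neighbor upscale: frame is a list of rows of pixel values;
    every pixel becomes a scale x scale block of the identical value."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(scale)]  # repeat columns
        out.extend([wide[:] for _ in range(scale)])      # repeat rows
    return out

small = [[1, 2],
         [3, 4]]
big = upscale_nearest(small, 2)
# big == [[1, 1, 2, 2],
#         [1, 1, 2, 2],
#         [3, 3, 4, 4],
#         [3, 3, 4, 4]]
```

Since no averaging happens anywhere, any perceived blur has to come from somewhere else in the pipeline (e.g. texture filtering or mipmap selection), not from the upscale itself.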
It's not difficult to do, just that it absolutely needs fragment shaders, which might be a step too far for some.
QS 0.93's world faces and alias models go through fragment shaders, so just water, sprites, particles, sky, and anything else I'm forgetting use fixed-function at the moment.
I do want to implement this at some point! (I tried a quick hack a year or so ago that postprocesses the final 32-bit rendered frame. What I did was make a lookup table from rgb565 to the nearest Quake palette color, then just feed in a 32-bit pixel, decimate it to 16-bit (565), then look up the nearest Quake color and output that.) As Kinn was saying, palettizing a 32-bit rendered image that has fog already baked in tends to look like mud, so this needs to go into the fragment shader before fog is added.
The faster / more natural / software-faithful way of doing it is to ignore colored lighting and use the colormap.lmp just like software, having the fragment shader read the textures as 8-bit. Another option is to support colored lighting by blending with the lightmap in 32-bit, then using the rgb565 lookup table to convert that to paletted.
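The rgb565 lookup-table idea described above can be sketched in a few lines. This is an illustration of the technique, not Quakespasm code, and the four-color palette is a placeholder for the real 256-entry Quake palette:

```python
# Placeholder palette; a real implementation uses Quake's 256 colors.
palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)]

def to_565(r, g, b):
    """Decimate 8-8-8 to a packed 5-6-5 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def from_565(v):
    """Expand 5-6-5 back to 8-bit channels (plain shifts are enough
    for building the table)."""
    return ((v >> 11) << 3, ((v >> 5) & 0x3F) << 2, (v & 0x1F) << 3)

def nearest_palette_index(rgb):
    r, g, b = rgb
    return min(range(len(palette)),
               key=lambda i: (palette[i][0] - r) ** 2 +
                             (palette[i][1] - g) ** 2 +
                             (palette[i][2] - b) ** 2)

# Precompute once: 65536 entries, one per possible 565 value.
lut = [nearest_palette_index(from_565(v)) for v in range(1 << 16)]

def palettize(r, g, b):
    """Per-pixel runtime path: decimate, then a single table lookup."""
    return palette[lut[to_565(r, g, b)]]
```

The 565 decimation is what keeps the table small (64K entries instead of 16M for full 8-8-8), at the cost of slightly coarser matching — which is usually invisible once you're snapping to a 256-color palette anyway.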
#3349 posted by Joel B on 2018/04/02 21:02:50
Oops you actually mentioned r_scale!
Hmm I haven't had any blurriness issues with that, assuming the necessary setting of gl_texturemode.
Wrong Word Choice
#3350 posted by Poorchop on 2018/04/03 01:06:58
Maybe saying that it looks blurry was the wrong choice of words, because it does appear to be doing exactly what you mentioned. It just doesn't look like upscaled WinQuake, which is what I was kind of hoping for. I guess I went with blurry because, when moving around, it kind of looks like what you would see if you were playing in biting cold wind and your eyes were watering.
Here is r_scale 1:
https://i.imgur.com/jJBlmWr.jpg
and here is r_scale 5 (went with a much higher value to better demonstrate what I'm talking about):
https://i.imgur.com/UUCZLV0.png
The few rows of pixels going across the screen around where the shotgun model ends exemplify what I mean when I say it looks blurry. This is especially true when moving.
The only pertinent visual setting that I'm using is gl_texturemode GL_NEAREST_MIPMAP_LINEAR but I think that I also tried this with gl_texturemode 1 with pretty similar results. This is on Windows 10.
Also interesting stuff Kinn. I haven't really put my finger on what exactly it is that captures the original look but it probably has more to do with what you said rather than just chunky pixels. I haven't tried FTE yet but I'll give it a look to get an idea of what you're talking about. Granted newer maps that take advantage of the fog and colored lighting still manage to look really beautiful in Quakespasm - trying to preserve the original look probably would be more of a detriment in this case. I'll have to look at FTE to get an idea of how the fusion of new and old holds up. To be fair, I'm not too hung up on retaining all of the old visuals because I do like the frame interpolation with animations in modern engines and I like the feel of modern engines especially with regard to the more comfortable mouse look.
#3351 posted by mankrip on 2018/04/03 01:48:26
The few rows of pixels going across the screen around where the shotgun model ends exemplifies what I mean when I say it looks blurry.
That is a problem with the way that GPUs choose mipmaps. WinQuake uses the nearest vertex distance to determine which mipmap should be displayed on the polygon, but GPUs use the farthest vertex distance, therefore choosing lower-res mipmaps.
I recall someone explaining this in another thread here, and that there's no solution because there's no API to fine-tune this behavior on GPUs. The only way to minimize the problem is through anisotropic filtering, but AFAIK it only works with GL_LINEAR_MIPMAP_LINEAR.
#3352 posted by Joel B on 2018/04/03 02:50:32
I am pretty sure that in Quakespasm (or GL Mark V) the gl_texture_anisotropy setting does affect even gl_nearest_mipmap_linear texturing in some way. Don't have the details handy though.
#3353 posted by mh on 2018/04/03 06:09:31
GPUs uses the farthest vertex distance
That's not true.
With a GPU, mipmap level selection is (1) per fragment, NOT per vertex or per polygon, and (2) based on the screen space derivatives of each fragment.
For fixed-pipeline hardware, selection is described in the GL spec, e.g. for GL 1.4 on page 145: https://www.khronos.org/registry/OpenGL/specs/gl/glspec14.pdf
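A simplified sketch of that spec formula may make the "screen space derivatives" rule concrete: per fragment, take the rate of change of the texture coordinates per screen pixel, convert to texels, and the mip level is the log2 of the larger gradient magnitude. (This omits the spec's clamping and anisotropy details; the numbers are illustrative.)

```python
import math

def mip_level(dudx, dvdx, dudy, dvdy, tex_w, tex_h):
    """Fixed-function-style LOD: dudx etc. are texcoord derivatives per
    screen pixel; returns the (fractional) mip level selected."""
    rho_x = math.hypot(dudx * tex_w, dvdx * tex_h)  # texels per pixel, x axis
    rho_y = math.hypot(dudy * tex_w, dvdy * tex_h)  # texels per pixel, y axis
    lam = math.log2(max(rho_x, rho_y, 1e-12))
    return max(0.0, lam)  # negative lambda clamps to the base level

# A 64x64 texture seen head-on at 1:1 texel-to-pixel scale stays on mip 0:
head_on = mip_level(1/64, 0, 0, 1/64, 64, 64)  # -> 0.0
# Sloping the surface away makes v change 8x faster per screen pixel,
# pushing those fragments to mip 3 even though u still changes slowly:
sloped = mip_level(1/64, 0, 0, 8/64, 64, 64)   # -> 3.0
```

The second case is why surfaces sloping away from the view get blurrier mips: the selection follows the fastest-changing direction per fragment, not any per-vertex distance.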
#3354 posted by mankrip on 2018/04/03 06:52:27
By "per fragment" you mean trilinear filtering?
I'm on mobile, can't easily search for the specific post where I saw the info.
(2) based on the screen space derivatives of each fragment.
Which means that for polygons that aren't parallel to the camera plane, the texels farther away from the camera get smaller, so the texel on the vertex farthest away will determine which mipmap to use. That's how I understood it.
What I'm talking about is a comment about the differences between trilinear filtering and anisotropic filtering, and why the walls in hardware-accelerated engines get blurry when seen from an angle. I don't remember the exact thread, but the comment was somewhat recent.
#3355 posted by mankrip on 2018/04/03 07:17:54
Okay, here's the exact quotes:
winquake's mipmapping was based purely upon the distance (and texinfo+screensize+d_mipcap).
on the other hand, opengl decides which mipmap to use based upon the rate of change of the texcoords - or in other words surfaces that slope away from the view are considered more 'distant' and thus get significantly worse mipmaps.
glquake and winquake have _very_ different mipmapping rules, and glquake just looks blury in about every way possible...
Also Worth Adding...
#3143 posted by mh [137.191.242.106] on 2017/11/22 18:29:07
...that in hardware accelerated versions of Quake, mipmap level selection is not normally something that the programmer has control over; this is done by fixed function components in the graphics hardware.
#3356 posted by mankrip on 2018/04/03 07:20:34
The first quote is from #3140, posted by Spike [86.151.115.97] on 2017/11/22 17:55:03.
#3357 posted by mh on 2018/04/03 08:00:59
No, forget about vertexes, they're not relevant. Likewise distances: not relevant.
Per fragment is not trilinear.
In OpenGL a "fragment" is a collection of values used to produce a final output; please read https://www.khronos.org/opengl/wiki/Fragment for more info.
Texture coords (NOT texture lookups) are linearly interpolated between the per-vertex and per-fragment stages. Those interpolated coords are then used for mipmap level selection and texture lookup, which may have linear or nearest filtering applied.
In shader speak,
uniform sampler2D tex;
in vec2 texcoords;
vec2 ddx = dFdx (texcoords);
vec2 ddy = dFdy (texcoords);
vec4 color = textureGrad (tex, texcoords, ddx, ddy);
And that will give you the same result as the GPU's automatic miplevel selection.
The important message, however, is that miplevel selection in hardware is NOT per-vertex or per-surface, because the fragment shader stage just doesn't have access to that info. It's per-fragment; per-pixel if that's easier (even if not 100% correct) terminology.
This is all public information; it shouldn't need to be explained.
#3358 posted by mankrip on 2018/04/04 17:34:08
mh: Yes, I didn't try to learn exactly how it works, but it was enough to give an answer that, despite not being technically correct, points to the right place where the cause is: the GPU mipmap selection algorithm. This way Poorchop knows that it's not a fault of the texture mode or screen scaling algorithm used by Quakespasm.
I was just trying to help so the guy won't waste his time messing around with every cvar trying to fix it. In this sense, my answer should give the proper results.
And yes, I recognize I'm way more dumb than I should be when it comes to hardware rendering, but it's not my field. There's no future for me in it. I only learn what I need to know about it from a user perspective.