Mark V - Release 1.00
http://quakeone.com/markv/

* Nehahra support -- better and deeper
* Mirror support, "mirror_" textures
* Quaddicted install via console (e.g. "install travail")
* Full external texture support, DP naming convention
* Enhanced dev tools: texturepointer, inspector
* IPv6 support, enhanced server capabilities
* Enhanced co-operative play (excels at this!)
* Software renderer version (WinQuake)
* "Find" information command (e.g. type "find sky")

Thanks to the beta testers! NightFright, fifth, spy, gunter, pulsar, johnny law, dwere, qmaster, mfx, icaro, kinn, adib, onetruepurple, railmccoy

And thanks to the other developers who actively provided advice or assistance: Spike (!), mh, ericw, metlslime and the sw guys: mankrip and qbism.

/Mac version is not current yet ... Linux will happen sometime in 2017
 
I wouldn't call it a conflict, hehe.

The DirectX version implementing DirectX features is just natural.

The OpenGL renderer remaining at 1.2 for broad hardware compatibility is not something that's bloody likely to stop MH.

To say MH is good at rendering is like saying Isaac Newton was good at calculus or that Einstein was pretty okay at physics ;-) 
About MH ... 
There's assembly language in his shaders in the RMQ engine. 
@Baker - Gamma And Contrast 
Currently going through the MarkV code to figure out how it implements gamma and contrast in the GL renderer.

To be honest, I see absolutely nothing in either that wouldn't work in D3D right now; even version 8.

Gamma just sets adjusted gamma ramps, which will also work in D3D. D3D does have its own gamma functions too, but you're better off using the native Windows SetDeviceGammaRamp/GetDeviceGammaRamp stuff (in particular, the D3D functions don't work at all in windowed modes, whereas the native functions do).
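
For reference, a minimal sketch of that native path, assuming a pow-based ramp like most Quake engines use (the VID_ApplyGamma name and the curve are illustrative, not Mark V's actual code):

-----------

#include <windows.h>
#include <math.h>

static WORD ramps[3][256]; // R, G, B ramps, 16-bit entries as GDI expects

BOOL VID_ApplyGamma (HDC hdc, float gamma)
{
	int i, v;

	for (i = 0; i < 256; i++)
	{
		// classic pow curve; gamma < 1 brightens, gamma > 1 darkens
		v = (int) (65535.0f * powf (i / 255.0f, gamma));
		if (v < 0) v = 0; else if (v > 65535) v = 65535;
		ramps[0][i] = ramps[1][i] = ramps[2][i] = (WORD) v;
	}

	return SetDeviceGammaRamp (hdc, ramps);
}

-----------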

Contrast is just a load of blended quads over the screen. The only thing I see in there that may be a problem with D3D8 is commented-out code enabling scissor test. D3D8 doesn't have scissor test but D3D9 does.

For D3D9 I'm going to do something different.

I'm going to do shader-based gamma and contrast using render-to-texture. This is achievable with D3D shader model 2 shaders, broadly equivalent to OpenGL 1.5 with ARB assembly shaders, so compatibility remains good. It will enable gamma and contrast in windowed modes without affecting the entire display, and it won't require gamma-and-contrast-adjusting screenshots.
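
In C terms, the per-pixel math such a shader would run is roughly this (the formula is assumed from common practice in other engines, not taken from mh's code):

-----------

#include <math.h>

// applied per color channel on a 0..1 value in the render-to-texture pass
float ApplyGammaContrast (float c, float gamma, float contrast)
{
	c = powf (c, gamma);          // gamma curve
	c = c * contrast;             // contrast as a straight scale
	return (c > 1.0f) ? 1.0f : c; // clamp, as the shader's saturate would
}

-----------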

The interface will be one function: BOOL D3D_SetupGammaAndContrast (float gamma, float contrast), which it will be appropriate to call in GL_BeginRendering. It returns TRUE if it was able to do its thing (or if gamma and contrast are both 1), FALSE otherwise, in which case you need to decide on a fallback - either route it through the GL codepath (which should also work) or do nothing. Everything else will be automagic. 
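
So the call pattern in the engine would be something like this (the cvar names and the fallback function are placeholders for illustration, not mh's actual code):

-----------

void GL_BeginRendering (int *x, int *y, int *width, int *height)
{
	// ... existing setup ...

	if (!D3D_SetupGammaAndContrast (vid_gamma.value, vid_contrast.value))
	{
		// shader path unavailable: fall back to the GL codepath
		// (gamma ramps plus blended quads) or do nothing
		VID_FallbackGammaAndContrast (vid_gamma.value, vid_contrast.value);
	}
}

-----------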
 
Likely future scheme, not that this would have any impact on coding anyway ...

vid_hardwaregamma (*) - following the FitzQuakian cvar scheme that I rather like, such as r_lerpmodels 0 (off), 1 (best), 2 (always) ...

vid_hardwaregamma (or whatever the name becomes)

0 - Never. Use best available non-hardware method

1 - Windowed mode uses the non-hardware method (looks better on the desktop); fullscreen uses the hardware method (faster, and also brighter - some displays tend towards the dark side no matter what without hardware gamma). Default.

2 - Hardware method always.

(*) Bad name because it also does contrast? 
GL_RGBA 
Also: you may notice the code is biased towards GL_RGBA and I have to switch the bytes around to BGRA for various operations. It isn't actually an oversight or an inefficiency I didn't correct; rather, OpenGL ES only has GL_RGBA.
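
The swizzle in question is just a red/blue swap, something along these lines (a sketch, not Mark V's actual code):

-----------

// convert an RGBA-ordered pixel buffer to BGRA in place
void Image_RGBA_To_BGRA (unsigned char *data, int numpixels)
{
	int i;

	for (i = 0; i < numpixels; i++, data += 4)
	{
		unsigned char tmp = data[0]; // swap R and B; G and A stay put
		data[0] = data[2];
		data[2] = tmp;
	}
}

-----------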

Just wanted to point that out because I know you may see that code and think "This is so wrong."

/The video/input/system code was entirely rewritten long ago in a way that supports devices. In some of the files, there is living device code from back in 2014. 
"libGL.so.1 Not Found" 
Hmm.

So, my reQuiem test builds were done on CentOS 6.4. Looking at that setup now, ldd says that my reQuiem-debug.glx executable is using /usr/lib64/libGL.so.1.

rpm -qf on that file shows that it came from the package mesa-libGL-9.0-0.8.el6_4.3.x86_64

I don't remember now if that was something that I explicitly installed for reQuiem's benefit. 
 
Also: you may notice the code is biased towards GL_RGBA and I have to switch the bytes around to BGRA for various operations. It isn't actually an oversight or an inefficiency I didn't correct; rather, OpenGL ES only has GL_RGBA.

This doesn't actually matter at all in D3D aside from getting the byte ordering right, because you're writing the data directly to the texture rather than relying on the driver to not screw up. 
 
writing device memory using bytes instead of a cacheline-aligned memcpy will be slower, but whatever. modern apis just have you write it into ram and have the gpu itself move it into the gpu's memory, so there's no issues with uncached memory or whatever.
either way, d3d10+ (eg gl3.0) / vulkan hardware has proper RGBA texture uploads so it's not like modern gpus care. older gpus/apis will still need some sort of conversion but it's okay to be lazy and submit only the lightmaps as bgra. streaming is the only time it really matters. oh noes! loading took an extra blink's-duration! *shrug* 
Compiling Requiem On Linux 
Ok .. first snag ...

#include <sys/cdefs.h> file not found

Solved with: sudo apt-get install -y libc6-dev-i386

Then next issue ...

fatal error: X11/extensions/xf86dga.h: No such file or directory

Does ... sudo apt-get install libxxf86vm-dev -y

But is already installed.

Goes to /usr/include/X11/extensions ... no such file as xf86dga.h. Slight Googling turns up ... "xf86dga.h is obsolete and may be removed in the future."

Looks like the future is now. See the note on that same page I Googled, warning to include <X11/extensions/Xxf86dga.h> instead.

Don't have one of those sitting in /usr/include/X11/extensions either. Hmmm. Hope this is not a brick wall.

@johnny - I'm posting this for informational purposes. I never expect anyone in particular to assist, just fyi. I'm hoping someone reading this thread that knows what the above could be about may chime in. 
Guys 
I'm working on my quakespasm-irc engine thingy, and expanding it with more streamer-features.

One thing that I've been wanting to add are the joequake demo features. Rather than reinvent the wheel, and being that quakespasm and mark v share a bunch of code already, I was wondering if I could have a look at the mark v code to steal... uh, borrow from. 
BGRA 
GL_BGRA was really only ever significant as an Intel performance fix, and even then it also needed GL_UNSIGNED_INT_8_8_8_8_REV (which was probably the most annoying GLenum to type) in order to get the fix; without both it still ran slow.

Both NV and AMD also ran slower without these (with BGRA being by far the most important), but insignificantly so; Intel was catastrophically slower.

This is trivially easy to benchmark. Just do a bunch of glTexSubImage calls in a loop and time them. Adjust parameters and compare.
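
Something along these lines, say (a sketch assuming a pre-allocated 128x128 test texture and headers that expose the GL 1.2 enums; swap the format/type pair and compare timings):

-----------

#include <GL/gl.h>
#include <stdio.h>
#include <time.h>

void GL_BenchmarkUploads (GLuint texnum, const void *pixels)
{
	clock_t start;
	int i;

	glBindTexture (GL_TEXTURE_2D, texnum);
	start = clock ();

	for (i = 0; i < 1000; i++)
	{
		// compare against GL_RGBA / GL_UNSIGNED_BYTE
		glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 128, 128,
			GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
	}

	glFinish (); // force the driver to actually complete the work
	printf ("%g seconds\n", (double) (clock () - start) / CLOCKS_PER_SEC);
}

-----------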

Both GL_BGRA and GL_UNSIGNED_thing are core since OpenGL 1.2, with the latter being adopted from GL_UNSIGNED_INT_8_8_8_8_EXT in the old GL_EXT_packed_pixels extension. So if you're targeting GL 1.2 you can quite safely use them without compatibility concerns.

Since Microsoft did the world a favour by forcing the hardware vendors to standardise on RGBA in the D3D10 era, I don't believe that any of this stuff is even important for Intel any more.

Basically if it's less than 10 years old it probably has good support for RGBA (if less than 5 make that definite) so you can really just use RGBA and no longer worry about this stuff.

I obviously don't speak for mobile hardware, where the rules may be different, and anyway there are far more interesting formats such as RGB10A2 which lets you do a 4x overbright blend without sacrificing bits of precision and with only 2 unused bits per texel. I never formally benchmarked this format but tests ran well.

What's more important about FitzQuake is that it uses GL_RGB for lightmap uploads. Even in these days of robust RGBA support, that's always going to force a format conversion in the driver. Combine that with hundreds of tiny updates (rather than a few large ones) and FitzQuake can still chug down to single-digit framerates even on some modern hardware.
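
Concretely, the difference is between these two upload calls (BLOCK_WIDTH/BLOCK_HEIGHT as in FitzQuake's gl_rsurf.c; the 4-byte-per-luxel lightmap repack is assumed, and this addresses only the format half, not the many-small-updates half):

-----------

// FitzQuake-style upload: 3-byte GL_RGB always forces a driver-side conversion
glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, BLOCK_WIDTH, BLOCK_HEIGHT,
	GL_RGB, GL_UNSIGNED_BYTE, lightmap_data);

// conversion-free upload: 4-byte BGRA matching what the hardware stores
glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, BLOCK_WIDTH, BLOCK_HEIGHT,
	GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, lightmap_data);

-----------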

No amount of BGRA can fix that, and here's where I believe FitzQuake has done the community a disservice. There are lots of interesting things that mappers can do with dynamic lights and animated lightstyles, but because FitzQuake performed so poorly with them I suspect that much of that early potential was never realized.

NV and AMD both suffer from this, but if all you ever benchmark is timedemo demo1 (or map dm3) with gl_flashblend 1 you'll probably never even notice. Intel suffers from this AND needs BGRA/UNSIGNED_etc.

Again it's trivially easy to demonstrate the perf difference, but robustly fixing it in the engine requires more rearchitecting than I'm willing to do within the scope of my current work. 
@shamblernaut 
If you look around in the Quaddicted engines directory, you can find older Mark V versions like this one where I marked things very cleanly ...

... for ease of porting most of the features to Quakespasm.

Current Mark V isn't structured like that, for many reasons, including that the WinQuake software renderer has been combined into Mark V, but the source is on the Mark V page. 
The True Question Is Baker... 
... how the hell did I not see the source link the first time I looked at that page... 
Oh 
just looking at host_cmd.c

maybe use an external file for the bad words array rather than a hardcoded one; that way it can be user-updated 
@shamblernaut 
Probably at some point I'll do that.

Thinking about host_cmd.c ... I don't think I documented anywhere ...

give command extra options

give silverkey
give goldkey
give rune1 // rune1 to rune4

If you already have the gold key, typing "give goldkey" will remove it.

Typing "give" in the console will display a list of item names
 
Note: Did end up getting Requiem sorted out. 
 
QMB effects: lava balls or knight spikes do not leave trails when traveling very fast (velocity 2000) when the server is at .1 ticrate (FvF default due to many monsters and projectiles constantly flying all over the maps).

The QMB trails work fine at .05 ticrate even with the very fast projectiles (Mage's RL = Instant Fireball attack).

Non-QMB particle trails do work under these conditions, but they tend to "skip" a bit. 
 
Most engines assume a distance traveled per update frame of 200 units or more is a teleportation.

That's my guess without looking at the code, because 2000 units/sec at ticrate .1 (10 updates/sec) = 200 units per update.

An entity that is assumed to have teleported won't get any kind of continuous-movement interpolation. 
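
This is roughly the WinQuake-family logic in CL_RelinkEntities (the stock engine actually uses 100 units per axis rather than 200; sketch abbreviated from that code):

-----------

float	f;
int	j;
vec3_t	delta;

f = CL_LerpPoint ();	// fraction between the last two server updates

for (j = 0; j < 3; j++)
{
	delta[j] = ent->msg_origins[0][j] - ent->msg_origins[1][j];

	if (delta[j] > 100 || delta[j] < -100)
		f = 1;	// assume a teleportation, not a motion: snap, don't lerp
}

for (j = 0; j < 3; j++)
	ent->origin[j] = ent->msg_origins[1][j] + f * delta[j];

-----------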
 
I saw on Quakeone, Dutch mentioned Mark V has "a difficult time playing external MP3" on his WinXP computer.

That's not very specific, but I wonder if he's seeing the same issue I reported, where there is a pretty significant delay loading the MP3s, during which time the game is completely frozen up.


We tried LOTS of troubleshooting steps in the old thread, but nothing helped for me on my netbook (though the problem didn't exist on my older WinXP laptop). The only thing that made a difference in the end was me re-encoding the MP3s with certain settings that decreased the loading delay (from 17 seconds to 4.5 seconds!). 
 
I've noticed that with the latest Mark V, it doesn't carry over my video settings (from id1 config) when playing a mod.

Also some occasional flickering/corruption when using GL. I know it's not helpful just to say that, so I'm experimenting a bit more to see if it has anything to do with my latest NVidia driver update and/or if it happens in other engines. If there's anything you'd want me to do or try that would be helpful in debugging that, let me know. FWIW, latest qconsole.log is at https://dl.dropboxusercontent.com/u/11690738/temp/qconsole.log 
 
Hm. Upon startup, my autoexec.cfg sets up some aliases (or runs other cfg files that set up aliases).

But then if I am sitting on the server and I reconnect to the same server (by "connect fvf.servequake.com" -- I was toggling external lit files), many of my aliases are forgotten.

But not all of them....

For example:

alias zoom_out "chase_active 1;chase_mode 1"

Is always forgotten upon reconnecting, but:

alias start "changelevel start"

is never forgotten.


This is pretty inconvenient. It means I have to re-run my cfg files when I want to reconnect or connect to a different server without completely restarting Mark V.




QMB effect difference: Particles in Quake are "fullbright" but the QMB blood effect is not fullbright.... So, in Quake you can always tell when you are hitting a monster in a dark area, but with the (more realistic) QMB blood, you can't tell.... Perhaps in this case it would be good to throw in a few standard red particles (even QMB's standard particles are fullbright) in addition to the blood sprites (they are kind of sprites, aren't they?).

But this will fall under QMB fine-tuning.... 
 
alias zoom_out "chase_active 1;chase_mode 1"

Cannot reproduce.

I bound the above exact alias. Connected to a random server. Disconnected. The alias was there. Reconnected. Disconnected. The alias was there. Even connected to your server, disconnected, reconnected, changed the map, disconnected, reconnected, disconnected.

The alias was still there.

red fullbright blood - monsters in the dark

Interesting difference. Although I haven't looked into it, I suspect QMB blood is already fullbright; it just isn't very bright. Something to think about. 
 
QMB blood types 1 and 2 have a blend mode of GL_ZERO, GL_ONE_MINUS_SRC_COLOR, so they're not lit, they're just blended with the background geometry. 
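
That is, the draw path sets (in GL terms):

-----------

// dest = dest * (1 - src): the particle can only darken what's behind it,
// which is why it disappears against already-dark geometry
glEnable (GL_BLEND);
glBlendFunc (GL_ZERO, GL_ONE_MINUS_SRC_COLOR);

-----------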
 
Probably should change that then. At least for the ones that aren't intended as true "ambience" - and few of them are. 
 
Ok... There's some weird interaction of FvF + my tricky aliases + Mark V

So in FvF, when a player connects, or when a map first loads, I do:

stuffcmd(self, "re-exec\n");

Then I can set up stuff like this, in autoexec.cfg for example:

---------------

alias fogon "fog 0.05; bind f8 fogoff; alias fogset fogon; echo Fog On"

alias fogoff "fog 0; bind f8 fogon; alias fogset fogoff; echo Fog Off"

alias re-exec "fogset"

alias blah1 "echo blah1 is set"

fogon

alias blah2 "echo blah2 is set"

------------------

Yeah, I don't know how "correct" that is, but it works. It lets me toggle fog on and off with F8, and at each level change the automatic "re-exec" alias will run the "fogset" alias which will be assigned to either the "fogon" or "fogoff" alias, depending on which I last toggled with the F8 key....

But I think the problem is arising where I am assigning one alias to another alias like that ("alias fogset fogon" within the fogon alias...).


So, as I said, usually all this works fine. Every level change the fog setting will be re-applied after the level starts, or upon first connecting to FvF.

But if I am sitting on the server and I re-connect to the server, things go wrong, and ANY alias that was assigned AFTER the "fogon" command in the cfg above will be forgotten (so blah1 above will remain, but blah2 will be forgotten, as will any alias you have manually entered into the console).


OHHH, I think I may get it... or part of it....

It seems to happen upon DISCONNECTING from the server.

So this may be happening:

- All aliases are set up before connecting.
- Upon connecting, the server has you run "re-exec" by stuffcmd, which runs "fogset" which runs "fogon" which includes "alias fogset fogon" so now Mark V thinks it's a SERVER stuffed alias, so it discards it when you disconnect from the server....? Maybe?

That still doesn't explain why ANY alias set after that point (either in your cfg file or manually by console) is also forgotten....


Let me see if I can trim this example down a bit more.... The above fog setting code is actually useful (to me anyway -- I don't just do this crazy stuff for no reason!), but here's just a proof of concept to show the issue I'm having:

-----------

alias setblah "alias blah say blah"

alias re-exec "setblah"

alias blah1 "echo blah1 is set"

setblah

alias blah2 "echo blah2 is set"


--------------


The odd thing is, if you remove the manual running of "setblah" from the autoexec.cfg file above, then everything seems to work, and only the "blah" alias will be forgotten upon disconnecting. Blah2 will remain. Ah, but any alias you set AFTER connecting to the server will be discarded.

But if that line is there, then every alias after that point will be forgotten upon disconnecting from the FvF server.... blah2 will be gone, as will any alias you set in the console after that point.

That's weird, right? 