#610 posted by mh on 2016/12/06 22:28:40
8x multitexture (which was easy, even for me)
I'm going to offer 4.
Here's the reason why: http://www.nvidia.com/object/General_FAQ.html#t6
Summary, for anyone who didn't follow the link, is that with fixed-pipeline OpenGL NVIDIA only offer 4 texture units.
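For anyone who wants to check their own hardware, the distinction is easy to see by querying both limits (a rough sketch, not Mark V code; Con_Printf is just the engine's console print):

GLint ff_units = 0, shader_units = 0;
glGetIntegerv (GL_MAX_TEXTURE_UNITS, &ff_units);            // fixed-function limit: 4 on NVIDIA
glGetIntegerv (GL_MAX_TEXTURE_IMAGE_UNITS, &shader_units);  // shader limit (GL 2.0+ name): 16 or more
Con_Printf ("fixed-function TMUs: %i, shader image units: %i\n", ff_units, shader_units);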
@steve
#611 posted by Baker on 2016/12/07 01:16:23
Yeah, I'd like to take a look at that source code.
@mh
#612 posted by Baker on 2016/12/07 01:29:45
I can't think of a Quake scenario where anything more than 4 texture units would be necessary. I think even 3 may be enough to eliminate the overbright pass as a separate pass in FitzQuake.
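Rough illustration of the kind of thing I mean (assuming GL_ARB_texture_env_combine is available; this is a sketch, not the actual FitzQuake change, and GL_SelectTexture is just the engine's multitexture helper):

GL_SelectTexture (GL_TEXTURE1_ARB);   // the lightmap unit
glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi (GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi (GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
glTexEnvi (GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi (GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 2);   // texture * lightmap * 2 = overbrights in one pass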
DSP
is awesome... no idea how that could be implemented in quake maps... trigger volumes?
Unreal Engine used to allow "zones" to be created by sealing off entire rooms and areas, that was a good technique.
@gunter: Sound Modification Zones
#614 posted by Baker on 2016/12/07 02:39:48
If I were to add a modification so that certain areas of a map had special sound zones like Half-Life's echo areas, I would probably be inclined to implement it in the following manner:
- A brush similar to func_illusionary indicates the sound region. Static entities are available to the client. A field of the entity indicates the sound modification.
- Although someone like Spike may correctly point out that this to some degree does not follow the client/server model ... every modern engine already breaks the client/server model by indicating skybox and fog (and now several other keys like wateralpha and friends) in the worldspawn, which is parsed client-side. The server has no opportunity to legitimately set those global values, and maybe that's for the best, because setting them from the server would introduce a split-second delay.
I would mostly need to examine a way to make it backwards compatible to non-supporting engines.
This would also allow easy editing via external .ent files. And to possibly allow someone so inclined to take an existing map and add sound zones.
This also means no complex new way of communicating sound zones to a client needs to be added to an engine, and it would be compatible with save games and everything else.
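Purely as an illustration of the idea (the classname and field name here are invented for the example, not final):

{
"classname" "func_soundzone"
"model" "*23"
"soundfx" "1"
}

Where "soundfx" would pick the echo/reverb preset and the brush model marks out the zone.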
@fifth
#615 posted by Baker on 2016/12/07 02:47:54
The Engoo engine has some sound modification capability. It's one of the software renderer engines, like qbism super 8.
But yeah ... I'll do some thinking and figure out some way to support such a thing --- and in a way that is very friendly to mappers and easy.
/Isn't a soon thing but more likely a vague "probably some time in 2017" thing -- like the security cameras concept video. So many items in the queue + limited time ...
#616 posted by Spike on 2016/12/07 02:53:37
I can see 5 being used for diffuse+lightmap+fullbright+caustics+detailmaps maybe.
Or add 3 more if you split the lightmaps into their own textures.
Then add two extras if you want light directions and deluxemaps for bumpmapping.
And add 4 more if you want to ramp up specular to a decent exponent...
Although it's kinda pointless to worry about it if the engine needs to retain a 4-tmu pathway anyway... or two... or one...
But yeah, more than 4 and you really should be using glsl/hlsl, trying to do everything with only an alpha channel for temp data is just unworkable.
@spike
#617 posted by Baker on 2016/12/07 03:05:57
Yeah, eventually I do have something like a "surface effect texture" planned in my head for possible surface effects.
Might as well ask your thoughts on this question ...
Although not soon, I would like to use probably 4-8 QME 3.1 bit flag slots, and would like to avoid any possible conflict with what FTE uses.
One example might be to indicate additive blending.
/I have not put much thought into this lately, but while discussion about future mapping enhancements ensues ... it's a fairly good time to bring up those thoughts.
Modelflags
#618 posted by Spike on 2016/12/07 03:42:25
hexen2 defines modelflags up to and including 23.
1<<24 is the first undefined one as far as fte is concerned.
Not that fte implements all of the hexen2 ones, or that I'm even aware of any models that use them... but hey.
that said, for additive, you're probably better off sticking with EF_ADDITIVE=(1<<5). Yes, this can be used by mappers, and does not necessitate any wire change (read: the protocol uses a previously-unused bit, which permits backwards compatibility on the assumption that old implementations can safely ignore it).
maybe some of the other bits you're thinking of are similarly already implemented in another engine.
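to put numbers on that (the defines are illustrative, these are not fte's actual header names):

#define EF_ADDITIVE       (1 << 5)     /* .effects bit: additive blend */
#define H2_MODELFLAG_MAX  (1 << 23)    /* hexen2 claims model flag bits 0..23 */
#define MDLF_FIRST_FREE   (1 << 24)    /* first model flag bit fte treats as undefined */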
Intel Video Identification Bug
#619 posted by mh on 2016/12/07 18:07:12
if (!strcmp(gl_vendor, "Intel"))
I see that Mark V has inherited this from the original code too.
The D3D equivalent (which is read from the driver so I don't have control over it) is actually "Intel(R) HD Graphics", so this test will incorrectly fail to identify it.
Strictly speaking this is also a bug in the GL case because Intel may change their vendor string.
Change it to:
if (strstr(gl_vendor, "Intel"))
#620 posted by mh on 2016/12/07 20:54:15
dx8_mark_v -width 750 -height 550
= borked up screenshots
Fixed.
This is actually a wrapper bug, so my apologies for my previous misdiagnosis.
@Baker: in the glReadPixels implementation, "case GL_RGB" should also be using "srcdata[srcpos * 4..." because the source data will be 4-byte, even if the dest is 3.
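In other words, something along these lines (a sketch of the idea only; names are made up, not the wrapper's actual identifiers):

static void CopyRGBAToRGB (const unsigned char *srcdata, unsigned char *dstdata, int numpixels)
{
	int srcpos;

	for (srcpos = 0; srcpos < numpixels; srcpos++)
	{
		// the source is always 4 bytes per pixel even when the requested dest format is GL_RGB
		dstdata[srcpos * 3 + 0] = srcdata[srcpos * 4 + 0];
		dstdata[srcpos * 3 + 1] = srcdata[srcpos * 4 + 1];
		dstdata[srcpos * 3 + 2] = srcdata[srcpos * 4 + 2];
	}
}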
I may have mentioned a while back that there are advantages to going native, and screenshots are one such.
Baker has implemented PNG screenshots, so in the native GL code he's doing a glReadPixels, then converting the memory buffer to PNG format (presumably via a statically linked libpng or maybe stb_image) and saving it out.
A native D3D version wouldn't do half of that work. Instead it could just use D3DXSaveSurfaceToFile on the backbuffer. Give it a filename, specify the type, boom, done, three lines of code.
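Roughly this (sketch only; error checking omitted, and the device variable and filename are placeholders):

IDirect3DSurface9 *bb = NULL;
d3d_device->GetBackBuffer (0, 0, D3DBACKBUFFER_TYPE_MONO, &bb);
D3DXSaveSurfaceToFileA ("quake00.png", D3DXIFF_PNG, bb, NULL, NULL);
bb->Release ();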
I'm going to add some native exports to the new wrapper for functionality such as this.
#621 posted by Gunter on 2016/12/07 21:36:10
The DX stuff mh is doing sounds really great.
I don't understand most of it, but I hear things like "improved performance." And bug fixes are always good.
And if I'm getting the gist of things regarding more rendering passes, this might allow addressing of some issues:
Fullbright textures should not be affected by contrast -- it makes them ugly.
Screen Blends should not be affected by gamma or contrast -- I have found this is the main thing that makes them far too intense. When I use vid_hardwaregamma 0 and various values for txgamma, the screen blends look perfect, though if I mess with the contrast slider, it makes the blends too intense again.
So yeah, if that's a possibility, the screen blends should be drawn last so they are not affected by gamma or contrast.
But I have no real understanding of this low-level 3D rendering stuff. Though it sounds like there will be a great benefit from mh's work.
@mh
#622 posted by Baker on 2016/12/07 21:49:24
Mark V applies gamma and contrasts to screenshots (only) when applicable.
For instance
1) If you are using hardware gamma/contrast, it will adjust the screenshot accordingly.
2) If you are not using hardware gamma/contrast, it will not apply it to the screenshot.
So depending on situation, writing directly to file is not necessarily desirable because screenshots could be, for instance, too dark/etc.
#623 posted by Gunter on 2016/12/07 23:49:46
Hm, there's an issue that looks similar to the previous QMB lighting texture thing with txgamma, but it appears whether or not txgamma is being used, wherever there is fog.
I first noticed it at very long range when I was zooming in, because I use very light fog, but if you set the fog much denser, like .5 and then fire the lightning gun, you will see the bad effect.
#624 posted by mh on 2016/12/08 00:14:29
Mark V applies gamma and contrasts to screenshots (only) when applicable.
Ahhh, understood.
By the way, here's a sneak preview of something I just did: http://i64.tinypic.com/o6flm1.jpg
This was actually taken with the GL version, just to demonstrate that there's no shader trickery or suchlike going on.
#626 posted by dwere on 2016/12/08 04:52:19
Mark V seems to struggle with complex maps that are smooth in QS. On my questionable system, I mean.
jam2_mfx.bsp is a good example. Right at the start, looking towards the palace produces a very noticeable slowdown.
Fitzquake v0.85 also has this problem.
@dwere - Vertex Arrays/ Water Warp/ iPad Version ...
#627 posted by Baker on 2016/12/08 05:31:11
@mh - Haha, your water warp. I suspected you would do that ;-)
---------------
@dwere
Especially on older hardware, vertex arrays help achieve a more reliable 72 frames per second in the BSP2 era.
I hadn't implemented them yet because there were many other things on the to-do list. I'm prototyping an iPad/iPhone version, which uses OpenGL ES, and that requires vertex arrays, so I actually have to implement them here in an hour or so. I'm still sticking with "that's it for 2016", but version 1.3 will have it.
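For the curious, the shape of the change (a rough sketch, not actual Mark V code; glvert_t and verts are illustrative) is going from a glBegin/glEnd pair per polygon to pointing GL at arrays and issuing one draw call, which is also the only option GLES gives you:

typedef struct { float xyz[3], st[2]; } glvert_t;

glEnableClientState (GL_VERTEX_ARRAY);
glEnableClientState (GL_TEXTURE_COORD_ARRAY);
glVertexPointer (3, GL_FLOAT, sizeof (glvert_t), verts[0].xyz);
glTexCoordPointer (2, GL_FLOAT, sizeof (glvert_t), verts[0].st);
glDrawArrays (GL_TRIANGLE_FAN, 0, numverts);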
---
@gunter - You are probably right on blending. I'm hoping that MH will provide an HLSL shader option for gamma/contrast in DX9 ... everyone has their wish list ;-) btw ... I still hate your computer, but I sure appreciate all the compatibility testing it has helped provide.
iPhone/iPad - 2017
I'm making a prototype iPad/iPhone version right now, with controls similar to Minecraft on the iPad (which is very playable on an iPad).
(Android is more of a pain because its development tools are very crude, like banging rocks together to make fire. iPhone development tools have always been very nice.)
/Any new builds will be 2017, and I'm unsure where an iPhone/iPad version falls in the priorities. Now that I have a stable release and zero issues outstanding, playing around is a bit more leisurely. May upload a video later tonight after I get it initially running ...
#628 posted by mh on 2016/12/08 07:35:12
I couldn't not do water warp.
Performance. Vertex arrays help with big maps, but they're only part of the story. What really helps is draw call batching, and vertex arrays are just a tool that allows you to do batching.
glBegin/glEnd code is fine for ID1 maps but as maps get bigger this kind of code gets slower and slower.
Toss in a few dynamic lights and/or animated lightstyles (which stock Fitz also handles very poorly) and it gets worse.
Batching is all about taking a big chunk of work and submitting it in a single large draw call, instead of several hundred tiny draw calls. Each draw call has a fixed overhead, irrespective of how much work it does, and that overhead is typically on the CPU. So if you have, say, 1000 surfaces all of which have the same texture and lightmap, drawing them with a single call will have 1/1000 the CPU overhead that drawing them with 1000 calls would.
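As a sketch of what that looks like (illustrative code, not the wrapper's, and firstvertex is an invented field): accumulate indices for every surface that shares the same texture and lightmap, then flush the whole lot with one call.

static unsigned short batch_indices[65536];
static int batch_numindices = 0;

void Batch_Surface (const msurface_t *surf)
{
	int i;

	// convert the surface's triangle fan to an indexed triangle list: (0,1,2) (0,2,3) ...
	for (i = 2; i < surf->numedges; i++)
	{
		batch_indices[batch_numindices++] = surf->firstvertex;
		batch_indices[batch_numindices++] = surf->firstvertex + i - 1;
		batch_indices[batch_numindices++] = surf->firstvertex + i;
	}
}

void Batch_Flush (void)
{
	if (!batch_numindices) return;
	glDrawElements (GL_TRIANGLES, batch_numindices, GL_UNSIGNED_SHORT, batch_indices);
	batch_numindices = 0;
}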
Stock Fitz would also benefit from lightmap update batching. Again it's the difference between "few large updates" versus "lots of tiny updates" with the former being more efficient. Stock Fitz also uses GL_RGB which compounds the problem by forcing a format conversion in the driver. This stuff is mostly hidden on modern hardware, but you can still find devices (and some scenes in maps) where you get an unexpected nasty surprise.
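Again as a sketch (the variable names are placeholders): grow a dirty rectangle per lightmap as surfaces change, then upload it once, and use a 4-byte format so the driver doesn't need to convert.

glBindTexture (GL_TEXTURE_2D, lightmap_texture);
glTexSubImage2D (GL_TEXTURE_2D, 0,
	0, dirty_y,                               // full-width rows are the easiest to merge
	BLOCK_WIDTH, dirty_height,
	GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,     // 4-byte format, no conversion in the driver
	lightmap_data + dirty_y * BLOCK_WIDTH * 4);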
Ironically, one way to make stock Fitz run faster can be to disable multitexture. Try it - pick a scene where it gets the herky-jerkys then compare with running with -nomtex. This will cause it to have fewer state changes between draw calls so that the driver can optimize better, as well as batch up its lightmap updates (for the world, not bmodels, which are still slow). Depending on the scene, the extra passes might turn out to be a lower workload.
If the engine itself implemented all of this batching then running with -nomtex would not be necessary.
The D3D9 wrapper takes the supplied glBegin/glEnd calls and attempts to be somewhat aggressive about batching them up. It converts everything to indexed triangle lists and concatenates multiple draw calls that don't have state or texture changes between them. It also attempts to merge multiple lightmap updates.
None of this is as efficient as if the engine did it itself, of course. Going into the actual GL code and doing it right from the get-go is always going to give the best results.
Warping The Water
Well, I wouldn't care whether "authentic" waterwarp is implemented into the DirectX or rather OpenGL build. But the one that would get it would be my personal default. :P
@Johnny Law
#630 posted by Baker on 2016/12/09 10:44:21
I'm sizing up Requiem to see what unique things it adds for likely addition ...
I know it can create items (interesting idea), for instance. jdhack had some interesting ideas in there.
A question for you, if you know ...
I can't get Requiem to run on Linux, it says "libGL.so.1 not found". Engines like ezQuake run fine for me on Ubuntu Linux or even super old FitzQuake 0.80 SDL. Could it possibly be expecting a 32-bit .so ?
If you happen to know ...
@NightFright
#631 posted by mh on 2016/12/09 11:04:32
Well, I wouldn't care whether "authentic" waterwarp is implemented into the DirectX or rather OpenGL build. But the one that would get it would be my personal default.
What if both were able to get it? :)
The Eternal Conflict
That would mean you and Baker found a way to solve your epic "conflict" regarding its implementation? Sounds like a great X-Mas gift to me, actually...!
#633 posted by Baker on 2016/12/09 14:36:12
I wouldn't call it a conflict, hehe.
The DirectX version implementing DirectX features is just natural.
The OpenGL build remaining at 1.2 for broad hardware compatibility is not something very bloody likely to stop MH.
To say MH is good at rendering is like saying Isaac Newton was good at calculus or that Einstein was pretty okay at physics ;-)
About MH ...
#634 posted by Baker on 2016/12/09 14:40:12
There's assembly language in his shaders in the RMQ engine.