Posted by metlslime on 2007/08/08 04:57:56
This is a counterpart to the "Mapping Help" thread. If you need help with QuakeC coding, or questions about how to do some engine modification, this is the place for you! We've got a few coders here on the forum and hopefully someone knows the answer.

Stream Of Consciousness
#311 posted by Preach on 2009/09/27 12:24:43
When it comes to functions like dprint and sprint, the usual trick is to call the print function after each ftos call, to flush the buffered string into the console, eg:
dprint("health: ");
s = ftos(self.health);
dprint(s);
dprint(", height:");
s = ftos(self.origin_z);
dprint(s);
dprint("\n");
This works fine with prints to the console, as the messages are cumulative, written one after the next. The problem with centerprint is that each call replaces whatever was previously printed, so you need to get everything into one message.
This of course leads us down the road of byte by byte messages, where you construct a function which can convert a float to a sequence of characters, which are then sent one at a time with WriteByte. So it's possible, just not pleasant.
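For illustration, a minimal sketch of that byte-by-byte approach might look like this (the function names are made up; WriteByte, WriteChar, MSG_ONE and msg_entity are the stock defs.qc names, and 26 is the protocol number of svc_centerprint):
// recursively write the decimal digits of a non-negative whole number
void(float num) cp_write_digits =
{
    local float d;
    if (num >= 10)
        cp_write_digits(floor(num / 10));
    d = num - floor(num / 10) * 10;
    WriteChar(MSG_ONE, 48 + d);     // 48 is the character code of '0'
};
void(entity client, float val) cp_write_number =
{
    msg_entity = client;
    WriteByte(MSG_ONE, 26);         // svc_centerprint
    cp_write_digits(floor(val));
    WriteByte(MSG_ONE, 0);          // trailing null terminates the message
};
A real version would also write the surrounding text ("health: " and so on) as character codes before and after the digits.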
I've been trying to think of a way to write a "library" which would make writing this kind of centerprint byte by byte more straightforward. I think the best structure would be:
- A buffer of 16(?) floats, enough to store one line-width of a centerprint message
- A function called Flush() which sends all 16 characters in the buffer with WriteByte.
- The concept of a function object called a stream.
- Customisable functions can then be written which read values from streams into the line buffer.
It would be at the last level that layout could be controlled by the coder - deciding what to do if a stream is longer than the current line, whether you need to render a border on the first and last character, etc. The library would come with some examples.
The important idea is the stream. This would be a dummy object with .think set. What you set think to depends on the content of the stream, but the simplest example of a constant string would look very much like a monster animation function. For example, the sequence "hello, world" would begin:
void() helloworld_1 = [ 104, helloworld_2 ] {};    // 'h'
void() helloworld_2 = [ 101, helloworld_3 ] {};    // 'e'
void() helloworld_3 = [ 108, helloworld_4 ] {};    // 'l'
Then reading a character from the stream (held in self) is performed by the lines:
self.think();
char = self.frame;
(you need to make the entity whose think you're calling into self first - the normal quake rules)
The advantage of this method is that then you can invent a new function which, for example, streams the digits of a float to some precision. As long as you make the interface the same - use .think() to advance the stream a character and .frame to access the current character - then the higher level functions can handle it identically.
It would probably be helpful to add one property to the stream entity: length. That way, the highest level of function can see if a stream will fit on the current line, and if not then consider moving it to the next line. It would also be easy to wrap streams around lines: simply output as much as will fit on the current line, flush, then pass the partially completed stream to the next line.
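To make the interface concrete, here's a rough sketch of how a stream might be spawned and drained into a line buffer - the .length field and both function names are assumptions rather than existing code:
.float length;      // assumed field: how many characters the stream will emit
entity() stream_spawn_helloworld =
{
    local entity st;
    st = spawn();
    st.think = helloworld_1;    // first "frame" of the constant string
    st.length = 12;
    return st;
};
void(entity st) stream_drain =
{
    local entity oldself;
    local float i;
    oldself = self;
    self = st;                  // thinks must run with the stream as self
    i = 0;
    while (i < st.length)
    {
        self.think();
        // ...copy self.frame (the current character) into the line buffer...
        i = i + 1;
    }
    self = oldself;
};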
You can also imagine a stream modifier: an entity which, when queried, would read from a second stream entity and then either pass on the character it read, or skip it and move on to the next one - stripping out white space, perhaps.
And One More Thing
#312 posted by Preach on 2009/09/27 12:28:07
If you only need three values, and it's ok to print them out one after another, you could use vtos() on a specially constructed vector, with the three values set to _x, _y and _z of the vector. A little easier than overengineering stringstream for quake...
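For instance, a quick sketch (assuming self is the player at this point, and show_stats is a made-up name):
void() show_stats =
{
    local vector v;
    v = self.health * '1 0 0' + self.armorvalue * '0 1 0' + self.ammo_shells * '0 0 1';
    centerprint(self, vtos(v));
};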
Wait... What?
#313 posted by necros on 2009/10/04 02:51:37
two questions,
1. is the engine hard-coded to give players the axe and shotgun when they first load up a map?
2. (and this is the weirder one) if parm1-16 are global variables, how is it that in coop with more than 1 player they can retain items and such? are parms special variables that have extra copies for each player? how does the engine know which player's parms we're talking about then?
#314 posted by necros on 2009/10/04 02:56:57
scratch the first question, i did something dumb which made it seem that way.
A Parm And A Weave
#315 posted by Preach on 2009/10/04 13:30:10
The engine stores parms for each player internally. I believe the way it works is that the QC globals parm1-16 are set to the values the engine saved just before it calls PutClientInServer() for that player. So the parms are only valid to read in coop for the duration of that function (and any functions it calls). Otherwise you will probably be reading the parms of the last client on the list.
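For reference, this is roughly what the stock DecodeLevelParms (called from PutClientInServer in client.qc) does while the parms are still valid - it copies them into fields on the client straight away:
void() DecodeLevelParms =
{
    // (abridged from id1 client.qc)
    self.items = parm1;
    self.health = parm2;
    self.armorvalue = parm3;
    self.ammo_shells = parm4;
    // ...and so on up through parm9
};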
If you're removing the axe from the player, make sure you handle the case where he runs out of ammo with all weapons - in the standard code it will switch to axe without checking if you have one.
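The fall-through in question is at the end of W_BestWeapon in weapons.qc, which finishes with a bare return of IT_AXE; a guard along these lines (a sketch, with the shotgun as an arbitrary fallback) covers the no-axe case:
    // at the end of W_BestWeapon, replacing the unconditional "return IT_AXE;"
    if (self.items & IT_AXE)
        return IT_AXE;
    return IT_SHOTGUN;  // or whatever fallback suits your mod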
#316 posted by necros on 2009/10/04 20:01:10
great, thanks for the info. :)
#317 posted by necros on 2009/10/26 03:36:41
this one's more of a math problem...
how can i align monsters with the ground plane underneath their bodies? is it even possible? vectoangles only ever sets x and y, never z, so a full parallel alignment seems impossible.
Something Along The Lines Of
#318 posted by Lardarse on 2009/10/26 07:29:37
Trace directly downwards, and compare trace_plane_normal to the vertical, then derive the angles needed from that.
You probably need 2 calls to vectoangles, with things swapping around between calls.
Algorithm
#319 posted by Preach on 2009/10/26 10:47:00
This is roughly what I'd do, not properly converted into qc:
//yaw is the desired facing of the monster
mapAnglesToFloor(entity mon, float yaw) =
{
//do the trace and exit early in easy cases
traceline down
if(trace_fraction == 1 || trace_plane_normal == '0 0 1')
    return '0 yaw 0'
//construct a vector perpendicular to both the desired facing of the monster and the normal from the ground, by using v_right
makevectors('0 yaw 0');
//get the actual facing out vector using the cross product
facingvec = crossproduct(v_right, trace_plane_normal);
return vectoangles(facingvec);
}
I'm not sure it actually addresses your concerns about ang_z not being set though. It is also possible that I have the handedness of the coordinate system wrong, and so you need -1*v_right for it to work.
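For what it's worth, a more literal QC version might look something like this - CrossProduct is a made-up helper (QC has no cross product builtin), and the 256 unit trace depth is an arbitrary choice:
vector(vector a, vector b) CrossProduct =
{
    local float ax, ay, az, bx, by, bz;
    ax = a * '1 0 0';  ay = a * '0 1 0';  az = a * '0 0 1';
    bx = b * '1 0 0';  by = b * '0 1 0';  bz = b * '0 0 1';
    return (ay * bz - az * by) * '1 0 0'
         + (az * bx - ax * bz) * '0 1 0'
         + (ax * by - ay * bx) * '0 0 1';
};
vector(entity mon, float yaw) mapAnglesToFloor =
{
    local vector facingvec;
    traceline(mon.origin, mon.origin - '0 0 256', TRUE, mon);
    if (trace_fraction == 1 || trace_plane_normal == '0 0 1')
        return '0 1 0' * yaw;       // flat ground: just face the desired yaw
    // v_right is perpendicular to the desired facing, so crossing it with the
    // ground normal gives a forward vector lying in the floor plane
    makevectors('0 1 0' * yaw);
    facingvec = CrossProduct(v_right, trace_plane_normal);
    return vectoangles(facingvec);
};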
You also might want to consider interpolation of some sort so that the monster doesn't go crazy on broken ground or trisoup terrain. In that case, it is probably best to have a field storing the old "normal", then calculate a new one with
normal = normalize(normal + trace_plane_normal);
You could scale one of the vectors to cause slower or faster interpolation. By interpolating the normal vector, you can be sure that the changes in facing by the monster are responded to directly.
So
#320 posted by ijed on 2010/03/16 19:43:38
I've got a point entity called a func_door_model. The idea is for the mapper to specify a .bsp model in there and circumvent a load of issues - breaking the max models limit, patchy lighting, messing around with texture alignment etc.
It seems to work pretty well, the whole idea is to later on extend it to other entities like trains, breakables and so on.
The first problem (of many) is that doors created this way aren't generating their trigger field.
What I have is a 'wrapper' piece of code that specifies a .mdl and replaces the model field with that value - this is an engine fix for DP that I imported from elsewhere.
As I understand it:
cmins = self.mins;
cmaxs = self.maxs;
and
self.owner.trigger_field = spawn_field(cmins, cmaxs);
are what create the trigger field, but the first part isn't working when an external .bsp is referenced.
I can touch the door and fire it from a trigger, so the functionality is intact apart from the trigger.
Any ideas?
Relativity
#321 posted by Preach on 2010/03/16 21:41:35
If you placed your func_door_model at the origin, then I expect that the trigger field would work perfectly. If you enjoy a challenge then work from that hint and ignore the rest of the post...
Otherwise, I'm hoping that the following is the fix to your problem: the assumption that the trigger field code works on is that the entity is placed at the origin of the map. This is always true for baked-in models like a standard func_door.
For example, if you built a 64x64x64 func_door with a corner closest to the origin at '512 512 0', then door.mins = '512 512 0', door.maxs = '576 576 64' and door.origin = '0 0 0'. It's a bit strange to think about at first, because regular entities like players or monsters tend to have the origin half way between the mins and maxs. (*)
Luckily, this makes the fix very simple, and backwards compatible with a regular func_door.
cmins = self.mins + self.origin;
cmaxs = self.maxs + self.origin;
This should work with regular .mdl files too, although you'll run into another problem with them which is much harder to work around. You could also try swapping self.mins + self.origin with self.absmin, not sure if it's any better for compatibility with dp though.
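So the call that builds the field in the external-model door's spawn code might end up looking like this (a sketch, using the stock spawn_field from doors.qc):
    cmins = self.mins + self.origin;
    cmaxs = self.maxs + self.origin;
    self.owner.trigger_field = spawn_field(cmins, cmaxs);
    // or, if absmin/absmax prove reliable in DP:
    // self.owner.trigger_field = spawn_field(self.absmin, self.absmax);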
(*) There is an uncommon exception to this rule, found in rotating entities. Since models can only rotate about a single point - the origin - the bsp compiler has to play some games with them. It finds the origin of the info_rotate for the model, then moves the brushes of the model so that the origin of the map lines up with that spot.
Once it has compiled that model, it sets the origin value of the entity with the model to equal the origin of the info_rotate - thereby reversing the movement and restoring the original location of the entity. Of course, the lighting and texture alignment are likely shot to hell, but who cares!
Hm
#322 posted by ijed on 2010/03/16 22:36:12
You're the man.
Challenge is good but the first paragraph was a bit cryptic - I had to read the rest to get the bit about it being the world origin.
Makes perfect sense though, and the added bonus of true .mdl doors is interesting...
I've wondered about why the rotating mesh was moved to the center of the world, but docs there aren't much of.
In any case, this puts me well on the way - also going to try some stuff with trigger_once_bbox working in a similar way, the mapper setting the size of the trigger numerically.
Am I right in thinking you did something similar in Quoth2?
Just
#323 posted by ijed on 2010/03/16 22:37:10
Don't tell me how you did if you did - there has to be some challenge :)
Some Bits
#324 posted by Preach on 2010/03/16 23:04:14
We had the model-saving bounding boxes on triggers and external bsp format models. We don't have .mdl format doors yet, there's a little wrinkle with solid types which I don't think can be fixed nicely.
We also didn't fix the bug from #320 in quoth yet, although I expect it'll be in the next release... As a workaround, I suppose people will have to add the door trigger manually.
MOVETYPE_PUSH Not Bsp
#325 posted by ijed on 2010/03/16 23:11:27
Yeah, got that one. It also seems to do something strange to the bbox - I've seen this before where the size isn't what'd be expected.
I imagine that can be fixed by changing the movetype and putting an invisible helper in there - that's similar to our pushable solution.
Can Anyone Confirm?
#326 posted by necros on 2010/04/07 06:33:57
if i set .nextthink = time + 0.001 (or any very very small number), that should guarantee it will be called on the next frame, right?
Almost Surely
#327 posted by Preach on 2010/04/07 12:29:58
Most of the time that will be fine. Most engines enforce a 72 fps limit on physics, because once you get into higher framerates you start to see physics glitches. I assume these arise because of floating-point errors once you start evaluating collisions with increments that small. If you want to see it, try increasing the max_fps in fitzquake to the 300-400 range before riding on a lift (I may have remembered the command wrong, but there is one that lets you change it server side).
One circumstance where you might have a problem is if someone sets host_framerate very low to create a slow motion effect. It's not too much of a concern though, because it is someone messing with console variables, and you can't always prevent people breaking stuff if they use em.
However, there's an even simpler way: set .nextthink = time, or even .nextthink = 0.01.
I used to think that quake ran think functions if the following inequality held
time - frametime < .nextthink <= time
But actually, the left hand inequality does not exist. From the engine source:
    thinktime = ent->v.nextthink;
    if (thinktime <= 0 || thinktime > sv.time + host_frametime)
        return true;
    if (thinktime < sv.time)
        thinktime = sv.time;
    ent->v.nextthink = 0;
The engine resets nextthink to zero every time it calls a think function, and so skips think functions only if nextthink is beyond the end of the current frame or <= 0. You should be able to get things to run every frame just by setting nextthink to something <= time.
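So a think that wants to run every single frame can just re-arm itself, something like:
void() everyframe_think =
{
    // ...per-frame work goes here...
    self.think = everyframe_think;
    self.nextthink = time;      // not in the future, so it runs again next frame
};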
There's an interesting additional bit of info to be gained from this section of the engine code. We see that the QC value time is set by the following line
    pr_global_struct->time = thinktime;
The previously quoted source code sets thinktime from the entity's original .nextthink value. So during the QC think function, time is always reported to be the moment that the entity intended to think, rather than the server time at the end of the currently processing frame. The one caveat is that time is bounded below by sv.time, so if we set self.nextthink = 0.01, we won't suddenly be miles back in the past when think is called.
Some of that explanation might be a bit garbled, I was figuring it out myself as I went. So maybe this timeline will help
We assume that the server is running at 50fps, so host_frametime = 0.02.
sv.time
This is the "old time", the first point in time not processed in the last frame. Time is set to this value for the following things: StartFrame, and think functions with .nextthink <= sv.time.
sv.time + delta
As long as delta <= 0.02, the think for that entity must occur in this frame. Time is set to this value for think functions with sv.time < .nextthink <= sv.time + 0.02.
sv.time + 0.02
Anything with a think later than this is ignored, it will happen in the next frame instead.
In conclusion, I'd go with self.nextthink = 0.01, because time will be set to sv.time anyway. I've not tested if there are any side effects of that though. self.nextthink = time should also work and might look less weird.
Thinking Without Thinking
#328 posted by Lardarse on 2010/04/07 14:40:43
Personally, I prefer self.nextthink = time; although feel free to follow it with // Next frame if you want your code to be understood by others.
Note also that because the engine zeroes .nextthink before running a think, the think function only runs once unless .nextthink is set again. And setting it to 0 yourself will stop the entity thinking altogether - which is what causes the statue bug in id1.
Reading that through has taught me something, though. Namely that time is relative during a think function.
I do have one question, though: What happens first: think or touch?
#329 posted by necros on 2010/04/07 19:34:28
thanks, preach. as always, very informative!
while you're here, could you tell me if i understand .ltime correctly?
it seems to basically be time, except it only counts up when a bsp model is moving.
i believe it only works when the solid is SOLID_BSP or if the movetype is MOVETYPE_PUSH.
when it stops moving (comes to rest OR is blocked), the timer stops. this is how calcmove can work with a static nextthink even if a door or train gets blocked.
Oh, Yeah
#330 posted by Preach on 2010/04/07 21:28:31
I should have mentioned straight off that the rules are different if the entity is MOVETYPE_PUSH. But it's worth looking at, because it also lets us explore how physics timing relates to this new QC-think timing.
You are right about how ltime works, it's "local time" for the MOVETYPE_PUSH entity. It advances at the same rate as the normal clock except if
a) The entity is blocked, in which case time is not advanced
or
b) The entity's .nextthink will occur before .ltime + host_frametime (within this frame), in which case ltime is increased only as far as .nextthink (bounded below by 0)
The latter case is important because when ltime only advances by as much time as it needs to equal nextthink, the physics run on the entity this frame are calculated so that it only travels for this amount of time, rather than for the full length of the frame.
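This is exactly what the stock SUB_CalcMove relies on - paraphrasing the relevant lines from id1 subs.qc:
    // schedule the arrival think in local time, so a door that gets blocked
    // simply finishes its travel later rather than overshooting
    traveltime = vlen(vdestdelta) / tspeed;
    self.think = SUB_CalcMoveDone;
    self.nextthink = self.ltime + traveltime;
    self.velocity = vdestdelta * (1 / traveltime);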
I should add that again, this is only applied to MOVETYPE_PUSH entities. Other entities always move for the entire length of the frame, host_frametime. There is also a strange kind of time travel which can affect these entities. Think functions are calculated first, and when they are called the QC variable time might be anywhere between sv.time and sv.time + host_frametime, depending on the exact value of .nextthink. Once the think is resolved they will get moved, but if they collide then the QC time is always set back to sv.time for the .touch functions.
As a final thought, it is worth remembering that entities in quake are processed by the physics engine in sequential order, and setting the QC time variable does not interpolate any entities between their positions at the start and end of the frame. All the entities before the current entity will be at their end of frame position*. All entities after it will still be in the position they occupied at the end of the last frame. Knowing how the QC time variable is set is only of interest to resolve seeming paradoxes where you are sure two events occur in the same frame, but the QC reports different times for them.
* Ok, this might not be their final position, because something might collide with them or teleport them in QC or something. But in terms of their own physics, they are done for the frame, no more moving or thinking.
Clarification
#331 posted by Preach on 2010/04/07 22:20:23
b) should really have read:
"The entity's .nextthink is less than .ltime + host_frametime(before the time at the end of the frame) in which case ltime is increased only as far as .nextthink (bounded below by 0) "
This would make it clear that ltime does not advance when nextthink <= ltime.
Back To Ltime
#332 posted by necros on 2010/04/10 00:20:02
is there any reason why ltime would run slower than normal time?
i can't really explain it better than that, other than when i watch both ltime and time displayed next to each other (for example, bprinting both every frame), ltime counts up at a slower rate.
Imprecision
#333 posted by Preach on 2010/04/10 00:28:18
Only thing I can see which might account for it is that sv.time and host_frametime are both double-precision floats. Since ltime is stored in a QC field it only retains the precision of a regular float. In fact, the increment gets cast to a single-precision float before it's added to the single-precision ltime value. So my guess would be that it's a byproduct of the lower level of precision in that calculation.
#334 posted by necros on 2010/04/10 01:30:05
i tested a bit more and slow frametimes seem to slow it down a lot? o.0
if i set fitzquake's maxfps to 10, it counts extremely slowly. i don't really understand what's going on here.
Sorry Lardarse
#335 posted by Preach on 2010/04/10 02:33:45
I do have one question, though: What happens first: think or touch?
I totally missed this question first time round, here goes:
For a player:
PlayerPreThink always comes first.
Next is the .think function (for frame animation etc)
Then comes whatever kind of physics the player has, so if there's a collision .touch happens now
Finally PlayerPostThink is run.
If it's a MOVETYPE_PUSH entity, it moves first (and so any .touch and .blocked calls occur), and then calls its .think function after that*.
MOVETYPE_NONE won't create collisions by itself, so it only runs .think. It's worth considering that a MOVETYPE_NONE can still be involved in collisions, and in that case the .touch may be called before or after the think, depending on whether the colliding entity is before or after this one in the list. The same is true for any entity when they are the second party in the collisions.
MOVETYPE_NOCLIP doesn't generate collisions either, it's just .think when it comes to processing.
MOVETYPE_STEP is for monsters, and they have a weird order. If they're free-falling from a jump, then that gets processed first, and any collision there produces a touch before the think. However, most of the time monsters only move during think functions. One of the two navigation builtins is called in the think, which hands control back to the engine for physics to be run on the "step". If it collides, then the touch gets called on top. So the think will begin before any touch, and finish after the touch!
Finally, all the other "projectile" movetypes run .think before moving, and so before they have a chance to .touch anything.
In conclusion, it's a mess! The entity might always be the "other" entity in a collision, so you can't really say concrete stuff about whether the touch will happen before or after a chance to think. I think the list here is still useful though, for knowing when the physics runs. This means you can always be aware of whether an entity has already moved or not while in a think function.
*There is an argument here for rewriting the hipnotic rotator code. If you make a function which is called from StartFrame which loops through all the blocker entities and sets velocities for them, you have changed their velocity before physics runs on them, so they'll move into place this frame. The current code sets it in the think, which means they're always lagging behind the target for a frame. You would also be able to set .nextthink to (.ltime + frametime * 0.5), and use a doubled velocity to ensure exact motion.
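A rough sketch of that approach (the classname, the .dest field and the function name are all hypothetical, not the actual hipnotic code):
.vector dest;   // hypothetical field: where the blocker should be this frame
void() rotate_blockers_prephysics =    // call this from StartFrame
{
    local entity e;
    e = find(world, classname, "rotate_blocker");
    while (e)
    {
        // velocity is set before physics runs on the blocker this frame,
        // so it reaches e.dest this frame instead of lagging one behind
        e.velocity = (e.dest - e.origin) * (1 / frametime);
        e = find(e, classname, "rotate_blocker");
    }
};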