Posted by Kinn on 2004/10/07 07:13:44 |
Discuss modified Quake engines here, I guess. What engines do you use? What are the pros/cons of existing engines? What features would you like to see implemented/removed? |
Groovy!
#648 posted by killpixel on 2017/01/09 00:25:17
Engine CPU / GPU Usage
#649 posted by Kinn on 2017/01/09 13:23:20
Can anyone recommend a tool I can use to profile the cpu / gpu usage of quake engines on my pc? I have noticed that when using Quakespasm, my laptop is silent, but on other engines I've tried (FTE, MarkV), running under the same config I use for QS, my laptop's fan blows like a foghorn for some reason...
#650 posted by Kinn on 2017/01/09 13:56:39
Gonna have a punt with a tool called HWiNFO. Will report if I see anything interesting
#651 posted by mh on 2017/01/09 15:34:48
This is almost certainly happening because QS is using a Sleep call on every iteration of its main loop, whereas other engines are not.
There are valid arguments to be made that running flat out is actually correct behaviour for a game engine.
Ok
#652 posted by Kinn on 2017/01/09 15:44:14
Might that also be related to how QS's CPU usage rockets when I have it minimised vs when I'm playing it?
Right, So
#653 posted by Kinn on 2017/01/09 18:43:07
I used HWiNFO to log all the stats when running the quakes.
I did a test where I played demo1 twice in QS, and did the same for MarkV using equivalent graphics settings etc. (actually I played demo1 three times in MarkV because the fan was really going crazy after the second time round and I wanted to see if it was getting any hotter)
So here is a simple chart of the max CPU temp across all my cores when playing the demo, sampled at 2-second intervals (there are fewer samples for QS because I played the demo twice there vs three times for MarkV)
https://docs.google.com/spreadsheets/d/1ozielkRSptmYI6PSSrn1QwSBcai8WTHauQ1pW8z__5s/edit#gid=0
With QS my CPU temp always stays just below 50C, but with MarkV it's constantly running over 70C.
Now I'm no engine guru but my gut feeling is I'd rather keep my CPU temp at 50C if I have a choice in the matter (considering I'm getting identical framerate in game either way)
mh - what are the advantages to "running flat out", as you say?
#654 posted by killpixel on 2017/01/09 19:00:51
Is this Win7? You can set your CPU states in power management; depending on your mobo you can do this via the BIOS. You can also have the client run on 1 core, which may help temps. Maybe even give Process Tamer a try.
70C isn't necessarily bad. I wouldn't want to run 74-80 constantly, though.
Win10
#655 posted by Kinn on 2017/01/09 19:13:57
Not sure I want to go digging around in bioses and tinkering with CPU settings, not really my area :}
#656 posted by ericw on 2017/01/09 20:31:17
Yeah, QS has a 1ms sleep in main_sdl.c. It also sets the Windows timer precision to 1ms (done by SDL) - afaik this raises power use somewhat for the entire system while QS is running, but it should mean that a 1ms sleep is actually 1ms and not 15ms or something.
Just looked at MarkV's source, it looks like you need to set "host_sleep" "1" to enable sleeping.
I imagine the temps will be about equal once MarkV is sleeping. On complex non-id1 maps QS may take a lead, owing to its new GL renderer, but not sure if this will show up in temps.
Host_sleep "1"
#657 posted by Kinn on 2017/01/09 20:54:44
Well damn, that did the trick! Just did another profile and the temps in MarkV are down to around 50C, similar to QS.
So, any reason why this isn't the default behaviour?
Advantages To Running Flat Out
#658 posted by mh on 2017/01/09 21:24:21
First of all, and to establish context, there at least used to be a perception that Quake is an older game, uses less resources, therefore it should have lower CPU usage than a newer game. However, a simple "while (1) {}" loop is sufficient to peg a CPU core at 100% (you won't see this in modern Windows because the OS will move it from core to core quite frequently). So it's not the amount of work you do that matters.
Both Windows Sleep and Unix/Linux usleep specify that Sleep times are a minimum, not a guaranteed time interval. To quote from the usleep man page: "The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers."
So when you sleep for 1 millisecond, you will actually be sleeping for more than 1, and the specification allows it to be much more than 1; this can be sufficient to cause you to miss a vsync interval, or disrupt input. 1 millisecond doesn't sound like much, but if you remember that 60fps is 16 milliseconds per frame, then it's a substantial enough fraction of it.
At this point a typical reaction might be something like "Quake frame times are so short anyway, surely this is just a theoretical problem". So let's assume that a typical frame time is 1ms. The frame runs, then it's followed by a bunch of Sleep(1) calls until it's time to run another.
It should be obvious what's going to happen - at some point the last of those Sleep(1) calls is going to fire, and if you sleep for just a little too long you're going to miss the interval and the next frame will run late. Slightly late with vsync disabled, but with vsync enabled you'll drop to 30fps.
So the reason why it's preferable to run flat out is because the alternative is potentially worse.
That shouldn't be read as meaning that all Sleep calls are bad. If you're running on battery your priority changes and you'll definitely want to sleep. But sleep as a general solution should be avoided; the OS should move your program between cores automatically which will prevent individual cores from ever running at 100% all the time, so nothing should be overheating. Or alternatively, if it's a multithreaded game that's already CPU-bound, then you most definitely don't want to be giving up a resource that you don't have enough of to begin with.
None of this is going to convince anyone who's already made up their mind, I know...
Very Interesting
#659 posted by killpixel on 2017/01/09 21:51:39
how does this relate to c-states and os power management? Do sleep calls trigger certain c-states? If a system has c-states and power management disabled does host_sleep override that or does it do nothing?
#660 posted by mh on 2017/01/09 22:02:46
I don't know about Linux, but on Windows Sleep calls don't define how your program interacts with OS power management. All that the documentation states is that the current thread is suspended for an interval based on the value you specify.
What I infer from that is that if there's other work that the core could be doing, it will do it, so you should have no expectation that calling Sleep will guarantee that you'll go into a power-saving mode. After all - Quake isn't the only program running on your OS.
Mh
#661 posted by Kinn on 2017/01/09 22:04:06
Thanks for that detailed explanation, most informative.
#662 posted by ericw on 2017/01/09 22:05:27
My only objection, for Windows anyway, is the current Sleep docs here seem to state pretty clearly that, if you set the timer resolution to 1ms with timeBeginPeriod, a Sleep(1) will actually sleep somewhere between 0 and 1ms, but not more than 1ms. IIRC, I have measured Sleep(1) sleeping for several milliseconds if you don't call timeBeginPeriod, but if you do it's reliably never more than 1ms. (Of course, this is one random test I did, one PC, probably windows 8.1 or 10, but at least it matched the docs.)
Agree regarding vsync... it always seemed broken to me, at least in QS (adds so much input lag that it's almost unplayable), maybe in most Quake engines (?). If vsync is in use, shouldn't the throttling of the main loop be left to the blocking "swap buffers" call?
I See, Thanks
#663 posted by killpixel on 2017/01/09 22:21:09
#664 posted by mh on 2017/01/10 00:40:06
If vsync is active then I suggest not throttling by any other means, whether via sleep or Host_FilterTime. IIRC a glFinish immediately after SwapBuffers helps somewhat, causing input to sync up correctly with the display; otherwise the CPU may be running a few frames ahead of the GPU.
Last time I checked vsync was a busy-wait on some hardware/drivers, but that was in the D3D 9 era.
@Kinn
#665 posted by Baker on 2017/01/10 02:16:59
About 3 weeks ago, I tried the Mark V WinQuake on a high-end gaming laptop I imagine is similar to yours.
The experience was absolutely terrible and was like 10 fps.
Keep in mind on my Windows machine which is nothing remarkable, I get an easy 300-400 fps in Mark V WinQuake on resolutions like 640x480.
I made some adjustments in the just uploaded Mark V which I think may dramatically improve the WinQuake version with machines like yours.
(*I say "I think" because I made some adjustments for the GL version and it was perfect afterwards on that machine. It slipped my mind to do a WinQuake test, so I can't know for sure.)
Baker
#666 posted by Kinn on 2017/01/10 12:10:46
Ace, just had a go on your new GL WinQuake! Will report back in the other thread
Is Darkplaces Still In Development
is it worth my while reporting a bug?
DP Is Not In Active Dev
as far as I know...
#669 posted by ericw on 2017/01/17 19:40:57
There is a GitLab repo with a bug tracker here; not sure if it's Xonotic-specific. However, it doesn't look like a fork; at least commits are synced between that and the icculus site. Still seems to be somewhat active.
https://gitlab.com/xonotic/darkplaces/issues
@Shamblernaut ... Check This Out!
#670 posted by Icaro on 2017/01/17 20:19:02
Thanks Guys :)
We'll see if this fixes the bug, which is likely linux specific.
heh, doesn't fix it... The 64 bit glx version forces my monitors into mirrored mode, rather than primary / secondary.
SDL version works fine though.
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.