Multithreaded VIS For Windows
WVis is a modified version of Bengt Jardrup's VIS tool. It's the exact same program as the one you can get here...

http://user.tninet.se/~xir870k/

...except that it has multithreading turned on. Basically, you get 1 thread for every core/processor you have on your machine. This will speed up VIS compile times dramatically if you have a machine with multiple cores/processors.

Usage and syntax are exactly the same.

Enjoy!

WVis
http://www.quaketastic.com/upload/files/tools/windows/misc/WVis.zip
 
Thanks for the source.

I'll enable this in Linux bjptools, too, unless someone else does it first. Although I reckon it's done a little differently in *nix.

The base is already there. 
 
It probably is. You might want to grab the source code for the Mac version of the tools instead and use that as a reference. It's much closer to *nix than this Windows version will be.

I've lost the link for that source code and I don't have access to my FTP from work ... I'll try to remember to post the link though. I'll just post a new archive to Quaketastic when I get a chance.

The Windows stuff is fairly far removed. 
Cool 
Anyone tried it with wine ? 
Question 
Does Wine take advantage of multiple CPUs the way Windows does? I mean: since Windows uses all the available CPU cores (4 for a quad, 2 for a dual), can Wine make the same use of multiple cores on a Unix/Linux system (e.g. a SPARC processor, etc.)?
Of Course 
 
Seems To Run Good :) 
wine 1.1.26
threads 2

Full: 72.03%, Elapsed: 1h 9m, Left: 1h 58m, Total: 3h 8m, 37%

CPU usage 90% 
 
top shows every cpu you have as 100% I think. So if you have a quad, that would be 400% overall. I have 2 cpus and top goes over 100% sometimes for me. 
Cluster 
Anyone want to make vis use MPI (Message Passing Interface)? I can run it on a 100+ node cluster if y'all want. This would also pave the way for people to cooperatively use vis over the internets -- are we communists yet?

What's the speedup per additional processor?
Clarification 
I mean, more than just one data point (thanks Willem!). A run that takes an hour on one processor would be a better benchmark than one that is already short & reasonable (e.g. Evil Exhumed).
 
How complicated is it to make a distributed app like that? Seems like an undertaking... 
Another Lunchtime? 
 
Willem 
Just put in the MPI hooks in the source, and compile against the MPI libs. 
 
I'm not familiar with how any of that works at the moment ... so if I add the hooks to VIS this will all work magically? Doesn't there need to be something running on a server somewhere? 
Alternate Approach 
If you know how to split up the map into independent parts, you could apply the map reduce paradigm to the vis problem.

Re: MPI inclusion: magic doesn't exist. 
Inertia 
So you're one of the ones (like me) who just thought that the little girl in Pan's Labyrinth was schizophrenic? 
 
I disagree about magic. 
On Magic 
Yesterday I defined "faith" for a buddy:

Faith: Belief in irrational hypotheses. 
 
So I'm right then? It's a major undertaking? 
MPI 
I don't know. It's worth exploring. As I said, if you can restructure it to use MPI, you can make the leap to making it work over the internet.

Or go the whole hog and figure out how to separate the map into mutually-independent parts for vis. 
Or 
Black-Dog or Moaltz or Bambuz can port the thing to Haskell! 
 
It's something to think about but I wouldn't hold your breath. :) 
 
The source code is available if someone wants to get THAT ambitious, BTW:

http://www.quaketastic.com/upload/files/tools/windows/misc/WVis_Src.zip 
 
Actually, it would be nice to see a completely modern rethinking of vis and light. For example, it should be possible to use your GPU to render lightmaps for you. As for vis, I'd like to see alternate implementations, maybe even tradeoffs such as 90% faster for 10% higher polycount.

The problem with these super-long vis times is that nobody vises their maps during development; they only do it once at the end, and if there's a problem, they're not willing to fix it and waste another 2 months on another vis.

I don't have any fully-developed ideas for vis, but I could imagine some sort of raycasting approach that sends rays out in a coarse grid, then subdivides that grid as needed. Or some sort of CSG method that starts with the half-space on the far side of a portal, then flows recursively through portals, clipping that half-space as you recurse through each next portal. Or maybe a system where you use the standard algorithm but have a much smaller set of portals to work with, either through automatic selection of "important" portals (which would be awesome but I can't quite see how to do it), or through mapper-defined "portal brushes".
 
Wait, this is the VIS for Quake(1), right? This thing was initially developed more than 13 years ago and it is still _this_ slow? I've never done anything for Q1, but a VIS compile of a (cleanly built) Q3 map typically took less than 2 minutes.

The Q3 VIS, as I understand it, recursively subdivides a map into a tree of convex shapes and then calculates which leafs can "see" each other. Does the Q1 VIS do any more than that, or why exactly does it take this long to compile a map? Maybe the algorithm of the Q1 VIS is just very naive (in which case one could probably adopt some code of the Q3 VIS)? 
 
I recall reading back in the day something Tokay or Romero said, I think, that id's tools allowed you to build the maps incrementally. Implying that you could create one section, vis it, test it, build another part, vis that, then add it to the already-vised part.
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.