Thanks For That Necros
#12832 posted by ijed on 2007/10/13 18:41:11
Looks like a viable alternative to the Nvidia toolset.
It Appears
#12833 posted by Lunaran on 2007/10/13 20:51:57
to be based on difference-of-Gaussians filtering, which I often do by hand when I'm trying to pull a more accurate normal out of a diffuse. (There's nothing I hate more than seeing a brick wall normal map with a ridge at the top of every brick and a trough at the bottom because someone just fed the raw diffuse map into the nvidia plugin.)
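The difference-of-Gaussians trick Lunaran describes can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy and SciPy are available; the function name and sigma values are arbitrary choices, not anything from the tool under discussion:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_heightmap(gray, sigma_small=1.0, sigma_large=4.0):
    """Approximate a height map from a greyscale image by subtracting
    a wide blur from a narrow one. The wide blur captures slow shading
    and color variation, so subtracting it suppresses exactly the
    lighting/dirt information you don't want in a height map."""
    img = gray.astype(np.float64)
    fine = gaussian_filter(img, sigma_small)
    coarse = gaussian_filter(img, sigma_large)
    dog = fine - coarse
    # Rescale to 0..1 so the result is usable as a height map.
    dog -= dog.min()
    if dog.max() > 0:
        dog /= dog.max()
    return dog
```

Running it at several sigma pairs and blending the results is roughly what the program's detail sliders let you do interactively.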
Lun
#12834 posted by necros on 2007/10/13 20:56:20
that still happens sometimes with this. it's far from perfect, but much better than just sticking the diffuse in there.
you can fiddle with different levels of detail to bring out (or hide) different aspects of the image. the 'very large' slider will (usually) accentuate the larger differences in height. the 'analyse for 3d' thing is hit and miss. sometimes it works, but other times, not so much.
you could probably get a good mix by drawing the basic shape of the stuff yourself in greyscale in photoshop and then mixing in the greyscale this program creates, masking out the less good areas or something.
Different Thing
#12835 posted by ijed on 2007/10/14 03:00:01
I've heard a lot of arguments about retouching normalmaps; hand painting, basically. The constant pompous response by programmers is that it's no longer a normalmap if retouched.
I'll be trying this out monday and through the week by which time I'll be (maybe) posting some intelligent feedback.
When putting the diffuse through the normal mapping filter, are you feeding it the exact same diffuse or selected layers of it? Obvious question, I think, but I've been evolving methods for a while and how others have approached it is interesting.
Especially when some twat tries to justify their paycheck by saying everything needs diffuse, normal, specular, etc. Even though on screen it's smaller than a little fingernail.
Well
#12836 posted by Lunaran on 2007/10/14 08:12:08
The constant pompous response by programmers is that it's no longer a normalmap if retouched.
Artists are capable of understanding what a "normalized vector" is and how that translates to color because they're reading the normal map in the first place. Programmers don't trust artists to be smart. :) (and granted, it's usually because artists are putting 2048 textures on things like donuts)
Don't ever just feed the diffuse into the normal map filter. At the very least, pick the color channel that looks the most like a heightmap and retains the least color/dirt/lighting information. Invert it if that helps. But remember that the height map you feed the filter will not look like the diffuse map at all, and sometimes it takes a lot of work with overlays/airbrushing/dodging and burning/etc, but it's always worth it.
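Once a usable height map has been prepared, the conversion the filter performs is essentially a gradient operation. A minimal sketch assuming NumPy; this is the generic central-differences method, not necessarily what the nvidia plugin does internally:

```python
import numpy as np

def height_to_normals(height, strength=1.0):
    """Convert a height map (float array, values 0..1) into a
    tangent-space normal map. Central differences approximate the
    surface slope; each normal is (-dh/dx, -dh/dy, 1), normalized,
    then packed from [-1, 1] into [0, 1] RGB."""
    h = height.astype(np.float64)
    dy, dx = np.gradient(h)          # np.gradient returns (d/axis0, d/axis1)
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(h)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return n * 0.5 + 0.5             # pack for storage as an RGB image
```

A perfectly flat height map comes out as the uniform "up" color (0.5, 0.5, 1.0), which is the familiar pale blue of an empty normal map.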
#12837 posted by metlslime on 2007/10/14 09:43:23
The constant pompous response by programmers is that it's no longer a normalmap if retouched.
Well, technically, for an image to be a normal map all the vectors have to have the same length, which can easily stop being true if you retouch it as an RGB image. But since those vector components are pretty much passed directly into the dot3 equation, it probably doesn't matter; as far as I know, none of the math actually requires that the vectors be normalized.
Uh...
#12838 posted by bal on 2007/10/14 09:53:28
But any artist with half a clue will obviously normalise his map once he's messed around with it by hand, so the map is just as valid as it was to start with really.
I pretty much do what Lun says, and I split up my diffuse a lot. When I'm making the diffuse I think a lot about which layers I should keep to be able to produce the best normal map, using different settings on each layer as I transform it, and then combining them together. When I have the time I do a quick object in Max or ZBrush though.
Question:
#12839 posted by metlslime on 2007/10/14 10:00:49
how do you normalize a normal map -- is there a plugin or filter in photoshop that does it?
Metl...
#12840 posted by bal on 2007/10/14 11:24:29
Bal:
#12841 posted by metlslime on 2007/10/14 11:51:28
ah, I'd used that before but didn't know it could normalize an existing normalmap.
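For anyone without a plugin handy, renormalizing a retouched map is simple enough to script. A hedged sketch in Python/NumPy (the function name and interface are mine, not from any tool): unpack each 8-bit pixel into a [-1, 1] vector, divide by its length, and repack.

```python
import numpy as np

def renormalize(normal_map_rgb):
    """Re-normalize an 8-bit RGB normal map after hand edits, so every
    pixel encodes a unit-length vector again (up to quantization)."""
    # Unpack 0..255 bytes into vectors in [-1, 1].
    v = normal_map_rgb.astype(np.float64) / 255.0 * 2.0 - 1.0
    length = np.sqrt((v ** 2).sum(axis=-1, keepdims=True))
    length = np.maximum(length, 1e-8)      # guard against all-zero pixels
    v /= length
    # Repack into 0..255 bytes.
    return np.clip((v * 0.5 + 0.5) * 255.0, 0, 255).round().astype(np.uint8)
```

The result is only unit-length to within 8-bit quantization error, which is exactly why metlslime's point stands: the dot3 math tolerates small length errors anyway.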
Btw, Are
#12842 posted by bambuz on 2007/10/14 20:15:11
asset creators already using that four flash camera or some laser scanner or something to make the normalmaps from real world stuff, along with the diffuse ones... specular might be harder?
Hrm
#12843 posted by megaman on 2007/10/14 21:03:18
i wonder if you couldn't extract the 3d from two slightly shifted photographs (like eyes)
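That idea is classic stereo matching: for each pixel, find the horizontal shift that best aligns the two photographs; the shift (disparity) is inversely proportional to depth. A toy one-scanline sketch assuming NumPy; real stereo pipelines rectify the images first and use much smarter matching than plain sum-of-absolute-differences:

```python
import numpy as np

def disparity_1d(left, right, max_disp=8, window=3):
    """Toy stereo matching on one scanline pair: for each pixel in the
    left row, find the shift d into the right row that minimizes the
    sum of absolute differences over a small window."""
    n = len(left)
    half = window // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        patch = left[x - half:x + half + 1]
        best, best_cost = 0, np.inf
        for d in range(max_disp + 1):
            if x - d - half < 0:
                break                      # shifted window would leave the row
            cand = right[x - d - half:x - d + half + 1]
            cost = np.abs(patch - cand).sum()
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp
```

Even this toy version hints at why it never became a texture-pipeline staple: it needs well-textured surfaces and controlled geometry to work reliably.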
Bambuz
#12844 posted by Lunaran on 2007/10/15 02:55:47
what are we, rich?
Lun
#12845 posted by bambuz on 2007/10/15 13:24:37
since it costs so much to make them by hand (artist salaries), I'd imagine equipment that speeds up creation many times over would quickly pay for itself.
But maybe the textures are already bought from third companies and this is their money making secret.
Bambuz
#12846 posted by bal on 2007/10/15 13:59:00
The use would be limited, as most of the time you just don't have what you want a texture of available as a real object.
Interesting Bal
#12847 posted by bambuz on 2007/10/15 14:55:00
Maybe that's the case. Or maybe it depends on the genre. I remember the guys at Remedy making Max Payne going to New York and taking a huge number of photographs of everything, and a lot of that ended up in the game. (The dev showed us where many of the textures had come from.)
On the other hand if you're doing some alien stuff maybe then original material is of limited use... But I even remember Jurassic Park guys using a laser 3d scanner so they could use elephant skin for the big dinosaur renders...
Everybody remembers that camera with four flash bulbs in the corners, which fire sequentially so that a lot of depth information can be extracted from the image automatically and quite easily. Was some project, maybe at MIT?
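A multi-flash rig like that works along the lines of photometric stereo: several shots of the same surface under known light directions let you solve for a normal per pixel. A minimal Lambertian least-squares sketch assuming NumPy; the function and its interface are illustrative only, not the actual rig's algorithm:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel normals from images of a static scene lit from
    known directions, assuming a Lambertian surface:
        intensity I = albedo * dot(N, L)
    Stacking one equation per light gives a linear system per pixel,
    solved here for all pixels at once by least squares."""
    L = np.asarray(lights, dtype=np.float64)           # (k, 3) light dirs
    I = np.stack([im.reshape(-1) for im in images])    # (k, npix)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)          # (3, npix) = albedo*N
    albedo = np.linalg.norm(G, axis=0)
    N = G / np.maximum(albedo, 1e-8)
    h, w = images[0].shape
    return N.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With at least three non-coplanar lights the system is solvable; shadows and specular highlights are what make real captures messy and keep artists in the loop.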
Ah
#12848 posted by bambuz on 2007/10/15 15:00:56
Hmm
#12849 posted by bambuz on 2007/10/15 15:03:21
now that I think more of it, it might not be so good for normalmaps, since it's more for discontinuity detection, but techniques might be tunable...
Bamb
#12850 posted by Lunaran on 2007/10/15 15:30:49
Lasers cost a lot more than humans, and they don't do 100% of the job, so you still need artists to 'clean up' the maps and make them tileable and they still have to match the diffuse maps and bla bla ...
Motion capture saves exactly 0 animator time. It works the same way.
Banbuz
#12851 posted by ijed on 2007/10/15 16:33:35
Heheh, look at any mud texture in almost any game. Go to Google Images, type mud and there you've got the base of the texture.
Lunaran is right - an artist is always needed to modify whatever image to make it actually usable. It's the same with outsourcing; the stuff always needs a few weeks of someone's time because management won't extend the contract to have it done properly.
Remedy sending some of their dev team to take photos around New York sounds like a tax loss situation - they spunk some cash on something effectively useless to drop their tax rating. Ok, maybe not in that case, but I guarantee that Remedy itself didn't pay.
Shit.
#12853 posted by ijed on 2007/10/15 16:35:15
How Did You Manage The Typos In Bambuz ^^^?
#12854 posted by RickyT33 on 2007/10/15 16:50:17
But Surely
#12855 posted by bambuz on 2007/10/15 17:04:23
cleaning up can take a lot less time than creating from the raw photo. If the team knows what to look for and has trained in the local environment beforehand, it could be quite effective at asset "stealing", I imagine.
I also imagine creating some normal maps by hand is quite time consuming; people model stuff in Max and create it from that, etc...
He specifically showed some floor texture, for example: the photo of the hotel corridor, then the face-on photo of the tiles, then the texture in the texture browser, and finally the texture in-game (I must say a considerable loss of vividness in the colors was observable along the way). Of course it was an old game and the world hadn't even heard of normal maps back then.
I don't know of funding, and there might have been other reasons to go there too, like to get some feel for what the city is like and get inspiration and even possibly just for fun.
As for motion capture, I assume there are still reasons for using it, like getting more realistic animations in some sense. Or maybe it's just a retarded trick used by EA for marketing reasons, to unfairly tread on small cute studios!
I Meant
#12856 posted by bambuz on 2007/10/15 17:06:22
cleaning up a normal map obtained by some hardware method would take less time than making one with an nvidia hackjob approximating tool from a picture and then tweaking and editing that ad infinitum to reach the equivalent quality... Now, of course, if you're willing to make do with less quality or realism, then...