General Abuse
Talk about anything in here. If you've got something newsworthy, please submit it as news. If it seems borderline, submit it anyway and a mod will either approve it or move the post back to this thread.

News submissions: https://celephais.net/board/submit_news.php
Thanks For That Necros 
Looks like a viable alternative to the Nvidia toolset. 
It Appears 
to be based on difference-of-gaussian filtration, which I often do by hand when I'm trying to pull a more accurate normal out of a diffuse. (There's nothing I hate more than seeing a brick wall normal map with a ridge at the top of every brick and a trough at the bottom because someone just fed the raw diffuse map into the nvidia plugin.) 
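For reference, a difference-of-gaussians pass is just the image blurred at a small radius minus the same image blurred at a larger one, which suppresses flat color and broad lighting gradients while keeping mid-scale shape. A minimal numpy sketch (assuming a grayscale float array; the sigma values are made-up defaults, not whatever this tool actually uses):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated 1-D kernel, edge-padded."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    # convolve rows, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, pad)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)
    return out[radius:-radius, radius:-radius]

def dog_height(img, small_sigma=1.0, large_sigma=4.0):
    """Difference of Gaussians: a crude band-passed height estimate."""
    return gaussian_blur(img, small_sigma) - gaussian_blur(img, large_sigma)
```

On a flat region the two blurs cancel exactly, which is why the filter ignores uniform color and responds only where the brightness actually changes.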
Lun 
that still happens sometimes with this. it's far from perfect, but much better than just sticking the diffuse in there.

you can fiddle with different levels of detail to bring out (or hide) different aspects of the image. the 'very large' slider will usually accentuate the larger differences in height. the 'analyse for 3d' thing is hit and miss. sometimes it works, but other times, not so much.

you could probably get a good mix by doing the basic shape of the stuff yourself, drawing the grayscale in photoshop, and then mixing in the grayscale this program creates, masking out the less good areas or something. 
Different Thing 
I've heard a lot of arguments about retouching normalmaps; hand painting, basically. The constant pompous response by programmers is that it's no longer a normalmap if retouched.

I'll be trying this out monday and through the week by which time I'll be (maybe) posting some intelligent feedback.

When putting the diffuse through the juggins of normalmapping, are you putting in the exact same diffuse or selected layers of it? Obvious question, I think, but I've been evolving methods for a while, and how others have approached it is interesting.

Especially when some twat tries to justify their paycheck by saying everything needs diffuse, normal, specular, etc., even though on screen it's smaller than a little fingernail. 
Well 
The constant pompous response by programmers is that it's no longer a normalmap if retouched.

Artists are capable of understanding what a "normalized vector" is and how that translates to color because they're reading the normal map in the first place. Programmers don't trust artists to be smart. :) (and granted, it's usually because artists are putting 2048 textures on things like donuts)

Don't ever just feed the diffuse into the normal map filter. At the very least, pick the color channel that looks the most like a heightmap and retains the least color/dirt/lighting information. Invert it if that helps. But remember that the height map you feed the filter will not look like the diffuse map at all, and sometimes it takes a lot of work with overlays/airbrushing/dodging and burning/etc, but it's always worth it. 
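Once you do have a decent height map, the conversion to a tangent-space normal map is mechanical: difference neighboring pixels to get the slope, then normalize. A rough numpy sketch (the `strength` knob is a made-up name for the usual bumpiness scale, not any particular plugin's parameter):

```python
import numpy as np

def height_to_normal(height, strength=2.0):
    """height: 2-D float array in [0, 1]; returns an RGB uint8 normal map."""
    # slope of the surface in x and y via central differences
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    # the surface normal is (-dh/dx, -dh/dy, 1), rescaled to unit length
    n = np.stack([-dx, -dy, np.ones_like(height)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # remap [-1, 1] -> [0, 255]; flat areas come out the familiar pale blue
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)
```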
 
The constant pompous response by programmers is that it's no longer a normalmap if retouched.

Well technically, to be a normal map all the vectors have to be unit length, which can easily stop being true if you retouch an RGB image. But since those vector components are pretty much passed directly into the dot3 equation, it probably doesn't matter, and none of the math really requires that the vectors are all normalized as far as I know. 
Uh... 
But any artist with half a clue will obviously normalise his map once he's messed around with it by hand, so the map is just as valid as it was to start with really.
I pretty much do what Lun says, and I split up my diffuse a lot. When I'm making the diffuse I think a lot about what different layers I should keep to be able to produce the best normal map, using different settings on each layer as I transform it, and then combining them together. When I have the time I do a quick object in max or zbrush though. 
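Renormalising a hand-edited map is itself mechanical: decode each texel from [0, 255] back to a vector in [-1, 1], rescale it to unit length, and re-encode. A minimal numpy sketch of that idea (not any particular plugin's implementation):

```python
import numpy as np

def renormalize_map(rgb):
    """rgb: (h, w, 3) uint8 normal map; returns it with unit-length texels."""
    # decode [0, 255] -> [-1, 1]
    n = rgb.astype(np.float64) / 255.0 * 2.0 - 1.0
    # rescale every texel to unit length (guard against zero-length vectors)
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    n /= np.maximum(length, 1e-8)
    # re-encode, rounding to the nearest byte
    return np.clip((n * 0.5 + 0.5) * 255.0 + 0.5, 0, 255).astype(np.uint8)
```

After this, every texel decodes back to unit length up to 8-bit quantization error, which is all "it's a valid normal map again" really means.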
Question: 
how do you normalize a normal map -- is there a plugin or filter in photoshop that does it? 
Metl... 
Bal: 
ah, I'd used that before but didn't know it could normalize an existing normalmap. 
Btw, Are 
asset creators already using that four-flash camera, or some laser scanner or something, to make the normalmaps from real-world stuff, along with the diffuse ones... specular might be harder? 
Hrm 
i wonder if you couldn't extract the 3d from two slightly shifted photographs (like eyes) 
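That's basically stereo block matching: for each pixel in one photo, slide a small patch along the same row of the other photo and keep the horizontal offset with the lowest difference. A toy numpy sketch of the idea (real photogrammetry needs camera calibration, subpixel search, smoothing, etc.):

```python
import numpy as np

def box_sum(a, r):
    """Sum over a (2r+1) x (2r+1) window at each pixel, via 2-D cumulative sums."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column so the subtraction below works at the border
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def block_match(left, right, max_disp=16, radius=4):
    """Per-pixel horizontal disparity between two grayscale float images."""
    best_cost = np.full(left.shape, np.inf)
    disp = np.zeros(left.shape, dtype=int)
    for s in range(max_disp + 1):
        # cost of assuming every pixel moved s columns between the two shots
        cost = box_sum(np.abs(left - np.roll(right, s, axis=1)), radius)
        better = cost < best_cost
        disp[better] = s
        best_cost[better] = cost[better]
    return disp
```

The disparity map is an inverse-depth estimate: nearby surfaces shift more between the two viewpoints than distant ones.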
Bambuz 
what are we, rich? 
Lun 
since it costs so much to make them by hand (artist salaries), I'd imagine equipment that speeds up creation many times over would quickly pay for itself.

But maybe the textures are already bought from third-party companies and this is their money-making secret. 
Bambuz 
The use would be limited, as most of the time you just don't have what you want a texture of available as a real object. 
Interesting Bal 
Maybe that's the case. Or maybe it depends on the genre. I remember the guys at Remedy, making Max Payne, going to New York and taking a huge number of photographs of everything, and a lot of that ended up in the game. (The dev showed us where many of the textures had come from.)

On the other hand, if you're doing some alien stuff, then original material is of limited use... But I even remember the Jurassic Park guys using a laser 3d scanner so they could use elephant skin for the big dinosaur renders...

Everybody remembers that camera with four flash bulbs in the corners, which fire sequentially so that a lot of depth information can be extracted from the image automatically and quite easily. It was some project, maybe at MIT? 
Ah 
found the links
http://photo.net/learn/technology/mflash/merl-non-photo
http://www.merl.com/people/raskar/NprCamera/
Seems they haven't yet discovered the application I had in mind for it. 
Hmm 
now that I think about it more, it might not be so good for normalmaps, since it's more for discontinuity detection, but the techniques might be tunable... 
Bamb 
Lasers cost a lot more than humans, and they don't do 100% of the job, so you still need artists to 'clean up' the maps, make them tileable, match them to the diffuse maps, and bla bla...

Motion capture saves exactly 0 animator time. It works the same way. 
Banbuz 
Heheh, look at any mud texture in almost any game. Go to Google Images, type 'mud', and there you've got the base of the texture.

Lunaran is right - an artist is always needed to modify whatever image to make it actually usable. It's the same with outsourcing; the stuff always needs a few weeks of someone's time, because management won't extend the contract to have it done properly.

Remedy sending some of their devteam to take photos around New York sounds like a tax-loss situation - they spunk some cash on something effectively useless to drop their tax rating. Ok, maybe not in that case, but I guarantee that Remedy itself didn't pay. 
Bambuz 
Heheh, look at any mud texture in almost any game. Go to Google Images, type 'mud', and there you've got the base of the texture.

Lunaran is right - an artist is always needed to modify whatever image to make it actually usable. It's the same with outsourcing; the stuff always needs a few weeks of someone's time, because management won't extend the contract to have it done properly.

Remedy sending some of their devteam to take photos around New York sounds like a tax-loss situation - they spunk some cash on something effectively useless to drop their tax rating. Ok, maybe not in that case, but I guarantee that Remedy itself didn't pay. 
Shit. 
 
How Did You Managed The Typo's In Bambuz ^^^? 
 
But Surely 
cleaning up can take a lot less time than creating from the raw photo. If the team knows what to look for and has trained in the local environment before, it could be quite effective at asset "stealing", I imagine.
I also imagine creating some normalmaps by hand is quite time consuming; people model stuff in max and create it from that, etc...

He specifically showed some floor texture, for example: the photo showing the hotel corridor, then the face-on photo of the tiles, then the texture in the texture browser, and finally the texture in-game (I must say a considerable loss of vividness in the colors was observable along the way). Of course it was an old game, and the world hadn't even heard of normal maps back then.
I don't know about the funding, and there might have been other reasons to go there too, like getting a feel for what the city is like, getting inspiration, and possibly just having fun.

As for motion capture, I assume there are still reasons for using it, like getting more realistic animations in some sense. Or maybe it's just a retarded trick used by EA for marketing reasons, to unfairly tread on small cute studios! 
I Meant 
cleaning up a normal map obtained by some hardware method would take less time than making one with an nvidia hackjob approximating tool from a picture and then tweaking and editing it ad infinitum to reach equivalent quality... Of course, if you're willing to make do with less quality or realism, then... 
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.