ObjReader Community
Tips & Tricks => Tips & Tricks => Topic started by: Michael Lobko-Lobanovsky on August 09, 2018, 10:18:58 am
-
So far ObjReader has been able to simulate 3D object bumpiness only with conventional normal maps, and I must admit it wasn't perfect at doing so. Now that the glitch is corrected and ObjReader has acquired new options to enhance conventional normal mapping with parallax and steep parallax bump mapping through its more versatile uber-shader, I feel it would be reasonable to brief you a little on what exactly we're doing and why. There will be no maths in my explanations (you can easily google the relevant info elsewhere), just a few words in plain English to help you understand where we stand now.
Normal Mapping
1. Conventional
Conventional normal mapping bakes the details of object bumpiness into predominantly purple-colored tangent-space normal maps that allow per-pixel simulation of fine bumps in model areas where, due to decimation, there are no corresponding geometric vertices anymore. Combined with OpenGL's automatic smoothing capability, normal maps are a very cheap means to enhance the looks of a model's low-poly LODs.
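To make the purple encoding concrete, here is a small Python sketch (not ObjReader's actual shader, which would do this in GLSL) of how a tangent-space normal texel is decoded back into a unit vector:

```python
import math

def decode_normal(r, g, b):
    """Map an 8-bit RGB texel of a tangent-space normal map back to a
    unit normal: each channel goes from [0..255] to [-1..+1]."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)   # renormalize after quantization

# The predominant purple (128, 128, 255) decodes to (almost exactly) the
# "flat" normal (0, 0, 1) -- which is why undisturbed areas look purple.
```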
The best quality normal maps should be baked from the original high-poly model before decimation and then applied to the decimated LODs. This can be done in almost any professional 3D editor like Cinema 4D, 3ds Max and Maya, Blender, etc. If for some reason this isn't possible (e.g. the high-poly model isn't available), then usable but considerably lower quality normal maps can be derived from diffuse maps in e.g. the Photoshop nVidia plugin or CrazyBump app.
Conventional normal maps are, however, of limited applicability because the range of bumpiness they can convincingly convey is rather narrow.
2. Parallax
Parallax bump mapping derives additional height information from the normal map itself and lets us control the amount of visible bumpiness in real time: we can make the cavities deeper, the bumps higher, or both, relative to conventional normal mapping, through just two additional parameters that the observer can control.
Parallax bump mapping looks much, much better than conventional normal mapping, but it still isn't physiologically correct. Distant bumps protrude just as prominently as the closest ones, and this isn't how the human eye and brain perceive bumpy objects in the real world.
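For the curious, the core of ordinary parallax mapping is just a texture-coordinate shift along the tangent-space view direction, scaled by the sampled height; the two controllable parameters mentioned above correspond to the customary scale and bias. A minimal Python sketch (parameter names and default values are illustrative, not ObjReader's actual hardcoded numbers):

```python
def parallax_offset(uv, view_ts, height, scale=0.04, bias=-0.02):
    """Shift texture coords along the tangent-space view direction.
    uv      -- (u, v) texture coordinates
    view_ts -- normalized tangent-space view vector (x, y, z), z > 0
    height  -- height sampled from the map at uv, in [0..1]"""
    h = height * scale + bias             # remapped height: the two knobs
    du = view_ts[0] / view_ts[2] * h      # larger shift at grazing angles
    dv = view_ts[1] / view_ts[2] * h
    return (uv[0] + du, uv[1] + dv)
```

Raising the scale deepens the cavities; shifting the bias raises or sinks the whole relief.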
3. Steep Parallax
Steep parallax bump mapping eliminates the above deficiency by using a separate grayscale height map that reduces the object's visible bumpiness as the apparent distance to the bumps increases. Steep parallax can also be controlled in real time through a number of parameters.
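Steep parallax replaces the single offset with a short ray march through depth layers; the march stops at the first layer that dips below the heightfield, which is where the extra computations come from. A Python sketch of the idea (layer count and scale are illustrative):

```python
def steep_parallax(uv, view_ts, sample_depth, num_layers=16, scale=0.05):
    """March the tangent-space view ray through 'num_layers' depth slices
    until it sinks below the depth map (0 = surface, 1 = deepest cavity).
    sample_depth(u, v) reads the grayscale height/depth map."""
    layer_step = 1.0 / num_layers
    # total uv shift across the full depth range, split per layer
    du = view_ts[0] / view_ts[2] * scale * layer_step
    dv = view_ts[1] / view_ts[2] * scale * layer_step
    depth, u, v = 0.0, uv[0], uv[1]
    while depth < sample_depth(u, v):   # ray is still above the surface
        u -= du
        v -= dv
        depth += layer_step
    return (u, v)
```

Real shaders usually refine the hit point with one extra interpolation step (parallax occlusion mapping), but the loop above is the essence of the technique.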
Note: ObjReader is not a 3D editor but just a viewer. Its uber-shader has a minimal set of uniforms to reduce the run-time load on the CPU and data bus. Both parallax and steep parallax parameters have been set to predefined general-case numeric values that are hardcoded in the shader.
Just like a conventional normal map, the height map is best baked from the original high-poly model but can also be derived satisfactorily in most cases from the object's diffuse map in e.g. the CrazyBump app. The lion model I sent you uses maps that have all been derived in CrazyBump from the one and only diffuse map the model was originally supplied with.
If the height map isn't available, ObjReader will use a blank white dummy map and behave just like an ordinary parallax renderer, though still performing quite a lot of now-unnecessary calculations. I think we will need a new steep parameter in the map_bump statement to control whether a material should be rendered with steep parallax rather than ordinary parallax. Ordinary parallax requires far fewer computations in the shader and is thus less stressful for the GPU.
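Purely as an illustration of the proposal (the parameter doesn't exist yet, and the flag's spelling and the file name are hypothetical), such a statement in the material file might look like:

```
# hypothetical syntax -- the 'steep' flag is only a proposal
map_bump -steep lion_normal.png
```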
4. Bent Normals
Normal maps allow us to reproduce fine and medium-sized bumps, but they don't affect the shadowing of the emulated relief. Unlike true geo, emulated bumps don't cast shadows on adjacent emulated bumps and cavities regardless of the position of the lights in the scene. The existing shadow-casting techniques are based on geo shadowing other geo, and they ignore the ghost relief emulated with normal maps.
There is a tiny tool written in C that, given a conventional normal map and a corresponding height map, allows us to modulate ("bend") the normals with additional height data in such a way that they would emulate casting gray-scale shadows dynamically on the adjacent relief in accordance with the current position of scene lighting.
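The exact algorithm of the C tool isn't shown here, but the general idea of bending normals with height data can be sketched as follows; the gradient weighting below is a hypothetical simplification, not the tool's actual code:

```python
import math

def bend_normal(n, height_gradient, strength=0.5):
    """Tilt a tangent-space normal against the local slope of the height
    map, so texels sitting behind taller relief lean away from the light
    and come out darker, i.e. 'shadowed' by their neighbors.
    n               -- unit tangent-space normal (x, y, z)
    height_gradient -- (dh/du, dh/dv) of the height map at this texel"""
    bx = n[0] - height_gradient[0] * strength
    by = n[1] - height_gradient[1] * strength
    bz = n[2]
    length = math.sqrt(bx * bx + by * by + bz * bz)
    return (bx / length, by / length, bz / length)
```

Because the bending is baked into the map once, the "shadows" then move with the scene lights at zero extra run-time cost, as described above.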
The lion model I sent you features both conventional and bent normal maps generated by the tool (bent ones are visibly more greenish). You can experiment with the material normal maps in the .MAT file to see the visual effect more clearly. As you nod the lion's head slowly towards you under forward directional lighting, you will see how the bent normal map's greenish areas start to cast gray "shadows" over the nearby bumpy areas of the head.
The bent normal map acts like some kind of continuous dynamically generated ambient occlusion but unlike the latter, it comes to us absolutely for free, computation-wise.
Shadowing
1. Light/Shadow Mapping
If a model storage format provides for at least two concurrent sets of texture coords, then it becomes possible to use tiny colorful light maps to emulate shadowed and lit areas of the scene's static geometry. OpenGL is able to interpolate colors over surfaces nicely, and when the tiny light map is magnified to the model's actual size, color interpolation produces beautiful emulated penumbra shadows pre-baked into the light map in 3ds Max, Maya, Blender, Gyles and some other 3D editors.
Light maps are, however, per-triangle or per-plane based and use a different topological layout in their own texture atlas than the conventional diffuse/normal/specular maps; hence the need for a concurrent alternative set of texture coords in the model format. If it isn't available, as in our .OBJ format, then a conventional but minified and colored shadow map could be used instead. However, I don't know of any 3D editors able to bake colored, rather than grayscale, AO maps to emulate genuinely colorful shadows and light spots.
Light maps can also be emulated roughly by vertex colors. But this technique works poorly for low-res LODs because their sparse vertices tend to distort the contours of the lit and shadowed scene areas.
Both light maps and vertex colors are used to colorfully GL_MODULATE the diffuse channel of the rendering pipeline.
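GL_MODULATE is nothing more than a component-wise multiply; the same math applies whether the modulating color comes from a light-map texel or an interpolated vertex color. A trivial Python sketch of what the fixed pipeline does per fragment:

```python
def gl_modulate(diffuse, light):
    """Component-wise multiply of two RGB colors in [0..1], as the
    GL_MODULATE texture environment does for every fragment."""
    return tuple(d * l for d, l in zip(diffuse, light))
```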
2. Ambient Occlusion
Ambient occlusion is similar to light mapping but is usually grayscale and is used to modulate the ambient component of the scene lighting. It can be static (pre-baked into an AO map in a 3D editor using the ordinary set of tex coords) or dynamic for animated models. Dynamic AO is, however, computationally very intensive and therefore an FPS killer for high-quality real-time rendering. Let me remind you that ObjReader currently supports static AO in its map_Ka channel.
________________________________________________
Hopefully this explains a bit of what I'm trying to achieve in ObjReader. I'd like to see it rendering medium-resolution models with high-resolution quality. I'm afraid I won't be able to maintain high FPS counts forever for your multi-million poly models of ever increasing complexity. I wish you'd find more fun in toying with texture maps than with geometry, my friend. :)
-
Thank you very much for this very detailed and informative heads-up!