Pegwars in Darwin

It's been a while since the last Pegwars effort; since then I've packed up shop and moved to Darwin! I've also been getting into Android programming. One of my Android apps involves porting some of the Stingray engine to Java and OpenGL, which was FUN. However, now that the dry season has hit, my cool 3D rain app can no longer be worked on (no way to test it, lol), so I've moved back onto Pegwars programming.

The first step was reading my codebase and understanding it again. To do this, I followed through my DogFight and SpaceFlight Python modules and drilled down into how the Scene works, with its n cameras, lights, render channels, Scene vizCallbacks and so on. Once I'd figured it all out, I was very happy to see that I had come a long way. With hindsight and a fresh perspective, I jumped in and fixed a few long-standing bugs.

Then, to test out my knowledge of the renderer, I launched in and implemented both depth-of-field scene blurring and materials that reflect the dynamic skybox, and got both new features done in a single night. That was an awesome feeling; I was really happy to get two huge features working in such a short amount of time. Neither required any new architecture work, just straight implementation using the existing paradigms. I'm so happy right now that the rendering engine I made last year is this flexible, and that I had already come so far.

I wrote up a how-to on adding new Post-Processing stages.

How to add a Post-Processing stage
==================================
Each CameraScene owns a "post" render channel strip. Currently every CameraScene runs the same Python code and has the same post setup. The steps below walk through adding a new stage, with a sketch pulling it all together at the end.

1. Add a RenderChannel to the CameraScene's post render channel strip, e.g. rcs.add( Stingray.RenderChannel(), "DepthOfField", 9.5 )

2. You need a list of Draw Commands to render your effect. Define them in XML and load them in Python (see Scripts/Post):

    self.dofCommands = []
    path = "data/depthOfField.xml"
    # each child node of the XML file becomes one DrawCommand
    d = Stingray.root().open( path )
    for name in d.children().keys():
        dc = Stingray.DrawCommandClassFactory( path + "/" + name, None )
        self.dofCommands.append( dc )
        self.postChannelStrip.strip.addDrawCommand( "DepthOfField", dc )

3. The SinglePassFilter DrawCommand takes 4 source textures and a destination texture, all specified in the XML; other DrawCommand types (e.g. TransferFilter) are declared the same way.
4. Create custom shaders for each step. The standard constants available to a post-processing pixel shader are listed below.

Pixel Shader Post Processing Constants

    //from ShaderConstants::setStandardPostProcessConstants()
    float4x4 view        : register(c0);
    float4x4 proj        : register(c4);
    float4x4 viewProj    : register(c8);
    float4x4 invViewProj : register(c12);
    float4x4 invView     : register(c16);
    float4 user1         : register(c20);  //user constant 1
    float4 user2         : register(c21);  //user constant 2
    float4 user3         : register(c22);  //user constant 3
    float4 user4         : register(c23);  //user constant 4
    float4 lut[64]       : register(c24);  //64-entry lookup table (c24..c87)
5. Assign Python control over the shader constants. In PyStingray/ShaderConstants.py:

    register( "dof", "nearZ", "shaders2/post/dof_lens.pso", 20, "near Z for depth of field camera lens" )

Then, building on that scene knowledge, I added something I've always wanted: image-based procedural world maps. This is really cool: I now have a 2K x 2K image where each pixel creates a 250 m x 250 m tile of world, such as city pieces, water pieces, suburbs, forest and so on. The image itself is procedural: using the Python Imaging Library, I've built a layer-based procedural image compositing system. In the end, I'm going to use this to generate planets, and to let players build hand-made cities and do terraforming.
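
The compositing side is nothing exotic. Here's a rough sketch of the idea using the Python Imaging Library; the layer functions, colours and blob counts below are placeholders, not the real Pegwars layers:

    from PIL import Image, ImageDraw
    import random

    SIZE = 2048   # 2K x 2K map: each pixel becomes one 250 m x 250 m world tile
    WATER = (0, 0, 128)
    LAND = (32, 128, 32)
    CITY = (160, 160, 160)

    def base_layer():
        # bottom layer: open water everywhere
        img = Image.new( "RGB", (SIZE, SIZE), WATER )
        mask = Image.new( "L", (SIZE, SIZE), 255 )
        return img, mask

    def land_layer():
        # a handful of random blobs of land; the mask says where the layer applies
        img = Image.new( "RGB", (SIZE, SIZE), LAND )
        mask = Image.new( "L", (SIZE, SIZE), 0 )
        draw = ImageDraw.Draw( mask )
        for _ in range( 12 ):
            x, y, r = random.randint( 0, SIZE ), random.randint( 0, SIZE ), random.randint( 64, 400 )
            draw.ellipse( (x - r, y - r, x + r, y + r), fill=255 )
        return img, mask

    def city_layer():
        # small square city patches
        img = Image.new( "RGB", (SIZE, SIZE), CITY )
        mask = Image.new( "L", (SIZE, SIZE), 0 )
        draw = ImageDraw.Draw( mask )
        for _ in range( 40 ):
            x, y = random.randint( 0, SIZE ), random.randint( 0, SIZE )
            draw.rectangle( (x, y, x + 16, y + 16), fill=255 )
        return img, mask

    def compose( layers ):
        # paste each layer over the running result through its mask, bottom to top
        world, _ = layers[0]()
        for layer in layers[1:]:
            img, mask = layer()
            world = Image.composite( img, world, mask )
        return world

    world = compose( [base_layer, land_layer, city_layer] )
    world.save( "worldmap.png" )   # each pixel then spawns the matching 250 m tile in-game

At 250 m per pixel, a 2048 x 2048 map works out to roughly 512 km a side, which is plenty of world to stream tiles into.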

Got to go now. I'll post screenshots up later, but first I need to finish optimising the world tile streaming; currently it's pretty jerky when moving around.
