Let's start with the result: aperricone.altervista.org/wegl2/
Recently I had the crazy idea of converting one of my experiments from OpenGL to WebGL; you can see the original experiment here: https://www.youtube.com/watch?v=qhwlZcrYKFc. It is about a spherical light source, with a position and a radius. I think lighting from an area light is more realistic than the other simulated models, and a sphere is a good approximation that can be applied in realtime.
The renderer is forward with a z-prepass. The steps of the OpenGL version are (a rough sketch of the frame follows the list):
- Render the scene into a cube depth map centered on the light, using a geometry shader so it is done in a single pass.
- Z-prepass of the scene on the main framebuffer.
- Heavy scene rendering, with:
  - a point light model modified to use the light radius;
  - PCSS, percentage-closer soft shadows;
  - SSAO, screen-space ambient occlusion, using the depth from the z-prepass.
- A post-process pass (very simple, only to gamma-correct the result).
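To make the order of the passes concrete, here is a minimal sketch of how one frame could be structured in the WebGL port. Every helper function (renderShadowCubeMap, renderDepthOnly, renderLighting, renderGammaCorrection) and the sceneFbo target are hypothetical placeholders, not the demo's actual code.

// Hypothetical per-frame pass order; all helpers are placeholders.
function renderFrame(gl, scene, light, sceneFbo) {
  // 1. Shadow pass: scene depth into a cube map centered on the light.
  renderShadowCubeMap(gl, scene, light);

  // 2. Z-prepass: depth only into the scene framebuffer, color writes off.
  gl.bindFramebuffer(gl.FRAMEBUFFER, sceneFbo);
  gl.colorMask(false, false, false, false);
  renderDepthOnly(gl, scene);
  gl.colorMask(true, true, true, true);

  // 3. Heavy pass: sphere-light shading, PCSS and SSAO; depth test EQUAL,
  //    so only the surfaces that survived the prepass are shaded.
  gl.depthFunc(gl.EQUAL);
  renderLighting(gl, scene, light);
  gl.depthFunc(gl.LESS);

  // 4. Post-process: gamma-correct the scene onto the canvas.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  renderGammaCorrection(gl, sceneFbo);
}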
Converting to WebGL: there is no geometry shader, obviously, so instead of a single pass it takes six different passes into six different framebuffers… pretty bad. A rough sketch of that loop is below.
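Without a geometry shader, each cube face becomes its own draw of the whole scene. This is a minimal sketch of what that loop could look like; faceFramebuffers, shadowProgram, cubeFaceViewMatrix and renderSceneDepth are hypothetical names, not the demo's code.

// Hypothetical: one framebuffer per cube face, with that face of the shadow
// cube map attached as the color target (the depth is encoded into color,
// see the next paragraph).
const faces = [
  gl.TEXTURE_CUBE_MAP_POSITIVE_X, gl.TEXTURE_CUBE_MAP_NEGATIVE_X,
  gl.TEXTURE_CUBE_MAP_POSITIVE_Y, gl.TEXTURE_CUBE_MAP_NEGATIVE_Y,
  gl.TEXTURE_CUBE_MAP_POSITIVE_Z, gl.TEXTURE_CUBE_MAP_NEGATIVE_Z
];
for (let i = 0; i < 6; i++) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, faceFramebuffers[i]);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // each face uses a view matrix looking from the light along that face
  renderSceneDepth(gl, shadowProgram, cubeFaceViewMatrix(light.position, faces[i]));
}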
Another problem: it is not possible to create a cube map with a depth format; there is even a conformance test to check that the browser does not allow it… Workaround: the format is color, and the rendering uses a shader that encodes the depth into the color, like this:
// pack zVal into the 8-bit RGB channels: coarse bits in r, finer bits in g and b
vec3 rgbVal = fract(zVal * vec3(1.0, 256.0, 256.0*256.0));
// subtract from each channel the bits already carried by the next, finer one
rgbVal -= rgbVal.yzz * vec3(1.0/256.0, 1.0/256.0, 0.0);
Then reading it back from a plain texture fetch becomes:
float zVal = dot(rgbVal,vec3(1.0,1.0/256.0,1.0/(256.0*256.0)));
I don’t like it, but this problem is solved. 🙂 I think it is a stupid restriction…
Next problem. I wrote “using the depth from the z-prepass”; in OpenGL this is very easy: create a depth texture and call glCopyTexImage2D, which copies the framebuffer into a texture, and if it is a depth texture it copies the depth buffer (here is the extension: depth_texture). It is allowed in OpenGL ES too: OES_depth_texture… But with WebGL's WEBGL_depth_texture, the only way to write a depth texture is to attach it to a framebuffer. WTF…
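For reference, this is roughly what the setup looks like when WEBGL_depth_texture is available: the only way to fill the depth texture is to attach it to a framebuffer and render into it. A minimal sketch, assuming width and height are defined elsewhere; it is not the demo's actual code.

// The depth texture can only be written by rendering into a framebuffer
// that has it attached; there is no copyTexImage2D path for depth in WebGL 1.
const ext = gl.getExtension('WEBGL_depth_texture'); // null if unsupported

const depthTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
              gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                        gl.TEXTURE_2D, depthTex, 0);
// (some implementations also want a color attachment for completeness)
// ...render here and the depth ends up in depthTex...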

First attempt: move the SSAO into the post process. It works, and it is simple and easy.
There is a little problem though: to do this, the ambient term during the scene rendering is set to black, so when the SSAO is added afterwards there is no way to know the color underneath, and the occlusion is just gray. It works, but it is a little ugly…
I could add another render target to store the color, but I don't like it.
The second attempt was to use the depth texture extension (OES_depth_texture, exposed to WebGL as WEBGL_depth_texture): I create another framebuffer with another depth texture, and instead of glCopyTexImage2D the program switches framebuffers and writes the current depth with a shader, like this:
gl_FragDepthEXT = texture2D(depthBuffer,texCoord).x;
In practice I reimplemented glCopyTexImage2D. It is very wacky, but the result is what I want!
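Concretely, the “copy” is just a fullscreen pass: bind the framebuffer that holds the destination depth texture and run a shader whose only job is the gl_FragDepthEXT line above (which also needs the EXT_frag_depth extension, enabled in the shader with #extension GL_EXT_frag_depth : enable). A rough sketch; copyFbo, copyDepthProgram, prepassDepthTexture and drawFullscreenQuad are hypothetical names, not the demo's code.

// Emulated glCopyTexImage2D for depth: a fullscreen pass that writes
// gl_FragDepthEXT from the source depth texture.
gl.getExtension('EXT_frag_depth');            // needed for gl_FragDepthEXT

gl.bindFramebuffer(gl.FRAMEBUFFER, copyFbo);  // destination depth texture attached here
gl.useProgram(copyDepthProgram);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.ALWAYS);                      // write the depth unconditionally
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, prepassDepthTexture); // source: the z-prepass depth
gl.uniform1i(gl.getUniformLocation(copyDepthProgram, 'depthBuffer'), 0);
drawFullscreenQuad(gl);
gl.depthFunc(gl.LESS);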
This experiment is not only about WebGL; there are other things I am proud of:
- The use of ammo.js for physics, running in a Web Worker, so the page is multi-threaded 😎 (see the sketch after this list).
- The use of an AudioContext, so there is sound. The result is trash, but I enjoyed doing it… 🙂
- Minification: I made a program that parses the HTML and minifies all the JS, then parses the JS and minifies all the shaders. Very cool.
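Since the list above mentions the Web Worker, here is a minimal sketch of how the physics can be split onto its own thread. The file name, the message format and the Ammo setup are placeholders, not the demo's actual code.

// physics-worker.js — hypothetical sketch of running the physics in a Web Worker.
importScripts('ammo.js');           // ammo.js: Bullet compiled to JavaScript

// ...create the Bullet world here with Ammo.* (broadphase, dispatcher,
//    solver, btDiscreteDynamicsWorld) and add the rigid bodies...
const bodyCount = 0;                // placeholder: number of rigid bodies

let last = Date.now();
self.onmessage = function (e) {     // e.data could carry user input
  const now = Date.now();
  const dt = (now - last) / 1000;
  last = now;
  // ...step the simulation by dt and read each body's transform
  //    into a flat Float32Array...
  const transforms = new Float32Array(16 * bodyCount);
  self.postMessage({ transforms }, [transforms.buffer]); // transfer, no copy
};

// Main thread side:
//   const physics = new Worker('physics-worker.js');
//   physics.onmessage = (e) => { /* update the scene matrices */ };
//   physics.postMessage({});      // ask for a new step each frame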
That's all, folks. Live long and prosper.