Problems implementing a screen space shadow ray tracing shader
- by Grieverheart
I previously asked here about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and that objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment.
The idea is to reconstruct the view-space position of each pixel and cast a ray from it to the light. The ray is then traced pixel by pixel towards the light, and at each step its depth is compared with the depth stored at that pixel. If the stored surface is in front of the ray, a shadow is cast at the original pixel.
At first I thought I could use the 2D DDA algorithm to calculate the parameter 't' (in p = o + t*d, where o is the origin and d the direction) at which the ray crosses into the next pixel, and then plug that 't' into the 3D ray equation to find the ray's z coordinate at that pixel's position.
For the 2D ray I would use the projected and biased 3D ray origin and direction, the idea being that 't' would be the same in both the 2D and the 3D equation. Unfortunately, this is not the case: because of the perspective divide, the projected position is not a linear function of 't', so equal steps in screen space do not correspond to equal steps along the view-space ray. Thus, some tweak is needed to make this work.
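To see the problem explicitly: the projection matrix $P$ itself is linear, and only the divide is not, so with $\hat{\mathbf{o}} = (\mathbf{o}, 1)$ and $\hat{\mathbf{d}} = (\mathbf{d}, 0)$ the clip-space point along the ray is $P\hat{\mathbf{o}} + t\,P\hat{\mathbf{d}}$, and the screen-space x coordinate after the divide becomes

$$x_s(t) = \frac{(P\hat{\mathbf{o}})_x + t\,(P\hat{\mathbf{d}})_x}{(P\hat{\mathbf{o}})_w + t\,(P\hat{\mathbf{d}})_w},$$

a ratio of two linear functions of 't' rather than a linear function, which is why a 't' computed by the 2D DDA does not transfer directly to the 3D ray equation.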
I would like to ask if someone knows of a way to do what I described above, i.e. given a 2D ray in texture-coordinate space, recover the corresponding point on the 3D view-space ray.
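The closest thing I can think of is the standard perspective-correct interpolation trick: along the projected segment, 1/z is a linear function of screen position (the same fact rasterizers rely on for attribute interpolation, assuming a standard perspective projection where w is proportional to view-space z). So one could project the two ray end points once, march in 2D, and recover the ray's view-space depth at every pixel without re-projecting. A rough sketch of what I mean (untested; it reuses the uniforms of the shader below, the 600.0 matches its hard-coded resolution, and the end point would still need clipping against the near plane if the light projects behind the camera):

// Sketch: 2D march towards the light with perspective-correct depth.
vec2 projectToTexCoord(vec3 p)
{
    vec4 clip = projectionMatrix * vec4(p, 1.0);
    return (clip.xy / clip.w) * 0.5 + 0.5;
}

float traceShadow(vec3 origin)
{
    vec3 end = light_p;                    // view-space segment end point
    vec2 uv0 = projectToTexCoord(origin);
    vec2 uv1 = projectToTexCoord(end);

    // 1/z interpolates linearly along the segment in screen space.
    float invZ0 = 1.0 / origin.z;
    float invZ1 = 1.0 / end.z;

    // Step roughly one pixel at a time (600.0 = screen resolution, as below).
    float numSteps = max(1.0, 600.0 * max(abs(uv1.x - uv0.x), abs(uv1.y - uv0.y)));

    for(float i = 1.0; i <= numSteps; ++i){
        float s = i / numSteps;            // screen-space interpolation factor
        vec2 uv = mix(uv0, uv1, s);
        if(uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) break;

        float rayZ = 1.0 / mix(invZ0, invZ1, s);   // ray depth at this pixel
        float sceneZ = projAB.y / (texture(DepthMap, uv).r - projAB.x);

        if(sceneZ > rayZ + 0.1)            // same bias/convention as main()
            return 0.2;                    // occluded
    }
    return 1.0;                            // lit
}

With something like this, the while loop in main() below would collapse to out_AO = traceShadow(origin);.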
I did implement a simple version of the idea, which you can see in the following video: video here
The shadows may look a bit pixelated, but that's mostly due to the step size in 't' that I chose.
And here is the shader:
#version 330 core

uniform sampler2D DepthMap;
uniform vec2 projAB;               // projection terms for linearizing depth
uniform mat4 projectionMatrix;

const vec3 light_p = vec3(-30.0, 30.0, -10.0);

noperspective in vec2 pass_TexCoord;
smooth in vec3 viewRay;

layout(location = 0) out float out_AO;

// Reconstruct the view-space position of the current pixel from the
// depth map and the interpolated view ray (see the PS at the end).
vec3 CalcPosition(void){
    float depth = texture(DepthMap, pass_TexCoord).r;
    float linearDepth = projAB.y / (depth - projAB.x);
    vec3 ray = normalize(viewRay);
    ray = ray / ray.z;
    return linearDepth * ray;
}

void main(void){
    vec3 origin = CalcPosition();
    if(origin.z < -60.0) discard;          // skip background pixels

    vec2 pixOrigin = pass_TexCoord;        // texture coordinates
    vec3 dir = normalize(light_p - origin);
    vec2 texel_size = vec2(1.0 / 600.0);

    float t = 0.1;
    ivec2 pixIndex = ivec2(pixOrigin / texel_size);
    out_AO = 1.0;

    while(true){
        // Step along the 3D ray and project the sample back to 2D.
        vec3 ray = origin + t * dir;
        vec4 temp = projectionMatrix * vec4(ray, 1.0);
        vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
        ivec2 newIndex = ivec2(texCoord / texel_size);

        // Only sample the depth map once we have entered a new pixel.
        if(newIndex != pixIndex){
            float depth = texture(DepthMap, texCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);

            // The stored surface is in front of the ray: occluded.
            if(linearDepth > ray.z + 0.1){
                out_AO = 0.2;
                break;
            }
            pixIndex = newIndex;
        }
        t += 0.5;
        if(texCoord.x < 0.0 || texCoord.x > 1.0 ||
           texCoord.y < 0.0 || texCoord.y > 1.0) break;
    }
}
As you can see, I just increment 't' by some arbitrary amount, calculate the 3D ray position and project it back to get the pixel coordinates, which is not really optimal. Eventually, I would like to optimize the code as much as possible and compare it with shadow mapping, in particular how each scales with the number of lights.
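One easy win I can already see (a purely algebraic rewrite of the loop above, untested): since the projection matrix is linear and only the divide is not, the per-step matrix multiply can be hoisted out of the loop:

// P * (origin + t*dir) == clipOrigin + t*clipDir, because P is linear;
// only the perspective divide has to happen per step.
vec4 clipOrigin = projectionMatrix * vec4(origin, 1.0);
vec4 clipDir    = projectionMatrix * vec4(dir,    0.0);

while(true){
    vec4 temp = clipOrigin + t * clipDir;
    vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
    ivec2 newIndex = ivec2(texCoord / texel_size);

    if(newIndex != pixIndex){
        float linearDepth = projAB.y / (texture(DepthMap, texCoord).r - projAB.x);
        // ray.z is also just a multiply-add now, no vec3 needed.
        if(linearDepth > origin.z + t * dir.z + 0.1){
            out_AO = 0.2;
            break;
        }
        pixIndex = newIndex;
    }
    t += 0.5;
    if(texCoord.x < 0.0 || texCoord.x > 1.0 || texCoord.y < 0.0 || texCoord.y > 1.0) break;
}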
PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.
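For reference, the vertex-shader side of that reconstruction looks roughly like this (a sketch; 'frustumCorners' is a stand-in for however the view-space corner rays get uploaded):

#version 330 core

layout(location = 0) in vec2 in_Position;  // full-screen quad corners in NDC
uniform vec3 frustumCorners[4];            // assumed: view-space rays through
                                           // the frustum corners

noperspective out vec2 pass_TexCoord;
smooth out vec3 viewRay;                   // interpolated to a per-pixel ray

void main(void){
    pass_TexCoord = in_Position * 0.5 + 0.5;
    viewRay = frustumCorners[gl_VertexID]; // one corner ray per quad vertex
    gl_Position = vec4(in_Position, 0.0, 1.0);
}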