In photography, DOF (Depth of Field) is the range of distances in which the image blur is smaller than one pixel. That means that outside the DOF the image gets blurry.
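One way to turn that definition into something a shader can use is a blur factor that is zero inside the DOF and ramps up outside it. A minimal sketch of that idea (the linear falloff and the parameter names `focus_depth` and `focus_range` are my assumptions, not necessarily how the post's shader does it):

```python
def blur_factor(depth, focus_depth, focus_range):
    """Return 0.0 for in-focus pixels, up to 1.0 for fully blurred ones.

    depth       -- scene depth of the pixel
    focus_depth -- depth the camera is focused on
    focus_range -- half-width of the in-focus band (the DOF)
    """
    # Distance from the focal plane, normalized by the DOF half-width.
    t = abs(depth - focus_depth) / focus_range
    # Inside the DOF (t <= 1) the factor stays at 0; outside it ramps
    # up linearly and is clamped to 1.
    return max(0.0, min(1.0, t - 1.0))
```

A pixel exactly at the focus depth gets a factor of 0, and the factor reaches 1 once the pixel is two focus-range widths away from the focal plane.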
As you might have already seen, I had a Blur Compute Shader implemented. So making a DOF is just a simple matter of doing blur adjusted to the depth of the pixel, right? Not really.
Doing DOF is more complex than I initially thought.
The first issue I encountered was that when I wanted to blur pixels that are in front of the DOF, my blur wasn’t wide enough. I didn’t want to increase the kernel size, so I just sampled with steps of 2 pixels instead of 1.
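The trick is that stepping by 2 doubles the footprint of the blur without adding any taps. A 1-D sketch of that strided sampling (clamp-to-edge addressing and the box weights are my assumptions):

```python
def wide_blur_1d(pixels, center, radius, step=2):
    """Box blur around `center`, sampling every `step` pixels.

    The kernel still has (2 * radius + 1) taps, but with step=2 it
    covers twice as wide a region as a step-1 blur would.
    """
    n = len(pixels)
    total = 0.0
    taps = 2 * radius + 1
    for i in range(-radius, radius + 1):
        # Clamp to the image border, like clamp-to-edge texture sampling.
        idx = min(max(center + i * step, 0), n - 1)
        total += pixels[idx]
    return total / taps
```

With `radius=1` and `step=2`, the three taps land 2 pixels apart, so a bright pixel spreads its energy across a 5-pixel-wide neighborhood instead of 3.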
That was simple enough, but then I realized something else: even though I was sampling wider, and even though I was performing a blur, there were still hard edges in areas with depth discontinuities.
What do I mean? Imagine you look behind the corner of a wall. The wall that is near you should be blurred, and the wall that is far should be sharp.
However, even though the nearer wall is blurred, there is still a hard edge, because the blur decision is made on a per-pixel basis.
What I wanted to happen is for the closer wall to smear and blur over the far wall. To do that, I had to consider a neighborhood of depth values when deciding on the blur factor, instead of just the depth value of the current pixel.
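One way to realize that neighborhood search is to let each pixel take the maximum blur factor of any *nearer* pixel around it, so an out-of-focus foreground bleeds over the sharp background. A sketch of that idea (the neighborhood size, the linear falloff, and all parameter names are my assumptions, not the post's actual shader):

```python
def smeared_blur_factor(depth_buffer, x, y, focus_depth, focus_range, spread=4):
    """Blur factor that lets near, out-of-focus pixels smear over sharp ones.

    Instead of using only this pixel's depth, scan a (2*spread+1)^2
    neighborhood and keep the largest foreground blur factor found among
    pixels that are at least as near as this one.
    """
    h = len(depth_buffer)
    w = len(depth_buffer[0])
    own_depth = depth_buffer[y][x]
    best = 0.0
    for dy in range(-spread, spread + 1):
        for dx in range(-spread, spread + 1):
            # Clamp neighborhood lookups to the buffer edges.
            ny = min(max(y + dy, 0), h - 1)
            nx = min(max(x + dx, 0), w - 1)
            d = depth_buffer[ny][nx]
            if d <= own_depth:
                # Foreground-side blur: grows as the neighbor gets
                # nearer than the focal plane, clamped to [0, 1].
                t = (focus_depth - d) / focus_range
                best = max(best, min(max(t - 1.0, 0.0), 1.0))
    return best
```

A sharp background pixel right next to a blurry near wall now picks up the wall's blur factor, so the foreground blur smears across the discontinuity instead of stopping at a hard edge.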
The final step was to make the DOF adjust according to what the player is looking at. For that, I sampled the 4×4 center pixels of the depth buffer in a separate Compute Shader and wrote the result into a 1×1 texture, which the DOF Compute Shader then reads as a shader resource.
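On the CPU side, the reduction that shader performs amounts to collapsing the center 4×4 depths into a single focus value. A sketch of that step (a plain average is my assumption; the shader might use min, max, or a weighted filter instead):

```python
def autofocus_depth(depth_buffer):
    """Reduce the center 4x4 block of a depth buffer to one focus depth.

    Stands in for the separate compute-shader pass that writes its
    result into a 1x1 texture for the DOF pass to read.
    """
    h = len(depth_buffer)
    w = len(depth_buffer[0])
    # Top-left corner of the 4x4 block centered on the screen.
    cy, cx = h // 2 - 2, w // 2 - 2
    total = 0.0
    for y in range(cy, cy + 4):
        for x in range(cx, cx + 4):
            total += depth_buffer[y][x]
    return total / 16.0
```

The DOF pass can then feed this value in as `focus_depth`, so the in-focus band follows whatever geometry sits at the crosshair.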