For motion blur I wanted a global solution. I didn’t want blur that applies only to camera motion across the entire screen while ignoring objects that move independently of the camera.
I also didn’t want to redraw the scene many times to accumulate the blur, nor to render a scaled-down version of the scene and use that.
My first attempt wasn’t so successful. My intuition told me that all I needed to do was render the velocity of each pixel into an offscreen buffer and use it to blend the resulting image.
There are a few problems with that. First, it’s more complex to calculate the velocity of a skinned mesh. Second, it’s more complex to blend pixels between different objects.
If I have a moving object over a static background, the blur should smear the object onto the background, but the background pixels have velocity 0. So when I look at a background pixel next to the object and sample along its velocity, there is no velocity to sample along, and no smear appears.
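To make the problem concrete, here is a minimal 1D sketch of the gather approach (the names, sizes and velocities are my own illustration, not the engine’s actual code). Each pixel samples backwards along its *own* velocity, so background pixels with velocity 0 never pick up color from the moving object:

```python
import numpy as np

W = 16
color = np.zeros(W)       # 1D "screen", background = 0
color[4:8] = 1.0          # a bright moving object
velocity = np.zeros(W)
velocity[4:8] = 3.0       # only the object has velocity; background is 0

def gather_blur(color, velocity, taps=4):
    out = np.zeros_like(color)
    for x in range(len(color)):
        acc = 0.0
        for t in range(taps):
            # step backwards along this pixel's own velocity
            s = int(round(x - velocity[x] * t / taps))
            s = min(max(s, 0), len(color) - 1)
            acc += color[s]
        out[x] = acc / taps
    return out

blurred = gather_blur(color, velocity)
# Background pixels just past the object have velocity 0, so they never
# sample the object: the smear stops dead at the object's edge.
```

The object’s interior blurs, but the pixels outside it stay untouched, which is exactly the hard-edge artifact described above.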
Then I had the idea to do the inverse operation. Instead of looking at a pixel and its velocity to sample other pixels, I would sample a lot of pixels, move each one along its velocity, and add them to a buffer (if they land inside of it).
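The same 1D setup shows the inverse, scatter-style operation (again a hypothetical sketch, not the engine’s code): each source pixel is pushed forward along its own velocity and accumulated into a destination buffer, so the moving object’s color does land on the static background:

```python
import numpy as np

W = 16
color = np.zeros(W)
color[4:8] = 1.0          # bright moving object
velocity = np.zeros(W)
velocity[4:8] = 3.0       # background velocity is 0

def scatter_blur(color, velocity, taps=4):
    acc = np.zeros(W)
    weight = np.zeros(W)
    for x in range(W):
        for t in range(taps):
            # push this pixel forward along its velocity
            d = int(round(x + velocity[x] * t / taps))
            if 0 <= d < W:            # only if it lands inside the buffer
                acc[d] += color[x]
                weight[d] += 1.0
    return acc / np.maximum(weight, 1.0)

smeared = scatter_blur(color, velocity)
# Unlike the gather version, the object's pixels now reach past its edge
# onto the zero-velocity background.
```

This fixes the edge case, though as noted next, a single-frame scatter like this still gave very limited results in practice.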
That didn’t go well either and produced very limited results.
Afterwards I thought: what if I blend the previous frames? Blending previous frames is easy because I only need to store the running sum. But wouldn’t that make certain pixels trace back to very old frames? No! The reason is that the color buffer has 8 bits per channel (red, green and blue), so it has only 256 levels per channel. If every frame I halve the previous sum and add the current frame multiplied by a factor of 0.5, the trail is at most 8 frames long, because repeatedly dividing by 2 drives older contributions down to 0 (after 8 halvings a contribution is at 1/256 of its original value, below one 8-bit level).
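The 8-bit argument can be checked with a few lines (a sketch of the quantization, assuming simple truncation on store): feed in one bright frame, then black frames, and watch the stored value halve each frame until it hits 0:

```python
# accum = accum * 0.5 + frame * 0.5, stored in an 8-bit channel (0..255)
accum = 0
history = []
for f in range(12):
    frame = 255 if f == 0 else 0        # one bright frame, then black
    accum = int(accum * 0.5 + frame * 0.5)  # 8-bit storage truncates
    history.append(accum)
print(history)  # 127, 63, 31, 15, 7, 3, 1, then 0 forever
```

The bright frame’s contribution reaches 0 on the 8th step, so the accumulation buffer can never ghost further back than about 8 frames.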
However, the results of this method were not good enough. The motion blur is too coarse: you can see the ghosting of previous frames. Blurring the previous frames before blending didn’t help either.
Finally, I decided to combine the two previous methods. I will render the velocity, but I will also accumulate it into an offscreen texture the same way I did for the color. This way I will have both the coarse and the fine resolution.
The fine resolution will come from sampling the color map and pushing pixels, and the coarse resolution will come from the accumulated velocity maps that are used for pushing the pixels.
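Putting the pieces together in the same toy 1D setting (my own sketch of the idea, with invented sizes and a decay factor of 0.5): each frame the velocity map is accumulated exactly like the color buffer was, and the current colors are then scattered along that accumulated velocity:

```python
import numpy as np

W = 16

def scatter(color, velocity, taps=4):
    acc = np.zeros(W)
    weight = np.zeros(W)
    for x in range(W):
        for t in range(taps):
            d = int(round(x + velocity[x] * t / taps))
            if 0 <= d < W:
                acc[d] += color[x]
                weight[d] += 1.0
    return acc / np.maximum(weight, 1.0)

vel_accum = np.zeros(W)  # coarse: decayed sum of past velocity maps
for pos in [3, 5, 7]:    # an object moving right over three frames
    color = np.zeros(W)
    color[pos:pos + 2] = 1.0
    velocity = np.zeros(W)
    velocity[pos:pos + 2] = 2.0
    # coarse part: accumulate velocity like the color accumulation above
    vel_accum = vel_accum * 0.5 + velocity * 0.5
    # fine part: push the current frame's colors along the accumulated velocity
    output = scatter(color, vel_accum)
```

After the last frame, `output` shows color pushed past the object’s leading edge, driven by a velocity history that fades out within a few frames, which is the coarse-plus-fine behavior the combined method is after.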