In the previous post about motion blur I presented my results even though I was not completely happy with them.
For the motion blur I was writing screen-space velocity vectors into an offscreen texture. The post-processing compute shader then uses these velocities to decide how far to smear each pixel, so areas with high velocity get blurred more.
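The smearing step can be sketched roughly like this. This is a toy 1D illustration (the real version is a compute shader over a 2D image), and the function name, tap count, and sampling scheme are all made up for the example:

```python
def motion_blur_row(colors, velocities, taps=4):
    """Average `taps` samples along each pixel's velocity.

    colors: list of pixel intensities; velocities: per-pixel velocity in pixels.
    """
    out = []
    n = len(colors)
    for i, vel in enumerate(velocities):
        acc = 0.0
        for t in range(taps):
            # Step backwards from the pixel along its velocity vector.
            offset = vel * t / taps
            j = min(max(int(round(i - offset)), 0), n - 1)  # clamp to edges
            acc += colors[j]
        out.append(acc / taps)
    return out
```

With zero velocity a pixel just averages itself, so static areas are untouched; only pixels with high velocity pick up color from their neighbors and smear.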
It turns out I forgot to normalize the screen-space x and y coordinates by the w component, which holds the depth value used for the perspective divide. This resulted in a wildly exaggerated blur for distant objects, and barely any blur for closer ones.
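A toy illustration of why the divide matters (not the engine's actual code): clip-space x grows with distance through w, so subtracting raw clip coordinates exaggerates the motion of far objects, while dividing by w first gives comparable NDC velocities:

```python
def clip_to_ndc(x, y, w):
    # Perspective divide: clip space -> normalized device coordinates.
    return (x / w, y / w)

# Two objects with the same on-screen motion, one near (w=1), one far (w=10).
near_prev, near_curr = (0.0, 0.0, 1.0), (0.1, 0.0, 1.0)
far_prev,  far_curr  = (0.0, 0.0, 10.0), (1.0, 0.0, 10.0)

# Wrong: velocity from raw clip coordinates -- far object looks 10x faster.
wrong_far_vel = far_curr[0] - far_prev[0]  # 1.0

# Right: perspective-divide first, then subtract.
right_far_vel  = clip_to_ndc(*far_curr)[0]  - clip_to_ndc(*far_prev)[0]   # 0.1
right_near_vel = clip_to_ndc(*near_curr)[0] - clip_to_ndc(*near_prev)[0]  # also 0.1
```

After the divide both objects report the same screen-space velocity, which is exactly what the blur should see.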
Another thing I wanted to add is motion blur for skinned meshes. To compute each pixel's velocity per object, I was calculating each vertex's position in the current frame AND in the previous frame, taking both world movement and camera movement into account.
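The per-vertex velocity boils down to transforming the vertex by both frames' model-view-projection matrices, perspective-dividing each result, and subtracting. A minimal sketch, with illustrative names (the real work happens in the vertex shader):

```python
def mul_mat4_vec4(m, v):
    # Row-major 4x4 matrix times a 4-component vector.
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def ndc(clip):
    x, y, z, w = clip
    return (x / w, y / w)  # perspective divide

def vertex_velocity(mvp_curr, mvp_prev, vertex):
    curr = ndc(mul_mat4_vec4(mvp_curr, vertex))
    prev = ndc(mul_mat4_vec4(mvp_prev, vertex))
    return (curr[0] - prev[0], curr[1] - prev[1])

# Example: the object moved 0.2 units in x between frames.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
moved    = [[1, 0, 0, 0.2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

This single formulation covers both world movement (the model matrix changed) and camera movement (the view-projection matrix changed).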
However, for skinned meshes I didn’t account for the difference in skeletal deformation between frames, so a skinned mesh only got extrinsic blur (from object and camera motion) and no intrinsic blur (from the animation itself).
As with the extrinsic velocity, computing the bone-affected positions of the vertices also requires their positions in the previous frame. To get those, I simply upload a copy of the previous frame’s bone offsets alongside the current ones. This might prove to be a performance issue, since I now send twice as many bone offsets, but for now it’s good enough.
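Assuming linear blend skinning, the idea is to skin each vertex twice, once with each palette, and difference the results. A sketch with hypothetical names:

```python
def mul_mat4_vec4(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def skin(vertex, bone_mats, bone_ids, weights):
    # Linear blend skinning: weighted sum of bone-transformed positions.
    out = [0.0, 0.0, 0.0, 0.0]
    for bid, w in zip(bone_ids, weights):
        p = mul_mat4_vec4(bone_mats[bid], vertex)
        for k in range(4):
            out[k] += w * p[k]
    return tuple(out)

def skinned_positions(vertex, bones_curr, bones_prev, ids, weights):
    # Both palettes are uploaded each frame; the "previous" one is just a
    # copy kept from the last update.
    return (skin(vertex, bones_curr, ids, weights),
            skin(vertex, bones_prev, ids, weights))

# Example: a single bone that translated 0.5 units in x since last frame.
identity4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
moved4    = [[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
curr_pos, prev_pos = skinned_positions(
    (0.0, 0.0, 0.0, 1.0), [moved4], [identity4], [0], [1.0])
```

The difference between `curr_pos` and `prev_pos` is the intrinsic velocity, which then gets combined with the extrinsic part in the velocity texture.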
Finally, I also added motion blur for particles. I simply render the particles’ velocity vectors with blending into the velocity offscreen texture. It works well enough, but I didn’t put more effort into it than that; I might do something better later.
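For the curious, blending a particle's velocity over whatever is already in the velocity texture amounts to something like the following, assuming a standard alpha blend (the post doesn't pin down the exact blend mode, so this is one plausible choice):

```python
def blend_velocity(src_vel, dst_vel, alpha):
    # Standard "over" blend: src * alpha + dst * (1 - alpha), per component.
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src_vel, dst_vel))
```

A half-transparent particle contributes half its velocity: `blend_velocity((1.0, 0.0), (0.0, 0.0), 0.5)` gives `(0.5, 0.0)`.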
I think the next stage would be to combine motion blur with depth of field. Hopefully it’s just a matter of chaining the two filters in the post-processing pipeline.