Boosting the FPS: GC (Part 1)

I was getting annoyingly long stutters in the render rate of my 3D Android RPG game Heroes Of Honesty.

It was strange to me, because most frame times I measured were below 33 ms, which means I should have been getting at least a solid 30 FPS. As can be seen in this diagram:

Frames below 33 ms

After a few more measurements and some profiling, I realized one of the causes of this stutter: the Garbage Collector implementation for Java on Android is not very good.

You can see in the following diagram that there are gaps in the GLThread (the render thread) whenever the GC kicks in:

GC Stutters

(Notice the GC kicks in about every 300 ms in the diagram above.)

GC saves programmers the hassle of managing memory allocation and release. However, the more memory you allocate at run time, the harder the GC has to work to clean up after you. So in order to keep the GC from working hard, we need to minimize the number of allocations we make.

We do this by “recycling” memory. We use a memory pool: a chunk of pre-allocated memory, or an array of pre-allocated objects, that we allocate once and reuse many times for different data.
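As a minimal sketch of the idea (hypothetical names, in Java since that is where the Android GC lives; this is not the game's actual code), a pool pre-allocates its objects once and hands them out instead of creating garbage:

```java
import java.util.ArrayDeque;

// Minimal sketch of a memory pool: allocate a fixed set of float[16]
// matrices once up front, then acquire/release them instead of allocating
// new ones. All names here are hypothetical, not from the game's code.
public class MatrixPool {
    private final ArrayDeque<float[]> free = new ArrayDeque<>();

    public MatrixPool(int capacity) {
        for (int i = 0; i < capacity; i++)
            free.push(new float[16]); // all allocations happen here, once
    }

    // Hand out a pre-allocated matrix; contents are stale, caller overwrites.
    public float[] acquire() {
        return free.isEmpty() ? new float[16] : free.pop();
    }

    // Return the matrix so later frames reuse it instead of
    // leaving garbage for the GC.
    public void release(float[] m) {
        free.push(m);
    }
}
```

Once every user of a matrix releases it back, the steady-state allocation rate drops to zero, and the GC has nothing new to collect.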

Let’s say you need to set a projection matrix on an OpenGL ES 2.0 shader. You might allocate the matrix each time before you send it, possibly even inside a render loop that runs for many different 3D objects.

Instead of allocating the matrix for each object, we can allocate it once outside the loop and reuse the same memory chunk for all the 3D objects in the render list.
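A sketch of what that hoisting looks like (hypothetical names; the actual uniform upload is left as a comment):

```java
// Sketch of hoisting a scratch matrix out of the render loop.
// Names are hypothetical; the GL upload call is shown as a comment.
public class RenderLoop {
    // Fill an existing 4x4 column-major translation matrix in place,
    // instead of allocating a fresh float[16] per object.
    static void setTranslation(float[] m, float x, float y, float z) {
        for (int i = 0; i < 16; i++) m[i] = 0f;
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[12] = x; m[13] = y; m[14] = z;  // translation column
    }

    static void drawAll(float[][] positions) {
        float[] model = new float[16]; // allocated once, outside the loop
        for (float[] p : positions) {
            setTranslation(model, p[0], p[1], p[2]); // overwrite, don't reallocate
            // GLES20.glUniformMatrix4fv(uModelLoc, 1, false, model, 0);
        }
    }
}
```

N objects per frame now cost zero allocations instead of N.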

After minimizing some of the memory allocations in the render thread, I got the following results:

Better performance of GC with memory recycling.

Notice the GC occurs once every 800 ms.

There is another issue. What I have shown up until now was a scene with static resources. I couldn’t load the whole map of Heroes Of Honesty into memory, so I had to dynamically load map patches only when they are required.

This means that in some frames I put in extra effort loading resources into VBOs. I also happened to allocate a lot of memory while generating the index and vertex data before sending them to the GPU.
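One way to recycle those staging allocations (a hypothetical sketch, not the game's actual code) is a grow-only scratch buffer: it reallocates only when a mesh is larger than anything seen before, and is reused as-is for every smaller mesh:

```java
// Sketch of a grow-only scratch buffer for staging vertex/index data
// before uploading it to a VBO. Hypothetical names, not the game's code.
public class ScratchBuffer {
    private float[] data = new float[0];

    // Returns a buffer of at least `size` floats. It only ever grows, so a
    // buffer big enough for the largest mesh so far is reused for every
    // smaller one without touching the allocator.
    public float[] require(int size) {
        if (data.length < size)
            data = new float[size];
        return data;
    }

    public int capacity() { return data.length; }
}
```

After the first few large meshes, `require` stops allocating entirely, so dynamic loading no longer feeds the GC.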

The following diagram shows what happens when we are in the middle of dynamically loading resources:

Dynamic resource loading and GC

On the left side of the diagram you can see a big gap between two consecutive GC runs. On the right side, however, the GC is being called many times.

Notice that not all of the stalls are the GC’s fault; some of them are caused by the extra work we need to do to actually generate the dynamic resource. (More on that in the second part.)

The following diagram shows the performance after using memory pools and recycling memory for the index and vertex buffers that need to be handled while loading a dynamic resource:

GC performance is now better with dynamic resources

In the diagram above you can see the GC doesn’t work any harder when dynamic resources are being created. You can see where resources are being created because the GetMesh method is marked with little wedges.

Notice that there are still gaps in the render thread and we will still get stutters even though we solved the GC issue.

For the sake of completeness, here is a closer look at the frame timelines while loading dynamic resources:

Single thread resource loading.

In the diagram, a “normal” render frame would consist of a mostly black part and a pink part. The black part is mostly the OpenGLES draw calls, and the pink part is the game logic update. Currently they are both done in the same render thread.

Some frames have a green part; the green part is CPU time “wasted” on loading the resource. As you can see, the green parts make those frames longer, so the frame rate drops once in a while.

You may also notice that the GC doesn’t do any work during this whole period. In the next part I will explain how I improved the render thread’s rate and the overall performance (with multithreading).

Improving Animations’ Key Frame Performance on the CPU

For my 3D Android RPG game Heroes Of Honesty, I was making 3D animated characters.

There are numerous ways to animate a 3D character, and one of them is to have an artist set animation key frames.

A key frame is a state of an object, or of a bone of the character, set at a specific time on the animation timeline. By setting many key frames you can animate the characters into all sorts of pre-made behaviors.

When drawing an animated character in a 3D game, you select a specific time on the animation timeline and calculate the posture of the character at that time. The posture is calculated from the nearest key frames in the timeline, interpolating between the two adjacent key frames if there is no key frame at that exact time.
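As a Java sketch of that interpolation step (hypothetical key frame layout: a sorted array of times and one float value per key frame), given the index of the key frame just before time t:

```java
// Sketch: given the index i of the key frame just before time t, linearly
// interpolate between it and the next one. The layout is hypothetical:
// times[] sorted ascending, one float value per key frame.
public class KeyframeLerp {
    public static float interpolate(double[] times, float[] values, int i, double t) {
        // Fraction of the way from key frame i to key frame i+1.
        float f = (float) ((t - times[i]) / (times[i + 1] - times[i]));
        return values[i] + f * (values[i + 1] - values[i]);
    }
}
```

Real bone animation interpolates vectors and quaternions the same way, component by component (with slerp for rotations); the structure of the lookup is identical.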

How would you find which key frame is the closest to the frame time?

Linear search

Since the key frames are ordered from earliest to latest, we can go from the beginning to the end, one step at a time, and stop at the first key frame whose time is greater than our current frame time.
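A minimal Java sketch of that walk (hypothetical layout: a sorted array of key frame times):

```java
// Sketch of the linear search: walk the sorted key frame times from the
// start and return the index of the last key frame at or before t.
public class LinearKeySearch {
    public static int find(double[] times, double t) {
        for (int i = 0; i + 1 < times.length; i++)
            if (times[i + 1] > t)   // first key frame later than t...
                return i;           // ...so key frame i is the one before t
        return times.length - 1;    // t is past the last key frame
    }
}
```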

This sort of search has O(n) complexity and it is pretty slow when we have a lot of key frames.

We might have many key frames if our animation software bakes the keyframes for things such as Inverse Kinematics.

Binary search

We can do better than linear search.

Since our key frames are ordered on the timeline from earliest to latest, we can do a binary search.

The binary search algorithm checks the middle of the relevant range of key frames at each step. If we find that a specific key frame’s time is greater than our frame time, then all the key frames after it are irrelevant to us. We then continue searching within the remaining half, then within half of that (a quarter), and so on.

The following code is a binary search in C++:

	unsigned int
	Mesh::AnimationNode::CoreFindPosition (aiNodeAnim *n, unsigned int First, unsigned int Last, double t)
	{
		// Searches the half-open range [First, Last) of position key frames.
		if (First + 1 >= Last)
			return First;
		unsigned int Index = (First + Last) / 2;
		if (n->mPositionKeys[Index].mTime == t)
			return Index;
		if (n->mPositionKeys[Index].mTime < t)
			return CoreFindPosition(n, Index, Last, t);  // key is in the upper half
		return CoreFindPosition(n, First, Index, t);     // key is in the lower half
	}

For the binary search we get a complexity of O(log(n)).

Baked Search

We can do even better. Our game’s render rate is limited: most games run at no more than 60 frames per second, which means the animation time steps are at least 16.67 ms apart.

We can create an array whose size is the length of the animation timeline divided by the rendering frame time (16.67 ms). Each entry of this array holds the index of the key frame nearest to the frame time that entry represents.

Now, in order to find the closest key frame, we simply access the one array cell that corresponds to the animation’s frame time and read the index of the closest key frame.

This algorithm is of O(1) complexity.
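A Java sketch of the baking and the O(1) lookup (hypothetical names and layout; this version stores, for each fixed time step, the index of the key frame at or before that step):

```java
// Sketch of the baked search: pre-compute, for each fixed render-time
// step, the index of the key frame at or before that time. The lookup
// then becomes one division and one array read.
public class BakedKeySearch {
    final int[] table;
    final double step;

    public BakedKeySearch(double[] keyTimes, double duration, double step) {
        this.step = step;
        int cells = (int) Math.ceil(duration / step) + 1;
        table = new int[cells];
        int k = 0;
        for (int c = 0; c < cells; c++) {
            double t = c * step;
            while (k + 1 < keyTimes.length && keyTimes[k + 1] <= t)
                k++; // advance to the last key frame at or before t
            table[c] = k;
        }
    }

    // O(1): quantize t to a cell and read the baked index.
    public int find(double t) {
        int c = (int) (t / step);
        return table[Math.min(c, table.length - 1)];
    }
}
```

The trade-off is memory: one int per time step per channel, paid once at load time.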

Full baking?

A character usually consists of a hierarchy of objects that compose all of its parts. In order to animate the character we need to find each object’s relative-position key frame, and then calculate its absolute position by walking up the hierarchy to the root object.

Instead of baking indices of key frames as in the baked search, we can bake the absolute position of each object at each specific time. Since the rendering FPS is limited, we can bake the absolute positions densely enough that we won’t need to interpolate.

This saves us the whole hierarchical calculation of the animation tree. However, this way we won’t be able to mix this specific animation with other animations, or with dynamic elements such as rag-doll animation.

3D Graphics Quality and Floating Point Accuracy on Android

I am working on a 3D RPG game for Android called Heroes Of Honesty.

My Galaxy Note Android phone displays fewer than 16M simultaneous colors. I am not sure how many, maybe 64K, maybe a bit more. The point is that in 3D graphics, gradients look a bit coarse color-wise.
There is not much you can do about it; this is a hardware limitation, although you might be able to choose a different color palette.

However, there is another issue. Floating point accuracy.

There is a limit to floating point accuracy in the programmable shaders of GLES 2.0, and you can even reduce the precision at compile time.
This can cause artifacts, and it might be worse than you realize.

The following images were taken on my Galaxy Note phone. You will notice that the top image’s gradients are coarser. It is exactly the same scene.

Bad Float Accuracy

Good Float Accuracy

You can see the gradients in the fog at the back, and if you look carefully at the ground in the top picture, you will see it is more jagged.

So why does this happen? This is the exact same scene.

When drawing a 3D scene you need to set the position of the objects to draw and the position of the camera and light sources.

In this scene the characters are somewhere on the world map, away from its center. This means I am setting the camera’s look-at 3D coordinate to something like (1000, 0, -1000).
The character at the center of the screen is at the same position as the camera’s look-at coordinate, so its position is also set to (1000, 0, -1000).

What if we drew the same scene, but moved all the objects, the camera, and the light sources so that the main player is at position (0, 0, 0)? It wouldn’t change the scene, since we always look from the viewpoint of the camera.

It turns out that doing this improves the floating point accuracy.

Since positioning the camera and characters at (1000, 0, -1000) makes the floats hold a large number (1000), all the calculations relative to that position are done relative to a big number. A 32-bit float has a fixed number of mantissa bits, so the larger the magnitude, the fewer bits are left for the smaller values.

Such smaller values might be the vertex coordinates of the character’s mesh; instead of being 1000.01 they are now 0.01, which leaves more room for accuracy.
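The effect is easy to demonstrate numerically (a generic float demonstration, not the game's code): adding a small offset to a coordinate near 1000 loses precision that the same offset near 0 keeps:

```java
// Demonstration of float spacing: a 32-bit float has a fixed mantissa, so
// the representable step size grows with magnitude. Adding 0.01 to a
// coordinate near 1000 cannot be stored exactly; near 0 it can.
public class FloatPrecision {
    // Offset a coordinate from an origin in 32-bit floats and measure the
    // error against the same sum done in 64-bit doubles.
    public static double offsetError(float origin, float delta) {
        float stored = origin + delta;                   // 32-bit result
        double exact = (double) origin + (double) delta; // reference sum
        return Math.abs(stored - exact);
    }
}
```

Near 1000 the nearest representable float is about 6e-5 away from its neighbors, so a 0.01 offset is quantized; near 0 the same offset is stored with no extra error. This is why recentering the scene on the player tightens the gradients.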

The top image is rendered when everything is placed relative to the map’s center, and the bottom image is rendered when everything is placed relative to the character’s center.