Things to Consider Before Backing a Game on Kickstarter.

(I will try to make this article informative rather than a rant.)

I think any indie game developer who hasn't been able to make a living from his games has had those frustrating moments of seeing a game on Kickstarter get a lot of money for seemingly little effort.

It seems people don't realize that a lot of Kickstarter projects use simple tricks to make people excited about the project, while the end result (if ever finished) might pale in comparison.

So without further ranting, here are a few little things that hint at how much quality the presented content actually has.

First of all, the video.

Trailers and videos can easily impress people. Sometimes it is justified and sometimes it is not.

For instance, there is a program called After Effects, which is (mostly) a video post-processing program.

This means that it adds effects to the trailer after all the video content is ready.

A simple example is beautiful titles or subtitles.

You can also add all sorts of overlay animations. For example, you can overlay a video of small red embers rising from a flame to make it seem like the game has awesome particles.

Flaming embers are quite a common effect in videos. When you see them, you should understand that the developer probably didn't render them in-engine; it's just a video overlay.

You can take it a step further.

There are websites with pre-rendered CG to which you can upload your logo or text, and they will render it beautifully and seamlessly inside a high-quality CG video you can put in your trailer.

Again, this is content that wasn’t created by the developer.

I am not saying that developers shouldn’t use those things.

However, as a potential backer you should notice how much content was actually created by the developer, so you can tell how likely the developer is to deliver a product similar to what was presented.


Besides the visual effects, there is also the actual gameplay, and the gameplay the developer suggests the game might have.

A lot of in-game footage is scripted.

Watch Dogs is a good example of scripted gameplay. The gameplay we first saw in the trailers seemed amazing, but it was too amazing.

If the game presents a chain of events that is more reminiscent of a Hollywood movie than of actual gameplay, it's probably because it was scripted and produced like a movie rather than captured from real in-game scenarios.

There are also limitations of the medium. There is a limit to how much control you can have over an in-game character in a chase scene (like in Watch Dogs) with a keyboard and mouse.

You can't control every little detail of the character in real time.

What about the AI?

Does the game imply that characters will have something interesting to say about everything you do?

That means the developer will either have to create tons of dialogue or will have to tackle generating sentences with an AI, which as far as I can tell has never been done well in a game before.

That being said, the developer might actually have amazing tech that does things no other game has done before.

But if you consider how much content was actually created for the Kickstarter by the developer, and the quality of that content, it can hint at whether the developer plausibly has amazing tech for amazing gameplay.

There is a lot more to be said on this subject, but these are some of the easy tricks used to impress people with a Kickstarter project, and as a consumer you should keep them in the back of your head while viewing the next Kickstarter project.

You should also consider how much of the content, the tech, and the game itself the developer has at the moment of launching the Kickstarter.

If the developer has very little content and playable game ready, he is not in a very good position to look far ahead and say which features and what scope the game will actually have.

Disclaimer: I am not an expert on the subject; I am just pointing out a few things I consider when watching a Kickstarter game project.

And maybe you just like backing a project that looks cool on Kickstarter and you don't care how it was created.

Anyway, these are my 2 cents.

Mipmaps and GL_REPEAT Artifacts in OpenGL ES 2.

I am working on a new racing game and I encountered some odd artifacts when rendering the track textures.

I verified that the artifacts were not ZBuffer fighting related, but I couldn't tell what caused them.

The track has a repeating texture, meaning it uses one texture and repeats it along the track segments.

Track Start Artifacts

Guard Rail Artifact

In the first image notice the patch of asphalt just before the car. It has artifacts while the patch further away does not.

In the second image look at the guard rails.

This phenomenon occurred with both raw mipmapped textures and compressed mipmapped textures (PVR on iOS).

However, it only happened on textures that used the GL_REPEAT wrap mode, and not on other mipmapped textures.

So what was it?

It turned out that there was an issue with calculating the mipmap level.

Notice in the guard rail image that the artifact happens in a specific depth band, which is the transition between one mipmap level and the next.

Look at the UV mapping of the track on the bottom right window:

Long UV mapping

The UV mapping for this track stretches way beyond the usual U texture coordinate range.

While UV coordinates normally stay within the range [0..1]x[0..1], the mapping on this object reaches a U coordinate of about 50.

This is done to make use of the repeating texture, but it also messes with OpenGL ES 2's internal mipmap level calculation.

With large U coordinates, I am guessing there is a floating-point precision issue in the mipmap calculation. I don't know exactly how the mipmap calculations are done, but I assumed this is the cause of the artifacts.
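To see why large UVs matter, consider that the hardware chooses a mip level roughly as the log2 of the screen-space UV derivatives scaled by the texture size. The exact method is implementation-specific; the sketch below (with my own helper names `mipLevel` and `ulpAt`, not GL API) just illustrates the idea, and shows that a 32-bit float near u = 50 has far coarser spacing than one near u = 0.5, so interpolated UVs and the derivatives computed from them quantize much more heavily:

```cpp
#include <cmath>

// Rough model of hardware mip selection: log2 of the texel footprint
// of one pixel, computed from the screen-space UV derivatives.
float mipLevel(float dudx, float dvdx, float dudy, float dvdy, float texSize) {
    float lenX = std::sqrt(dudx * dudx + dvdx * dvdx) * texSize;
    float lenY = std::sqrt(dudy * dudy + dvdy * dvdy) * texSize;
    return std::log2(std::fmax(lenX, lenY));
}

// Spacing between adjacent representable 32-bit floats (one ulp) at v.
// The larger this spacing, the noisier any derivative of interpolated
// values around v becomes.
float ulpAt(float v) {
    return std::nextafterf(v, INFINITY) - v;
}
```

For instance, one pixel covering exactly one texel of a 256x256 texture gives mip level 0, while ulpAt(50.0f) is exactly 64 times ulpAt(0.5f): the float grid is 64 times coarser out at u = 50.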

Notice that in the first artifact screenshots the car is at the beginning of the track, which means the first patch you see in the screenshot is actually the last patch of the track model, as the track is cyclic.

The artifacts get worse the further you go along the UV mapping, as the values are bigger and suffer more floating-point precision loss.

Since the texture repeats, it doesn't matter if the UV mapping reuses the same UV area over and over.

I made a more compact UV mapping version of the same model:

Compact UV mapping

Using this version of the mesh made all the artifacts disappear!

Start Fixed

Guard Rail Fixed

In conclusion:

If you have verified that you don't have ZBuffer issues, if you see depth-related artifacts in a specific band of a mipmapped texture, and if the artifacts do not appear at small UV coordinates but become more severe the further the UV coordinates are from [0..1]x[0..1], then there is a good chance you have mipmap artifacts caused by the UV mapping.

Solving this issue might be as simple as remapping the UVs to values closer to [0..1].
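The remapping itself can be as simple as subtracting a whole number of texture repeats from each mesh piece's U coordinates; with GL_REPEAT this samples exactly the same texels. A minimal sketch (`compactU` is my own helper name, applied per continuous UV piece):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Shift a mesh piece's U coordinates back toward [0..1]. Subtracting an
// integer number of repeats leaves GL_REPEAT sampling unchanged, but the
// smaller values keep much more floating-point precision for the
// mipmap level calculation.
void compactU(std::vector<float>& u) {
    if (u.empty()) return;
    float minU = *std::min_element(u.begin(), u.end());
    float shift = std::floor(minU); // whole number of texture repeats
    for (float& coord : u)
        coord -= shift;
}
```

For a cyclic track like this one you would apply it per segment (or per chart in the UV layout), since the shift must not tear a continuous strip apart.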

There is more to say about why this happens and what exactly the floating-point inaccuracies in OpenGL's mipmap calculation are, but that is beyond the scope of this article.

I hope you find this article useful.


glDepthRangef and Depth Clipping on OpenGL ES 2.

I am working on a new racing game for mobile devices.

In this game I have a plane for the ground which stretches to the horizon, and a spherical sky panorama.

As you may know, you can't really render to infinity in OpenGL, and even if you could, rasterization and resolution limitations would still make the horizon look jagged and aliased.


In order to make the ground plane blend with the sky sphere, I needed the ground pixels near the far edge of the render frustum to blend with the background.

However, most of the ground does not need to blend with the background; only a thin strip near the far end of the view does.

We can render the ground in two phases: one with regular shading where there is no fading, and one from the point where the fading begins.

The fading is dependent on depth so we need to split the rendering based on depth.

Notice: Splitting the render based on depth can be useful performance-wise, or can be useful to simplify shaders. In my case there wasn't a big difference in performance between rendering the ground in one piece or in two (with and without blending).

So what we need is Depth Clipping.

Our frustum box already does clipping. It clips whatever we project into it that falls outside the box [-1..1]x[-1..1]x[-1..1].

How would we clip based on depth that is smaller than 1?

When rendering the scene in the racing game, we use a perspective projection matrix created with the following parameters:

Field of view angle (on the Y axis), aspect ratio, near plane and far plane.

For instance we can have a perspective camera with a field of view of 45 degrees, an aspect ratio of 4:3, a near plane of 1 and a far plane of 500.
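For reference, a standard OpenGL-style perspective matrix built from exactly those four parameters looks like the sketch below. The helpers `makePerspective` and `ndcZ` are my own names, not GL API; `makePerspective` is equivalent in spirit to the old gluPerspective:

```cpp
#include <cmath>

// Build a column-major OpenGL-style perspective matrix from a Y-axis
// field of view in degrees, an aspect ratio, and near/far planes.
void makePerspective(float m[16], float fovYDeg, float aspect,
                     float nearZ, float farZ) {
    const float pi = 3.14159265358979f;
    float f = 1.0f / std::tan(fovYDeg * pi / 360.0f); // cot(fov / 2)
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;                              // x scale
    m[5]  = f;                                       // y scale
    m[10] = -(farZ + nearZ) / (farZ - nearZ);        // depth scale
    m[14] = -(2.0f * farZ * nearZ) / (farZ - nearZ); // depth offset
    m[11] = -1.0f;                                   // w_clip = -z_eye
}

// NDC depth of an eye-space point at depth zEye (camera looks down -Z).
float ndcZ(const float m[16], float zEye) {
    return (m[10] * zEye + m[14]) / (m[11] * zEye);
}
```

With fov 45, aspect 4:3, near 1 and far 500, ndcZ(m, -1) comes out as -1 and ndcZ(m, -500) as 1: the near and far planes land exactly on the depth faces of the frustum box.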

Let's say we want to render an object in the scene but clip it at a depth of 450 instead of 500.

We can create a separate projection matrix with the exact same parameters as in the example, but with 450 as the far parameter instead of 500.

This will render the object clipped at 450.

However, notice that we said the frustum only clips at -1 and 1 on the depth axis.

With the original projection matrix we projected 500 to 1. The new matrix projects x and y the same way (into [-1..1]x[-1..1]), but now it projects depth 450 to 1.

While the object rendered with the 450 depth matrix looks correct on screen, its depth values are inconsistent with the depth values produced by the original matrix (with the far plane set to 500).

This will cause the Z Buffer to behave incorrectly.


In order to fix this, we change the range of the depth values written into the ZBuffer from [-1..1] to [a..b], where we choose the new a and b.

We do this by using the function glDepthRangef.

glDepthRangef accepts values between 0 and 1. Our frustum box depth values are between -1 and 1.

This means that in glDepthRangef, 0 corresponds to frustum depth -1 and 1 corresponds to frustum depth 1.

How do we choose the range for glDepthRangef so the depth values with the 450 depth projection will match all the other objects with the original projection?

We use the original matrix to calculate where depth 450 is projected inside the frustum, like so:


vector4 p = OriginalProjection.MulPosition(vector3(0, 0, 450)); // project depth 450 with the original (far = 500) matrix

float newFar = p.z / p.w; // perspective divide gives the NDC depth in [-1..1]

newFar = 0.5f * (newFar + 1.0f); // remap from [-1..1] to glDepthRangef's [0..1] range

glDepthRangef(0.0f, newFar); // depth writes now line up with the original matrix up to depth 450
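As a numeric sanity check (assuming the standard OpenGL perspective depth mapping; `ndcDepth` and `toDepthRange` are my own helper names), a depth of 450 under the near = 1, far = 500 projection lands just below 1 in NDC, so the far value passed to glDepthRangef ends up just below 1 as well:

```cpp
// NDC depth of eye-space depth zEye under a standard GL perspective
// projection; only the matrix's third and fourth rows matter:
//   z_clip = -(f + n) / (f - n) * zEye - 2fn / (f - n),  w_clip = -zEye.
float ndcDepth(float zEye, float n, float f) {
    float zClip = -((f + n) / (f - n)) * zEye - (2.0f * f * n) / (f - n);
    return zClip / (-zEye);
}

// Remap an NDC depth from [-1..1] to glDepthRangef's [0..1] range.
float toDepthRange(float ndc) {
    return 0.5f * (ndc + 1.0f);
}
```

For zEye = -450 (the camera looks down -Z), n = 1 and f = 500, this gives an NDC depth of about 0.99955 and a glDepthRangef far value of about 0.99978, very close to but not exactly 1. That asymmetry is the nonlinearity of perspective depth: most of the depth buffer's range is spent near the near plane.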




Another thing to remember:

glDepthRangef changes only the depth values written to the Z Buffer.

This means the vertex shader still needs to output clip coordinates that produce depth values in [-1..1] after the perspective divide.

In addition, any depth value you compute yourself in the fragment shader (for example, one passed through a varying) is still in the [-1..1] range, not the range glDepthRangef maps it to.