Racing Game Custom Physics Simulation (Part 2)

Preface

In part 1 we showed how to make a car (truck) glide on a track made out of 3D triangles.

We were able to steer the car, and it was “glued” to the track to make it seem like physics were involved.

In this part we are going to have an actual physics simulation (to some degree).

The Models

The model of the track remains the same as in part 1 (a set of 3D triangles).

The car model is made of 5 points, just like in part 1: 4 for the wheels and 1 for the bottom center.

However, we will also maintain variables for the orientation of the car and for the gravity velocity of each of the 4 wheels (in addition to the car’s own gravity velocity).

Steering

Since we now carry over the orientation of the car from the previous frame, we can no longer assume the steering vector is aligned to the xz plane.

To calculate the new steering vector we rotate the Look and Right vectors by a delta angle in the plane spanned by Look and Right:

				Look = LastRight*sin(DeltaAngle)+ LastLook*cos(DeltaAngle);
				Right = LastRight*cos(DeltaAngle)- LastLook*sin(DeltaAngle);

The delta angle is the angular speed multiplied by the frame’s time delta.

The forward velocity is calculated the same way as in part 1, but notice that the Look vector might not lie on the xz plane this time.

Gravity

This time we are also going to take gravity into account.

Our gravity is the following vector (0, -9.8, 0).

It is a vector pointing downwards with a magnitude of 9.8 meters per second squared.

We compose the car’s velocity from two components: the steering velocity from above and the gravity velocity.

The reason we separate the two is that it makes it easy to set the gravity velocity to 0 whenever a point in the model hits the ground or the track.

We also need to maintain a separate gravity velocity for each of the 5 points in the car model to make the physics simulation of the car work as if we had an object with volume.

The Simulation

After we have calculated the Look and Right orientation vectors and the steering velocity vector, we need to apply gravity.

For each of the 5 gravity velocity vectors we add the Gravity vector multiplied by the frame’s time delta.

				CurrentGravityVelocity=CurrentGravityVelocity+Gravity*t;
				for (unsigned int i=0; i<WheelsGravityVelocity.size(); i++)
					WheelsGravityVelocity[i]=WheelsGravityVelocity[i]+Gravity*t;

The next step is to calculate the current position of the wheels and the position of the car center.

For the car center position, we add the sum of the current steering velocity vector and the current gravity velocity vector, multiplied by the frame’s time delta.

Like so:

				Pos = Pos+(Look*CurrentFlatSpeed+CurrentGravityVelocity)*t;

For each of the 4 wheels we calculate the wheel’s position relative to the car’s center using the wheels base dimensions and the Look and Right orientation vectors.

We then add the car’s center position to each of the 4 wheels positions.

The last step is to add the steering velocity and the gravity velocity multiplied by the time delta. The steering velocity is the same as it was for the car’s center, but notice the gravity velocity might be unique for each wheel point (we saved them in their own variables).

				for (unsigned int i=0; i<Wheels.size(); i++)
					Wheels[i] = Right*WheelsDelta[i].x+Look*WheelsDelta[i].z+Pos+(Look*CurrentFlatSpeed+WheelsGravityVelocity[i])*t;

In a similar fashion to part 1 we now check which triangles each one of the 5 points of the model intersects with.

The difference is that instead of setting the points to the intersection height of the respective triangles, we only set them to the intersection height if their own height is lower.

In addition, we zero out the gravity velocity of a point only if its height was adjusted by a triangle.

Now we have 5 points of the car displaced into new heights. The car’s center point will be used for the new position, but like in part 1, we need to calculate the new orientation from the 4 wheel points.

The car model is supposed to simulate a rigid body. However, after adjusting the wheels’ heights the car’s wheel base is now deformed.

We will restore the original form of the wheel base by treating the 4 wheels as if they have springs among themselves (a total of 6 springs).

This will make the 4 wheel points behave as if the wheel base were a rigid body.

In order to restore the original form of the wheel base we go over all 6 springs and adjust each one to be closer to its original length.

We iterate this process ten times, and at the end we get something close to the original form.

				for (unsigned int k=0; k<10; k++)
				{
					for (unsigned int i=0; i<Wheels.size(); i++)
						for (unsigned int j=i+1; j<Wheels.size(); j++)
						{
							Graphics2D::Position v = Wheels[i]-Wheels[j];
							Graphics2D::Position center = (Wheels[i]+Wheels[j])*0.5;
							double l = (WheelsDelta[i]-WheelsDelta[j]).Length();
							double radius = (0.1*l+0.9*v.Length())/2.0;
							Wheels[i] = (Wheels[i]-center).Normalize()*radius+center;
							Wheels[j] = (Wheels[j]-center).Normalize()*radius+center;
						}
				}

Now that we have the wheel points placed back on the wheel base frame we can calculate the new Look and Right orientation vectors in a similar fashion to part 1.

				Look = (Wheels[0]-Wheels[3]).Normalize();
				Right = (Wheels[1]-Wheels[0]).Normalize();

Conclusion

We now have a more physically based simulation that also supports falling off edges.

The result of this simulation can be seen in this video:

For the sake of completeness, I am adding the entire code of the update function.

 

			void Update (double t)
			{
				double WheelFactor = 0;
				if (Input.GetLeft())
					WheelFactor = -1;
				else if (Input.GetRight())
					WheelFactor = 1;
				if (Input.GetThrust())
					CurrentFlatSpeed += MaxFlatSpeed*t/AccelLatency;
				CurrentFlatSpeed = std::max(std::min(CurrentFlatSpeed, MaxFlatSpeed), 0.0);

				double DeltaAngle = WheelFactor*t;
				Graphics2D::Position Look = LastRight*sin(DeltaAngle)+ LastLook*cos(DeltaAngle);
				Graphics2D::Position Right = LastRight*cos(DeltaAngle)- LastLook*sin(DeltaAngle);
				CurrentGravityVelocity=CurrentGravityVelocity+Gravity*t;
				for (unsigned int i=0; i<WheelsGravityVelocity.size(); i++)
					WheelsGravityVelocity[i]=WheelsGravityVelocity[i]+Gravity*t;
				CurrentVelocity = Look*CurrentFlatSpeed+CurrentGravityVelocity;
				CarParms->SetLook (Look, Graphics2D::Position(0, 1, 0));
				std::vector<Graphics2D::Position> Wheels;
				Wheels.resize(WheelsDelta.size());
				for (unsigned int i=0; i<Wheels.size(); i++)
					Wheels[i] = Right*WheelsDelta[i].x+Look*WheelsDelta[i].z+Pos+(Look*CurrentFlatSpeed+WheelsGravityVelocity[i])*t;

				Pos = Pos+CurrentVelocity*t;
				if (Pos.y<0.0)
					CurrentGravityVelocity.y = std::max(0., CurrentGravityVelocity.y);
				Pos.y = std::max(0.0, Pos.y);

				unsigned int StartX = std::min((unsigned int)(std::max((Pos.x-Min.x)/(Max.x-Min.x), 0.0)*TrackGrid[0].size()), (unsigned int)(TrackGrid[0].size()-1));
				unsigned int StartZ = std::min((unsigned int)(std::max((Pos.z-Min.z)/(Max.z-Min.z), 0.0)*TrackGrid.size()), (unsigned int)(TrackGrid.size()-1));

				std::list<unsigned int>::iterator q;
				for (q = TrackGrid[StartZ][StartX].begin(); q != TrackGrid[StartZ][StartX].end(); q++)
				{
					const math::Ray r(float3(Pos.x, 100.0, Pos.z), float3(0, -1, 0));
					float d = 0;
					math::float3 Point;
					if (TrackGeometry[*q].Intersects(r, &d, &Point))
					{
						if (r.pos.y-d>=Pos.y)
						{
							CurrentGravityVelocity.y = std::max(0., CurrentGravityVelocity.y);
							Pos.y = r.pos.y-d;
						}
					}
				}
				CarParms->SetPosition (Pos);
//				std::vector<bool> IsWheelContact;
//				IsWheelContact.resize(4, false);
				for (unsigned int i=0; i<Wheels.size(); i++)
				{
					Graphics2D::Position p = Wheels[i];
					unsigned int StartX = std::min((unsigned int)(std::max((p.x-Min.x)/(Max.x-Min.x), 0.0)*TrackGrid[0].size()), (unsigned int)(TrackGrid[0].size()-1));
					unsigned int StartZ = std::min((unsigned int)(std::max((p.z-Min.z)/(Max.z-Min.z), 0.0)*TrackGrid.size()), (unsigned int)(TrackGrid.size()-1));

					std::list<unsigned int>::iterator q;
					for (q = TrackGrid[StartZ][StartX].begin(); q != TrackGrid[StartZ][StartX].end(); q++)
					{
						const math::Ray r(float3(p.x, 100.0, p.z), float3(0, -1, 0));
						float d = 0;
						math::float3 Point;
						if (TrackGeometry[*q].Intersects(r, &d, &Point))
						{
							if (r.pos.y-d>=Wheels[i].y)
							{
								WheelsGravityVelocity[i].y = std::max(0., WheelsGravityVelocity[i].y);
//								IsWheelContact[i] = true;
								Wheels[i].y = r.pos.y-d;
							}
						}
					}
					if (FloorHeight>=Wheels[i].y)
					{
						WheelsGravityVelocity[i].y = std::max(0., WheelsGravityVelocity[i].y);
//						IsWheelContact[i] = true;
						Wheels[i].y = FloorHeight;
					}
				}
				for (unsigned int k=0; k<10; k++)
				{
					for (unsigned int i=0; i<Wheels.size(); i++)
						for (unsigned int j=i+1; j<Wheels.size(); j++)
						{
							Graphics2D::Position v = Wheels[i]-Wheels[j];
							Graphics2D::Position center = (Wheels[i]+Wheels[j])*0.5;
							double l = (WheelsDelta[i]-WheelsDelta[j]).Length();
							double radius = (0.1*l+0.9*v.Length())/2.0;
							Wheels[i] = (Wheels[i]-center).Normalize()*radius+center;
							Wheels[j] = (Wheels[j]-center).Normalize()*radius+center;
						}
				}
				Look = (Wheels[0]-Wheels[3]).Normalize();
				Right = (Wheels[1]-Wheels[0]).Normalize();
				LastLook = Look;
				LastRight = Right;
				Graphics2D::Position Up = Look.Cross(Right);
				CarParms->SetLook(Look, Up);
			}

Simple Truck Racing Physics (Part 1)

Preface

I am working on a new 3D racing game.

For this racing game I need a track with mounds, hills and ramps.

I am going to cover my progress in making this racing game’s physics simulation.

The Models

At this point the track is made of a series of 3D triangles.

The 3D triangles might be constructed so that they form a road with mounds, turns or slopes, but they don’t have to be.

At this point the track geometry is used for both rendering and representing the terrain geometry in the simulation.

The track dimensions I used for testing are 110×110 meters.

We also have the truck which has a 3D model representing it visually.

Inside the simulation the truck is made out of 5 points: the center bottom of the truck and 4 more points representing the wheels.

The truck’s size is a 2×2×5 meter box.

The wheel base is 1.8×4.4 meters.

Steering

For the steering of the truck I am storing the truck’s absolute direction as a single angle.

I calculate the Look vector from the angle like this:

				Look = Graphics2D::Position(sin(CarAngle), 0, cos(CarAngle));

When I want the truck to rotate I add the angular speed multiplied by the frame’s time delta to the angle mentioned above.

I then recalculate the Look vector every new frame.
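
As a rough sketch (AngularSpeed is an illustrative tuning constant and the input calls mirror the ones used in the part 2 Update function above), the per-frame angle update could look like this:

				double AngularSpeed = 1.0;   // radians per second, a tuning constant
				if (Input.GetLeft())
					CarAngle -= AngularSpeed*t;
				else if (Input.GetRight())
					CarAngle += AngularSpeed*t;
				Look = Graphics2D::Position(sin(CarAngle), 0, cos(CarAngle));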

In order for the truck to move forward we need to add the movement vector to the truck’s current position.

The truck’s movement vector is calculated like this:

				Move = Look*CurrentFlatSpeed*t;
				Pos = Pos+Move;

We don’t want the truck to accelerate instantaneously, so each frame we add the maximum speed multiplied by the frame’s time step and divided by the time we want it to take to reach maximum speed.

				if (Input.GetThrust())
					CurrentFlatSpeed += MaxFlatSpeed*t/AccelLatency;
				CurrentFlatSpeed = std::max(std::min(CurrentFlatSpeed, MaxFlatSpeed), 0.0);

Terrain checks

At this point we can drive and steer the truck but we are completely ignoring the track (or terrain).

In order for the truck to “glide” on the terrain we will go over every triangle in our track mesh and test to see if the (x, z) part of the center bottom of the truck is inside the projection of the triangle on the xz plane.

(The center bottom of the truck is actually its position).

In order to test that we use a ray to triangle intersection test while the ray is from (truck position X, 1000, truck position Z) to (truck position X, 0, truck position Z).

If the ray intersects the triangle then the truck’s center is inside the projection of the triangle. We can then extract the height of the intersection between the ray and the triangle and use that as the new height(y axis value) of our truck.

(For the ray/triangle intersection we use MathGeoLib by clb).
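
A minimal sketch of this test for the truck’s center could look like the following (TrackGeometry is the std::vector<math::Triangle> built in the Optimizations section below; here we simply loop over all triangles):

				const math::Ray r(float3(Pos.x, 1000.0, Pos.z), float3(0, -1, 0));
				for (unsigned int i=0; i<TrackGeometry.size(); i++)
				{
					float d = 0;
					math::float3 Point;
					// d is the distance along the downward ray, so the hit height is r.pos.y-d.
					if (TrackGeometry[i].Intersects(r, &d, &Point))
						Pos.y = r.pos.y-d;
				}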

This will make our truck go over the track’s topology but the truck will remain aligned as if it was on a flat surface.

In order to recalculate the truck’s alignment we do the same test we did with the truck’s center but with the 4 wheels instead.

Before we do that we calculate the absolute position of the 4 truck wheels from the truck’s wheels base rotated by the truck’s steering angle and added to the truck’s center bottom. Like so:

 

				Look = Graphics2D::Position(sin(CarAngle), 0, cos(CarAngle));
				Right = Graphics2D::Position(0, 1, 0).Cross(Look);
				for (unsigned int i=0; i<4; i++)
					WheelPos[i] = Right*WheelBase[i].x+Look*WheelBase[i].z+Pos;

We now do the same calculation over all the triangles and calculate the new height for each of the 4 wheels.

We then calculate the new Look and Right vectors of the truck from two vectors.

The Look vector will be the vector pointing from the rear left wheel to the front left wheel and the Right vector will be the vector pointing from the front left wheel to the front right wheel.

Don’t forget we want the normalized vectors.

				Look = (WheelPos[0]-WheelPos[3]).Normalize();
				Right = (WheelPos[1]-WheelPos[0]).Normalize();

That’s it. This will give us the following simulation result.

Optimizations

You probably noticed that we went through all the triangles in the track for each of the 5 points in the truck model.

This might be problematic for performance, and most of the triangles won’t intersect with the truck model anyway.

In order to optimize this we prepare a 2D array where each array cell contains a linked list.

The 2D array represents a grid on the xz plane. The grid divides the plane into squares.

Each cell of the 2D array contains a list of all the triangles whose axis-aligned bounding rectangle on the xz plane intersects the grid square that the cell represents.

This way every square in the grid has a list containing all the triangles that intersect the square (and maybe a few more that don’t).

So every time we want to test a point in the truck model against the track’s triangles we only need to test it against the triangles in the list of the square the point is at.
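
The lookup itself is a short sketch along these lines (p is the point being tested; the same cell mapping appears in the part 2 Update function above):

				// Map the point's (x, z) to a grid cell and test only that cell's triangle list.
				unsigned int CellX = std::min((unsigned int)(std::max((p.x-Min.x)/(Max.x-Min.x), 0.0)*TrackGrid[0].size()), (unsigned int)(TrackGrid[0].size()-1));
				unsigned int CellZ = std::min((unsigned int)(std::max((p.z-Min.z)/(Max.z-Min.z), 0.0)*TrackGrid.size()), (unsigned int)(TrackGrid.size()-1));
				std::list<unsigned int>::iterator q;
				for (q = TrackGrid[CellZ][CellX].begin(); q != TrackGrid[CellZ][CellX].end(); q++)
				{
					// ... ray/triangle test against TrackGeometry[*q], as shown above ...
				}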

For the sake of completeness, here is the code to build a 10 by 10 triangle test optimization grid:

 

				std::vector<math::Triangle> TrackGeometry;
				std::vector<std::vector<std::list<unsigned int> > > TrackGrid;

				std::vector<Graphics2D::Position> & Positions = TrackMesh->GetPosition(0);
				std::vector<unsigned int> & Indices = TrackMesh->GetIndex(0);
				TrackGrid.resize(10);
				for (unsigned int i=0; i<TrackGrid.size(); i++)
					TrackGrid[i].resize(10);
				TrackGeometry.resize (Indices.size()/3);
				Min = Positions[0];
				Max = Positions[0];
				for (unsigned int i=0; i<Positions.size(); i++)
				{
					Min.x = std::min(Min.x, Positions[i].x);
					Min.y = std::min(Min.y, Positions[i].y);
					Min.z = std::min(Min.z, Positions[i].z);
					Max.x = std::max(Max.x, Positions[i].x);
					Max.y = std::max(Max.y, Positions[i].y);
					Max.z = std::max(Max.z, Positions[i].z);
				}
				for (unsigned int i=0; i<TrackGeometry.size(); i++)
				{
					Graphics2D::Position LocalMin, LocalMax;
					LocalMin = Positions[Indices[i*3]];
					LocalMax = Positions[Indices[i*3]];
					for (unsigned int k=1; k<3; k++)
					{
						LocalMin.x = std::min(LocalMin.x, Positions[Indices[i*3+k]].x);
						LocalMin.z = std::min(LocalMin.z, Positions[Indices[i*3+k]].z);
						LocalMax.x = std::max(LocalMax.x, Positions[Indices[i*3+k]].x);
						LocalMax.z = std::max(LocalMax.z, Positions[Indices[i*3+k]].z);
					}
					TrackGeometry[i].a = float3(Positions[Indices[i*3]].x, Positions[Indices[i*3]].y, Positions[Indices[i*3]].z);
					TrackGeometry[i].b = float3(Positions[Indices[i*3+1]].x, Positions[Indices[i*3+1]].y, Positions[Indices[i*3+1]].z);
					TrackGeometry[i].c = float3(Positions[Indices[i*3+2]].x, Positions[Indices[i*3+2]].y, Positions[Indices[i*3+2]].z);
					unsigned int StartX = std::min((unsigned int)(std::max((LocalMin.x-Min.x)/(Max.x-Min.x), 0.0)*TrackGrid[0].size()), (unsigned int)(TrackGrid[0].size()-1));
					unsigned int StartZ = std::min((unsigned int)(std::max((LocalMin.z-Min.z)/(Max.z-Min.z), 0.0)*TrackGrid.size()), (unsigned int)(TrackGrid.size()-1));
					unsigned int EndX = std::min((unsigned int)(std::max((LocalMax.x-Min.x)/(Max.x-Min.x), 0.0)*TrackGrid[0].size()), (unsigned int)(TrackGrid[0].size()-1));
					unsigned int EndZ = std::min((unsigned int)(std::max((LocalMax.z-Min.z)/(Max.z-Min.z), 0.0)*TrackGrid.size()), (unsigned int)(TrackGrid.size()-1));
					for (unsigned int z1=StartZ; z1<=EndZ; z1++)
						for (unsigned int x1=StartX; x1<=EndX; x1++)
							TrackGrid[z1][x1].push_front(i);
				}
				WheelsDelta.resize(4);
				WheelsDelta[0] = Graphics2D::Position(-0.9, 0, 2.2);
				WheelsDelta[1] = Graphics2D::Position(0.9, 0, 2.2);
				WheelsDelta[2] = Graphics2D::Position(0.9, 0, -2.2);
				WheelsDelta[3] = Graphics2D::Position(-0.9, 0, -2.2);

 

What’s next?

Our current simulation doesn’t have much of a physics feel to it.

The car is basically glued to the terrain. We also don’t deal with ledges.

In part 2 the simulation will get more interesting.

Recognize ending contact with the walls and the floor for a platformer game using Box2D

I am working on a new 2D platformer game.

I have been using Box2D for the physics.

For the level itself I used a single b2ChainShape which includes both the floor and the walls.

For my game I need to recognize when the character contacts a wall and when it contacts the floor, but more importantly when the character ends its contact with the floor or a wall.

I could have used two separate fixtures (one for the walls and one for the floor), but according to Box2D I wouldn’t get collision detection as clean as with a single shape.

By implementing b2ContactListener you can listen to when two bodies have a new contact point and when they end the contact.

To recognize whether the character contacts a wall or the floor I used GetWorldManifold on the b2Contact which is provided as a parameter of BeginContact.

b2WorldManifold contains the normal of the contact surface. With the normal I can easily recognize if the contact point is with a wall or the floor.

However, on EndContact you cannot rely on the b2WorldManifold; the data you get from it is garbage.

So how can we tell when we end the contact with the floor rather than the wall?

The solution is to keep 3 counters: the total contact points, the left wall contact points and the right wall contact points.

The total contact points counter includes all the walls and the floor contact points.

When we want to know if we are in contact with the walls and not the floor (like touching a wall mid-air) we simply subtract the wall contact counters from the total contacts counter.

When the EndContact method is called we reduce the total contacts counter by one and zero out the wall contact counters.

This will work if the walls are perpendicular to the floor (like in a tile-based game).

A more complex level might need a better solution.
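
Here is a minimal sketch of such a listener (class and member names are my own, not from my actual game code). It assumes the level chain is fixture A and the character is fixture B, so the world manifold normal points from the level towards the character; if the fixture order is reversed the signs need to be flipped:

				#include <Box2D/Box2D.h>

				class CharacterContactListener : public b2ContactListener
				{
				public:
					CharacterContactListener () : TotalContacts(0), LeftWallContacts(0), RightWallContacts(0) {}

					// Floor contacts are whatever is left after subtracting the wall contacts.
					bool IsOnFloor () const { return TotalContacts-LeftWallContacts-RightWallContacts > 0; }

					void BeginContact (b2Contact* contact)
					{
						b2WorldManifold manifold;
						contact->GetWorldManifold(&manifold);
						TotalContacts++;
						// A mostly horizontal normal means a wall; anything else counts as floor.
						if (manifold.normal.x > 0.9f)
							LeftWallContacts++;
						else if (manifold.normal.x < -0.9f)
							RightWallContacts++;
					}

					void EndContact (b2Contact* contact)
					{
						// The manifold data is unreliable here, so only adjust the counters,
						// following the scheme described above.
						TotalContacts--;
						LeftWallContacts = 0;
						RightWallContacts = 0;
					}

					int TotalContacts;
					int LeftWallContacts;
					int RightWallContacts;
				};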

 

Platformer screenshot

OpenGL Texture Size and performance? Texture Cache? (Android\iOS GLES 2)

I am working on my dragon simulation mobile game and I am at the stage of adding a terrain to the game.

I optimized the terrain render for my iOS devices but on my Android device I was getting bad performance.

Understanding texture size and performance is kind of elusive. I have been told many times that big textures are bad, but I was never able to find the correlation between texture size and performance.

I am still not sure what the correlation between texture size, bandwidth and performance on mobile devices is, but I am sure it’s a complex one.

However, I did find out something else.

As I mentioned, my first attempt to copy the OpenGL ES shaders from my iOS code to my Android code gave me poor results. The same scene that ran at 60 FPS on my iPod was running at 25 FPS on my Android phone.

This is how the scene looked on my Android phone:

Slow Terrain Render

Scene rendered at 25 FPS (40 ms per frame)

For the terrain I am using two 2048×2048 ETC1 compressed textures: one for the grass and one for the rocky mountain.

Maybe my phone’s performance really is not as good as my iPod’s? But then, something was missing.

On my iPod I was already using mipmapped textures, while in the first attempt of the Android version I didn’t.

A mipmapped texture is a texture which contains not only the full-size image but also all (or some) of the smaller versions of the same image.

If you have a texture of size 16×16 pixels then the mipmapped texture will contain the 16×16 image as well as the 8×8, 4×4, 2×2 and 1×1 resolutions of the same image.

This is useful because it’s hard to scale down a texture on the GPU without losing details. The mipmapped images are precalculated offline and may use the best algorithms to reduce the image.

When rendering with mipmapped textures the GPU selects the mipmapped image that is the most suitable for the current scaling in the scene.

But apart from looking better, there is another advantage. Performance.

The same scene using mipmapped versions of the 2048×2048 textures runs a lot faster than before. I could get the scene to render at about 50 to 60 FPS.

The reason for that is that texture sampling uses a cache which relies on 2D spatial locality.

In this scene the mountain and grass textures are scaled down considerably. This in turn makes the GPU sample the textures at texels (texture pixels) that are far apart from each other, making no use of the cache.

In order to make use of the cache the sampling of the texture must have spatial proximity.

When using the mipmapped version of the texture, a much smaller mip level of the 2048×2048 texture was sampled, and thus it was possible to make use of the cache for this image.
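
For reference, this is roughly how mipmapped sampling can be set up in GLES 2. The driver cannot generate mip levels for ETC1 compressed data, so each precalculated level is uploaded separately; LoadEtc1Level() is a hypothetical loader that returns the compressed bytes of one level:

				#include <GLES2/gl2.h>
				#include <GLES2/gl2ext.h>
				#include <vector>

				// Hypothetical helper: returns the precalculated ETC1 data of one mip level.
				std::vector<unsigned char> LoadEtc1Level (unsigned int level);

				GLuint CreateMipmappedEtc1Texture ()
				{
					GLuint tex;
					glGenTextures(1, &tex);
					glBindTexture(GL_TEXTURE_2D, tex);
					// Upload every mip level from 2048x2048 down to 1x1.
					unsigned int level = 0;
					for (unsigned int size = 2048; size >= 1; size /= 2, level++)
					{
						std::vector<unsigned char> data = LoadEtc1Level(level);
						glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_ETC1_RGB8_OES,
							size, size, 0, (GLsizei)data.size(), &data[0]);
					}
					// Trilinear filtering: sample the two closest mip levels and blend between them.
					glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
					glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
					return tex;
				}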

For the sake of completeness, here is the scene with the mipmapped textures:

Runs at about 50-60 FPS (17-20 ms per frame)

Figuring out Neural Network AI for a fighting game

1. Preface

My latest project is a fighting game for Android named “Concussion Boxing”.

The game will be of the Virtua Fighter/Tekken/Street Fighter genre.

The game I played the most out of these three is Virtua Fighter. The most amazing thing about Virtua Fighter was the AI: the more you played against it, the better it got and the better it learned your fighting style.

Random button mashing didn’t help in this game.

I don’t know how Virtua Fighter’s AI was done, but I decided to pick Neural Network as the algorithm to implement my own learning and adaptive AI.

Not only did I not know how Virtua Fighter’s AI worked, I also found almost no articles explaining how to use Artificial Neural Networks in a real game. Most of the material on NNs was either strictly academic or didn’t explain the subject with an example from a real game.

I supported a Kickstarter from a guy called Daniel Shiffman, who wrote a book about AI in games called “The Nature of Code”. This gave me a good start on the basics of Neural Networks.

(Source code for this article is available at: http://ideone.com/qyeZ1x )

2. The Perceptron

One thing you need to know about NNs is that you need to get the algorithm code exactly right. Any small error will make the NN non-functional.

It is easy to make these errors when you are new to NN because it is not always clear what is right and what is not.

Let’s start with a simple, trivial NN called a Perceptron.

A Perceptron is made of a single Neuron, input connections and output connections. Being a single neuron it’s not really a network per se.

A Neuron has a single output but zero, one or many inputs. Each input is connected to the Neuron with a weight.

The Neuron sums the inputs according to their weights, processes the result with an Activate function and passes that to the output.

Our Activate function will be simply max(min(x, 1.), 0.) where x is the weighted sum of the inputs for this Neuron.

In the game simulation we feed the input from the game to the neuron, calculate the output and then use the result to control our AI controlled character.

Specifically, in Concussion Boxing we will use the Perceptron to make the enemy boxer maintain a distance from the player character.

The Neuron calculation is therefore:

				output = Activate(x1*w1 + x2*w2 + ... + xn*wn)

where xi are the inputs and wi are the weights.

I mentioned the inputs are weighted, but what weights do we need to use? Different weights will make the Perceptron behave differently and thus the learning part of the NN is to adjust these weights.

Learning

So how do we adjust the weights, and how does learning occur?

We will focus on a learning method called Reinforced Training.

With reinforced learning we examine the output that our NN produced for a specific input.

We then calculate by other means what our desired output for this specific input is, and we adjust the weights by back propagating the delta between the desired output and the actual output.

For a single Neuron the weight update is the following, where delta is the output delta (desired output minus actual output), learn is the learning constant, xi are the inputs and wi are the weights:

				wi = wi + learn*delta*xi

Notice we multiply the delta by the input xi. This is because the weighted connection has contributed to the final output of the Neuron by xi*wi.

This works in our case because of the specific Activate function we chose.

In other words we distribute the output delta between the weighted connections according to the input.

It is important to note that the learning phase is only temporary.

The NN is supposed to produce the desired results even after we stop back propagating; if it doesn’t, then something is wrong.
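
To make this concrete, here is a minimal Perceptron sketch with the clamped Activate function and the weight update described above (the class layout and names are mine, not taken from the linked source):

				#include <vector>
				#include <algorithm>

				class Perceptron
				{
				public:
					Perceptron (unsigned int inputs, double learn)
						: Weights(inputs, 0.5), Learn(learn) {}   // the initial weights are arbitrary here

					double Activate (double x) const { return std::max(std::min(x, 1.0), 0.0); }

					double FeedForward (const std::vector<double>& inputs) const
					{
						double sum = 0.0;
						for (unsigned int i=0; i<Weights.size(); i++)
							sum += inputs[i]*Weights[i];
						return Activate(sum);
					}

					// Back propagate the delta between the desired and the actual output.
					void Train (const std::vector<double>& inputs, double desired)
					{
						double delta = desired-FeedForward(inputs);
						for (unsigned int i=0; i<Weights.size(); i++)
							Weights[i] += Learn*delta*inputs[i];
					}

				private:
					std::vector<double> Weights;
					double Learn;
				};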

Keeping a Distance

In my fighting game “Concussion Boxing” the boxers walk on a 1D axis. They can either go forward, backwards or stay still.

We want our Neuron to control the enemy boxer and make him keep a constant distance from the player boxer.

Our Neuron will have two inputs and thus two connections.

The first input is the distance between the AI boxer and the player boxer and the second input is a constant 1.

In this case our single Neuron system output can be expressed like so (w1 and w2 are the weights):

				output = Activate(distance*w1 + 1*w2)

Our boxers can go either forward or backward, but we cannot tell them what speed to walk at. We only tell them which direction to go, and the simulation calculates the speed and acceleration on its own.

We will decide which command to give the AI boxer based on the output value.

[Output Control diagram: outputs sufficiently above 0.5 map to one walking command, outputs sufficiently below 0.5 to the opposite command, and a small band around 0.5 maps to standing still.]

Given the correct weights our output will be bigger than 0.5 when the distance is bigger than a constant value (the distance we want to maintain) and smaller than 0.5 if the distance is smaller than the same constant.

However, we have an issue. Since we have a range around 0.5 in which the command is ‘still’, the character will not respond immediately when the distance changes.

The reason we do want this ‘still’ range is that we don’t want the character to constantly flip between backward and forward when it’s really close to the correct distance, and we do want the character to sometimes stand still.

In the training phase, for our distance input we will provide the desired result (the train value) like so:

				desired = max(min(0.5 + (distance - const)/range, 1), 0)

We will back propagate the difference between the desired value and our current Neuron output.

If distance is bigger than const then the result will be bigger than 0.5 and if the distance is smaller than const it will be smaller than 0.5.

range is not particularly important, but it will be useful in the future or for some minor tweaks. The important part is that we get a linear desired output between 0 and 1.

After the training phase we are supposed to get a positive value for the distance input weight, and a value for the constant input’s weight that makes the output be around 0.5 when the distance is around the const distance.
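
Putting the pieces together, the per-frame control could look roughly like this, reusing the Perceptron sketch from above (the constant values and the command mapping are my own illustration, not the game’s actual numbers):

				const double DesiredDistance = 2.0;   // "const" in the training formula above
				const double Range = 1.0;             // "range" in the training formula above
				const double StillBand = 0.05;        // half-width of the 'still' band around 0.5

				// Returns 1 to walk forward, -1 to walk backward and 0 to stay still.
				int DistanceCommand (Perceptron& p, double distance, bool training)
				{
					std::vector<double> inputs;
					inputs.push_back(distance);
					inputs.push_back(1.0);            // the constant input

					double output = p.FeedForward(inputs);
					if (training)
					{
						double desired = std::max(std::min(0.5+(distance-DesiredDistance)/Range, 1.0), 0.0);
						p.Train(inputs, desired);
					}

					if (output > 0.5+StillBand)
						return 1;
					if (output < 0.5-StillBand)
						return -1;
					return 0;
				}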

Improving the distance keeping

Our Neuron isn’t that good at maintaining a distance. There is a pretty big latency before the Neuron decides to change the walking direction when the boxers suddenly change theirs.

What we want is to have our Neuron take the boxers’ velocity into consideration.

To do that we add two new inputs to our Neuron. The speed of the player boxer and the speed of the AI boxer.

And then… that’s it. Everything else stays the same.

This is the amazing thing! Simply adding two new inputs will already make the Neuron consider the velocities when maintaining the distance even without changing the desired training value.

Let’s compare the two Neuron systems for maintaining a distance in this YouTube video. The counter at the top left of the screen is the number of frames left in the training phase.

The numbers below it are the connection weights.

3. A Network

Up until now we were working with a Perceptron.

A Perceptron is limited to giving a linear solution to the problem.

To be honest, I didn’t have the time to research beyond the Perceptron but I noticed that with a Neural Network of 10 neurons the behavior is more complex and interesting compared to the repetitive behavior of a Perceptron.

I will explain how a network is built and provide the source code, but I will not explain the “why”.

With a Perceptron we had all the inputs connected to a single Neuron and the Neuron had a single output. If we needed more than one output we simply used more perceptrons, one for each output.

A Neural Network adds a hidden layer of Neurons between the input and the output.

The hidden layer is a set of one or more Neurons. Each of these Neurons calculates its output from the weighted sum of all the inputs.

After we calculate the output of each Neuron in the hidden layer, we use the hidden layer’s outputs as the inputs of the Neurons that produce the real output. You can think of it as using Perceptrons on the hidden layer’s outputs instead of directly on the inputs, as the sketch below shows.
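
A rough sketch of this feed-forward pass with a single hidden layer could look like this (the weight layout and names are mine; it reuses the same clamped Activate function as the Perceptron):

				#include <vector>
				#include <algorithm>

				// HiddenWeights[h][i] connects input i to hidden neuron h,
				// OutputWeights[o][h] connects hidden neuron h to output neuron o.
				std::vector<double> FeedForwardNetwork (const std::vector<double>& inputs,
					const std::vector<std::vector<double> >& HiddenWeights,
					const std::vector<std::vector<double> >& OutputWeights)
				{
					std::vector<double> hidden(HiddenWeights.size());
					for (unsigned int h=0; h<HiddenWeights.size(); h++)
					{
						double sum = 0.0;
						for (unsigned int i=0; i<inputs.size(); i++)
							sum += inputs[i]*HiddenWeights[h][i];
						hidden[h] = std::max(std::min(sum, 1.0), 0.0);   // clamped Activate
					}
					std::vector<double> outputs(OutputWeights.size());
					for (unsigned int o=0; o<OutputWeights.size(); o++)
					{
						double sum = 0.0;
						for (unsigned int h=0; h<hidden.size(); h++)
							sum += hidden[h]*OutputWeights[o][h];
						outputs[o] = std::max(std::min(sum, 1.0), 0.0);
					}
					return outputs;
				}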

Notice that you can have more than one hidden layer.

However, I was told that there is usually little benefit to having more than one hidden layer, and that you can often just add more Neurons to a single hidden layer and get similar results.

That’s it. I am hoping that in the future I will update this article or add a new one about NN as I gain more understanding of the subject.

For the sake of completeness, here is the code I am using for both a Perceptron and a NN.

http://ideone.com/qyeZ1x