Pitfalls and Errors When Using AVAssetWriter to Save the OpenGL Backbuffer Into a Video.

For my current game I need to record the OpenGL back buffer of a GLKView into a video using AVAssetWriter.

I will not go over all of the code for doing this (there is not that much of it).

However, I will quickly go over some of the pitfalls that might leave you scratching your head when you try to create a video on iOS.

The first thing to consider is that AVAssetWriter might fail without throwing an exception and without returning any value that indicates the failure.

To check whether an AVAssetWriter method failed, you need to read the writer's status and error properties.

You can print the status and error like so:

NSLog(@"%ld %@", (long)videoWriter.status, videoWriter.error);

The first pitfall I encountered was that after calling startWriting, my AVAssetWriter's status changed to AVAssetWriterStatusFailed (which prints as the number 3).

The reason I was getting this error is that I hadn't deleted the file I was trying to write into, left over from the previous run; AVAssetWriter will not overwrite an existing file.
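
The fix is simply to remove any leftover output file before creating the writer (the full listing below does the same thing). A minimal sketch, where sourcePath is the same output path used later in the listing:

NSError *removeError = nil;
NSFileManager *fileManager = [NSFileManager defaultManager];
// AVAssetWriter fails if the target file already exists, so delete any
// movie left over from a previous recording session.
if ([fileManager fileExistsAtPath:sourcePath]) {
    [fileManager removeItemAtPath:sourcePath error:&removeError];
}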

That was an easy one, but after fixing it I was getting an error when trying to append the first (and every following) CVPixelBuffer.

I got AVAssetWriterStatusFailed again, but this time the error value was an unknown error with OSStatus -12902.

After a while I found out that nothing was wrong with my code itself; the problem was the resolution.

Since I was running my app on an iPad 3, the back buffer resolution was 2048×1536. I don't know why, but the codec or AVAssetWriter couldn't handle this resolution.

Simply reducing the resolution to something smaller allowed me to write the video without getting any errors.
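
I ended up halving the back buffer size, as you can see in the listing below. As a hedged sketch (the exact limit will depend on the device and encoder), a hypothetical helper for picking a safer output size could look like this; the rounding to even dimensions is an extra precaution for H.264, not something taken from my original code:

// Hypothetical helper: halve the back buffer resolution and round down
// to even numbers, which H.264 encoders generally prefer.
static CGSize SafeVideoSize(CGSize backBufferSize)
{
    NSInteger w = ((NSInteger)backBufferSize.width / 2) & ~1;
    NSInteger h = ((NSInteger)backBufferSize.height / 2) & ~1;
    return CGSizeMake(w, h);
}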

For the sake of completeness, here is part of the code I was using:

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    RenderGame();
    @synchronized(self) {
        if (Recording)
        {
            // Grab the GLKView back buffer and wrap it in a CVPixelBuffer,
            // scaled from the screen resolution down to the video resolution.
            GLubyte *data = [self snapshotRenderBufferToData];
            CVPixelBufferRef pixelBuffer = [self pixelBufferFromBytes:&data
                                                             withSize:CGSizeMake(videoWidth, videoHeight)
                                                       withSourceSize:CGSizeMake(screenWidth, screenHeight)];
            if (adaptor.assetWriterInput.readyForMoreMediaData)
            {
                RecordFrame++;
                // A timescale of 600 with 10 units per frame gives 60 frames per second.
                [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(RecordFrame * 10, 600)];
            }
            CFRelease(pixelBuffer);
            free(data);
            // Print status and error every frame; appending can fail silently.
            NSLog(@"%ld %@", (long)videoWriter.status, videoWriter.error);
        }
    }
}
-(IBAction)selectRecord:(id)sender
{
    if (DoneRecord)
        return;
    @synchronized(self) {
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *sourcePath = [documentsDirectory stringByAppendingPathComponent:@"video.mov"];
        if (Recording)
        {
            [writerInput markAsFinished];
            NSLog(@"%ld", (long)videoWriter.status);
            // Once writing completes, copy the finished movie into the camera roll.
            [videoWriter finishWritingWithCompletionHandler:^{
                UISaveVideoAtPathToSavedPhotosAlbum(sourcePath, nil, nil, nil);
            }];
            DoneRecord = YES;
        }
        else
        {
            // Half the back buffer resolution; the full iPad 3 resolution
            // (2048x1536) made appending pixel buffers fail with -12902.
            videoWidth = screenWidth/2;
            videoHeight = screenHeight/2;
            NSError *error = nil;
            NSFileManager *fileManager = [NSFileManager defaultManager];
            // Delete the movie left over from the previous run, otherwise
            // the writer fails with AVAssetWriterStatusFailed.
            if (![fileManager removeItemAtPath:sourcePath error:&error])
            {
                NSLog(@"Could not delete file: %@", [error localizedDescription]);
            }
            RecordFrame = 0;
            videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:sourcePath]
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
            NSLog(@"%ld", (long)videoWriter.status);
            NSParameterAssert(videoWriter);

            // H.264 output at the reduced resolution chosen above.
            NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                           AVVideoCodecH264, AVVideoCodecKey,
                                           [NSNumber numberWithInt:videoWidth], AVVideoWidthKey,
                                           [NSNumber numberWithInt:videoHeight], AVVideoHeightKey,
                                           nil];
            writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                             outputSettings:videoSettings]; // retain this if you are not using ARC

            NSParameterAssert(writerInput);
            NSParameterAssert([videoWriter canAddInput:writerInput]);
            writerInput.expectsMediaDataInRealTime = YES;
            [videoWriter addInput:writerInput];

            // The adaptor lets us append CVPixelBuffers instead of CMSampleBuffers.
            adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                       sourcePixelBufferAttributes:nil];

            [videoWriter startWriting];
//            NSLog(@"%ld %@", (long)videoWriter.status, videoWriter.error);
            [videoWriter startSessionAtSourceTime:kCMTimeZero];
        }
        Recording = !Recording;
    }
}

OpenGL Texture Size and Performance? Texture Cache? (Android\iOS GLES 2)

I am working on my dragon simulation mobile game and I am at the stage of adding a terrain to the game.

I optimized the terrain rendering for my iOS devices, but on my Android device I was getting poor performance.

Understanding texture size and performance is kind of elusive. I was told many times that big textures are bad, but I was never able to find a clear correlation between texture size and performance.

I am still not sure what the correlation between texture size, bandwidth and performance on mobile devices is, and I am sure it's a complex one.

However, I did find out something else.

As I mentioned, my first attempt at copying the OpenGL ES shaders from my iOS code to my Android code gave me poor results. The same scene that ran at 60 FPS on my iPod was running at 25 FPS on my Android phone.

This is how the scene looked on my Android phone:

Slow Terrain Render

Scene rendered at 25 FPS (40 ms per frame)

For the terrain I am using two 2048×2048 ETC1 compressed textures: one for the grass and one for the rocky mountain.

Maybe my phone's performance really isn't as good as my iPod's? But no, something was missing.

On my iPod I was already using mipmapped textures, while in my first attempt at the Android version I didn't use them.

Mipmapped textures are textures that contain not only the full-size image but also all (or some) of the smaller versions of the same image.

If you have a 16×16 pixel texture, its mipmapped version will contain not only the 16×16 image but also the 8×8, 4×4, 2×2 and 1×1 versions of the same image.

This is useful because it is hard to scale down a texture on the GPU without losing detail. The mipmap images are precalculated offline and can use the best algorithms to reduce the image.

When rendering with mipmapped textures, the GPU selects the mipmap level that best matches the current scale of the texture in the scene.

But apart from looking better, there is another advantage: performance.

The same scene using mipmapped versions of the 2048×2048 textures runs a lot faster than before. I was getting about 50 to 60 FPS.

The reason for that is that texture fetches go through a cache that relies on 2D spatial locality.

In this scene the mountain and grass textures are scaled down considerably. This in turn makes the GPU sample texels (texture pixels) that are far apart from each other, making no use of the cache.

In order to make use of the cache, the texture samples must be close to each other.

When using the mipmapped version of the texture, a much smaller mip level of the 2048×2048 texture is sampled, and thus the cache can be used effectively for that image.
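
Enabling this in GL ES 2 is mostly a matter of uploading the mip levels and choosing a mipmapped minification filter. Here is a rough sketch with hypothetical names (terrainTexture, levelCount and the level arrays stand in for whatever your ETC1 tool produced); note that glGenerateMipmap cannot build the levels for ETC1 compressed textures, which is one more reason to precalculate them offline:

// Upload every precomputed mip level of the ETC1 compressed texture.
glBindTexture(GL_TEXTURE_2D, terrainTexture);
for (int level = 0; level < levelCount; ++level)
{
    glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_ETC1_RGB8_OES,
                           levelWidth[level], levelHeight[level], 0,
                           levelSize[level], levelData[level]);
}
// Trilinear filtering: sample from the two closest mip levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);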

For the sake of completeness, here is the scene with the mipmapped textures:

Runs at about 50-60 FPS (17-20 ms per frame)

Location 0? Location -1? glGetUniformLocation, C++ and bugs. (Android\iOS GLES 2)

In OpenGL ES 2, glGetUniformLocation receives a program id and a string as parameters. It returns an integer location that can be used to set GLSL uniform shader variables.

If the variable is found, it will return zero or a positive value. If it fails to find the uniform variable, it will return -1.

In C++ we should initialize the location ints in the constructor. If we don't initialize the locations, they might hold garbage values, especially in Release builds.

Using locations that hold garbage values might overwrite uniform variables with values we never intended them to have.

So what should we initialize the locations with? One might think that 0 is a good value, but it is not.

Remember: 0 is a valid uniform variable location. If we set all the locations to 0, we might overwrite the uniform variable that actually lives at location 0.

We should initialize the location ints with -1.

We should do this because -1 is the value returned when the uniform variable is not found, and setting a value at location -1 is silently ignored.
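
A minimal sketch of the pattern, with hypothetical uniform names and written as plain C-style GL code (the same idea applies inside a C++ constructor):

// Locations start at -1, the "not found" value, so using them before (or
// without) looking them up cannot clobber the uniform at location 0.
GLint u_mvpLocation = -1;
GLint u_diffuseLocation = -1;

void LoadTerrainShaderLocations(GLuint program)
{
    u_mvpLocation = glGetUniformLocation(program, "u_ModelViewProjection");
    u_diffuseLocation = glGetUniformLocation(program, "u_DiffuseTexture");
}

void SetTerrainShaderUniforms(const GLfloat *mvp)
{
    // If a uniform was not found, its location is -1 and these calls are
    // silently ignored instead of writing to a random location.
    glUniformMatrix4fv(u_mvpLocation, 1, GL_FALSE, mvp);
    glUniform1i(u_diffuseLocation, 0);
}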

glDepthFunc and GL_LEQUAL for background drawing in an OpenGL ES 2 Android\iOS game.

For the (semi-secret) game I am currently working on (http://www.PompiPompi.net/) I had to implement the background, or backdrop, of the 3D world.

A simple method for drawing the background is to have a textured sphere mesh surround the camera.

The issue with this method is that the sphere is not perfect, so there are distortions, and it doesn't completely fit the viewing frustum, which means some geometry that is inside the frustum will be occluded by the sphere.

A different method is to draw a screen-space quad at the exact back end of the frustum. This quad has coordinates in screen space and doesn't require any transformation.

In screen space coordinates, the back plane of the frustum is at depth 1.

You could disable writing into the Z buffer with glDepthMask(GL_FALSE) and draw the background first. All the geometry rendered afterwards will overwrite the background.
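
A short sketch of that approach (DrawBackground and DrawWorld are hypothetical stand-ins for whatever draw calls the game makes):

glDepthMask(GL_FALSE);   // don't write depth for the background
DrawBackground();
glDepthMask(GL_TRUE);    // restore depth writes for the real geometry
DrawWorld();             // everything drawn now overwrites the background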

However, what if we want to draw the background last and save fill rate by using the Z buffer?

Just drawing the background last should have done that, but instead the background might not render at all.

We usually clear the depth part of the render buffer to 1.0, which is the highest value for the depth buffer. But our background screen mesh is also rendered at depth 1!

It turns out the default depth test function in OpenGL ES is GL_LESS. Since the Z buffer already holds 1.0, our background screen mesh fails the depth test and won't render.

What we can do is set the depth test function to less-or-equal before rendering the screen mesh, by calling glDepthFunc(GL_LEQUAL);

This way our background will not draw over pixels where geometry has already been drawn, but it will draw on the rest of the “blank” pixels.
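
Putting it together, the frame would look roughly like this (again with hypothetical DrawWorld and DrawBackgroundQuad helpers; the quad is submitted at depth 1.0):

glClearDepthf(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glDepthFunc(GL_LESS);     // default depth test for the regular geometry
DrawWorld();

glDepthFunc(GL_LEQUAL);   // let the quad at depth 1.0 pass where the buffer is still 1.0
DrawBackgroundQuad();     // screen-space quad at z = 1.0

glDepthFunc(GL_LESS);     // restore the default for the next frame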