Category Archives: iphone

#Unity3d hardware usage + implications – Summer 2015

There’s tonnes of blogs out there, so I only talk about the bits that other people have missed, or were too polite or inexperienced to cover. Often that means I’m the one pointing out the flaws (most people don’t want to write bad things. Screw that; ignoring the bad points does you no favours!).

Sometimes I get to talk about the good bits that – sadly – few people have noticed. Here’s one of those.

OpenGL ES 2 – Video on iPhone in 3D, and Texture-Map sources

This is a blog post I wrote 6 months ago; life then got too busy for me to finish it off – to finish testing the code and improving the text. So I’m publishing it now, warts and all. There are probably bugs. There are bits I could explain better – but I don’t have time. Note that the code is ripped from apps where I ACTUALLY USED IT – so I know it does work, and works well!

In brief:

  1. Using different “sources” of textures in OpenGL, not just images!
  2. Using CALayer’s as a source — allows for video in 3D!
  3. Using “Video straight from the video decoding chip” as a source – faster video!


OpenGL ES2 – Shader Uniforms

There’s a famous animated GIF of an infinitely swirling snake (here’s one that’s been Harry Potterised with the Slytherin logo):

Impressive, right?

What if I said it only relies upon one variable, and that you can reproduce this yourself in 3D in mere minutes? It’ll take quite a lot longer to read and understand, but once you’ve grokked it, you’ll be able to do this easily.

Background reading

Make sure you understand:

  1. Draw Calls
  2. VAOs + VBOs
  3. and at least one kind of Texturing

Uniforms vs. Vertex Attributes

In theory, you don’t need Uniforms: they are a special kind of Vertex-Attribute (which gives them their name: they are a “uniform attribute”).

In practice, as we’ve already seen, bitmap-based texture-mapping in GL ES requires Uniforms to make a link between Texture and Shader.

The code for sending them to the GPU is different from sending Vertex Attributes – annoyingly, it’s more complicated – but the concept is identical. So why do we have them? In a word: performance.

And, as a bonus: convenience.

How many items per Spaceship?

A typical 3D model of a spaceship has:

  • 5,000 polygons
  • 10,000 vertices
  • 5 bitmap textures

OK, fine, so …

Performance basics

Your vertices each specify which texture(s) they’re using. If you want to change the textures, from “standard” ones to “damaged” ones, you’ll have to:

  1. Upload the new texture (one-time cost; once it’s on GPU you can fast-switch between textures)
  2. Re-Upload the “uses texture 1 (out of the 5)” vertex-attribute … once for every vertex (repeated cost: has to be done every time)

Uniforms bypass this by saying:

A GL Uniform is a Vertex Attribute that has the same value for EVERY vertex. It only needs to be uploaded ONCE and is immediately applied to every vertex

But there’s more …

Each vertex can hold up to 2KB of data (in OpenGL ES; more on desktop GL) – at that maximum, our 10,000-vertex ship would take 20 megabytes of GPU RAM. That’s still small by today’s standards – but as the model gets more complicated, the storage needed keeps increasing.

By contrast, the number of Uniforms needed for a model is typically constant.

The net effect is that GPU vendors can afford to use faster RAM for their Uniforms, boosting performance even further.
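To make the scaling argument concrete, here’s a toy cost model in plain C (illustrative numbers only, not from any real GPU): attribute storage grows linearly with vertex-count, while Uniform storage depends only on how many Uniforms the ShaderProgram declares.

```c
#include <stddef.h>

/* Toy cost model - illustrative only. Attribute data is paid once PER
   VERTEX; Uniform data is paid once PER DRAWCALL, however big the model. */

size_t attribute_bytes(size_t vertex_count, size_t bytes_per_vertex)
{
    return vertex_count * bytes_per_vertex; /* grows with the model */
}

size_t uniform_bytes(size_t uniform_count, size_t bytes_per_uniform)
{
    return uniform_count * bytes_per_uniform; /* constant per ShaderProgram */
}
```

Doubling the vertex-count doubles the attribute storage; the Uniform storage doesn’t move.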

Convenience basics

Revisiting that spaceship, if we’re using it in a game, there’s a lot more things we’ll want to include in the 3D model:

  • 10 different versions, one for each player. They’re all the same, but some elements change colour to match the Player’s colour
  • Some of the textures animate: e.g. Landing strips, with lights that strobe
  • Gun turrets need to rotate hundreds of vertices at once, without affecting their neighbours
  • … and so on.

Each of these becomes trivial when your DrawCall has global-constants – i.e Uniforms. For instance:

  • 10 different versions…:
    • The vertices that might change colour have a vertex attribute signalling this; at render-time, the shader sees this flag on the vertex, and reads from a Uniform what the “actual” colour should be. Change the uniform, and the colour changes everywhere on the model at once
  • Textures animate…:
    • When you read a U,V value from the texture-bitmap, add a number to U and/or V that comes from a Uniform.
    • Mark the texture as “GL_REPEAT”, so that GL treats it like an infinitely tiled texture
    • Increase that uniform by a tiny amount each time you render a frame (e.g. 0.001), and the texture appears to “scroll”
  • Gun turrets rotate…:
    • Use a second DrawCall to draw the turrets.
    • Each turret has a Uniform “rotation angle in degrees”
    • When rendering, your Shader pre-rotates ALL vertices in each turret by the Uniform’s value.
    • Per frame, change the Uniform for “rotation angle”, and the whole turret rotates at once
  • …etc…

Implementing and Using Uniforms in OpenGL

Render / Update cycle for Uniforms

With Vertex-Attributes, it was easy:

  1. CPU: Generate geometry (load from a .3ds file; algorithm for making a Cube; etc)
  2. GPU: Create 1 or more VBO’s to hold the data on GPU
  3. CPU->GPU: Upload the geometry from CPU to GPU in a single big chunk
  4. Every frame: GPU reads the data from local RAM, and renders it
  5. To change the data, re-do all the above

With Uniforms, it’s more tricky.

Firstly – like everything else to do with Shaders – Uniforms ignore the OpenGL features that already existed. The GPU intelligently selects the correct VBOs each frame, using the data inside the VAO. But Shaders ignore the VAO, and need to be switched over manually.

Secondly – in VBO’s, OpenGL does not care what format your data is in. But with Uniforms, suddenly it does care: you have to specify, every time you upload them.

Thirdly – GL uses an efficient C-language mechanism for uploading Uniforms with minimal overhead. With VBO’s, the VAO took care of this automatically, but again: Shaders need you to do it by hand.

Together, these complicate the process:

  1. CPU: generate a value for the Uniform
  2. CPU: create an area in RAM that will hold the value, and place it there
  3. GPU: automatically creates storage for the Uniform when you compile/link the ShaderProgram
  4. CPU->GPU: switch to the specific ShaderProgram that will use the Uniform
  5. CPU->GPU: don’t send the data; instead, send the “memory-address” of the data
  6. CPU->GPU: upload using one of thirty-three unique methods (instead of the one for Vertex Attributes)
  7. Every frame: GPU reads the data from local RAM, but each ShaderProgram has its own copy
  8. To change the data, re-do all the above

Uploading a value to a Uniform

After you’ve linked your ShaderProgram, you can ask OpenGL about the Uniforms it found. For each Uniform, you get:

  1. The human-readable name used in the GLSL file
  2. The OpenGL-readable name generated automatically (an integer: GLint)
  3. The GLType (int, bool, float, vec2, vec3, vec4, mat2, mat3, mat4, etc)
  4. Is this one value, or an array of values? If it’s an array: how many slots in the array? (all GLSL arrays are fixed length)

The GLType has to be saved, because when you want to upload, there’s a different upload method for each distinct type:

glUniform1i – sends 1 * integer

glUniform3f – sends 3 * floats

glUniformMatrix4fv – sends N * 4×4-matrices, each using floats internally
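The “save the GLType, pick the matching upload method” idea boils down to a dispatch table. Here’s a minimal C sketch; the hex values are the standard GL ES 2.0 enum values (normally supplied by the GL headers), but the dispatcher itself is hypothetical, not the GLK2 library’s actual code:

```c
#include <string.h>

/* Standard GL ES 2.0 enum values, normally supplied by <OpenGLES/ES2/gl.h> */
#define GL_INT        0x1404
#define GL_FLOAT_VEC3 0x8B51
#define GL_FLOAT_MAT4 0x8B5C

/* hypothetical dispatcher: the saved GLType picks the upload method */
const char* upload_method_for_type(unsigned int glType)
{
    switch (glType) {
        case GL_INT:        return "glUniform1i";        /* 1 * integer */
        case GL_FLOAT_VEC3: return "glUniform3fv";       /* 3 * floats */
        case GL_FLOAT_MAT4: return "glUniformMatrix4fv"; /* N * 4x4 matrices */
        default:            return "unhandled";
    }
}

/* helper so callers can verify which method was chosen */
int method_matches(unsigned int glType, const char* name)
{
    return strcmp(upload_method_for_type(glType), name) == 0;
}
```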


To handle this automatically, I wrote three chunks of code:

  1. GLK2Uniform.h/m: stores the GLType, whether it’s a Matrix or Vector (or float?), etc
  2. GLK2ShaderProgram.h/m .. -(NSMutableDictionary*) fetchAllUniformsAfterLinking: parses the data from the Shader, and creates GLK2Uniform instances
  3. GLK2ShaderProgram.h/m .. -(void) setValue:(const void*) value forUniform:(GLK2Uniform*) uniform: uses the GLType data etc to pick the appropriate GL method to upload this value to the specified Uniform

That last method takes “const void*” as argument: i.e. it has no type-checking. I find this much simpler than continually specifying the type. It also intelligently handles dereferencing the pointer (for matrices and vectors) or not (for ints, floats, etc).

Uniforms and VAOs: a missing feature from OpenGL

So far, we’ve used VAO’s. They’re very useful, seemingly they:

…store all the render-state that is specific to a particular Draw call

Tragically: Shaders ignore VAO’s. Once you start using Uniforms, you find that VAO’s actually:

…store all the render-state that is specific to a particular Draw call, so long as that state isn’t in a ShaderProgram (i.e. isn’t a Uniform)

Shaders are the only place where VAO’s aren’t used, and it’s very easy to forget this and have your code break in weird and wonderful ways. If you find Shader state seems to be leaking between DrawCalls, you almost certainly forgot to explicitly switch ShaderProgram somewhere.

Note: this applies not only for rendering a DrawCall, but also for setting the value of the Uniform. You must call glUseProgram() before setting a Uniform value.

Because we often want to set Uniform values outside of the draw loop – e.g. when configuring a Shader at startup – I added a method that automatically switches to the right program for you. If you use this repeatedly on every frame, it’ll damage performance, but it’s great for checking if you’ve forgotten a glUseProgram somewhere:


-(void) setValueOutsideRenderLoopRestoringProgramAfterwards:(const void*) value forUniform:(GLK2Uniform*) uniform
{
	GLint currentProgram;
	glGetIntegerv( GL_CURRENT_PROGRAM, &currentProgram );
	
	glUseProgram( self.glName ); // NB: assumes the ShaderProgram exposes its GL name, as GLK2Texture does
	[self setValue:value forUniform:uniform];
	
	glUseProgram( currentProgram ); // restore whatever was active before
}

Uniforms in your Game Engine

Game-Engine code has to treat Uniforms a little specially:

  • Unlike Vertex-Attributes, we tend to update Uniforms very frequently – often every frame.
  • We have to reference them by human-readable name (instead of simply ramming them into a homogeneous C-array).
  • We have to remember to keep calling glUseProgram() each time we write to a Uniform, or render a DrawCall.

You can layer it in fancy OOP wrappers, but ultimately you’re forced to have a Hashtable/Map somewhere that goes from “human-readable Uniform name” to “chunk of memory holding the current value, that can be sent to the GPU whenever it changes”.

Desktop GL is different; they modified the GLSL / Shader spec so that it allows for slightly tighter integration of variables with your main app. Sadly, they didn’t include those features in GL ES.

I’ve tried it a few different ways, but the problem is that you have to store a “string” mapping to a “C-struct”. Worse, OpenGL ignores the value of the struct, it only uses the memory-address. So that struct has to be at a stable location in RAM.

This might not seem a problem, but Apple’s system for storing structs in NSDictionary is to create and destroy them on-the-fly (on the stack) – so there’s never a stable memory-address.

Here’s my current best workaround for ObjectiveC OpenGL apps…

An intelligent, C-based, “Map” class


  1. All our data will be structs
    1. C can easily store data if it’s homogeneous
    2. and OpenGL only has circa 10 unique structs for Uniforms
    3. …so: 10 arrays will be enough to store “all possible” Uniform values for a given ShaderProgram
  2. Data is unique per ShaderProgram
    1. C arrays-of-structs can’t change size once created :(
    2. But: the Uniforms for a ShaderProgram are hard-coded, cannot change at runtime
    3. …so: we can create one Map per ShaderProgram, and we know it will always be correct
  3. C-strings are horrible, and we want to avoid them like the plague
    1. We can easily convert C-strings into Objective-C strings (NSString)
    2. Apple’s NSArray stores NSString’s, and returns an int when you ask “which slot contains NSString* blah?”
    3. C allows int’s for direct-fetching of locations in an array-of-structs
    4. …so: we can have a Data Structure of NSString’s, and a separate C-array of structs, and they never have to interact
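Here’s the same design sketched in plain C (names and sizes are illustrative, not the GLK2 library’s actual code): one array of names, one parallel array of structs, and the struct addresses never move, so OpenGL can keep reading from them:

```c
#include <string.h>

/* Illustrative sketch of the two-array map. A real version would size the
   arrays from the linked ShaderProgram; 8 is an arbitrary demo capacity. */
typedef struct { float m[16]; } Matrix4; /* stand-in for GLKMatrix4 */

#define MAX_MAT4S 8
static const char* mat4Names[MAX_MAT4S];
static Matrix4     mat4Values[MAX_MAT4S]; /* stable storage: never reallocated */
static int         mat4Count;

/* register a Uniform name once, at creation time (a ShaderProgram's
   Uniforms are fixed after linking, so the arrays never need to grow) */
Matrix4* addMatrix4Named(const char* name)
{
    mat4Names[mat4Count] = name;
    return &mat4Values[mat4Count++];
}

/* look up by name: returns the SAME address every call */
Matrix4* pointerToMatrix4Named(const char* name)
{
    for (int i = 0; i < mat4Count; i++)
        if (strcmp(mat4Names[i], name) == 0)
            return &mat4Values[i];
    return NULL; /* no such uniform */
}
```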


@interface GLK2UniformMap : NSObject

+(GLK2UniformMap*) uniformMapForLinkedShaderProgram:(GLK2ShaderProgram*) shaderProgram;

- (id)initWithUniforms:(NSArray*) allUniforms;

-(GLKMatrix2*) pointerToMatrix2Named:(NSString*) name;
-(GLKMatrix3*) pointerToMatrix3Named:(NSString*) name;
-(GLKMatrix4*) pointerToMatrix4Named:(NSString*) name;
-(void) setMatrix2:(GLKMatrix2) value named:(NSString*) name;
-(void) setMatrix3:(GLKMatrix3) value named:(NSString*) name;
-(void) setMatrix4:(GLKMatrix4) value named:(NSString*) name;

-(GLKVector2*) pointerToVector2Named:(NSString*) name;
-(GLKVector3*) pointerToVector3Named:(NSString*) name;
-(GLKVector4*) pointerToVector4Named:(NSString*) name;
-(void) setVector2:(GLKVector2) value named:(NSString*) name;
-(void) setVector3:(GLKVector3) value named:(NSString*) name;
-(void) setVector4:(GLKVector4) value named:(NSString*) name;

@end


You create a GLK2UniformMap from a specific GLK2ShaderProgram. It reads the ShaderProgram, finds out how many Uniforms of each GLType there are, and allocates C-arrays for each of them.

Later, you can use the “setBLAH:named:” methods to set-by-value any struct. Importantly, this does NOT take a pointer! This ensures you can create a struct on the fly – all of Apple’s GLKit methods do this. e.g. you can do:

GLK2UniformMap* mapOfUniforms = ...
[mapOfUniforms setVector3: GLKVector3Make( 0.0, 1.0, 0.0 ) named:@"position"];

Connecting the GLK2UniformMap to a GLK2DrawCall

In previous posts, I created the GLK2UniformValueGenerator protocol. This is a simple protocol that uses the same method signatures as used by OpenGL’s Uniform-upload commands.

We extend GLK2UniformMap, and implement that protocol, to create something we can attach to a GLK2DrawCall, and have our rendering do everything else automatically:


@interface GLK2UniformMapGenerator : GLK2UniformMap <GLK2UniformValueGenerator>

+(GLK2UniformMapGenerator*) generatorForShaderProgram:(GLK2ShaderProgram*) shaderProgram;
+(GLK2UniformMapGenerator *)createAndAddToDrawCall:(GLK2DrawCall *)drawcall;

@end


Internally, the methods are very simple, e.g.:


@implementation GLK2UniformMapGenerator
-(GLKMatrix2*) matrix2ForUniform:(GLK2Uniform*) v inDrawCall:(GLK2DrawCall*) drawCall
{
	return [self pointerToMatrix2Named:v.nameInSourceFile];
}

NB: in the protocol, I included the GLK2DrawCall that’s making the request. This is unnecessary. In future updates to the source, I’ll probably remove that argument.

Animated textures: the magic of Uniforms

Finally, let’s do something interesting: animate a texture-mapped object.

The sample code has jumped ahead a bit on GitHub, as I’ve been using it to demo things to a couple of different people.

Have a look around the project; I’ve split it into two projects. One contains the reusable library code, the other contains a Demo app that shows the library-code in use.

I simplified all the reusable render code to date into a Library class: GLK2DrawCallViewController (extends Apple’s GLKViewController)

I’ve also moved the boilerplate “create a triangle”, “create a cube” etc code into a Demo class: CommonGLEngineCode

The sample project – permanent link to branch for this article – has a simple ViewController that loads a snake image and puts it on a triangle:


@interface AnimatedTextureViewController ()
@property(nonatomic,retain) GLK2UniformMapGenerator* generator;
@end

@implementation AnimatedTextureViewController

-(NSMutableArray*) createAllDrawCalls
	/** All the local setup for the ViewController */
	NSMutableArray* result = [NSMutableArray array];

	/** -- Draw Call 1:
	 triangle that contains a CALayer texture */
	GLK2DrawCall* dcTri = [CommonGLEngineCode drawCallWithUnitTriangleAtOriginUsingShaders:
						   [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexProjectedWithTexture" fragmentFilename:@"FragmentTextureScrolling"]];

That’s using the refactored CommonGLEngineCode class to make a unit triangle appear roughly in the middle of the screen.

Then we setup the UniformMapGenerator (no values yet):

	self.generator = [GLK2UniformMapGenerator createAndAddToDrawCall:dcTri];

NB: the generator class automatically detects requests for sampler2D, and ignores them. Those are only used for texture-mapping, which we handle automatically inside the GLK2DrawCall class (see previous post for details).

	/** Load a scales texture - I Googled "Public Domain Scales", you can probably find much better */
	GLK2Texture* newTexture = [GLK2Texture textureNamed:@"fakesnake.png"];
	/** Make the texture infinitely tiled */
	glBindTexture( GL_TEXTURE_2D, newTexture.glName);
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
	/** Add the GL texture to our Draw call / shader so it uses it */
	GLK2Uniform* samplerTexture1 = [dcTri.shaderProgram uniformNamed:@"s_texture1"];
	[dcTri setTexture:newTexture forSampler:samplerTexture1];

…again: c.f. previous post for details of what’s happening here, nothing’s changed.

	/** Set the projection matrix to Identity (i.e. "don't change anything") */
	GLK2Uniform* uniProjectionMatrix = [dcTri.shaderProgram uniformNamed:@"projectionMatrix"];
	GLKMatrix4 rotatingProjectionMatrix = GLKMatrix4Identity;
	[dcTri.shaderProgram setValueOutsideRenderLoopRestoringProgramAfterwards:&rotatingProjectionMatrix forUniform:uniProjectionMatrix];
	[result addObject:dcTri];
	return result;


Finally, we now have to implement a callback to update our Generator’s built-in structs and ints and floats once per frame:

-(void)willRenderDrawCallUsingVAOShaderProgramAndDefaultUniforms:(GLK2DrawCall *)drawCall
{
	/** Generate a smoothly increasing value using GLKit's built-in frame-count and frame-timers */
	double framesOutOfFramesPerSecond = (self.framesDisplayed % (4*self.framesPerSecond)) / (double)(4.0*self.framesPerSecond);
	
	[self.generator setFloat: framesOutOfFramesPerSecond named:@"timeInSeconds"];
}
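The arithmetic in that callback, pulled out into plain C: it maps GLKit’s frame counter onto a 0..1 sawtooth that wraps every 4 seconds.

```c
/* the callback's arithmetic in isolation: frame counter -> 0..1 sawtooth
   that wraps every 4 seconds */
double sawtooth(long framesDisplayed, long framesPerSecond)
{
    long period = 4 * framesPerSecond; /* e.g. 240 frames at 60fps */
    return (framesDisplayed % period) / (double)period;
}
```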

Run the project, tap the button, and you should see snakey skin scrolling along the surface of a 3D triangle:


From scales-on-a-triangle to realistic snake

The scrolling works by moving our offset across the surface of the triangle. Doing this with a Uniform means that the speed is constant relative to the corners of the triangle.

i.e. if you make the triangle smaller, it will take the same time to cover the distance, but it’s covering a shorter distance, so appears to move slower.

We’re getting the effect – for free! – of skin bunching up and stretching out. All you have to do is make your triangles shorter on the inside of a snake-coil, and longer on the outside.

If you model your snake the easiest possible way, this bunching will happen automatically. Simply take a cylinder and bend it with a transform – the vertex attributes (that force the texture to map across each triangle) won’t change, but the triangle sizes will, causing realistic bunching/stretching of the skin.
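To put numbers on the bunching effect (all figures here are illustrative): the Uniform crosses one full texture-repeat in a fixed time, so the skin’s apparent speed is proportional to the triangle’s size.

```c
/* one full texture-repeat at a given per-frame increment and framerate:
   e.g. 0.001 per frame at 60fps => 1000 frames => ~16.7 seconds */
double seconds_per_repeat(double increment_per_frame, double fps)
{
    return (1.0 / increment_per_frame) / fps;
}

/* apparent speed of the skin across a triangle: distance / time */
double apparent_speed(double edge_length, double period_seconds)
{
    return edge_length / period_seconds;
}
```

Same period, half the edge length: the skin covers half the distance in the same time, so it looks half as fast – exactly the bunching-up you want on the inside of a coil.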

GLKit Extended: Refactoring the View Controller

If you’ve been following my tutorials on OpenGL ES 2 for iOS, by the time you finish Texturing (Part 6 or so) you’ll have a lot of code crammed into a single UIViewController. This is intentional: the tutorials are largely self-contained, and only create classes and objects where OpenGL itself uses class-like-things.

…but most of the code is re-used from tutorial to tutorial, and it’s getting in the way. It’s time for some refactoring.

OpenGL ES 2 – Textures 2 of 3: Texture Mapping

Recap: Texturing in OpenGL can be achieved in two separate ways, using different API’s and hardware features. The preferred, modern approach – procedural texturing – uses nothing more than a Fragment shader (this is really what Fragment shaders were created for: Texturing).

Part 5 (Textures: 1 of 3) covered that in detail – but there’s another option: Texture-Mapping (also known as “UV Mapping”, it’s identical).

SVGKit: Programmatic editing of SVG files on iOS

(this is a post mainly to document some new features and fixes I’ve added to SVGKit version 1.2 today – they are already on the main development branch (currently: 1.x), ready for use)

SVGKit overview, November 2013

While a group of us were cleaning up and improving SVGKit about 2 years ago, I did a quick-n-simple re-architect of the in-memory data structures to split it into three wholly independent sets of source code / classes:

  1. Parsing an SVG file
    1. Parsing a legal XML file, with a new conforming DOM parser I wrote from scratch (because SVG Spec requires you to use a DOM parser, and Apple won’t let you use/extend their one on iOS/OSX!)
    2. Parsing the core SVG spec where it differs from XML-DOM
    3. Adding a system for user-supplied custom-parsers so that you can parse your custom XML in-line with the SVG (this is a major feature of SVG, but difficult to parse!)
  2. Rendering an SVG file with pixels on-screen
    1. Converting in-memory SVG data into Apple’s CALayer rendering format (used on all Apple Operating Systems)
    2. Optionally converting Apple’s CALayer format into highly-optimized hybrid data that lets you render SVG’s *fast*
    3. Painstakingly implementing every feature of SVG, from radial gradients to rich text (we’re about 90% complete now, still features left to add – please help!)
  3. Outputting an SVG file back to disk
    1. …not supported … until now!


iOS Open GL ES 2: Multiple objects at once


  1. Part 1 – Overview of GLKit
  2. Part 2 – Drawcalls, and how OpenGL code is architected
  3. Part 3 – Vertices, Shaders and Geometry
  4. Part 4 – (additions to Part 3); preparing for Textures

…but looking back, I’m really unhappy with Part 4. Xcode5 invalidates almost 30% of it, and the remainder wasn’t practical – it was code-cleanup.

So, I’m going to try again, and do it better this time. This replaces my previous Part 4 – call it “4b”.

Drawing multiple 2D / 3D objects

A natural way to “draw things” is to maintain a list of what you want to draw, and then – when the OS / windowing library / whatever is ready to draw, you iterate over your “things” something like:

OpenGL ES 2 for iOS: glDebugging and cleaning-up our VBOs, VAOs, and Draw calls

UPDATE: This post sucks; it has some useful bits about setting up a breakpoint in the Xcode debugger – but apart from that: I recommend skipping it and going straight to Part 4b instead, which explains things much better.

This is Part 4, and explains how to debug in OpenGL, as well as improving some of the reusable code we’ve been using (Part 1 has an index of all the parts, Part 3 covered Geometry).

Last time, I said we’d go straight to Textures – but I realised we need a quick TIME OUT! to cover some code-cleanup, and explain in detail some bits I glossed-over previously. This post will be a short one, I promise – but it’s the kind of stuff you’ll probably want to bookmark and come back to later, whenever you get stuck on other parts of your OpenGL code.

Cleanup: VAOs, VBOs, and Draw calls

In the previous part, I deliberately avoided going into detail on VAO (Vertex Array Objects) vs. VBO (Vertex Buffer Objects) – it’s a confusing topic, and (as I demonstrated) you only need 5 lines of code in total to use them correctly! Most of the tutorials I’ve read on GL ES 2 were simply … wrong … when it came to using VAO/VBO. Fortunately, I had enough experience of Desktop GL to skip around that – and I get the impression a lot of GL programmers do the same.

Let’s get this clear, and correct…

Why is a Fragment Shader named a Fragment Shader?

I’m writing some KISS tutorials on using OpenGL ES 2.0 in iOS/mobile, because I was frustrated that most of the tutorials on GL ES used incorrect or outdated approaches.

(I’m using the OpenGL ES 2.0 Programming Guide as my baseline, along with my own experiences of game-programming, and checking it all against Khronos Group’s docs as I go)

I use this stuff in iOS apps, but … I’m a relative novice with shader-pipelines. So I’ve been leaning on the guys at The Chaos Engine for proofreading and to help me avoid spreading misinformation. Most stuff people broadly agree on, but there’s been some interesting debate over a seemingly trivial question:

Why does OpenGL use the term “Fragment Shaders” to describe the shaders-that-generate-pixels? (other APIs call the same thing “Pixel Shaders” – a much easier name to understand and remember)


OpenGL ES 2: Basic Drawing

UPDATED 24/09/13: Added some essential details to the class files at end of post, and corrected typos

UPDATED 24/09/13: Added Github project with full source from this article

This is Part 2. Part 1 was an overview of where Apple’s GLKit helps with OpenGL ES setup and rendering.

I’ve been using OpenGL ES 2 less than a year, so if you see anything odd here, I’m probably wrong. Might be a mix-up with desktop OpenGL, or just a typo. Comment/ask if you’re not sure.

GLKit to the max: OpenGL ES 2.0 for iOS – Part 1: Features

Apple uses OpenGL extensively – their 2D desktop libraries (Quartz, CoreAnimation) are OpenGL-based, and iOS has inherited that link (along with some funky tricks you can pull).

December 2013: I’ve converted the sample code from these articles into a standalone library on GitHub, with the article-code as a Demo app. It uses the ‘latest’ version of the code, so the early articles are quite different – but it’s an easy starting point

iOS developers often don’t have an OpenGL background, and most of the ones I work with like the idea of OpenGL, but feel they don’t have the time to learn and master it. However, the combination of “new API (OpenGL ES 2)” with “Apple’s helper classes (GLKit)” makes OpenGL development on mobile surprisingly fast and easy.

A few years ago, Jeff LaMarche wrote a series of simple, easy tutorials on getting started with OpenGL ES 1. In the same spirit, I’m going to write about GL ES 2. In the land of Khronos (the OpenGL standards group), version “2” is fundamentally different to version “1” (this is a good thing: ES 1 was outdated from the start, based on 1990s OpenGL. ES 2 is a modern API, based on 2000s OpenGL).

I’ll be covering this from an iOS perspective. There are lots of GL ES tutorials out there, but for non-iOS platforms the challenges (and solutions) are different.

Quick links to all posts

iOS 5.0, iOS 6.0, and GLKit

With iOS 5.0, Apple introduced a new Framework: GLKit. GLKit’s apparent aim was to fix ES 2.0 to make it simpler and easier for non-OpenGL experts to use.

iOS 7.0…

It’s not live yet, and I’d hoped Apple might update GLKit to fix some of the holes below. I don’t know what will be in the final iOS 7 release, but I get the impression GLKit hasn’t changed yet. If so, everything that follows applies equally to iOS 7.

Summary of Apple’s GLKit and where it helps/hinders

Porting GL ES 1.x code
Support: Full. Apple provided a good, easy-to-use set of classes to make it trivial to port GL ES 1.x code. The documentation is very poor, though.

More importantly: it prevents you from using Shaders, which are one of the easiest and most powerful parts of OpenGL (once you get up and running). So, we’ll be ignoring GL ES 1.x

Classes: anything with the text “Effect” in the class name (GLKBaseEffect, GLKEffectProperty, GLKNamedEffect, etc)

Vector math
Support: Full. Every desktop OpenGL implementation assumes you have access to a good Vector Math library.

Until GLKit, Apple’s iOS was the exception: you had to make your own “vector” classes, you had to write all the Dot Product code, etc. (e.g. c.f. Jeff LaMarche’s first tutorial). Not any more :). Apple’s design here is good, and closely follows the conventions of their other frameworks (works the same way as CGRect, CGPoint etc from Quartz/CoreGraphics)

Structs: GLKVector2, GLKVector3, GLKVector4

Quaternions
Support: Full. Quaternions have a bad rep. Many people find them incomprehensibly hard to understand/code, and yet … they are essential once you start “rotating 3D objects”.

Apple’s implementation of Quaternions is excellent: you don’t need to understand the mathematics, just use the pre-built methods

Matrix math
Support: Full. Like Vector-math, Matrix math is tricky and time-consuming to build for yourself and debug. Apple’s done all of it, with a good set of methods.

Structs: GLKMatrix4, GLKMatrix3

OpenGL Projection
Support: Partial (almost full). OpenGL uses 4 dimensions to deal with 3-dimensional rendering. That could get difficult, fast. Skipping the reasons for it, OpenGL used to be hardcoded to a set of special matrices (M, V, and P – model, view, and projection).

GL ES 2 threw away the hard-coded matrices, and says “do it all yourself” (which, as we’ll see later, actually makes things easier in the long run). This is a pain … except Apple’s done it for us. Don’t go writing your own MVP stack code – use Apple’s.

Structs: GLKMatrixStack

Texture loading
Support: Partial (poor). See post: “Part 6”

Before GLKit, you had to write long and complex methods, using CoreAnimation and Quartz, to convert “file.png” and upload it to the graphics chip as an “OpenGL texture”.

That code was hard to debug, and most iOS programmers aren’t familiar with CA/Quartz. Apple wrote a proper Objective-C texturing system that does the work of Quartz and y-flipping for you. For most “simple” code, this is perfect.

…but: they screwed up in a few places, some major bugs. When it works, it’s fine – and it only needs two lines of code! – so we’ll use it in the early articles, but we’ll throw it away and write a “fixed” version for the later articles.

Classes: GLKTextureInfo, GLKTextureLoader

Mixing UIKit with OpenGL
Support: Partial (OK). See post: “Part 2”

There’s a lot of FUD on the web that says this is “impossible” or “slow” or … etc.

Don’t believe it. There are bugs in Apple’s CALayer / Quartz / CoreAnimation classes that make them slow *independent* of whether you’re running OpenGL. It’s just that the things people want to do when using UIKit with OpenGL are usually the ones that show up the bugs in UIKit/Quartz.

We’ll cover the main gotchas, and look at how to avoid or improve them. But for the most part: it works automagically. (there’s a reason for this: UIKit is implemented on top of OpenGL, so it’s already integrated to a high standard. It’s just that Apple hid the integration points)

Shaders (vertex, fragment)
Support: None. See posts: Part 3 and “Part 6”

GLKit pretends that shaders don’t exist. The most important feature of OpenGL ES 2.0 – and Apple ignored it. Sad, but true. We’ll fix that.

Multithreading, context-switching
Support: Full. OpenGL supports multi-threading, background loading, all sorts of funky stuff.

Although it’s not strictly part of GLKit, Apple has re-used their old EAGLContext class to provide access to all of this. This is probably because it worked fine in the first place. However, to be clear: if you’re used to EAGLContext, it’s still used EVERYWHERE by GLKit.

Classes: EAGLContext

Multi-pass rendering
Support: None. See post: “Part 2”

You can make a cube appear on screen, textured, using single-pass rendering.

Just about everything else you ever want to do … needs more than one pass.

Apple provides no support for this, so you have to write it yourself (and there’s a surprisingly large amount of boilerplate you need to write here).

3D models, animation, data formats
Support: Partial (very little). See posts: Part 3 and Part 4b

GLKit does one great thing with 3D data formats: it provides a Vector class that all iOS apps/libraries/source can use, and be fully compatible with each other.

But it provides zero support for the rest: meshes, 3D models, VAOs, VBOs, etc.

Error handling and state management
Support: None. See post: Part 4

When you have a bug in your code, GL does nothing. Nor does Apple. Sooner or later you’ll find yourself weeping in frustration.

Performance analysis
Support: Partial (some). Apple makes a lot of noise about how good Instruments is with OpenGL.

This is true, it’s good. But Apple also blocks you from accessing the hardware manufacturers’ own performance tools, which may be better.

If you already know Instruments inside-out (e.g. you know about the invisible RHS-panel…), you’ll be right at home.

Next steps

If you know nothing at all about OpenGL, I recommend skim-reading the first 8 of Jeff LaMarche’s posts on GL ES 1.

NOTE: a *lot* of the detail in Jeff’s posts is now unnecessary (or superseded). But it all helps with understanding the newer/cleaner GL ES 2 (and GLKit) versions. If you get stuck or struggle, skim that post and move on to the next. Each of his posts works well standalone.

Then head on to Part 2 – Drawing and Draw calls.

Also posted to #AltDevBlog at

Preview of a new game: “Peace by other means” – Screenshots 1

I’ve just posted some screenshots + notes for Reddit’s Screenshot Saturday, for the iPad game I’ve been working on for almost 2 years now.


Doesn’t look like much given how long it’s been in development :(, but I’m hoping it’ll speed up from here!

[homepage for the game]

Along the way, I’ve:

  • built the game
  • taught myself advanced Quartz / CoreAnimation
  • written a detailed playable game with AI
  • discovered the hard way that Apple doesn’t use hardware-acceleration properly on iPhone/iPad
  • thrown away the 2D renderer, and re-designed the game around OpenGL + 3D
  • re-written everything in 3D with OpenGL 1.x
  • taught myself OpenGL ES 2.0
  • re-written everything with shaders

Anatomy of a cluster-f*ck: Imagination’s SDK installer

iOS devices (iPhone, iPod Touch, iPad) are powered by 3D chips branded “PowerVR” from a company branded as “Imagination”.

If you want to develop 3D games/apps, you can do that using Apple’s free tools + SDK. But for some of the good stuff – e.g. higher-res textures – you’ll need to dive into PowerVR specifics. This should *in theory* be very, very, very easy. But Imagination does not make it so :(.

All you really need is a few source files, but instead of putting them on their website for you to download, Imagination has wrapped them up in a 1-gigabyte (broken) self-extractor. And it doesn’t work. It *really* doesn’t work. Read on for some of the joys of just how awful something as simple as an “unzip this file” program can get…

UPDATE: I realised after posting that I left out a very important point. Until this mess, I’d found Imagination’s tech guys to be friendly and helpful, and their tools to be useful and to work fine. They were always badly documented (e.g. very bad error handling, missing key facts like “a 64 megabyte texture requires 3 GIGABYTES of RAM to save”, etc – but they essentially “worked”). Maybe I just got lucky until now, but this installer seems a radical departure in terms of quality and testing. For anyone who’s *not* used the PowerVR stuff before, bear this in mind: IME, this experience is not normal. Also: use the forums – the support team seems pretty responsive.

TL;DR – if you want to load PVR textures on iOS, google for “PVR” and “iOS” and “copyright imagination” and find the header files and source that are embedded in a couple of open-source projects from a couple of years back, before Imagination accidentally broke everything.
Continue reading

Parse install is broken; silently requires Facebook SDK

…and the last working version of Parse’s SDK isn’t listed on their website any more.

So … if you see this (which you probably will, on any non-trivial project):

Undefined symbols for architecture i386:
“_FBTokenInformationExpirationDateKey”, referenced from:

…then you MUST download and install the Facebook API into your iOS / Xcode project. Especially if you’re not actually using Facebook!

Why? Parse for iOS is currently set up so that you CANNOT use ANY LIBRARIES AT ALL, unless you ALSO use the Facebook library. Oops.

A little bit of naive linking by the engineers. C-linking is a PITA to get right, so I don’t blame them.

More problems

1. Facebook won’t let you download their API / library any more.

Instead you have to “install an application” on your system that spews files to random places on your machine (where? Well … the app won’t tell you, but on the Facebook website they say it all goes in ~/Documents) – and you’re not allowed to change them.

Wrong place, wrong installer (shouldn’t be hard-coded, shouldn’t “hide” the location). And a pain to deal with, when all that was needed or wanted was a simple ZIP file…

2. Facebook’s latest SDK requires the iOS 6 SDK just to compile – even if you’re not targeting iOS 6. No-one should be hard-coding to iOS 6 yet, though – so I’m surprised that FB targets it as the default. iOS 5 is the main target version of iOS for now.

3. Once you find an iOS 6 SDK, you have to add a bunch of extra frameworks which don’t exist except on iOS 6, and set them to “Optional” in the “Project Settings > Build Phases > Link with Libraries” phase.
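(“Optional” here means weak linking: the app still launches on iOS versions where the framework doesn’t exist, and you check for its classes at runtime. In xcconfig/linker terms it’s roughly the following – Social.framework is just an example of one of those iOS-6-only frameworks:)

```
// Equivalent of marking a framework "Optional" in the
// "Link Binary With Libraries" build phase: weak-link it, so the app
// still launches on older iOS versions where the framework is absent.
OTHER_LDFLAGS = $(inherited) -weak_framework Social
```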

Details are on the FB iOS Getting Started page, although they’re pretty hard to find (they’re hidden inside a drop-down with an unrelated title).

(incidentally: the FB iOS install page has always been way too long, so I suspect someone decided to “tidy it up” by hiding 95% of it. I think a better solution would have been to remove all the cruft, and fix the install process :))

…anyway, once you get past all that, things go smoothly. NB: when I wrote this, I was on a hack-day at Facebook’s offices, and it took 30 minutes to get Parse’s API installed because of the above problems. It would have been even longer if I hadn’t used Facebook in the past, and didn’t already know how to navigate their install page.