UPDATE: This post sucks; it has some useful bits about setting up a breakpoint in the Xcode debugger – but apart from that: I recommend skipping it and going straight to Part 4b instead, which explains things much better.
This is Part 4, and explains how to debug in OpenGL, as well as improving some of the reusable code we’ve been using (Part 1 has an index of all the parts, Part 3 covered Geometry).
Last time, I said we’d go straight to Textures – but I realised we need a quick TIME OUT! to cover some code-cleanup, and explain in detail some bits I glossed-over previously. This post will be a short one, I promise – but it’s the kind of stuff you’ll probably want to bookmark and come back to later, whenever you get stuck on other parts of your OpenGL code.
Cleanup: VAOs, VBOs, and Draw calls
In the previous part, I deliberately avoided going into detail on VAO (Vertex Array Objects) vs. VBO (Vertex Buffer Objects) – it’s a confusing topic, and (as I demonstrated) you only need 5 lines of code in total to use them correctly! Most of the tutorials I’ve read on GL ES 2 were simply … wrong … when it came to using VAO/VBO. Fortunately, I had enough experience with Desktop GL to sidestep that – and I get the impression a lot of GL programmers do the same.
Let’s get this clear, and correct…
To recap, I said last time:
- A VertexBufferObject:
- …is a plain BufferObject that we’ve filled with raw data describing Vertices (i.e. for each Vertex, the buffer holds the value of one or more ‘attributes’)
- Each 3D object will need one or more VBO’s.
- When we do a Draw call, before drawing … we’ll have to “select” the set of VBO’s for the object we want to draw.
- A VertexArrayObject:
- …is a GPU-side thing (or “object”) that holds the state for an array-of-vertices
- It records info on how to “interpret” the data we uploaded (in our VBO’s) so that it knows, for each vertex, which bits/bytes/offset in the data correspond to the attribute value (in our Shaders) for that vertex
Vertex Buffer Objects: identical to any other BufferObject
It’s important to understand that a VBO is a BO, and there’s nothing special or magical about it: everything you can do with a VBO, you can do with any BO. It gets given a different name simply because – at a few key points – you need to tell OpenGL “interpret the data inside this BO as if it’s vertex-attributes … rather than (something else)”. In practice, all that means is that:
Every OpenGL method call that operates on a BO (BufferObject) requires a “type” parameter. Whenever you pass in the type “GL_ARRAY_BUFFER”, you have told OpenGL to use that BO as a VBO. That’s all that “VBO” means.
…the hardware may also (perhaps; it’s up to the manufacturers) do some behind-the-scenes optimization, because you’ve hinted that a particular BO is a VBO – but it’s not required.
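In code, that’s the whole story – these are the standard GL calls, and note that nothing here is VBO-specific until the moment we bind:
[objc]
GLuint bufferName;
glGenBuffers( 1, &bufferName ); // creates a plain BO: it has no type yet
glBindBuffer( GL_ARRAY_BUFFER, bufferName ); // from now on, OpenGL treats this BO as a VBO
[/objc]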
Vertex Buffer Objects: why plural?
In our previous example, we had only one VBO. It contained only one kind of vertex-attribute (the “position” attribute). We used it in exactly one draw call, for only one 3D object.
A BufferObject is simply a big array stored on the GPU, so that the GPU doesn’t have to keep fetching the data from system RAM: transferring from system RAM to the GPU is roughly 10x slower than reading from GPU-local RAM (known as VRAM).
So, as soon as you have any BufferObjects, your GPU has to start doing memory-management on them. It has its own on-board caches (just like a CPU), and it has its own invisible system that intelligently pre-fetches data from your BufferObjects (just like a CPU does). This raises the question:
What’s the efficient way to use BufferObjects, so that the GPU has to do the least amount of shuffling memory around, and can maximize the benefit of its on-board caches?
The short answer is:
Create one single VBO for your entire app, upload all your data (geometry, shader-program variables, everything), and write your shaders and draw-calls to use whichever subsets of that VBO apply to them. Never change any data.
OpenGL ES 2 doesn’t fully support that usage: some of the features necessary to put “everything” into one VBO are missing. Also: if you start to get low on spare memory, if you only have one VBO, you’re screwed. You can’t “unload a bit of it to make more room” – a VBO is, by definition, all-or-nothing.
How do Draw calls relate to VBO’s?
This is very important. When you make a Draw call, you use glVertexAttribPointer to tell OpenGL:
“use the data in BufferObject (X), interpreted according to rule (Y), to provide a value of this attribute for EACH vertex in the object”
…a Draw call has to take the values of a given attribute all at once from a single VBO. Incidentally, this is partly why I made the very first blog post teach you about Draw calls – they are the natural atomic unit in OpenGL, and life is much easier if you build your source-code around that assumption.
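For reference, that’s this function – we’ll come back to its confusingly-named “size” argument shortly:
[objc]
void glVertexAttribPointer( GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid* pointer );
[/objc]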
So, bearing in mind the previous point about wanting to load/unload VBOs at different times … with GL ES 2, you divide up your VBO’s in two key ways, and stick to one key rule:
- Any data that might need to be changed while the program is running … gets its own VBO
- Any data that is needed for a specific draw-call, but not others … gets its own VBO
- RULE: The smallest chunk of data that goes in a VBO is “the attribute values for one attribute … for every vertex in an object”
…you can have the values for more than one Attribute inside a single VBO – but it has to cover all the vertices, for each Attribute it contains.
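To illustrate the two splits plus the RULE, here’s a sketch in plain GL calls (the triangle data here is made-up for the example):
[objc]
// Split 1 in action: positions are uploaded once and never change, but colours
// get re-uploaded at runtime ... so each gets its own VBO.
GLKVector3 positions[3] = { GLKVector3Make(-1,-1,0), GLKVector3Make(1,-1,0), GLKVector3Make(0,1,0) };
GLKVector4 colours[3] = { GLKVector4Make(1,0,0,1), GLKVector4Make(0,1,0,1), GLKVector4Make(0,0,1,1) };

GLuint positionVBO, colourVBO;
glGenBuffers( 1, &positionVBO );
glBindBuffer( GL_ARRAY_BUFFER, positionVBO );
glBufferData( GL_ARRAY_BUFFER, 3 * sizeof(GLKVector3), positions, GL_STATIC_DRAW ); // never changes

glGenBuffers( 1, &colourVBO );
glBindBuffer( GL_ARRAY_BUFFER, colourVBO );
glBufferData( GL_ARRAY_BUFFER, 3 * sizeof(GLKVector4), colours, GL_DYNAMIC_DRAW ); // will be re-uploaded

// ...and both VBOs obey the RULE: each one covers ALL 3 vertices for its Attribute
[/objc]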
A simple VBO class (only allows one Attribute per VBO)
For highest performance, you normally want to put multiple Attributes into a single VBO … but there are many occasions where you’ll only use 1:1, so let’s start there.
GLK2BufferObject.h
[objc]
#import <Foundation/Foundation.h>
#import <GLKit/GLKit.h>

// NB: the GLK2BufferObjectFrequency / GLK2BufferObjectNature typedefs, and a convenience
// factory method "vertexBufferObject", live in this header too – see below / GitHub

@interface GLK2BufferObject : NSObject

@property(nonatomic, readonly) GLuint glName;
@property(nonatomic) GLenum glBufferType;
@property(nonatomic) GLsizeiptr bytesPerItem;
@property(nonatomic, readonly) GLuint sizePerItemInFloats;

-(GLenum) getUsageEnumValueFromFrequency:(GLK2BufferObjectFrequency) frequency nature:(GLK2BufferObjectNature) nature;
-(void) upload:(void *) dataArray numItems:(int) count usageHint:(GLenum) usage;

@end
[/objc]
The first two properties are fairly obvious. We have our standard “glName” (everything has one), and we have a glBufferType, which is set to GL_ARRAY_BUFFER whenever we want the BO to become a VBO.
To understand the next part, we need to revisit the 3 quick-n-dirty lines we used in the previous article:
(from previous blog post)
glGenBuffers( 1, &VBOName );
glBindBuffer(GL_ARRAY_BUFFER, VBOName );
glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);
…the first two lines are simply creating a BO/VBO, and storing its name. And we’ll be able to automatically supply the “GL_ARRAY_BUFFER” argument from now on, of course. Looking at that last line, the second-to-last argument is “the array of data we created on the CPU, and want to upload to the GPU” … but what’s the second argument? A hardcoded “3 * (something)”? Ouch – very bad practice, hardcoding a digit with no explanation. Bad coder!
glBufferData requires, as its second argument:
(2nd argument): The total amount of RAM I need to allocate on the GPU … to store this array you’re about to upload
In our case, we were uploading 3 vertices (one for each corner of a triangle), and each vertex was defined using a GLKVector3. The C operator “sizeof” is a very useful one that measures how many bytes a particular type uses up in memory.
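For example, GLKVector3 is defined as three 32-bit floats, so on iOS:
[objc]
NSLog( @"one vertex = %lu bytes", (unsigned long) sizeof(GLKVector3) ); // prints 12: 3 floats x 4 bytes each
NSLog( @"whole triangle = %lu bytes", (unsigned long) (3 * sizeof(GLKVector3)) ); // prints 36: what that hardcoded line asked glBufferData to allocate
[/objc]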
So, for our GLK2BufferObject class to automatically run glBufferData calls in future, we need to know how much RAM each attribute-value occupies:
[objc]
@property(nonatomic) GLsizeiptr bytesPerItem;
[/objc]
But, when we later told OpenGL the format of the data inside the VBO, we used the line:
(from previous blog post)
glVertexAttribPointer( attribute.glLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
…and if you read the OpenGL method-docs, you’d see that the 2nd argument there is also called “size” – but we used a completely different number!
And, finally, when we issue the Draw call, we use the number 3 again, for a 3rd kind of ‘size’:
(from previous blog post)
glDrawArrays( GL_TRIANGLES, 0, 3); // this 3 is NOT THE SAME AS PREVIOUS 3 !
WTF? Three definitions of “size” – O, RLY?
Ya, RLY.
- glBufferData: measures size in “number of bytes needed to store one Attribute-value”
- glVertexAttribPointer: measures size in “number of floats required to store one Attribute-value”
- glDrawArrays: measures size in “number of vertices to draw, out of the ones in the VBO” (you can draw fewer than all of them)
For the final one – glDrawArrays – we’ll store that data (how many vertices to “draw”) in the GLK2DrawCall class itself. But we’ll need to store the info for glVertexAttribPointer inside each VBO:
[objc]
@property(nonatomic,readonly) GLuint sizePerItemInFloats;
[/objc]
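(It’s read-only because it can be derived. Here’s a minimal sketch of the getter, assuming every Attribute-value is made purely of floats – see the GitHub source below for the real implementation:)
[objc]
-(GLuint) sizePerItemInFloats
{
	NSAssert( self.bytesPerItem % sizeof(GLfloat) == 0, @"bytesPerItem isn't a whole number of floats" );
	
	return (GLuint) (self.bytesPerItem / sizeof(GLfloat)); // e.g. sizeof(GLKVector3) = 12 bytes -> 3 floats
}
[/objc]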
Refactoring the old “glBufferData” call
Now we can implement GLK2BufferObject.m, and remove the hard-coded numbers from our previous source code:
GLK2BufferObject.m:
[objc]
…
-(void) upload:(void *) dataArray numItems:(int) count usageHint:(GLenum) usage
{
	NSAssert( self.bytesPerItem > 0, @"Can't call this method until you've configured a data-format for the buffer by setting self.bytesPerItem");
	NSAssert( self.glBufferType > 0, @"Can't call this method until you've configured a GL type ('purpose') for the buffer by setting self.glBufferType");
	
	glBindBuffer( self.glBufferType, self.glName );
	glBufferData( self.glBufferType, count * self.bytesPerItem, dataArray, usage ); // NB: self.glBufferType, not a hard-coded GL_ARRAY_BUFFER
}
[/objc]
The only special item here is “usage”. Previously, I used the value “GL_DYNAMIC_DRAW”, which doesn’t do anything specific, but warns OpenGL that we might choose to re-upload the contents of this buffer at some point in the future. More correctly, you have a bunch of different options for this “hint” – if you look at the full source on GitHub, you’ll see a convenience method and two typedefs that handle this for you, and explain the different options.
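To give you the idea without clicking through, here’s a sketch of that convenience method and the two typedefs. The three frequency case-names are my own illustration (check the GitHub source for the real ones), but the mapping onto OpenGL’s usage hints is the standard one:
[objc]
typedef enum GLK2BufferObjectFrequency
{
	GLK2BufferObjectFrequencyRarely, // upload once, draw many times
	GLK2BufferObjectFrequencyOccasionally, // re-uploaded now and then
	GLK2BufferObjectFrequencyAlmostEveryFrame // re-uploaded nearly every frame
} GLK2BufferObjectFrequency;

typedef enum GLK2BufferObjectNature
{
	GLK2BufferObjectNatureDraw // app writes the data, GL draws from it (GL ES 2 only supports the *_DRAW hints)
} GLK2BufferObjectNature;

-(GLenum) getUsageEnumValueFromFrequency:(GLK2BufferObjectFrequency) frequency nature:(GLK2BufferObjectNature) nature
{
	switch( frequency ) // nature is always ...Draw in GL ES 2, so only frequency matters here
	{
		case GLK2BufferObjectFrequencyRarely:
			return GL_STATIC_DRAW;
			
		case GLK2BufferObjectFrequencyOccasionally:
			return GL_DYNAMIC_DRAW;
			
		case GLK2BufferObjectFrequencyAlmostEveryFrame:
			return GL_STREAM_DRAW;
	}
	
	NSAssert( FALSE, @"Unknown frequency value" );
	return GL_STATIC_DRAW; // keeps the compiler happy
}
[/objc]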
Source for: GLK2BufferObject.h and GLK2BufferObject.m
- GLK2BufferObject.h – link to GitHub because it would make the blog post too long to insert it here
- GLK2BufferObject.m – link to GitHub because it would make the blog post too long to insert it here
What’s a VAO again?
A VAO/VertexArrayObject:
VertexArrayObject: stores the metadata for “which VBOs are you using, what kind of data is inside them, how can a ShaderProgram read and interpret that data, etc”
We’ll start with a new class with the (by now: obvious) properties and methods:
GLK2VertexArrayObject.h
[objc]
#import <Foundation/Foundation.h>
#import "GLK2BufferObject.h"
#import "GLK2Attribute.h"
@interface GLK2VertexArrayObject : NSObject
@property(nonatomic, readonly) GLuint glName;
@property(nonatomic,retain) NSMutableArray* VBOs;
/** Delegates to the other method, defaults to using "GL_STATIC_DRAW" as the BufferObject update frequency */
-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems;
/** Fully configurable creation of VBO + upload of data into that VBO */
-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems updateFrequency:(GLK2BufferObjectFrequency) freq;
@end
[/objc]
The method at the end is where we move the very last bit of code from the previous blog post – the stuff about glVertexAttribPointer. We also combine it with automatically creating the necessary GLK2BufferObject, and calling the “upload:numItems:usageHint:” method:
GLK2VertexArrayObject.m:
[objc]
…
-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems updateFrequency:(GLK2BufferObjectFrequency) freq
{
	/** Create a VBO on the GPU, to store data */
	GLK2BufferObject* newVBO = [GLK2BufferObject vertexBufferObject];
	newVBO.bytesPerItem = bytesPerDataItem;
	[self.VBOs addObject:newVBO]; // so we can auto-release it when this class deallocs
	
	/** Send the vertex data to the new VBO */
	[newVBO upload:data numItems:numDataItems usageHint:[newVBO getUsageEnumValueFromFrequency:freq nature:GLK2BufferObjectNatureDraw]];
	
	/** Configure the VAO (state) */
	glBindVertexArrayOES( self.glName );
	
	glEnableVertexAttribArray( targetAttribute.glLocation );
	GLsizei stride = 0;
	glVertexAttribPointer( targetAttribute.glLocation, newVBO.sizePerItemInFloats, GL_FLOAT, GL_FALSE, stride, 0 );
	
	glBindVertexArrayOES( 0 ); // unbind the vertex array, as a precaution against accidental changes by other classes
	
	return newVBO;
}
[/objc]
Source for: GLK2VertexArrayObject.h and GLK2VertexArrayObject.m
- GLK2VertexArrayObject.h – link to GitHub because it would make the blog post too long to insert it here
- GLK2VertexArrayObject.m – link to GitHub because it would make the blog post too long to insert it here
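Putting it all together: the entire triangle-setup from the previous part now collapses to a few lines. This is a sketch – “positionAttribute” stands for the GLK2Attribute you fetched from the linked ShaderProgram in Part 3, and z is whatever depth you were using there:
[objc]
GLKVector3 cpuBuffer[] =
{
	GLKVector3Make(-1,-1, z),
	GLKVector3Make( 1,-1, z),
	GLKVector3Make( 0, 1, z)
};

GLK2VertexArrayObject* vao = [[GLK2VertexArrayObject alloc] init];
[vao addVBOForAttribute:positionAttribute filledWithData:cpuBuffer bytesPerArrayElement:sizeof(GLKVector3) arrayLength:3]; // defaults to GL_STATIC_DRAW
[/objc]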
Gotcha: The magic of OpenGL shader type-conversion
This is also a great time to point-out some sleight-of-hand I did last time.
In our source-code for the Shader, I declared our attribute as:
attribute vec4 position;
…and when I declared the data on CPU that we uploaded, to fill-out that attribute, I did:
GLKVector3 cpuBuffer[] =
{
GLKVector3Make(-1,-1, z)
…
Anyone with sharp eyes will notice that I uploaded “vector3” (data in the form: x,y,z) to an attribute of type “vector4” (data in the form: x,y,z,w). And nothing went wrong. Huh?
The secret here is twofold:
- OpenGL’s shader-language is forgiving and smart; if you give it a vec3 where it needs a vec4, it will up-convert automatically (the missing components get default values: 0 for x/y/z, and 1 for w)
- We told all of OpenGL “outside” the shader-program: this buffer contains Vector3’s! Each one has 3 floats! Note: That’s THREE! Not FOUR!
…otherwise, I’d have had to define our triangle using 4 co-ordinates – and what the heck is the correct value of w anyway? Better not to even go there (for now). All of this “just works” thanks to the code we’ve written above, in this post. We explicitly tell OpenGL how to interpret the contents of a BufferObject, even though the data doesn’t exactly match the type the shader declares.
Errors – ARGH!
We’re about to deal with “textures” in OpenGL – but we have to cover something critical first.
In previous parts, each small feature has required only a few lines of code to achieve even the most complex outcomes … apart from “compiling and linking Shaders”, which used many lines of boilerplate code.
Texture-mapping is different; this is where it gets tough. Small typos will kill you – you’ll get “nothing happened”, and debugging will be next to impossible. It’s time to learn how to debug OpenGL apps.
OpenGL debugging: the glGetError() loop
There are three ways that APIs / libraries typically return errors:
- (very old, C-only APIs): an integer return code from every method – “0” for success, and “any other number” for failure, with each number flagging a different cause / kind of error
- (old, outdated APIs): an “error” pointer that you pass in, and which MAY be filled-in with an error if things go wrong. Apple does a variant of this with most of their APIs, although they no longer need to (it used to be required; the problems that forced it have been fixed, and Exceptions work fine now)
- (modern programming languages and APIs): if something goes wrong, an Exception is thrown (modern languages do some Clever Tricks that make this just as fast as the older approaches, but much less error-prone to write code with)
Then there’s another way. An insane, bizarre, way … from back when computers were so new, even the C-style approach hadn’t become “standard” yet. This … is what OpenGL uses:
- Every method always succeeds, even when it fails
- If it fails, a “global list of errors” is created, and the error added to the end
- No error is reported – no warnings, no messages, no console-logs … nothing
- If other methods fail, they append to the list of errors
- At any time, you can “read” the oldest error, and remove it from the list
In fairness, there was good reason behind it. They were trying to make an error-reporting system that was so high-performance it had zero impact on the runtime. They were also trying to make it work over the network (early OpenGL hardware was so special/expensive, it wasn’t even in the same machine you ran your app on – it lived on a mainframe / supercomputer / whatever in a different room in your office).
It’s important to realise that the errors are on a list – if you only call “if( isError )” you’ll only check the first item on the list. By the time you check for errors, there may be more-than-one error stacked up. So, in OpenGL, we do our error checking in a while-loop: “while( thereIsAnotherError ) … getError … handleError”.
UPDATE: ignore the rest, use this (Xcode5)
Xcode5 now does 95% of the work for you, in 3 clicks – this is awesome.
Select your Breakpoints tab, hit the (hard to find) plus button at bottom left, and select “OpenGL ES Error”.
This is a magic breakpoint where OpenGL will catch EVERY GL error as soon as it happens and pause in the debugger for you. You should have this permanently enabled while developing!
(if you’re not familiar with Xcode’s catch-all breakpoints, the other one that most devs have always-on is “Exception Breakpoint”, which makes the debugger pause whenever it hits an Exception, and you can see the exact state of your program at the moment the Exception was created. It’s not 100% perfect – some 3rd party libraries (e.g. TestFlight, IIRC) create temporary Exceptions pre-emptively, and get annoying quickly. But it’s pretty good)
What follows is generic code (not dependent on IDE version). I’ll leave it here as an FYI – and in case you ever need to reproduce this logging at runtime, without the debugger (e.g. for remote upload of crash logs to TestFlight or Hockey). But for simple cases: use the Xcode5 feature instead
Using glGetError()
Technically, OpenGL requires you to alternate EVERY SINGLE METHOD CALL with a separate call to “glGetError()”, to check if the previous call had any errors.
If you do NOT do this, you LOSE THE INFORMATION about which call caused the error.
And since OpenGL ERRORS ARE 100% CONTEXT-SENSITIVE … losing that context MAKES THE ERROR TEXT MEANINGLESS.
Painful? Yep. Sorry.
To make it slightly less painful, OpenGL’s “getError()” function also “removes that error from the start of the list” automatically. So you only use one call to achieve both “get-the-current-error”, and “move-to-the-next-one”.
Here’s the source code you have to implement. After every OpenGL call (any method beginning with the letters “gl”):
[objc]
GLenum glErrorLast;
while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
{
	NSLog(@"GL Error: %i", glErrorLast );
}
[/objc]
This (obviously) makes your source code absurdly complex, completely unreadable, and almost impossible to maintain. In practice, most people do this:
- Create a global function that handles all the error checking, and import it to every GL class in your app
- Call this function:
- Once at the start of each “frame” (remember: frames are arbitrary in OpenGL, up to you to define them)
- Once at the start AND end of each “re-usable method” you write yourself – e.g. a “setupOpenGL” method, or a custom Texture-Loader
- …
- When something breaks, start inserting calls to this function BACKWARDS from the point of first failure, until you find the line of code that actually errored. You have to re-compile / build / test after each insertion. Oh, the pain!
From this post onwards, I will be inserting calls to this function in my sample code, and I won’t mention it further
Standard code for the global error checker
The basic implementation was given above … but we can do a lot better than that. And … since OpenGL debugging is so painful … we really need to do better than that!
We’ll start by converting it into a C-function that can trivially be called from any class OR C code:
[objc]
void gl2CheckAndClearAllErrors()
{
	GLenum glErrorLast;
	while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
	{
		NSLog(@"GL Error: %i", glErrorLast );
	}
}
[/objc]
Improvement 1: Print-out the GL_* error type
OpenGL only allows 6 legal “error types”. All gl method calls have to re-use those 6 types; they aren’t allowed sub-types, aren’t allowed parameters, and aren’t allowed “error messages” to go with them. This is crazy, but true.
First improvement: include the error type in the output.
[objc]
…
/** OpenGL spec defines only 6 legal errors, that HAVE to be re-used by all gl method calls. OH THE PAIN! */
NSDictionary* glErrorNames = @{ @(GL_INVALID_ENUM) : @"GL_INVALID_ENUM", @(GL_INVALID_VALUE) : @"GL_INVALID_VALUE", @(GL_INVALID_OPERATION) : @"GL_INVALID_OPERATION", @(GL_STACK_OVERFLOW) : @"GL_STACK_OVERFLOW", @(GL_STACK_UNDERFLOW) : @"GL_STACK_UNDERFLOW", @(GL_OUT_OF_MEMORY) : @"GL_OUT_OF_MEMORY" };

while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
{
	NSLog(@"GL Error: %@", [glErrorNames objectForKey:@(glErrorLast)] );
}
[/objc]
Improvement 2: report the filename and line number for the source file that errored
Using a couple of C macros, we can get the file-name, line-number, method-name etc automatically:
[objc]
…
NSLog(@"GL Error: %@ in %s @ %s:%d", [glErrorNames objectForKey:@(glErrorLast)], __PRETTY_FUNCTION__, __FILE__, __LINE__ );
…
[/objc]
Improvement 3: automatically breakpoint / stop the debugger
You know about NSAssert / NSCAssert, right? If not … go read about them. They’re a clever way to do Unit-Testing-style checks inside your live application code, with very little effort – and they automatically get compiled-out when you ship your app.
We can add an “always-fails (i.e. triggers)” Assertion whenever there’s an error. If you configure Xcode to “always breakpoint on Assertions” (should be the default), Xcode will automatically pause whenever you detect an OpenGL error:
UPDATE: As Chris Ross pointed out, I made a stupid mistake here. For the __FILE__ etc macros to work the way they’re intended (referencing the caller’s actual source lines), you need to make the call itself a macro, so that the compiler re-embeds them at each call-site. Code below modified accordingly.
Header:
[objc]
#define gl2CheckAndClearAllErrors() _gl2CheckAndClearAllErrorsImpl(__PRETTY_FUNCTION__,__FILE__,__LINE__)

void _gl2CheckAndClearAllErrorsImpl(const char *source_function, const char *source_file, int source_line);
[/objc]
Class:
[objc]
#include <stdio.h>

void _gl2CheckAndClearAllErrorsImpl(const char *source_function, const char *source_file, int source_line)
{
	GLenum glErrorLast;
	while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
	{
		NSDictionary* glErrorNames = … // the same 6-entry dictionary as above
		
		/** NB: log the values passed in by the macro – NOT the __FILE__ etc macros, which would point at THIS file instead of the caller's */
		NSLog(@"GL Error: %@ in %s @ %s:%d", [glErrorNames objectForKey:@(glErrorLast)], source_function, source_file, source_line );
		
		NSCAssert( FALSE, @"OpenGL Error; you need to investigate this!" ); // can't use NSAssert, because we're inside a C function
	}
}
[/objc]
… see how we create a macro that looks like the function, but expands into the function we need it to be.
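Usage is then one extra line after any gl* call you’re suspicious of (the bind here is just example caller code):
[objc]
glBindVertexArrayOES( vao.glName );
gl2CheckAndClearAllErrors(); // logs + asserts with the CALLER's function/file/line, thanks to the macro
[/objc]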
Improvement 4: make it vanish from live App-Store builds
By default, Xcode defines a special preprocessor value – DEBUG – for all Debug (i.e. development) builds, and leaves it undefined for App Store builds.
Let’s wrap our code in an “#if” check that uses this. That way, when we ship our final build to the App Store, all the gl error detection compiles-out. The errors do us no good at that point anyway: users won’t be running the app in a debugger, and since OpenGL errors are context-sensitive, error reports from users are worth very little.
(unless you’re using a remote logging setup, e.g. Testflight/HockeyApp/etc … but in that case, you’ll know what to do instead)
[objc]
void _gl2CheckAndClearAllErrorsImpl(char *source_function, char *source_file, int source_line)
{
#if DEBUG
…
#endif
}
[/objc]
Source for: GLK2GetError.h and GLK2GetError.m
- GLK2GetError.h – link to GitHub because it would make the blog post too long to insert it here
- GLK2GetError.m – link to GitHub because it would make the blog post too long to insert it here
End of part 4
Next time – I promise – will be all about Textures and Texture Mapping. No … really!