OpenGL ES 2: Basic Drawing

UPDATED 24/09/13: Added some essential details to the class files at end of post, and corrected typos

UPDATED 24/09/13: Added Github project with full source from this article

This is Part 2. Part 1 was an overview of where Apple’s GLKit helps with OpenGL ES setup and rendering.

I’ve been using OpenGL ES 2 for less than a year, so if you see anything odd here, I’ve probably got it wrong – either a mix-up with desktop OpenGL, or just a typo. Comment/ask if you’re not sure.

2D APIs, Windowing systems, Widgets, and Drawing

Windowing systems draw like this:

  1. The app uses a bunch of Widget classes (textboxes (UILabel/UITextView), buttons, images (UIImageView), etc)
  2. Each widget is implemented the same way: it draws colours onto a background canvas
  3. The canvas (UIView + CALayer) is a very simple class that provides a rectangular area of pixels, and gives you various ways of setting the colours of any/all of those pixels
  4. A window displays itself using one or more canvases
  5. When something “changes”, the windowing system finds the highest-level data to change, re-draws that, and sticks it on the screen. Usually: the canvas(es) for a small set of views

Under the hood, windowing systems draw like this:

  1. Each canvas saves its pixels on the GPU
  2. The OS and GPU keep sending those raw pixels at 60Hz onto your monitor/screen
  3. When a widget changes, the canvas DELETES its pixels, re-draws them on the CPU, uploads the new “saved” values to the GPU, and goes back to doing nothing

The core idea is: “if nothing has changed, do nothing”. The best way to slow down an app is to keep telling the OS/windowing system “this widget/canvas has changed, re-draw it” as fast as you can. Every time you do that, NOT ONLY do you have to re-draw it (CPU cost), BUT ALSO the CPU has to upload the saved pixels onto the GPU, so that the OS can draw it to the screen.
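That “do nothing” rule boils down to a dirty flag. Here’s a hypothetical sketch in plain C – the names echo UIKit’s vocabulary, but this is an illustration, not the real API:

```c
#include <stdbool.h>

/* Hypothetical sketch of "if nothing has changed, do nothing".
   Names echo UIKit's vocabulary, but this is NOT the real API. */
typedef struct {
    bool dirty;
    int  cpuRedraws;   /* times we re-rendered pixels on the CPU */
    int  gpuUploads;   /* times we re-uploaded those pixels to the GPU */
} Widget;

static void setNeedsDisplay(Widget *w) { w->dirty = true; }

static void displayIfNeeded(Widget *w) {
    if (!w->dirty) return;   /* cached pixels on the GPU are still valid */
    w->cpuRedraws++;         /* redraw into the canvas (CPU cost) */
    w->gpuUploads++;         /* upload the new pixels (CPU-to-GPU cost) */
    w->dirty = false;
}
```

Over 60 screen refreshes with nothing changed, `displayIfNeeded` costs nothing; one `setNeedsDisplay` triggers exactly one CPU redraw and one GPU upload.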

OpenGL and Drawing

That sounds great. It leads to good app design. Clean OOP. etc.

But OpenGL does it differently.

Instead, OpenGL starts out by saying:

We’ll redraw everything, always, every single refresh of the monitor. If you change your code to re-draw something “every frame”, then with OpenGL … there is no change in performance, because we were doing that anyway.

(Desktop graphics chips usually have dedicated hardware for each different part of the OpenGL API. There’s no point in “not using” features frame-to-frame if the hardware is there and sitting idle. With mobile GPUs, some hardware is duplicated, some isn’t. Usually the stuff you most want is “shared” between the features you’re using, so just like normal CPU code: nothing is free. But it’s worth checking on a feature-by-feature basis, because sometimes it is exactly that: free)

When people say “it’s fast” this is partly what they mean: OpenGL is so blisteringly fast that every frame, at full speed, it can do the things you normally do “as sparingly as possible” in your widgets.

Multiple processors: CPUs vs GPUs … and Shaders

This worked really well in the early days of workstations, when the CPU was no faster than the motherboard, and everything in the computer ran at the same speed. But with modern computer hardware, the CPU normally runs many times faster than the rest of the system, and it’s a waste to “slow it down” to the speed of everything else – which is partly why windowing systems work the way they do.

With modern systems, we also have a “second CPU” – the GPU – which is also running very fast, and is also slowed down by the rest of the system. Current-gen phones have multiple CPU cores *and* multiple GPU cores. That’s a lot of processors you have to keep fed with data… It’s something you’ll try to take advantage of a lot. For instance, in Apple’s performance guide for iOS OpenGL ES, they give the example of having the CPU alternate between multiple rendering tasks to give the GPU time to work on the last lot:

Instead of this:

[figure omitted: Apple’s diagram of the CPU and GPU working serially, each idle while the other works]

…do this:

[figure omitted: Apple’s diagram of the CPU preparing the next frame while the GPU renders the previous one]
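Apple’s recommended pattern can be modelled as a two-slot ping-pong: the CPU fills one slot while the GPU drains the other. A hypothetical sketch (the struct and function names here are mine, not Apple’s):

```c
/* Hypothetical model of the pattern from Apple's performance guide:
   two command buffers; the CPU fills one while the GPU drains the other. */
typedef struct {
    int commands[2];   /* per-slot counters: commands the CPU has queued */
    int drained[2];    /* commands the GPU has consumed from each slot */
} PingPong;

/* CPU prepares frame N into slot N % 2; GPU consumes the slot filled last frame */
static void run_frame(PingPong *p, int frame) {
    int cpuSlot = frame % 2;
    int gpuSlot = (frame + 1) % 2;
    p->commands[cpuSlot] += 1;                   /* CPU: encode this frame */
    p->drained[gpuSlot] = p->commands[gpuSlot];  /* GPU: render last frame */
}
```

Neither processor ever waits on the other: each frame, both slots are busy at once.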

OpenGL approaches this by running your code in multiple places at once, in parallel:

  1. A lot of your code runs on the CPU, like normal
  2. A lot of your code appears to run on the CPU, but is a facade: it’s really running on the GPU
  3. Some of your code runs on the GPU, and you have to “send” it there first
  4. Most of your code *could be* running on CPU or GPU, but it’s up to your hardware + hardware drivers to decide exactly where

Most “gl” functions that you call in your code don’t execute code themselves. Instead, they’re in item 2: they run on the CPU, but only for a tiny fraction of time, just long enough to signal to the GPU that *it* should do some work “on your behalf”, and to do that work “at some time in the future. When you’re ready. KTHNXBYE”.
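That facade behaviour can be sketched in a few lines of C. This is a hypothetical model (`FakeContext`, `record`, and `draw` are invented for illustration; they’re not GL calls) of commands being cheaply recorded, then all executed at once when the “go!” signal arrives:

```c
#include <stddef.h>

/* Hypothetical sketch: most gl* calls just record a command and return;
   nothing executes until a draw call flushes the batch. */
enum Cmd { CMD_SET_CLEAR_COLOR, CMD_CLEAR, CMD_DRAW };

typedef struct {
    enum Cmd queued[64];
    size_t   queuedCount;
    size_t   executedCount;   /* how many commands the "GPU" has run */
} FakeContext;

/* cheap: append to the queue and return immediately */
static void record(FakeContext *c, enum Cmd cmd) {
    c->queued[c->queuedCount++] = cmd;
}

/* the draw call is the "go!" signal: run everything batched so far */
static void draw(FakeContext *c) {
    record(c, CMD_DRAW);
    c->executedCount = c->queuedCount;  /* execute the whole batch, in order */
    c->queuedCount = 0;
}
```

Until `draw` runs, nothing has executed – exactly the “at some time in the future. When you’re ready.” behaviour described above.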

The third item is Shaders (ES 2 only has: Vertex Shaders + Fragment Shaders; GL ES 3 and above have more), and GLSL (the GL Shading Language – a small C-like language in its own right, defined as part of the OpenGL spec).

Of course, multi-threaded programming is more complex than single-threaded programming. There are many new and subtle ways that it can go wrong. It’s easy to accidentally destroy performance – or worse: destroy correctness, so that your app does something different from what your source code seems to be telling it to do.

Thankfully, OpenGL simplifies it a lot. In practice, you usually forget that you’re writing multi-threaded code – all the traditional stuff you’d worry about is taken care of for you. But it leads (finally) to OpenGL’s core paradigm: Draw Calls.

Draw calls (and Frames)

Combine multi-threaded code with parallel-processors, and combine that with facade code that pretends to be on CPU but actually runs on GPU .. and you have a recipe for source-code disasters.

The OpenGL API effectively solves this by organizing your code around a single recurring event: the Draw call.

(NB: not the “frame”. Frames (as in “Frames Per Second”) don’t exist. They’re something that 3D engines (written on top of OpenGL) create as an abstraction – but OpenGL doesn’t know about them and doesn’t care. This difference matters when you get into special effects, where you often want to blur the line between “frames”)

It’s a simple concept. Sooner or later, if you’re doing graphics, you’ll need “to draw something”. OpenGL ES can only draw 3 things: triangles, lines, and points. OpenGL provides many methods for different ways of doing this, but each of them starts with the word “draw”, hence they’re collectively known as “draw calls”.
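A quick illustration of those three primitive types: each consumes a fixed number of vertices. This is a hypothetical helper (my own enum, loosely mirroring the modes you’d pass to a GL draw call such as `glDrawArrays`), not part of OpenGL:

```c
/* Hypothetical helper: how many primitives a draw call produces from N
   vertices, for the three things ES 2 can draw. The enum is invented
   for illustration (real GL uses GL_POINTS / GL_LINES / GL_TRIANGLES). */
enum Primitive { PRIM_POINTS, PRIM_LINES, PRIM_TRIANGLES };

static int primitiveCount(enum Primitive mode, int vertexCount) {
    switch (mode) {
        case PRIM_POINTS:    return vertexCount;      /* 1 vertex per point */
        case PRIM_LINES:     return vertexCount / 2;  /* 2 vertices per line */
        case PRIM_TRIANGLES: return vertexCount / 3;  /* 3 per triangle */
    }
    return 0;
}
```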

When you execute a Draw call, the hardware could do anything. But conceptually it does this:

  1. The CPU sends a message to the GPU: “draw this (set of triangles, lines, or points)”
  2. The GPU gathers up *all* messages it’s had from the CPU (since the last Draw call)
  3. The GPU runs them all at once, together with the Draw call

Technically, OpenGL’s multiprocessor paradigm is “batching”: it takes all your commands, caches them without executing them … until you give it the final “go!” command (a Draw call). It then runs them all in the order you sent them.

(understanding this matters a lot when it comes to performance, as we’ll soon see)

Anatomy of a Draw call

A Draw call implicitly or explicitly contains:

  • A Scissor (masks part of the screen)
  • A Background Colour (wipe the screen before drawing)
  • A Depth Test (in 3D: if the object in the Draw call is “behind” what’s on screen already, don’t draw it)
  • A Stencil Test (Scissors 2.0: much more powerful, but much more complex)
  • Geometry (some triangles to draw!)
  • Draw settings (performance optimizations for how the triangles are stored)
  • A Blend function, usually for Alpha/transparency handling
  • Fog
  • Dithering on/off (very old feature for situations where you’re using small colour palettes)
  • Culling (automatically ignore “The far side” of 3D objects (the part that’s invisible to your camera))
  • Clipping (if something’s behind the camera: don’t waste time drawing it!)
  • Lighting/Colouring (in OpenGL ES 2: lighting and colouring a 3D object are *the same thing*. NB: in GL ES 1, they were different!)
  • Pixel output (something the monitor can display … or you can do special effects on)

NB: everything in this list is optional! If you don’t do ANY of them, OpenGL won’t complain – it won’t draw anything, of course, but it won’t error either.
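The list above can be pictured as a plain C struct of per-draw-call state, every field defaulting to “off”/“do nothing”. This is a hypothetical sketch (all names invented; real GL scatters this state across many `glEnable`/`gl*` calls):

```c
#include <stdbool.h>

/* Hypothetical sketch of per-draw-call state; everything is optional,
   and a zero-initialized (all-off) draw call is legal - it just does nothing. */
typedef struct {
    bool  scissorEnabled;
    bool  clearColorBit;       /* wipe the screen first? */
    bool  depthTestEnabled;
    bool  stencilTestEnabled;
    bool  blendEnabled;
    bool  ditherEnabled;
    bool  cullingEnabled;
    int   vertexCount;         /* 0 = no geometry: GL won't complain, won't draw */
    float clearColour[4];      /* RGBA */
} DrawCallState;

static const DrawCallState kEmptyDrawCall; /* zero-initialized: all off */

/* an "empty" draw call is legal - it simply renders nothing, with no error */
static bool rendersAnything(const DrawCallState *d) {
    return d->clearColorBit || d->vertexCount > 0;
}
```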

That’s a mix of simple data, complex data, and “massively complex data (the geometry) and source code (the lighting model)”.

OpenGL ES 1 had all of the above, but the “massively complex” bit was proving way too complex to design an API for, so the OpenGL committee adopted a whole new programming language for it, wrapped it up, and shoved it off to the side as Shaders.

In GL ES 2, all the above is still there, but half of “Geometry” and half of “Lighting/Colouring” have been *removed*: bits of OpenGL that provided them don’t exist any more, and instead you have to do them inside your shaders.

(the bits that stayed in OpenGL are “triangle/vertex data” (part of Geometry) and “textures” (part of Lighting/Colouring). This also explains why those two parts are two of the hardest bits of OpenGL ES 2 to get right: they’re mostly unchanged from many years ago. By contrast, Shaders had the benefit of being invented after people had long experience with OpenGL, and they were simplified and designed accordingly)

Apple’s EAGLContext and CAEAGLLayer

At some point, Apple has to interface between the cross-platform, hardware-independent OpenGL … and their specific language/platform/Operating System.

The EAGL* classes are where the messy stuff happens; they’ve been around since the earliest versions of iOS, and they’re pretty basic.

These days, EAGLContext only handles two things of any interest to us:

  1. Allow multiple CPU threads to run independently, without needing any complex threading code (OpenGL doesn’t support multi-threading on the CPU)
  2. Load textures in the background, while continuing to RENDER TO SCREEN the current 3D scene (the hardware is capable of doing both at once)

In practice … all you need to remember is:

All OpenGL method calls will fail, crash, or do nothing … unless there is a valid EAGLContext object in memory AND you’ve called “setCurrentContext” on it

and:

For fancy stuff later on, you might need to pre-create an EAGLContext, rather than create one on the fly

Later on, when we create a ViewController, we’ll insert the following code to cover both of these:

// ... in the header:

@property(nonatomic,retain) EAGLContext* localContext;

// ... in the viewDidLoad: (or anywhere that's guaranteed to happen first!)

/** Creating and "making current" an EAGLContext must be the very first thing any OpenGL app does! */
	if( self.localContext == nil )
	{
		self.localContext = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] autorelease];
	}
	NSAssert( self.localContext != nil, @"Failed to create ES context");

UPDATED October 2013: A bug!

I said:

“All OpenGL method calls will fail, crash, or do nothing … unless there is a valid EAGLContext object in memory AND you’ve called “setCurrentContext” on it”

But I forgot to include the “setCurrentContext” call in the example source code. This is a common bug with GLKit/iOS apps: there are NO ERRORS if you fail to setCurrentContext; your app silently draws nothing. This is a major flaw in Apple’s integration of OpenGL.

There are two things we do about this. First, fix the bug!

ViewController.m:

-(void) viewDidLoad
{
...
	NSAssert( self.localContext != nil, @"Failed to create ES context");
	
	[EAGLContext setCurrentContext:self.localContext]; // VERY important! GL silently stops working without this
...

Note: this code is *not* included in the GitHub commit linked at the top of this post; it’s not necessary to make “this” post work. You can fix it yourself, or use the next commit (which has all the code for the next blog post)

Secondly, we’ll add some code to “renderSingleFrame” (see below) that detects this bug in future.

Apple’s GLKit and OpenGL Rendering

We’re going to make some simple draw calls and see what happens. First we need an Xcode project to play with. We’re NOT going to use Apple’s “OpenGL Game” template – even Apple doesn’t expect anyone to use that. It’s a useful demo for OpenGL experts to glance at, but it’s the wrong place to start when writing a new iOS app.

And, of course: if you rely on that template, you won’t know how to add OpenGL onto an existing application. One of the great uses of OpenGL on iOS is to supplement an existing app with cheap, fast, awesome effects…

GLKViewController and GLKView

These two classes are the core of Apple’s GLKit framework. Everything else is supplementary, or inherited from iOS itself (which had basic OpenGL long before GLKit was written).

First-off, a warning:

GLKViewController isn’t a 100% safe UIViewController; GLKView isn’t a 100% safe UIView; “almost” but not quite

i.e. there are certain things that you can do / expect to do with a UIViewController (or UIView) that break GLKViewController (or GLKView). By “break” I mean everything from “your screen goes blank” through to “hard-crash of your app”. In simple cases you’ll never encounter these – but I’ll highlight the problem-cases I’ve run into while testing, so you don’t end up bashing your head against them.

Apple’s interpretation of OpenGL rendering

Despite the warning above, Apple has really done a very good job of integrating OpenGL with UIKit. Apple’s reasoning goes something like this (blind guess on my part):

  • UIViews don’t actually render; every UIView has a CALayer that does the rendering
  • UIViewControllers are supposed to “control” UIViews
  • CALayers are (sort-of) raw OpenGL rendering anyway (that’s how Apple implemented their OS)
  • OpenGL is “active” rendering: you have to explicitly render, and re-render, and re-re-render … ad infinitum
  • OpenGL supports many different formats – colour modes, pixel sizes, aspect ratios, etc – but CALayer only does one at a time
  • OpenGL has powerful features that let you “draw” to something other than the screen, but require some effort to configure
  • Idea:
    • Make a special UIViewController that runs a loop, constantly going “render OpenGL … render OpenGL … render OpenGL”
    • Make a special UIView that ignores UIKit’s cached rendering, and does a raw “re-render” OpenGL every time it’s told to
    • The special UIView will *also* allow you to “switch” between different colour modes, aspect ratios, etc
    • The special UIView will *also* handle the code for “make OpenGL draw to the screen, rather than to something special”
    • Developers might not like our implementation, and we don’t Open-Source, so we’ll let them replace either class with their own

That last point is a pain: GLKView and GLKViewController both have a bunch of “extra” methods that only exist so you can remove either class and replace it with your own “equivalent”. Sounds good, right?

Not really. There’s a handful of showstopper bugs that only appear when you stop using one of these two classes. As with UIKit, where Apple only tests “the common use cases”, they appear to have only tested/fixed the situations where you’re using BOTH a GLKViewController AND a GLKView. You can write apps that dump one or both, but I highly recommend: don’t bother. Use the code Apple tested the most – as we’ll see, it’s easy to work into your own class structure anyway.

I’ve tried each of the methods Apple provides, and here’s the one I’ve found works with least friction:

  1. Instantiate a GLKViewController with a Storyboard or NIB that assigns a GLKView to its “self.view” property
  2. Subclass GLKViewController and override the “viewDidLoad” method to do OpenGL setup
  3. Subclass GLKViewController and override the “update” method to do your OpenGL rendering
  4. *Ignore* the “delegate callback” inside GLKView – it will prevent you using some advanced OpenGL features later on

GOTCHA: GLKView’s init method is slightly broken on iOS 5 and iOS 6. You *must not* change properties of GLKView inside the [GLKView init] method – wait until GLKViewController’s viewDidLoad method, *then* start altering your GLKView. This is the exact opposite of what UIKit programmers expect – but under the hood each integer property on GLKView triggers a huge amount of behind-the-scenes code and memory-management to take place. So it’s not surprising it has some subtle bugs.

The world’s simplest Draw call

Looking back at the list of what’s in a Draw call … only two items on that list render anything to the screen:

  • Background colour
  • Draw call

Everything else happens as a side-effect of the Draw call itself. The “background colour” is unique because it’s the basis of lots of powerful graphics techniques that require you to set the background in cunning ways. It also holds a special place in the hardware / drivers: when you “wipe the screen” the GPU knows that everything will disappear – so GPUs often have special optimizations tied to how, and exactly when, you clear the screen. So … OpenGL keeps it separate from the Draw call.

Our first Draw call won’t issue a “draw” command: it will simply clear the screen.

Converting Apple’s GLKViewController to standard OpenGL

The downside of Apple’s neat GLKViewController is that it hides the importance of Draw calls. If you look at part 1, you’ll see that “Draw calls” are one of the bits that is missing from GLKit. Because GLKit has no class/object for a draw call, GLKViewController provides a more abstract method: “update”, and leaves the rest to us.

We’ll replace “update” with a simple OpenGL rendering system using OOP.

Our first DrawCall class

Github project with full source from this article – this link is to the commit for this specific article

GLK2DrawCall.h

#import <Foundation/Foundation.h>
#import <GLKit/GLKit.h>

/**
 Version 1: c.f. http://t-machine.org/index.php/2013/09/08/opengl-es-2-basic-drawing/
 */
@interface GLK2DrawCall : NSObject

@property(nonatomic) BOOL shouldClearColorBit;

/**
 Defaults to:
 
 - clear color MAGENTA
 
 ... everything else: OFF
 */
- (id)init;

-(float*) clearColourArray;
-(void) setClearColourRed:(float) r green:(float) g blue:(float) b alpha:(float) a;

@end

GLK2DrawCall.m

#import "GLK2DrawCall.h"

@implementation GLK2DrawCall
{
	float clearColour[4];
}

-(void)dealloc
{
	[super dealloc];
}

- (id)init
{
	self = [super init];
	if (self) {
		[self setClearColourRed:1.0f green:0 blue:1.0f alpha:1.0f];
	}
	return self;
}

-(float*) clearColourArray
{
	return &clearColour[0];
}

-(void) setClearColourRed:(float) r green:(float) g blue:(float) b alpha:(float) a
{
	clearColour[0] = r;
	clearColour[1] = g;
	clearColour[2] = b;
	clearColour[3] = a;
}

@end

For now, it’s purely a data-holding class. There’s a bit of C in here to store our colour in a raw “array of floats”, and to pass it around without any allocation / deallocation. For our simple case, this is unnecessary – but when you’re doing this in *every* draw call, and you have thousands of draw calls per frame … you start to worry about these things.

This is also a good opportunity for you to revise your C if you normally code everything in high-level ObjectiveC. With OpenGL, we’ll be using pointers fairly often – OpenGL is a raw C API, and although it’s “easy” C, we still need simple pointers here and there.

So, if any of the code above looks strange to you, things for you to Google/revise:

  • “float clearColour[4]” can be passed around as a “float*” (in most expressions, a C array decays to a pointer to its first element)
  • To “return a C array from an Objective C method” we have to return “a pointer to the first element”
  • “& (anything)” means “a pointer to (anything)”
  • When you declare a variable inside the { curly braces } at the start of an ObjectiveC class, you DON’T HAVE TO DO memory management, it’s treated as if it’s “retained forever, until this object is dealloc’d”
  • Because of this “forever” part, the only things you can declare are “fixed amount of memory” things – i.e. we have to explicitly say our array is “4” floats. We cannot store a mutable array here (unless we do our own memory management)
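The points above in miniature – a hypothetical `Holder` struct (names invented; it just mirrors what GLK2DrawCall does with its colour array):

```c
/* Revision sketch for the C points above: a fixed-size array inside a
   struct, returned as a pointer to its first element. */
typedef struct {
    float clearColour[4];   /* fixed size: no malloc/free needed */
} Holder;

static float *colourArray(Holder *h) {
    /* "&h->clearColour[0]" and plain "h->clearColour" are the same
       pointer: the array decays to a pointer to its first element */
    return &h->clearColour[0];
}

static void setColour(Holder *h, float r, float g, float b, float a) {
    h->clearColour[0] = r; h->clearColour[1] = g;
    h->clearColour[2] = b; h->clearColour[3] = a;
}
```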

Setting up a GLKViewController subclass to do OpenGL rendering of frames/draw-calls

First of all, we add a private array to hold our Draw calls:

@interface ViewController()
@property(nonatomic, retain) NSMutableArray* drawCalls;
@end

…we’ll initialize it in viewDidLoad, but come back to that in a bit.

To get something rendering, first we override Apple’s “update” method:

-(void) update
{
   [self renderSingleFrame];
}

Apple treats this as if it were “drawing one frame” – they call it N times per second, where N is a number you can configure via GLKViewController’s “preferredFramesPerSecond” property. More on that later, but for now: we’re making it explicit that what Apple really means with this method name is “render one Frame”.

OK, how do we render a frame?

-(void) renderSingleFrame
{
   if( [EAGLContext currentContext] == nil ) // skip until we have a context
   {
      NSLog(@"We have no gl context; skipping all frame rendering");
      return;
   }

This causes our ViewController to detect the earlier bug (forgetting to call “setCurrentContext”), and spam the console with error logs until you fix it. This might sound like overkill but eventually, you’ll end up switching between different EAGLContext instances. At that point, it’s very easy to break your code, and it’s very hard to debug (often, you’ll get “no errors” from Apple). As a protection, we check once per frame: “did someone accidentally disable the context?”

   if( self.drawCalls == nil || self.drawCalls.count < 1 )
      NSLog(@"no drawcalls specified; rendering nothing");

More self-protection: if you make *any* mistake that leads to an empty/missing set of drawcalls (and note: internal crashes inside “gl” commands can cause that), Apple won’t complain – they’ll just give you a blank screen. This message is there to catch that and warn us, by spewing errors to the log.

Also, it has a bonus: if for some reason your GLKViewController starts up faster than your own GL rendering code (this can happen when you start to do background loading of textures, for example), this log message will show that you’re issuing render commands before you were ready to draw anything. A warning to re-arrange the ordering of your code…

   for( GLK2DrawCall* drawCall in self.drawCalls )
   {
      [self renderSingleDrawCall:drawCall];
   }

Finally: render each draw call. Note that we do the rendering *inside* our GLKViewController, *not* in GLKView, and *not* in our DrawCall class. This seems like it’s breaking UIKit conventions (where rendering happens “inside” the UIView) – but in fact, because OpenGL uses a different rendering paradigm, the viewcontroller is the right place to do this, long term. In UIKit, views/layers render themselves. In OpenGL, drawcalls are just “batches of data” that the viewcontroller orders, sorts, arranges, configures, and sends to the GPU each frame.

Using the GLK2DrawCall inside our ViewController

-(void) viewDidLoad
{
   [super viewDidLoad];

   self.drawCalls = [NSMutableArray array];
   GLK2DrawCall* simpleClearingCall = [[GLK2DrawCall new] autorelease];
   simpleClearingCall.shouldClearColorBit = TRUE;
   [self.drawCalls addObject: simpleClearingCall];
}

The defaults are fine – we made our draw call default to a lurid background colour / clear color – but we do need to explicitly turn on “clear the Color”, or else OpenGL won’t do anything.

-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{
   /** clear (color, depth, or both) */
   float* newClearColour = [drawCall clearColourArray];
   glClearColor( newClearColour[0], newClearColour[1], newClearColour[2], newClearColour[3] );
   glClear( (drawCall.shouldClearColorBit ? GL_COLOR_BUFFER_BIT : 0) );
}

Our first two lines of pure OpenGL code: “glClearColor” and “glClear”. OpenGL designers realised that “setting a background colour” is the same hardware as “setting all the values in an array to zero before you start”, or “pre-seeding a cache with data in every slot”, etc. So they made “glClear” a general-purpose method that can “clear” anything – not just the screen, but internal data too.

So, we have to set the “clearColor” first (4 floats: red, green, blue, and alpha (1 = solid, 0 = fully transparent)). Technically, OpenGL will remember the clearColor forever, and we don’t need to continually set it each frame – but for simplicity, we’ll do it every frame for now.

Then we say “clear … the COLOR (rather than anything else)”

Frequently in OpenGL you want to temporarily enable/disable “wiping” the background colour, on a per-drawcall basis. That’s why we store it as a boolean on our DrawCall class.
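To make the glClearColor/glClear split concrete, here’s a hypothetical software model in plain C (`FakeGL` and its functions are invented for illustration; the real calls drive GPU hardware, they don’t loop over pixels on the CPU):

```c
/* Hypothetical software model of glClearColor + glClear: the clear colour
   is remembered state; clearing with the COLOR bit stamps it on every pixel. */
#define FAKE_COLOR_BUFFER_BIT 0x1

typedef struct {
    float clearColour[4];
    float pixels[4][4][4];   /* tiny 4x4 RGBA "framebuffer" */
} FakeGL;

static void fakeClearColor(FakeGL *gl, float r, float g, float b, float a) {
    gl->clearColour[0] = r; gl->clearColour[1] = g;
    gl->clearColour[2] = b; gl->clearColour[3] = a;   /* state persists */
}

static void fakeClear(FakeGL *gl, int mask) {
    if (!(mask & FAKE_COLOR_BUFFER_BIT)) return;      /* mask 0: no-op */
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            for (int c = 0; c < 4; c++)
                gl->pixels[y][x][c] = gl->clearColour[c];
}
```

Note how passing a mask of 0 – exactly what our renderSingleDrawCall does when shouldClearColorBit is false – clears nothing, with no error.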

Here’s the complete code for the custom ViewController:

Github project with full source from this article – this link is to the commit for this specific article

ViewController.h

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

@interface ViewController : GLKViewController

@property(nonatomic,retain) EAGLContext* localContext;

@end

ViewController.m

/**
 Version 1: c.f. http://t-machine.org/index.php/2013/09/08/opengl-es-2-basic-drawing/
 */
#import "ViewController.h"
#import "GLK2DrawCall.h"

@interface ViewController ()
@property(nonatomic, retain) NSMutableArray* drawCalls;
@end

@implementation ViewController

- (void)dealloc
{
    if ([EAGLContext currentContext] == self.localContext) {
        [EAGLContext setCurrentContext:nil];
    }
    
    self.localContext = nil;
	
    [super dealloc];
}

-(void) viewDidLoad
{
	[super viewDidLoad];
	
	/** Creating and "making current" an EAGLContext must be the very first thing any OpenGL app does! */
	if( self.localContext == nil )
	{
		self.localContext = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] autorelease];
	}
	NSAssert( self.localContext != nil, @"Failed to create ES context");
    
	/** All the local setup for the ViewController */
	self.drawCalls = [NSMutableArray array];
	GLK2DrawCall* simpleClearingCall = [[GLK2DrawCall new] autorelease];
	simpleClearingCall.shouldClearColorBit = TRUE;
	[self.drawCalls addObject: simpleClearingCall];
	
	/** Finally: enable GL rendering by enabling the GLKView (enable it by giving it an EAGLContext to render to) */
	GLKView *view = (GLKView *)self.view;
	view.context = self.localContext;
	view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
}

-(void) update
{
	[self renderSingleFrame];
}

-(void) renderSingleFrame
{
	if( [EAGLContext currentContext] == nil ) // skip until we have a context
	{
		NSLog(@"We have no gl context; skipping all frame rendering");
		return;
	}
	
	if( self.drawCalls == nil || self.drawCalls.count < 1 )
		NSLog(@"no drawcalls specified; rendering nothing");
	
	for( GLK2DrawCall* drawCall in self.drawCalls )
	{
		[self renderSingleDrawCall:drawCall];
	}
}

-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{
	/** clear (color, depth, or both) */
	float* newClearColour = [drawCall clearColourArray];
	glClearColor( newClearColour[0], newClearColour[1], newClearColour[2], newClearColour[3] );
	glClear( (drawCall.shouldClearColorBit ? GL_COLOR_BUFFER_BIT : 0) );
}
@end

Next steps

As noted above, now’s a good time to check you understand the C code above – if not, do some basic C tutorials. No need to venture into complex stuff, just get a feel for the items listed above. Bear in mind that “Objective-C” is a *strict* superset of “C” – so all of this is already inside Objective-C, whether or not you’ve actually been using it / noticed it before now.

6 thoughts on “OpenGL ES 2: Basic Drawing”

  1. adam

    I’ve only looked at ES 3 briefly. My impression: where ES 1 == OpenGL 1990’s, and ES 2 == early 2000’s, ES 3 == late 2000’s.

    e.g. it adds things long considered “standard” in desktop GL, like 3D textures, geometry instancing, cross-platform texture-compression etc.

    I *thought* it had Geometry shaders too – but someone pointed out this is incorrect, it’s not that modern ;). (and anyway: Geometry shaders still aren’t particularly fast on desktop last time I checked, so it’s probably way too soon to be hoping for them on mobile)

  2. Jesse Gomez

    Dude! This is super helpful! I’m greatly in need of this information right now, and you seem to be setting up a good structure for fast drawing in opengl. Can you post the next tutorial – preferably the next two now? Or, since I can’t wait, what resources would you recommend that would compliment this? I’ve already started the stuff you recommend in part 1, but it’s not as condensed as this, and doesn’t focus on glkit. I just need to draw points and lines to the points with the points slowly shifting around in 3 dimensions. preferably with a distance fade.

    thanks again.

  3. adam Post author

    Next one covers drawing geometry.

    But they take some time to write! Condensing and filtering the information isn’t trivial.

  4. Michele Pratusevich

    I like the approach of using a data structure to keep track of all the draw calls and then executing those one by one when you render – it’s an OOP approach to the problem of graphics and drawing on the screen. Do you know if that is how OpenGL works under the hood?

  5. adam Post author

    OpenGL doesn’t do anything under the hood. It is all directly exposed to you. There is a small amount of internal optimization – e.g. cheap hardware doing tile-based renderering – but mostly it acts “stupid”.
