
OpenGL ES 2 – Textures 2 of 3: Texture Mapping

Recap: Texturing in OpenGL can be achieved in two separate ways, using different APIs and hardware features. The preferred, modern approach – procedural texturing – uses nothing more than a Fragment shader (this is really what Fragment shaders were created for: texturing).

Part 5 (Textures: 1 of 3) covered that in detail – but there’s another option: Texture-Mapping (also known as “UV Mapping”; the two terms are identical).

Cheaper ways to texture an object

Procedural Texturing (from the previous post) requires a lot of processing power on-board the GPU. OpenGL launched in 1992 – 5 years before GPUs even existed for the PC. Texturing was mostly useful as a hack to make up for the poor raw power of hardware. Texture-Mapping in OpenGL comes out of that mindset – and it works very differently from the “simple, obvious” texturing we have today.

Texture-Mapping requires almost zero processing power; instead, it sacrifices huge amounts of RAM. But the API is much lower-level than it needs to be.

Everything in texture-mapping comes out of this quest for performance at low cost. Texture-mapping is theoretically a subset of procedural texturing – but even modern GPUs still contain custom hardware dedicated to texture-mapping. This has three main effects:

  1. Some things that can theoretically be done “either way” run faster when done using texture-mapping
  2. Texture-mapping saves GPU processing power so that you can “spend” more of it on procedural-textures (or other Shader effects) elsewhere
  3. Some features of OpenGL are only supported with texture-mapping, not with raw Fragment Shaders

Texturing has been added to OpenGL in fits and starts. I will try to consistently use these terms:

Texturing: the general concept of putting pixel-level detail – from any source! – onto geometry.

Texture-Mapping: a specific kind of Texturing that uses a source image (usually, but not always, 2D) and maps it onto 3D geometry

Texture(s): the “image” part of Texture-Mapping (on CPU: an image file; on GPU: an OpenGL TextureObject)

What is texture-mapping?

Google for more info, but the essence is:

  1. Take a piece of geometry (e.g. a triangle)
  2. Take a 2D image (e.g. PNG file)
  3. Make a cookie-cutter with the same number of corners as your 3D shape (i.e. another triangle)
    • NOTE: the cookie-cutter triangle DOES NOT need to be the same size/angles – so long as it has the same number of corners
  4. When you draw the triangle to screen, use the cookie-cutter to cut out a piece of the 2D image
  5. …and STRETCH AND SQUISH it to fit: i.e. make the three corners line up

Texture-mapping often creates distortion of the original 2D image, unless you deliberately choose your cookie-cutter to prevent it.

Note that only the final step (the stretch/squish) uses processing-power on the GPU. For the rest of it, the GPU can “cut out” the shape from the 2D image when the app starts running, and keep “stamping” it onto the triangle in each frame.

Also, if you zoom-in on the triangle … the texture will get more and more blocky, because the 2D image has fixed resolution.

…Mapping?

This process of stretching and squashing the image until the corners match up is mathematically known as a “map”, because each x,y co-ordinate in the image is “mapped” to a different x,y co-ordinate in the 3D-projected pixels.

There are two x’s and two y’s, which gets confusing. So, traditionally, the x and y co-ords in the source image are renamed “u” and “v” – so that in code and in docs you always know which x is which. Hence the other name for Texture Mapping: “UV Mapping”.

Before the invention of Shaders, you had almost no influence over the mapping function – it directly mapped u/v to x/y according to the corner co-ordinates you attached to the 3D geometry. With Shaders, of course … the Fragment Shader “replaces” the mapping function, and lets you do any kind of weird and funky (even animated) ‘map function’ you want.
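
To give the flavour of that (jumping ahead to Shader syntax we’ll meet properly below), here’s a sketch of an animated ‘map function’ – every name in it is illustrative, not from this series’ code:

[c]
// SKETCH ONLY: an animated 'map function' in a Fragment Shader.
// s_texture, u_timeInSeconds and v_uv are illustrative names.
uniform sampler2D s_texture;
uniform mediump float u_timeInSeconds;
varying mediump vec2 v_uv;

void main()
{
// slide the texture sideways over time, wrapping around at 1.0
mediump vec2 animatedUV = vec2( fract( v_uv.x + u_timeInSeconds ), v_uv.y );
gl_FragColor = texture2D( s_texture, animatedUV );
}
[/c]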

Vertex Textures and GL ES 2

There’s more to texture-mapping than meets the eye. Some of the hardware features of Texture-Mapping mean that you can achieve special effects and big performance boosts by storing 3D geometry inside special textures.

GL ES 2 supports Vertex Textures for this (textures you can access inside the Vertex Shader), but allows vendors to say “number supported = 0”. Apple used this to disable Vertex Textures on all iPhone/iPod/iPad devices – even though the hardware supported it. They “accidentally” enabled it for one version of iOS only (4.3), then quickly killed it again.

Until now. iOS7 not only adds GL ES 3 support (for the iPhone 5s only :( ), but also “re-unlocks” Vertex Textures on all devices.

How do you TextureMap in OpenGL?

First, make sure you fully understand the previous post on Procedural Texturing, since TextureMapping is a special-case of that.

Then there are several stages to Texture-Mapping:

  • Preparation:
    1. Upload the image to the GPU (similar to uploading 3D geometry / vertices, creating a BufferObject/VBO)
    2. Convert the image format (e.g. PNG, JPEG, etc) to the GPU’s internal format (similar to telling the GPU how to interpret the contents of a BufferObject/VBO)
  • Rendering:
    • Attach each texture you need to a virtual “TextureUnit”
    • Enable all the TextureUnits you’re using (together, these steps are similar to switching to the next VAO)
    • Open the Texture inside a Shader – i.e. texture-map it to your triangles (similar to invoking a Draw call)
    • Read a different value of the Texture for each pixel in the triangle (VERY similar but NOT identical to the trick we did earlier of using “two” varyings to give us a texture that varied across the whole 2D surface)
    • Prettify the output (unique to texturemapping: hardware tricks that make them prettier, or faster, or both)

From reading that list you may already have guessed at what we’ll use to implement this technically – it’s similar but not the same as ordinary Procedural Texturing. We’ll need:

  1. A “texture upload” system (sadly: SHOULD but DOESN’T re-use VBOs and VAOs)
  2. A system for dealing with the weird beast that is TextureUnits
  3. A way to tell the GPU which “Texture” goes inside which “TextureUnit”
  4. A way to tell the GPU which “TextureUnit” is being used by which Shader variable
  5. Two varyings in our Shader (or a single vec2) that give us a 2D co-ordinate

Items 3 and 4 above require us to attach some data to the ShaderProgram – but we don’t want to duplicate this for every Vertex (huge waste of VRAM!). So we need a way of storing some variable data on the ShaderProgram itself…

Shader Variables: Uniforms

Uniforms are a special-case of Attribute. Where an Attribute has a separate value for every Vertex, a Uniform has the same value for every Vertex – they hold “global” data that affects the entire Draw call.

A Uniform is constant but only while executing a single Draw call. Each successive Draw call – even within the current Frame – can change the constant’s value.

I’ll come back to this in a later post, but the typical use of Uniforms (apart from Texture-Mapping) is to efficiently re-use geometry and source code. It simplifies your source (less copy/paste code) and reduces memory usage: e.g. you can keep one “copy” of a Monster on the GPU, but draw it in many different places, with small changes between them (arms and legs at different positions).
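
As a quick taste, a minimal Vertex Shader sketch – the names are illustrative, not from this series’ code:

[c]
// SKETCH ONLY: one Uniform shared by every vertex in the Draw call
uniform mediump vec3 u_monsterPosition; // same for ALL vertices; change it between Draw calls
attribute mediump vec4 position; // different for EVERY vertex

void main()
{
// one monster mesh, drawn wherever u_monsterPosition says
gl_Position = position + vec4( u_monsterPosition, 0.0 );
}
[/c]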

For now, we’re going to skim the topic lightly. A future blog post will cover them in more detail.

Compiling/Linking Shaders: Store the Uniforms

Uniforms are very similar to Attributes; just as we had to “store” all the Attributes when we linked our ShaderProgram, we must do the same with Uniforms. The code is identical, except that the method names and Constants use the text “Uniform” or “UNIFORM” instead of “Attribute/ATTRIBUTE”.

I won’t go into it here; look in the GitHub source if interested. The net result is a new method on GLK2ShaderProgram:

GLK2ShaderProgram.h:
[objc]

-(GLK2Uniform*) uniformNamed:(NSString*) name;

[/objc]
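
For flavour, the link-time enumeration looks roughly like this – a sketch built from standard GL calls (GLK2Uniform itself is shown next; the real code is in the GitHub source):

[objc]
// SKETCH ONLY: enumerate the active Uniforms after glLinkProgram succeeds
GLint count, maxNameLength;
glGetProgramiv( self.glName, GL_ACTIVE_UNIFORMS, &count );
glGetProgramiv( self.glName, GL_ACTIVE_UNIFORM_MAX_LENGTH, &maxNameLength );

for( int i = 0; i < count; i++ )
{
char* rawName = malloc( maxNameLength );
GLint arrayLength;
GLenum type;
glGetActiveUniform( self.glName, i, maxNameLength, NULL, &arrayLength, &type, rawName );
GLint location = glGetUniformLocation( self.glName, rawName );

GLK2Uniform* uniform = [GLK2Uniform uniformNamed:[NSString stringWithUTF8String:rawName] GLType:type GLLocation:location numElementsInArray:arrayLength];
free( rawName );

// …store 'uniform' in an NSDictionary keyed by name, so uniformNamed: can find it…
}
[/objc]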

GLK2Uniform itself is modelled on our existing GLK2Attribute class:

GLK2Uniform.h:
[objc]
@interface GLK2Uniform : NSObject <NSCopying> /** Apple’s design of NSDictionary forces us to ‘copy’ keys, instead of retaining them */

+(GLK2Uniform*) uniformNamed:(NSString*) nameOfUniform GLType:(GLenum) openGLType GLLocation:(GLint) openGLLocation numElementsInArray:(GLint) numElements;

/** The name of the variable inside the shader source file(s) */
@property(nonatomic, retain) NSString* nameInSourceFile;

@property(nonatomic) GLint glLocation;
@property(nonatomic) GLenum glType;
@property(nonatomic) GLint arrayLength;

@property(nonatomic,readonly) BOOL isInteger, isFloat, isVector, isMatrix;

-(int) matrixWidth;
-(int) vectorWidth;

@end
[/objc]

Please note:

I have implemented NSCopying on GLK2Uniform – this is very, very useful (allows you to use them as Keys in an NSDictionary) – but easy to get wrong if you’ve not done it before in Objective-C. See the source of GLK2Uniform.m for details of the three methods Apple requires you to override.
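
If you’ve not seen them before, here’s the shape of those three methods – a sketch only; the real implementation (which may compare more fields) is in GLK2Uniform.m:

[objc]
// SKETCH ONLY: the three overrides that make GLK2Uniform usable as a Dictionary key
-(id) copyWithZone:(NSZone *)zone // required by NSCopying
{
GLK2Uniform* copy = [[GLK2Uniform allocWithZone:zone] init];
copy.nameInSourceFile = self.nameInSourceFile;
copy.glLocation = self.glLocation;
copy.glType = self.glType;
copy.arrayLength = self.arrayLength;
return copy;
}

-(BOOL) isEqual:(id)object // so two copies of the "same" Uniform compare equal
{
if( ! [object isKindOfClass:[GLK2Uniform class]] )
return NO;
return [self.nameInSourceFile isEqualToString:((GLK2Uniform*)object).nameInSourceFile];
}

-(NSUInteger) hash // objects that are isEqual: MUST have equal hashes
{
return [self.nameInSourceFile hash];
}
[/objc]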

Setting the Uniform with GL calls

With Attributes, the amount of data scales up with the complexity of the 3D object, and quickly becomes measured in megabytes for complex 3D levels and characters/objects. It’s so rare to send small amounts that OpenGL only provided methods for “sending a whole load at once” – this gave us all the complexity of BufferObjects / VBOs.

In contrast, the number of Uniforms is low – typically only a few tens per Draw call. So GL lets you set individual Uniforms with a single call:

glUniform{1|2|3|4}{i|f}[v] (e.g. glUniform1i, glUniform4f, glUniform2fv): sets a Uniform for the currently-selected ShaderProgram

GL knows what the uniform is, but you still have to (over-)specify it. For Texture-Mapping, only one of those methods is used – glUniform1i – because we’re only using it to set a single “tag” (magic number) at a time.
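
In code, that looks like this – a sketch, assuming your ShaderProgram is currently active (via glUseProgram) and its Shader declares a sampler named s_texture:

[objc]
// SKETCH ONLY: point the Shader's sampler at TextureUnit number 3
GLK2Uniform* sampler = [shaderProgram uniformNamed:@"s_texture"];
glUniform1i( sampler.glLocation, 3 ); // NB: pass the INDEX 3, NOT GL_TEXTURE0 + 3
[/objc]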

Creating and uploading Textures

Texture-Mapping is old. It pre-dates VBOs, VAOs, etc – otherwise it would probably have re-used those features. Instead, it reproduces them, with subtle differences.

Today most software uses JPG, PNG, etc – but GPUs save money by using simple hardware that doesn’t support these. So … uploading textures to the GPU requires you to handle every low-level detail of parsing the image format and converting it into a “raw bytes” format that GL understands.

Lots of GL ES tutorials have you write a texture-loader that will convert e.g. PNG or UIImage into raw bytes, using arcane invocations of CALayer / CoreAnimation.

But … Apple’s already done it for you. Check out GLKit’s GLKTextureLoader class. As Apple put it: “The GLKTextureLoader class simplifies the effort required to load your texture data.”

Texture Units and … kill me now

I hate texture-units. Ideally you would say to GL:

“For this Draw call, use this Texture” (I wish!)

Unfortunately, they didn’t invent the GPU-side “Texture” object until OpenGL 1.1. OpenGL launched with a much crappier system: the TextureUnit.

Instead, you have to say:

“For this Draw call, take a Texture and put it in a TextureUnit – but only while rendering; when not rendering, don’t. Take a number that is not the TextureUnit’s name, but ‘implies’ the TextureUnit, and put it in a Shader-Uniform. In the Shader, use the Uniform in a magical way, as if it were a function (not a variable!).” (I HATE TEXTURE UNITS)

TL;DR: We need a way to assign a magic-number to the ShaderProgram that OpenGL understands to mean a specific TextureUnit. We also have to “load” that TextureUnit with the right Texture … and repeat this before every Draw call.
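
In raw GL calls, that whole dance boils down to this sketch (unitIndex is whichever slot you picked for this sampler; the classes below automate choosing it):

[objc]
// SKETCH ONLY: the TextureUnit dance, repeated for every Draw call
glActiveTexture( GL_TEXTURE0 + unitIndex ); // select the TextureUnit, by "name"…
glBindTexture( GL_TEXTURE_2D, texture.glName ); // …load the Texture into it…
glUniform1i( samplerUniform.glLocation, unitIndex ); // …and give the Shader the unit's INDEX (not its name!)
[/objc]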

Access a texture-map inside a Shader

Now we flee from the evils of TextureUnits, and zoom forwards in time by 10 years, to the advent of Shaders. Shaders do texture-mapping in a sensible and easy way.

Previously, to access an attribute, you simply read its value:

[c]
attribute mediump vec4 positionAttribute;

void main()
{
gl_Position = positionAttribute;
}
[/c]

On the CPU, Uniforms and Sampler2Ds are identical, and are “set” the same way: a Sampler2D is defined as a 1-dimensional integer Uniform.

On the GPU, “uniform” and “sampler2D” variables are again the same – except that sampler2D variables can be passed to the Shader function “texture2D(…)”, allowing it to “find and read from” the correct Texture on the GPU.

Shaders assume:

  1. Your Sampler2D “Uniform” is merely a pointer …
  2. … to the contents of a Texture Unit.
  3. The TextureUnit contains a Texture …
  4. … which is a 2D image you previously uploaded
  5. When you invoke “texture2D(…)”, OpenGL accesses the named TextureUnit, reads the Texture …
  6. … and Maps it to the current Draw call’s geometry
  7. … and gives you back the Colour for the current Pixel in the Fragment Shader

All of that is represented in Shader source code by:

[c]
texture2D( samplerForMyTexture, vec2( x, y ) ); // returns a Colour at position (x,y) inside the Texture
[/c]

Simples.

Putting it together: A TextureMap instead of a Procedural Texture

First, create a new Draw call, set up 6 vertices (for 2 triangles), and set up X,Y values for our Varying (all as per last time):

ViewController.m:
[objc]
-(NSMutableArray*) createAllDrawCalls
{

/* draw a TEXTURED pair of 2 triangles onto the screen, arranged into a square ("quad")
*/
GLK2DrawCall* drawTexturedQuad = [[GLK2DrawCall new] autorelease];
drawTexturedQuad.numVerticesToDraw = 6;

…NB: must create ShaderProgram here, and call glUseProgram, as in previous posts

GLKVector3 cpuBufferQuad[6] =
{
GLKVector3Make(-0.5,-0.5, z),
GLKVector3Make(-0.5, 0.5, z),
GLKVector3Make( 0.5, 0.5, z),
GLKVector3Make(-0.5,-0.5, z),
GLKVector3Make( 0.5, 0.5, z),
GLKVector3Make( 0.5,-0.5, z)
};
GLKVector2 attributesVirtualXY [6] =
{
GLKVector2Make( 0, 0 ), // note: we vary the virtual x and y as if they were x,y co-ords on a right-angle triangle
GLKVector2Make( 0, 1 ),
GLKVector2Make( 1, 1 ),
GLKVector2Make( 0, 0 ), // note: we vary the virtual x and y as if they were x,y co-ords on a right-angle triangle
GLKVector2Make( 1, 1 ),
GLKVector2Make( 1, 0 )
};

drawTexturedQuad.VAO = [[GLK2VertexArrayObject new] autorelease];

GLK2Attribute* attributePosition = [drawTexturedQuad.shaderProgram attributeNamed:@"position"];
[drawTexturedQuad.VAO addVBOForAttribute:attributePosition filledWithData:cpuBufferQuad bytesPerArrayElement:sizeof(GLKVector3) arrayLength:drawTexturedQuad.numVerticesToDraw];

GLK2Attribute* attXY = [drawTexturedQuad.shaderProgram attributeNamed:@"a_virtualXY"];
[drawTexturedQuad.VAO addVBOForAttribute:attXY filledWithData:attributesVirtualXY bytesPerArrayElement:sizeof(GLKVector2) arrayLength:drawTexturedQuad.numVerticesToDraw];

}
[/objc]

Part 1: Upload a texture using GLKit

Unlike OpenGL, Apple supports PNG and JPG. Grab any image file you like, and add it to your Xcode project. Then send it to the GPU. In this case, I’ve named my file “textureFile.png”:

GOTCHA: Apple hasn’t documented the formats they support. I filed a bug and got confirmation that PNG and JPG are supported – others might work too, but be careful.

ViewController.m:
[objc]
-(NSMutableArray*) createAllDrawCalls
{

[drawTexturedQuad.VAO addVBOForAttribute:attXY filledWithData:attributesVirtualXY bytesPerArrayElement:sizeof(GLKVector2) arrayLength:drawTexturedQuad.numVerticesToDraw];

NSError* error;
GLKTextureInfo* appleTextureMetadata = [GLKTextureLoader textureWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"textureFile" ofType:@"png"] options:nil error:&error];

NSAssert( appleTextureMetadata != nil, @"Error loading texture: %@", error);

}
[/objc]

NB: textures can and do fail to load – and when they do, you’ll waste hours trying to trace the wrong bugs in your app. It’s much, much safer to put this nil check at load time, and Assert immediately if a texture failed for any reason.

Done! You don’t even need to “convert” the image from the incoming file-format – Apple does all this automagically. When it works, GLKTextureLoader is a wonderful utility.
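
Incidentally, that options:nil parameter accepts a dictionary of load-time settings. Here’s a sketch of two that become relevant later (both keys are real GLKit constants; the rest mirrors the snippet above):

[objc]
// SKETCH ONLY: optional load-time settings for GLKTextureLoader
NSDictionary* options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], GLKTextureLoaderGenerateMipmaps, // auto-generate mipmaps
[NSNumber numberWithBool:YES], GLKTextureLoaderOriginBottomLeft, // pre-flip the image to match GL's idea of "up"
nil];
GLKTextureInfo* appleTextureMetadata = [GLKTextureLoader textureWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"textureFile" ofType:@"png"] options:options error:&error];
[/objc]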

Apple Engineering: Hmm…

Apple’s texture-loader creates Textures on the GPU, but the things it returns are GLKTextureInfo objects. These are not “textures” themselves, but a handle that lets you access the GPU-side texture later on. This is great – but for some reason they decided to make the TextureInfo class private and non-extensible.

This is tragic. Every GL app needs to “create” TextureInfo objects – Apple has only implemented one out of the 5 or more different ways to upload GL texture data. I don’t mind them failing to implement those – but blocking us from implementing them in a compatible way? That’s bad.

So, we have to create a class that reproduces the features of GLKTextureInfo – I’ve called it GLK2Texture (a name that says more about how we’ll end up using it than about how we use it today). For this post, we’ll simply wrap the GLKTextureInfo object, but later on we’ll use it to implement our own texture-loading:

GLK2Texture.h:
[objc]
@interface GLK2Texture : NSObject

+(GLK2Texture*) texturePreLoadedByApplesGLKit:(GLKTextureInfo*) appleMetadata;

/** OpenGL uses integers as "names" instead of Strings, because Strings in C are a pain to work with, and slower */
@property(nonatomic, readonly) GLuint glName;

/** Creates a new, blank, OpenGL texture on the GPU.

If you already created a texture from some other source, use the initWithName: method instead
*/
- (id)init;

/** If a texture was loaded by an external source – e.g. Apple’s GLKit – you’ll already have a name for it, and can
use this method
*/
- (id)initWithName:(GLuint) name;

@end
[/objc]

…so you can modify the ViewController code above, too:

ViewController.m:
[objc]

NSAssert( appleTextureMetadata != nil, @"Error loading texture: %@", error);
GLK2Texture* texture = [[GLK2Texture texturePreLoadedByApplesGLKit:appleTextureMetadata] retain];

[/objc]

Part 2: Modify the DrawCall to track Textures and TextureUnits

You never use a Texture directly, but you do have to create and manage them. TextureUnits also have to be managed, on a Draw-call-by-Draw-call basis. I went for the “least possible coding” approach – but there’s still a lot of boilerplate code here.

GLK2DrawCall.m:
[objc]
@interface GLK2DrawCall()
@property(nonatomic,retain) NSMutableArray* textureUnitSlots;
@property(nonatomic,retain) NSMutableDictionary* texturesFromSamplers;
@end
[/objc]

The number of TextureUnits is fixed and small – on most iOS devices, it’s 8. This is how many textures can be used during a single Draw call. You can have thousands of textures on-screen – they just have to be rendered by different Draw calls. Note that there is a significant performance cost to “switching” textures in and out of TextureUnits (this is partly why they exist – so you can optimize your switching).

So … we pre-create an array with exactly that many slots. Each slot “represents” a TextureUnit.

We also create a dictionary mapping “Uniform from Shader” to “the Texture that the Uniform is connected to”. For reasons we’ll come onto later, Uniforms used by TextureMapping have a different type in Shader source: “sampler2D”. So we use the term “samplers” from here on in.

GLK2DrawCall.m:
[objc]

-(void)dealloc
{
self.texturesFromSamplers = nil;
self.textureUnitSlots = nil;

[super dealloc];
}

- (id)init
{
self = [super init];
if (self) {

self.texturesFromSamplers = [NSMutableDictionary dictionary];

GLint sizeOfTextureUnitSlotsArray; // MUST be fixed size, and have an entry for every index!
glGetIntegerv( GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &sizeOfTextureUnitSlotsArray );
self.textureUnitSlots = [NSMutableArray arrayWithCapacity:sizeOfTextureUnitSlotsArray];
for( int i=0; i<sizeOfTextureUnitSlotsArray; i++ )
[self.textureUnitSlots addObject:[NSNull null]]; // marks this slot as "currently empty"

}
return self;
}
[/objc]

Next comes a big, fat, ugly chunk of boilerplate code…

What this will do is take a pair of “Uniform (Sampler2D)” and “Texture”, and save references to both of them in this Draw call.

It will also iterate across our “slots” (one per TextureUnit) until it finds one that is “empty” (i.e. not being used for a Texture on this Draw call already). If it can’t find one – you’ve tried to use too many textures, and we Assert.

NB: GL ridiculously uses two different incompatible ways of referencing TextureUnits – be very careful if you re-write this code manually. Pay close attention to TextureUnit name vs. TextureUnit index

GLK2DrawCall.m:
[objc]

-(GLint)setTexture:(GLK2Texture *)texture forSampler:(GLK2Uniform *)sampler
{
NSAssert( sampler != nil, @"Cannot set a texture for non-existent sampler = nil");

if( texture != nil )
{
/** do we already have this sampler stored? */
int indexOfStoredSampler = -1;
int i=-1;
for( GLK2Uniform* samplerInUnitSlot in self.textureUnitSlots )
{
i++;

if( (id)samplerInUnitSlot != [NSNull null] && [samplerInUnitSlot isEqual:sampler])
{
indexOfStoredSampler = i;
break;
}
}

/** store the texture locally */
[self.texturesFromSamplers setObject:texture forKey:sampler];

/** choose a textureunit slot, if not already assigned for that sampler */
if( indexOfStoredSampler < 0 )
{
i = -1;
for( GLK2Uniform* samplerInUnitSlot in self.textureUnitSlots )
{
i++;

if( (id)samplerInUnitSlot == [NSNull null])
{
[self.textureUnitSlots replaceObjectAtIndex:i withObject:sampler];
indexOfStoredSampler = i;

/** Inform the embedded shader program that this slot is the new source for this sampler */
// save the current program, since GL requires this magicness…
GLint currentProgram;
glGetIntegerv( GL_CURRENT_PROGRAM, &currentProgram);

NSAssert( self.shaderProgram != nil, @"Cannot set textures on a drawcall until you’ve given it a shader program (it’s possible, but not implemented here)");
GLint textureUnitOffsetOpenGLMakesThisHard = [self textureUnitOffsetForSampler:sampler];

/** Set the program, set the Uniform, then restore the program */
glUseProgram(self.shaderProgram.glName);
glUniform1i( sampler.glLocation, textureUnitOffsetOpenGLMakesThisHard );
glUseProgram(currentProgram);
break;
}
}

NSAssert( indexOfStoredSampler >= 0, @"Ran out of texture-units; you cannot assign this many texture samplers to a single ShaderProgram on your hardware" );
}

return indexOfStoredSampler;
}
else
{
[self.texturesFromSamplers removeObjectForKey:sampler];

int i=-1;
for( GLK2Uniform* samplerInUnitSlot in self.textureUnitSlots )
{
i++;

if( (id)samplerInUnitSlot != [NSNull null] && [samplerInUnitSlot isEqual:sampler])
{
[self.textureUnitSlots replaceObjectAtIndex:i withObject:[NSNull null]];
break;
}
}

return -1;
}
}

-(GLint)textureUnitOffsetForSampler:(GLK2Uniform *)sampler
{
int i=-1;
for( GLK2Uniform* samplerInUnitSlot in self.textureUnitSlots )
{
i++;

if( (id)samplerInUnitSlot != [NSNull null] && [samplerInUnitSlot isEqual:sampler])
{
return i; // NB: sometimes you need i, sometimes you need GL_TEXTURE0 + i. OpenGL API is evil. Don’t mix them up!
}
}

return -1;
}
[/objc]

Now we can use this method to “add” a texture to our Draw call:

ViewController.m:
[objc]
-(NSMutableArray*) createAllDrawCalls
{

GLK2Uniform* uniformTextureSampler = [drawTexturedQuad.shaderProgram uniformNamed:@"s_texture"];

/** … store the sampler and texture in the Draw call, and configure it to use them */
[drawTexturedQuad setTexture:texture forSampler:uniformTextureSampler];

}
[/objc]

Part 3 + 4: Use the texture inside the Fragment Shader

First, we need a Varying (just as with Procedural Texturing).

Unlike Procedural Texturing … the values of this Varying are special, and you have to use numbers pre-ordained by the GL spec:

  1. You will have two floating-point values, one for “X inside the texture”, one for “Y inside the texture”
  2. X and Y are 0 at the image’s top-left corner, and 1.0 at the bottom right corner
  3. If you render a pixel with X or Y less than 0, or greater than 1, GL gives you multiple options on how it converts those numbers into the 0..1 range. The default is to “clamp” (anything less than 0 becomes 0, anything greater than 1 becomes 1) – the sketch after this list shows how to change it
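
If clamping isn’t what you want, you change the wrap mode per-texture with glTexParameteri, while the texture is bound – a sketch:

[objc]
// SKETCH ONLY: change how the currently-bound texture handles co-ords outside 0..1
glBindTexture( GL_TEXTURE_2D, texture.glName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT ); // "S" = the u axis; GL_REPEAT tiles the image
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT ); // "T" = the v axis; also: GL_CLAMP_TO_EDGE, GL_MIRRORED_REPEAT
// NB: on GL ES 2, GL_REPEAT only works if the texture's width/height are powers of two
[/objc]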

For the Vertex Shader, we’ll re-use our existing one … except that we have to change the range of Varying values back to the original 0..1:

VertexTextureMappingUnprojected.vsh:
[c]
attribute vec4 position;
attribute vec2 a_virtualXY;

varying mediump vec2 v_virtualXY;

void main()
{
v_virtualXY = a_virtualXY;
gl_Position = position;
}
[/c]

The Fragment Shader needs only one line changed; we’ll make a new Shader:

FragmentTextureMapOnly.fsh
[c]
uniform sampler2D s_texture; // this name MUST match the one used for "uniformNamed:" above
varying mediump vec2 v_virtualXY; // MUST match the one in Vertex Shader

void main()
{
gl_FragColor = texture2D( s_texture, v_virtualXY );
}
[/c]

…and use the new Shader pair in our app:

ViewController.m:
[objc]

-(NSMutableArray*) createAllDrawCalls
{
…this gets inserted just after we create the GLK2DrawCall itself,
…so that we have the ShaderProgram active before we try to read
…Uniforms, Samplers, etc out of it…

drawTexturedQuad.shaderProgram = [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexTextureMappingUnprojected" fragmentFilename:@"FragmentTextureMapOnly"];
glUseProgram( drawTexturedQuad.shaderProgram.glName );

}
[/objc]

Finally: update the RenderFrame method to use Texture Mapping

ViewController.m:
[objc]

-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{

for( GLK2Uniform* sampler in drawCall.texturesFromSamplers )
{
GLK2Texture* texture = [drawCall.texturesFromSamplers objectForKey:sampler];
NSLog(@"RENDER: Binding and enabling texture sampler ‘%@’, putting texture: %i into texture unit: %i", sampler.nameInSourceFile, texture.glName, GL_TEXTURE0 + [drawCall textureUnitOffsetForSampler:sampler] );

glActiveTexture( GL_TEXTURE0 + [drawCall textureUnitOffsetForSampler:sampler] );
glBindTexture( GL_TEXTURE_2D, texture.glName);
}

/** Finally: kick-off the draw-call, telling GL how to interpret the data we’ve given it (triangles, lines, points – or a variation of one of those) */
glDrawArrays( GL_TRIANGLES, 0, drawCall.numVerticesToDraw );
}
[/objc]

…which is where all that boilerplate code in GLK2DrawCall saves our butts: the “management” of the texture/texture-unit stuff is all happening there, and our Render call remains nice and simple (as it should be).

But, but! … my texture is upside-down!

iOS’s implementation of OpenGL really messes with the concept of “up”. You can Google for “OpenGL image upside down” and find many hits that explain the basic problem (OpenGL draws from y=0 at BOTTOM of screen, UIKit draws with y=0 at TOP of screen).

…but then you discover that UIKit *sometimes* automatically does the switch for you.

…and that CALayer (which UIKit uses internally) uses the OpenGL version of “up” (because it’s implemented on top of OpenGL, in fact).

The very, very short answer is this: change the U,V numbers you attached as vertex attributes to “flip” your texture upside down – i.e. wherever the second value is “1.0”, replace it with “-1.0” (see the sketch below).
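
Concretely, the attributesVirtualXY array from earlier becomes this sketch. NB: sampling below 0 relies on the texture’s wrap mode – with GL_REPEAT it flips as intended; if your texture clamps, use “1.0 - v” for the second value instead (or the GLKTextureLoaderOriginBottomLeft load option sketched earlier):

[objc]
// SKETCH ONLY: the same quad UVs as before, with every v of 1 replaced by -1
GLKVector2 attributesVirtualXY [6] =
{
GLKVector2Make( 0, 0 ),
GLKVector2Make( 0, -1 ),
GLKVector2Make( 1, -1 ),
GLKVector2Make( 0, 0 ),
GLKVector2Make( 1, -1 ),
GLKVector2Make( 1, 0 )
};
[/objc]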


Source code…

In case you hadn’t noticed, I’ve converted this series of blog posts into a complete, standalone, library and hosted it on GitHub.

For this post, I’ve created a Branch containing the exact source of each file used here: https://github.com/adamgit/GL2KitExtensions/tree/Part6TextureMapping

(note: the “master” branch on GitHub has some code from the next few (unpublished!) blog posts; to avoid confusing yourself, use the branch above. But going forwards, you might want to play with the master branch sometime too)