Making a night-vision shader for animals in #unity3d

In the FPS roguelike I’ve been working on, a core feature is that each ability modifies how you interact with the world – you see things differently, you move differently, etc.

Night Vision

I’m working on a tiny level demo to show some of my ideas together. I want one character that can see in the dark, something like the “InfraVision” of D&D.

Spec:

  • Character can see independently of any lights in the scene
  • Vision is imperfect – significantly weaker than daylight vision in terms of detail, colours, etc.
  • Vision enables easy location + identification of other creatures

Screenshots

Here’s my orc (a test avatar I already had) sitting on top of a building:

Screen Shot 2016-01-19 at 03.23.56

Step 1: cancel the lighting

We use Unity’s “Camera.SetReplacementShader( Shader, “” )” to swap every object’s shader for a simple “Unlit” shader (Unity provides a built-in one). Calling “ResetReplacementShader()” on the camera undoes it later.

[csharp]
public Shader shaderDarkVision;

public void Start()
{
    if( shaderDarkVision == null )
        shaderDarkVision = Shader.Find("Unlit/Texture");
}

[ContextMenu("Enable dark vision")]
public void EnableDarkVision()
{
    GetComponent<Camera>().SetReplacementShader( shaderDarkVision, "" );
}
[/csharp]

Screen Shot 2016-01-19 at 03.23.45

Step 2: Add a vignette to nerf it

I want the DarkVision to be significantly less perfect than daylight vision, so let’s add a simple vignette (Unity has downloadable examples of this shader in their docs, but I only found it after I’d got mine working!).

This took me a long time to get working – just getting accurate screen-positions for shader pixels was a struggle, because so much of Unity’s semi-proprietary shader language is still undocumented after many years. I knew all the maths, and had implemented the same effect in OpenGL / GLSL in minutes, but it took an hour or so in Unity. And once it was working, I got sidetracked tweaking the exact curve of darkness of the vignette.

Screen Shot 2016-01-19 at 03.24.22

Step 3: Convert to grayscale (infravision should be colourblind)

This was easy – simply average the r/g/b components of your Color, something like:

[c]
float avg = (color.r + color.g + color.b) / 3.0;
color = float4( avg, avg, avg, color.a );
[/c]

Screen Shot 2016-01-19 at 03.21.43

Step 4: several hours of experimenting with wild effects

I had some problems with this:

  1. It was too easy to see “everything”
  2. With no lighting-calcs, it was fugly
  3. With no lighting-calcs, it was VERY hard to do depth-perception. In a complex street, you got lost trying to tell which objects were near, far, blocking your way, or not.
  4. WAY too much detail on the textures was showing through: DarkVision should lose most detail, while keeping overall shapes clear

Here are some of the things I tried and discarded. Note how the screenshots are from many positions – I ran around a lot, testing how effective the different shaders were for hunting a quarry (or fleeing an angry mob of AIs).

Screen Shot 2016-01-19 at 02.57.08 Screen Shot 2016-01-19 at 01.34.27 Screen Shot 2016-01-19 at 02.26.02

Step 5: FAILED: use depth-buffer shading

Unity staff: please write clear, correct, up-to-date documentation on depth-buffering as supported (or not) by your proprietary Shader Lab language, and with clear details on how this works in Unity5, not some hand-waving Unity4-maybe-Unity5-not-sure gumph.

I simply couldn’t get it to work. Using the proprietary built-in depth-buffer samplers either failed outright (0.5 across the sampler), or gave bizarre results.

Rumours suggest that we have to write 3 shaders: 2 for the proprietary system, and a third where we re-implement depth-buffer access by hand. That seems unlikely to me – I’m sure Unity can handle it for you. This is the kind of feature that Unity’s ShaderLab does well (but no-one documents, and so it goes under-used by devs).

When you find or reverse-engineer instructions, ShaderLab is excellent. It’s such a pity that so often … you can’t.

Step 6: Combine object-normals with plain greyscale

Ultimately, I fell back on “Fine, I’ll use the most basic bits of shaders that even Unity can’t make confusing” – object normals. I faked shading Old Skool: each surface is darkened by dot-producting the forwards vector (which, by definition, is (0,0,1) after the MVP multiplication stage) with the surface normal. That gives a float from 0 to 1, showing how much the surface faces towards or away from the viewer.

Screen Shot 2016-01-19 at 03.26.07

I’m not delighted with this – it’s not quite the effect I was aiming for (I REALLY wanted to wipe more of the diffuse detail, but…). However, it does work quite nicely, and for now it’s more than good enough.

Benefits of final version

Here’s why I like it:

  1. Runs 100% independently of my main lighting – I never have to re-write it when changing lighting.
  2. Provides huge advantages at night time or in low-light situations – more so the less lighting there is
  3. Provides depth-cues to the player, so that running quickly through streets (or tunnels) is still pretty easy to do
  4. Limits the over-powered nature by cutting down peripheral vision
  5. Runs very fast on GPU, despite crappy hand-written shader code (mostly because it doesn’t use any lighting calculations)