Author Archives: adam

Changing scene/level smoothly in Unity 5 (5.5+) #unitytips

Changing scenes in Unity is harder than I expected. Since version 2 of Unity, we’ve had a deceptively simple and easy to use API call:

Application.LoadLevel() // Easy to use, but wrong name, wrong parameter, no status

It’s easy, but the player-experience was horrible. Unity Pro users had access to a better API. Then, in Unity 5.3, they added the new SceneManager class, and deprecated all the old methods.

So … what’s the best practice for loading a level / changing a scene in Unity 5?
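(The full answer is behind the link, but for orientation: the replacement API lives in UnityEngine.SceneManagement. Here's a minimal, hypothetical sketch – not the post's actual code, and "Level2" is a placeholder scene name that would need to be in your Build Settings – of an asynchronous load that holds back activation until the new scene is ready:)

	// Hypothetical sketch of the Unity 5.3+ SceneManager API (illustrative only).
	using UnityEngine;
	using UnityEngine.SceneManagement;
	using System.Collections;

	public class SmoothSceneChanger : MonoBehaviour
	{
		public IEnumerator LoadSceneSmoothly( string sceneName )
		{
			AsyncOperation op = SceneManager.LoadSceneAsync( sceneName );
			op.allowSceneActivation = false; // keep the current scene on-screen while loading

			// progress reports 0..0.9 while loading; it parks at 0.9 until activation is allowed
			while( op.progress < 0.9f )
				yield return null;

			// ...fade out / show a transition here, then swap scenes:
			op.allowSceneActivation = true;
		}
	}

…you'd kick this off with StartCoroutine( LoadSceneSmoothly( "Level2" ) ) from wherever your scene-change is triggered.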

A real IDE for #unity3d: how to install JetBrains Rider C# editor on Mac (2017) #unitytips

I joined the EAP for Rider about a year ago, but it had too many problems for me (personally) to make it my main editor on Mac (mostly: it coped poorly with running on non-Windows machines, integration with Unity was wonky). These were all workaroundable, but life’s too short to use a wonky IDE, so I waited, and periodically re-tested to see if it was easy yet.

That day has come! Installing the C# IDE on Mac is finally quick and easy.

Pre-requisites

  1. A Mac, running a recent-ish copy of OS X (I tested on latest: 10.12.3)
  2. A copy of Unity 5.5.0 or later installed on your machine

If you only make games, you probably have 5.5.x or 5.6 installed by now. My game is built in 5.4, and Unity’s backwards compatibility is non-existent (my experience so far: most versions of Unity 5 seem to corrupt each other’s project files), but you only need a copy of 5.5.x+ installed, you don’t need to be actually using it.

(as an Asset Store developer, I have copies of: 4.6, 5.0, 5.1, 5.2, 5.3, 5.4, 5.4.3, 5.4.4, 5.5.0, 5.5.2 currently installed. Unity is cool with this – no problems. Everything “Just Works”)

Alternatively, you can download and install a separate version of Mono, for your Mac, but I had a lot of problems when I did this – it got MonoDevelop confused, it got Rider confused, it got OS X confused. My advice: just don’t. Mono is not supported well enough on OS X yet.

Steps

This bit was confusing. There is no documentation anywhere from Rider explaining how to install Rider with Unity. Instead, there are partial docs, in different places, which pretend that you know everything and that you will read the minds of JetBrains dev-team and guess what to do.

So, to be really clear:

  1. Install Rider (obviously)
  2. Install the Resharper-Unity plugin (not obvious: you’re not using Resharper, and the docs suggest you don’t need it. But you do. NB: this is “auto suggested” by the installer – the installer is great)
  3. Configure Unity to use Rider
  4. Configure Rider to use Unity
  5. Install the Unity3DRider plugin

Every step is required! (as of April 2017) Do not be fooled into thinking you need fewer than “all” of them. i.e. “don’t do what I did the first time round”.

Install Rider

Easy … https://www.jetbrains.com/rider/download/

Install Resharper-Unity plugin

For me, this was automatically offered during the install process, so I accepted it. It Just Works, so I recommend you accept it too.

Configure Unity to use Rider

From the official Jetbrains instructions at bottom of this page:

Set Rider as the default External Script Editor

“This only needs to be done once.

  1. Open Unity.
  2. Go to Edit → Preferences → External Tools.
  3. Select “Browse” in the External Script Editor dropdown and select the Rider application:
    • On Windows, navigate to %APPDATA%\Microsoft\Windows\Start Menu\Programs\JetBrains Toolbox and select “Rider”
    • On Mac, select ~/Applications/JetBrains/Toolbox/Rider.app or /Applications/Rider.app
    • On Linux, select rider.sh

Install the plugin into your project

From the same official page – except their instructions are slightly wrong (they work, but are overkill).

This needs to be done for each project.

Copy the folder Assets/Plugins/Editor/JetBrains from this repository into Assets/Plugins/Editor/JetBrains in your project.

…Note: you do NOT need to copy the linked files (currently: two .cs files) into “Assets/Plugins/Editor/JetBrains”. The code works fine wherever you put it, following whatever folder convention you’re using in your project. The only requirement is that ONE of the parent/grandparent/etc folders is named “Editor” (this is a Unity feature).

e.g. I put mine in: Assets/Plugins/3rd-party/RiderEAP/Editor/

Configure Rider to use Unity

Tricky. First of all:

Quit Rider and restart it via Unity

Do this by finding any of your scripts in Unity Editor and double-clicking them. If you’ve done the steps above correctly, a new copy of Rider will open, and will open the script automatically.

If you don’t do this, Rider will try and get you to create a new project, which is a road of pain and suffering you want to avoid (you want Unity to manage the project, not Rider).

Nothing works; Fix the bug

If you wait a bit, and watch the status-bar at bottom of the screen, you’ll see an error appear. Click on the status bar for more info (this is standard JetBrains IDE behaviour). You’ll see this error:

Solution ‘MY GAME HERE’ load failed
Rider was unable to detect a Mono runtime on this machine.

Don’t click the link in the error message! The error appears because OS X doesn’t include Mono, and nor does Rider. But this is where your copy of Unity 5.5+ comes in handy. Courtesy of this Rider bug report, we learn that we can use the copy of Mono bundled with Unity 5.5, and how.

Open up Rider’s preferences (cmd-, as normal for OS X):

UPDATE June 2017: Jetbrains has made a breaking-change to Rider, and this setting no longer exists. If the old way was working for you, you must now re-do it with the following setting instead:

…thanks to Ilya for figuring this out, and Kirill from JetBrains for telling us the new setting-name.

…hit the button on the right to browse, and navigate to the folder /Applications/Unity/Unity.app/Contents/MonoBleedingEdge (NB: if you have different versions of Unity installed, you’ll want to navigate to /Applications/Unity-5.5, or Unity5.6, instead – wherever you installed your 5.5+ version to).

Rider will think for a bit – watch that status bar – and you should see the left-hand pane auto-update to look exactly like it does in MonoDevelop (sorry no “before” screenshot, I didn’t think to capture it in time):

Party

Final step: realise you now have a real IDE, with wonderful, 21st-century, source control options, formatting THAT WORKS (@MonoDevelop/Xamarin: don’t call us, we’ll call you…never), intelligent code-hints and auto-fixes, etc. Knock yourselves out…

ECS in Unity: Integration and Execution 1

Background

(originally published for Patreon backers November 2016)

General background for all these articles… I’m writing about major areas I’ve ignored in the past, using Unity as a convenient demo-environment for the ideas and tests:

  1. Multi-threading
  2. Querying / filtering
  3. Networking
  4. Unity integration:
    • As easy-to-use as the Unity Inspector (direct editing of components in 3D editor)
    • Easy use of Unity MonoBehaviours/Components on ECS Entities

I don’t want to be too language-centric. I ran a survey to see which languages people are using for ECS’s, and which they wanted more info on – wow, big spread! But talking to people over the past year, Unity kept coming up time and again as a particularly popular one where people want more info.

[Screenshot]

Overview: ECS Integration and Execution

Goal

We’re going to build a simple, effective ECS in the same way as you would inside any non-ECS game-engine. At the end of this article, we’ll have an ECS in Unity that interacts with legacy (non-ECS) code cleanly, and demonstrates simple isolation (runs on its own thread). The approach, the test-data, and the problems, should generalize well to other engines.

Steps

  • Make a simple scene in pure Unity
  • Add collision-detection and avoidance
  • Add visualizations so we can see what’s happening
  • Re-create the scene using our ECS
  • Add collision-detection in the ECS

It’s very long, so I’m splitting it into two articles (although Patreon backers will get both at once). The first article has most of the planning, demo-scene creation, and Unity testing. If you know Unity well already, skim-read that. The second article has all of the ECS-specific parts and covers most of the porting from Unity (or: random game-engine) into ECS.

Let’s get started…

Make a simple scene in pure Unity

Creating a new code-architecture inside an existing system is risky and time-consuming. To make this as smooth and manageable as possible, we’ll create a baseline project in legacy code, port it to ECS, and use that to check assumptions and planning around the ECS design. MVP FTW.

The demo scene

We’ll make thousands of squares move around in 2D on a plane (or, since Unity is a 3D engine: cubes moving on a flat surface, viewed from above).

[Screenshot]

Any demo scene will do, but I chose one so that:

  • It uses more than one type of Component on each Entity
  • Different types of Component will ideally be updated by different Processors
  • It’s easy to visualise whether the code is working at all
  • It’s easy to visualise bugs in the code (it’s working, but it’s wrong)
  • It’s an excuse to do some CPU-intensive calculations that benefit from an ECS

In Unity…

Because this is our baseline, needed for lots of testing and ongoing development, we’re going to build most of the scene procedurally. This makes it easy for us to tweak parameters later on (how many cubes? how big? how much room can they move around in? where is the camera? … etc).

Class: CreateScene

This class will run even before the Start() methods, and guarantee we have a flat surface to work upon.

We use Unity’s CreatePrimitive command, which automatically creates not only a mesh (something that gets rendered), but also attaches a physics collider to it, so that any physics objects dropped on the mesh will sit on top of it.

Finally, we grab the scene camera (or create one if there isn’t one already) and move it to sit above the plane, looking downwards.

using UnityEngine;
using System.Collections;

public class CreateScene : MonoBehaviour
{
	public int _planeWidth = 50; // user can edit this in-editor
	public static int planeWidth; // when app starts, this copies the user-chosen value and exposes it to other classes

	public void Awake()
	{
		planeWidth = _planeWidth;
		GameObject plane = GameObject.CreatePrimitive( PrimitiveType.Cube );
		plane.name = "plane";
		plane.transform.localScale = new Vector3( planeWidth + 20f, 1f, planeWidth + 20f );
		plane.transform.position = new Vector3( 0f, -1f, 0f );

		Camera cam = Camera.main;
		if( cam == null )
		{
			GameObject _goCam = new GameObject( "Camera" );
			cam = _goCam.AddComponent<Camera>();
		}
		cam.transform.position = new Vector3( 0f, 50f, 0f );
		cam.transform.LookAt( Vector3.zero );
	}
}

Create an empty GameObject, attach that script, and run it. You should get a camera correctly auto-positioned looking down on the plane.

[Screenshot]

Class: PostSceneCreateCubes

This class runs once the game has started, and generates a random number of cubes that are dropped somewhere on the plane. It conveniently re-parents them into a GameObject so they don’t clutter the scene-hierarchy window. The positions are automatically set using the static variable that tells us how big the plane was to start with.

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class PostSceneCreateCubes : MonoBehaviour
{
	public int targetNumCubes = 1000;

	private List<GameObject> _internalCubes = new List<GameObject>();
	private GameObject _cubeHolder;

	void Start()
	{
		if( _cubeHolder == null )
			_cubeHolder = new GameObject( "Auto-created cubes" );

		for( int i = 0; i < targetNumCubes; i++ )
		{
			GameObject cubeN = GameObject.CreatePrimitive( PrimitiveType.Cube );
			cubeN.name = "Cube-" + i;
			cubeN.transform.position = new Vector3( Random.Range( -CreateScene.planeWidth / 2f, CreateScene.planeWidth / 2f ), 0.1f, Random.Range( -CreateScene.planeWidth / 2f, CreateScene.planeWidth / 2f ) );
	
			_internalCubes.Add( cubeN );
			cubeN.transform.SetParent( _cubeHolder.transform, true );
		}
	}	
}

Make a new GameObject for this, attach it, and run – you should get a random scattering of cubes. Great! Exciting! Easy (that’s the point ;)).

[Screenshot]

Class: UnityCubeMover

And the core: something that will move the cubes.

In a Unity game, traditionally this code would be placed in a “Cube” component attached to the cubes themselves. This isn’t a good way to write games/apps, and is a primary reason for adding an ECS to Unity. Even in Unity we often ignore that convention and place the code into a single shared class, as we do here. So we’re cheating a little: we know that eventually we want the code in a single class (that will be ported to become an ECS Processor).

First we need a way of knowing how fast we want each cube to move, and in which direction. I could try pre-rotating each cube to a random direction, and moving it “forwards” … but I chose to store an X,Y pair for the direction + speed instead.

using UnityEngine;
using System.Collections;

public class UnityCubeVelocity : MonoBehaviour
{
	public Vector3 velocity;
}

…if that class looks suspiciously like it’ll become a Component during the porting later on, then good.

Now we can make the main class which once per frame looks at all the cubes, and moves each one by random amounts.

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class UnityCubeMover : MonoBehaviour
{
	public float maxXYSpeedPerSecond = 10f;

	protected List<GameObject> _internalCubes;

	public void Update()
	{
		if( _internalCubes == null )
			return;

		foreach( GameObject cube in _internalCubes )
		{
			UnityCubeVelocity v = cube.GetComponent<UnityCubeVelocity>();
			if( v == null )
			{
				v = cube.AddComponent<UnityCubeVelocity>();
				v.velocity = new Vector3( Random.Range( -maxXYSpeedPerSecond, maxXYSpeedPerSecond ), 0f, Random.Range( -maxXYSpeedPerSecond, maxXYSpeedPerSecond ) );
			}

			Vector3 _translationThisFrame = v.velocity * Time.deltaTime;

			cube.transform.position += _translationThisFrame;
		}
	}
}

This class needs to find the cubes we created, and since they’re created procedurally, we can’t use Unity’s normal mechanism for sharing this data (don’t use Unity’s Find; never use Unity’s Find unless you are absolutely desperate).

So, we’ll add a method to assign the cubes, and move the “AddComponent” call into it:

public class UnityCubeMover : MonoBehaviour
{
...
	public void SetInternalCubes(List<GameObject> cs)
	{
		_internalCubes = cs;

		foreach( GameObject cube in _internalCubes )
		{
			cube.AddComponent<UnityCubeVelocity>();
		}
	}
...
}

…and we’ll call that method from our PostSceneCreateCubes class:

public class PostSceneCreateCubes : MonoBehaviour
{
...
	public UnityCubeMover cubeMover;
...
	void Start()
	{

... original contents of this method go here ...

		if( cubeMover == null )
		{
			GameObject go_cubemover = new GameObject( "AUTO-CREATED CUBE MOVER" );
			cubeMover = go_cubemover.AddComponent<UnityCubeMover>();
		}
		cubeMover.SetInternalCubes( _internalCubes );
		_internalCubes = null;
	}
}

Add a new GameObject for the mover, and add UnityCubeMover. Run this, and … your cubes will jiggle like tiny ants.

Sadly, they’re not moving around smoothly – because we keep changing them. Every. Single. Frame. Forever. Without. Giving. Any. Time. To. Breathe.

Improving the scene

Smoother movement

It’s too quick for us to know if things are working. For a quick hack, let’s only change the direction/velocity once every N frames.

I’m not happy with this hack – I’m sure there are cleaner, clearer ways of achieving it – but it’s engine-agnostic, it was quick to hack together, it worked first time, and it hasn’t caused any problems yet. It’s just ugly and hard to read.

public class UnityCubeMover : MonoBehaviour
{
...
	protected static int _frameSkip = 60;
	protected int _framesSkippedSoFar = _frameSkip;
	
...
	public void Update()
	{
		if( _framesSkippedSoFar >= _frameSkip )
		{
			_framesSkippedSoFar = 0;

			foreach( GameObject cube in _internalCubes )
			{
				UnityCubeVelocity v = cube.GetComponent<UnityCubeVelocity>();
	
				v.velocity = new Vector3( Random.Range( -maxXYSpeedPerSecond, maxXYSpeedPerSecond ), 0f, Random.Range( -maxXYSpeedPerSecond, maxXYSpeedPerSecond ) );
			}
		}
		else
		{
			_framesSkippedSoFar++;
		
			... original code from the Update method goes here ...
		}
	}
}

Run again, and you should get smooth movement, with them all changing direction suddenly every 60 frames (every 2 seconds, if you’ve set Unity to a 30 FPS cap).
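Since I said I wasn’t happy with the frame-counting hack: one cleaner (and still engine-agnostic) alternative would be a time accumulator, which is independent of frame rate. This is a hypothetical sketch – not the code this article uses – with the per-cube work elided:

	using UnityEngine;

	public class UnityCubeMover : MonoBehaviour
	{
		public float secondsBetweenDirectionChanges = 2f;
		private float _secondsSinceDirectionChange;

		public void Update()
		{
			_secondsSinceDirectionChange += Time.deltaTime;
			if( _secondsSinceDirectionChange >= secondsBetweenDirectionChanges )
			{
				_secondsSinceDirectionChange = 0f;
				// ...re-randomize each cube's velocity here...
			}
			else
			{
				// ...move each cube by velocity * Time.deltaTime here...
			}
		}
	}

…the trade-off is that a frame counter is deterministic per-frame (handy when porting to a fixed-tick ECS later), while a timer behaves identically at any FPS.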

Colouring-in the cubes

White cubes on a white background are annoyingly hard to see. We’ll make a Unity Component that handles all the visual and animation on each individual cube, and attach it per-cube.

class: WanderingCube

using UnityEngine;
using System.Collections;

public class WanderingCube : MonoBehaviour
{
	public Vector3 maxAbsOfPositions;

	public static Color[] availableColors = new Color[] { Color.blue, Color.grey, Color.blue, Color.cyan, Color.yellow };
	public static Material[] availableMaterials;

	public void Awake()
	{
		if( availableMaterials == null )
		{
			availableMaterials = new Material[ availableColors.Length ];
			for( int i = 0; i < availableColors.Length; i++ )
			{
				availableMaterials[ i ] = new Material( Shader.Find( "Standard" ) );
				availableMaterials[ i ].color = availableColors[ i ];
			}
		}
		GetComponent<MeshRenderer>().sharedMaterial = availableMaterials[ Random.Range( 0, availableMaterials.Length ) ]; // NB: the int overload of Random.Range excludes the max, so pass Length, not Length - 1
	}
}

…and attach it when we create our initial cubes:

public class PostSceneCreateCubes : MonoBehaviour
{
...
	public UnityCubeMover cubeMover;
...
	void Start()
	{
...
		for( int i = 0; i < targetNumCubes; i++ )
		{
			GameObject cubeN = GameObject.CreatePrimitive( PrimitiveType.Cube );
			cubeN.name = "Cube-" + i;
			cubeN.transform.position = new Vector3( Random.Range( -CreateScene.planeWidth / 2f, CreateScene.planeWidth / 2f ), 0.1f, Random.Range( -CreateScene.planeWidth / 2f, CreateScene.planeWidth / 2f ) );
			WanderingCube wcube = cubeN.AddComponent<WanderingCube>();
			wcube.maxAbsOfPositions = CreateScene.planeWidth / 2f * Vector3.one;
		}
...
	}
}

Run this, and you get multiple colours. Lovely.

[Screenshot]

Stopping them leaving the play-area

With these random procedural simulations we’ll often want to leave the code running for a while, looking out for problems. At the moment, the cubes eventually wander off the playing area. The simplest old-school solution is to teleport them back onto the other side whenever they fly off.

I’m putting this into Unity’s LateUpdate() so that it happens AFTER all of our other game-logic, and we don’t have to worry about it interfering with collision detection (much).

using UnityEngine;
using System.Collections;

public class WanderingCube : MonoBehaviour
{
...
	public void LateUpdate()
	{
		Vector3 pos = transform.position;
		
		if( pos.x > CreateScene.planeWidth / 2f )
			pos.x -= 2f * CreateScene.planeWidth / 2f;
		else if( pos.x < -1f * CreateScene.planeWidth / 2f )
			pos.x += 2f * CreateScene.planeWidth / 2f;
	
		if( pos.z > CreateScene.planeWidth / 2f )
			pos.z -= 2f * CreateScene.planeWidth / 2f;
		else if( pos.z < -1f * CreateScene.planeWidth / 2f )
			pos.z += 2f * CreateScene.planeWidth / 2f;

		transform.position = pos;
	}
}

Collision Detection 1: using Unity’s in-game CD for debugging

We’re going to use two parallel forms of collision detection. The first one is for visualisation and is purely observational: all collisions are allowed, interpenetration happens all the time, no effect on gameplay. This will use Unity’s built-in “Physics Trigger” concept.

The second form is for gameplay, and we’ll put more effort into it. It will be a full collision detection-and-prevention system that actually blocks things from moving, makes them bounce off each other, etc.

Visual debugging: cubes that detect collisions

We could set a single colour to every cube – or randomize the colours – but we’re going to go one better. Each cube will have a colour that indicates what’s happening to that cube.

Visual debugging is infinitely faster and more efficient than console-debugging or using the debugger (if you like this idea, go read @redblobgames’ articles on game programming; Amit likes to use visual interactive diagrams in his articles. Very time-consuming to make, but wonderful to play with).

To do this, we:

  1. Make all cubes blue to start with
  2. Create a “collided” material, and colour it red
  3. Add the necessary physics things in Unity to detect collisions
  4. When a cube collides, we change its material to the collided one

using UnityEngine;
using System.Collections;

public class WanderingCube : MonoBehaviour
{
...
	public static Color[] availableColors = new Color[] { Color.blue };
	public static Material[] availableMaterials;
	public static Material errorMaterial;
...
	public void Awake()
	{

... original contents of this method goes here ...

		BoxCollider col = gameObject.GetComponent<BoxCollider>(); // needed to receive OnTrigger callbacks
		if( col == null )
			col = gameObject.AddComponent<BoxCollider>();
		col.isTrigger = true;

		Rigidbody rb = gameObject.AddComponent<Rigidbody>(); // needed to receive OnTrigger callbacks
		rb.useGravity = false;
	}
...
/** This is a piece of GUI/UX: we use this to make cubes intelligently warn us whenever they
    inter-penetrate; our code should prevent this from happening .... "should" ... muahahahahaha!
    
    (if you've ever had the pain of race conditions in multi-threaded code, you may see where I'm
    going with this...)
    */
	public void OnTriggerEnter(Collider other)
	{
		GetComponent<MeshRenderer>().sharedMaterial = errorMaterial;
	}

}

This will work, but … the cubes go red and never go back to blue.

And there’s another problem – depending upon your version of Unity, the cubes MIGHT detect the plane itself as “colliding” – NB: due to Unity’s generally weird physics, it’s not guaranteed whether this will or won’t happen on your install.

So we do two changes:

  1. Use a co-routine to automatically animate a change: switch back to the original material after a few seconds
  2. Place a Unity “tag” on each cube and tell the physics to ignore collisions unless they are “tagged” as cubes

Pre-tag objects

First, because Unity’s editor is old-fashioned about this, manually pre-create the tag in your editor:

Menu: Edit – Project Settings – Tags and Layers – … and select “Tags” at the top, then hit the plus button to create a new one

Call your tag WanderingCube, and then add code to the WanderingCube class so that new cubes automatically tag themselves:

public class WanderingCube : MonoBehaviour
{
...
	public void Awake()
	{
		tag = "WanderingCube";
		...
	}
...
}

Flash the collision indicator – but only for cube/cube collisions

using UnityEngine;
using System.Collections;

public class WanderingCube : MonoBehaviour
{
...
	private Material _originalMaterial;
	private Coroutine _coro_ResetMaterial;
...
	public void OnTriggerEnter(Collider other)
	{
		if( other.gameObject.tag == tag )
		{
			if( _coro_ResetMaterial != null )
			{
				StopCoroutine( _coro_ResetMaterial );
				GetComponent<MeshRenderer>().sharedMaterial = _originalMaterial;
			}

			_originalMaterial = GetComponent<MeshRenderer>().sharedMaterial;

			GetComponent<MeshRenderer>().sharedMaterial = errorMaterial;

			_coro_ResetMaterial = StartCoroutine( "ResetMaterialToOriginal" );
		}
	}

    /** Using a co-routine allows us to display it for more than a single frame, ie human-friendly
    but fire-and-forget in the original method that triggers it
    */
	public IEnumerator ResetMaterialToOriginal()
	{
		yield return new WaitForSeconds( 0.4f );
		GetComponent<MeshRenderer>().sharedMaterial = _originalMaterial;
	}
}

Run that.

Now you should have cubes merrily running around, flashing red when they collide, but otherwise happily passing through each other. This is ideal – it sets the stage for us to add our own collision-detection in the next stage.

[Screenshot]

Collision Detection 2: using Unity’s in-game CD for Gameplay

We’ll use Unity at first, to test the code, then later switch to our own implementation (which can execute entirely inside the ECS). However, there’s another reason not to rely upon Unity’s CD…

Unity3D has no collision detection system. Instead, it re-uses the 3rd-party physics engine built into Unity, and derives an approximation of CD from that.

If you move a transform.position in Unity, there is no direct way to find out if it will collide, where it will collide, or what it collides with.

Most of the time, no-one cares about the difference between “collision detection” and “approximate collision detection by asking the physics engine”. You can call the physics query methods outside the physics loop (which you should never do – it’s inaccurate) and Unity doesn’t complain. Occasionally it’ll give you noticeably wrong results.

In particular:

  • RayCast
  • BoxCast
  • RayCastAll
  • …etc

…all run instantly, using approximated data from the most-recent physics tick – which is usually (always?) out of sync with rendering. I’ve got some interesting physics-heavy projects that demo this and some of the odd side-effects. IMHO the engine should throw an exception when any of these are used outside the physics loop – you need to be aware that the data is wrong.

Making Collision Detection easy: the rise of the AABB

Collision detection has been well-documented in computer games for more than 30 years. However, doing it correctly is a bit of a pain, and a lot of code (that I can’t be bothered to write). We’re going to use a cheap hack to the scene/game design that allows us to write a much simpler (but 100% correct) CD later on.

Google it for more info, but the absolute minimum is that you have to “sweep” your objects through space, and compare the overlapping swept shapes. Rather than asking “do the squares overlap?” you have to ask “do the swept squares overlap?”.

[Figures 2dsweep-0 and 2dsweep-1: no collision detected – a MISSED collision!]

…using swept shapes instead, you detect the collisions because your new shapes overlap:

[Figure 2dsweep-2: the swept shapes overlap]

We can massively simplify this: if we only use cubes (oooh! What a coincidence!) and we only allow AA (axis-aligned) movement, we can trivially calculate the swept-cube as a rectangular, stretched cube. Or, in 2D, we move from comparing squares to comparing rectangles – and all programming languages have built-in methods for comparing if pairs of rectangles overlap (i.e. collide).

[Figure 2dsweep-3: swept axis-aligned squares become rectangles]
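As a concrete sketch of that rectangle comparison (a hypothetical helper, not code used later in this article), Unity’s built-in Rect type already provides the overlap test. This compares a swept moving cube against a stationary one; two moving cubes would need their relative motion instead:

	using UnityEngine;

	public static class SweptAABB
	{
		/** Footprint of a cube at 'pos', stretched along an axis-aligned
		    'translation' (only one of x/z is non-zero), as a 2D rect on the plane. */
		public static Rect SweptFootprint( Vector3 pos, Vector3 translation, float size )
		{
			float xMin = Mathf.Min( pos.x, pos.x + translation.x ) - size / 2f;
			float xMax = Mathf.Max( pos.x, pos.x + translation.x ) + size / 2f;
			float zMin = Mathf.Min( pos.z, pos.z + translation.z ) - size / 2f;
			float zMax = Mathf.Max( pos.z, pos.z + translation.z ) + size / 2f;
			return Rect.MinMaxRect( xMin, zMin, xMax, zMax );
		}

		public static bool WouldCollide( Vector3 posA, Vector3 translationA, Vector3 posB, float size )
		{
			Rect swept = SweptFootprint( posA, translationA, size );
			Rect other = SweptFootprint( posB, Vector3.zero, size ); // stationary cube
			return swept.Overlaps( other );
		}
	}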

This requires a tweak:

public class UnityCubeMover : MonoBehaviour
{
...
	public void Update()
	{
...
			foreach( GameObject cube in _internalCubes )
			{
				UnityCubeVelocity v = cube.GetComponent<UnityCubeVelocity>();

				if( Random.Range(0,2) < 1 )
				{
					v.velocity = new Vector3( Random.Range( -maxXYSpeedPerSecond, maxXYSpeedPerSecond ), 0f, 0f );
				}
				else
				{
					v.velocity = new Vector3( 0f, 0f, Random.Range( -maxXYSpeedPerSecond, maxXYSpeedPerSecond ) );
				}
			}
...
	}
...
}

…now the cubes should only move up-down or left-right.

Preventing collisions

We need to do three things: detect collisions, prevent them, and … show that we’ve prevented them.

To detect collisions, we’ll create a “will moving this cube cause a collision?” method, and if the answer is “yes”, we’ll simply block it.

public class UnityCubeMover : MonoBehaviour
{
...
	public bool blockCollisions = true;
...
	public void Update()
	{
...
				Vector3 _translationThisFrame = v.velocity * Time.deltaTime;
				
				GameObject collidee;
				bool _willCollide = wouldMovingCubeCollide( cube, _translationThisFrame, out collidee );
				
				if( _willCollide && blockCollisions )
				{
					//Debug.Log( "Cube " + cube + " (bounds.extends = " + cube.GetComponent<BoxCollider>().bounds.extents + ") WOULD HAVE hit (but I prevented it) " + collidee + " alng translation = " + _translationThisFrame );
					Debug.DrawRay( cube.transform.position + Vector3.up, _translationThisFrame );
				}
				else
					cube.transform.position += _translationThisFrame;
...
	}
}

…to fulfil that, we use the simplest possible collide-check: we ask Unity physics to do all the hard work (although we know this will be slightly inaccurate):

public class UnityCubeMover : MonoBehaviour
{
...
	/**
	Uses Unity physics to do arbitrary collision.	
	In testing, it ends up being wrong about 1% of the time, due to the frame-rate difference of physics vs rendering.
	*/
	protected bool wouldMovingCubeCollide( GameObject movingCube, Vector3 translation, out GameObject objectCollidedWith )
	{
		UnityCubeVelocity v = movingCube.GetComponent<UnityCubeVelocity>();
	
		RaycastHit hit;
		if( blockCollisions && Physics.BoxCast(
				movingCube.transform.position,
				movingCube.GetComponent<BoxCollider>().bounds.extents, /* Unity's docs for this method are bad */
				translation,
				out hit,
				Quaternion.identity,
				/* check 10% ahead because Unity floats are rather inaccurate; feel free to use 1.0 instead */ 1.1f * translation.magnitude ) )
		{
			objectCollidedWith = hit.collider.gameObject;
			return true;
		}
				
		objectCollidedWith = null;
		return false;
	}
}

You can test it by toggling the “blockCollisions” checkbox in the Inspector while it’s running – but that’s not really clear enough for me.

If you run this code, the cubes will move around, and gradually come to a halt as they avoid collisions. Then, when the directions re-randomize, they’ll (mostly) all start moving again. Since we’re preventing potential collisions by stopping dead, even cubes that look like they might not collide get stopped – instead of going “as far as I could and stopping at the last moment”. Use the Rays that I’m drawing in the Scene view to confirm this in cases where it’s unclear why individual (pairs of) cubes have stopped.

[Screenshot]

Displaying prevented-collisions

Let’s make this easier to see. We’ll allow cubes to turn an extra colour: yellow, for when they are frozen to avoid a collision.

public class UnityCubeMover : MonoBehaviour
{
...
	public void Update()
	{
...
		if( _framesSkippedSoFar >= _frameSkip )
		{
...
		}
		else
		{
			_framesSkippedSoFar++;
...

...original method contents here...

			foreach( GameObject cube in _internalCubes )
			{
				if( _willCollide && blockCollisions )
				{
					cube.GetComponent<WanderingCube>().OnCollisionWasPrevented();
				}
			}
		}
	}
...
}

…and upgrade the WanderingCube to support that:

using UnityEngine;
using System.Collections;

public class WanderingCube : MonoBehaviour
{
...
	public static Material errorMaterial, almostCollidedMaterial;
...
	public void Awake()
	{
...
		if( availableMaterials == null )
		{
			errorMaterial = new Material( Shader.Find( "Standard" ) );
			errorMaterial.color = Color.red;

			almostCollidedMaterial = new Material( Shader.Find( "Standard" ) );
			almostCollidedMaterial.color = new Color( 0.5f, 0.5f, 0 );
			...
		}
	}
...
/** This is a piece of GUI/UX: we use this to make cubes intelligently tell us when
they've been FORCED to stop moving by the collision-detection system
*/
	public void OnCollisionWasPrevented()
	{
		if( _coro_ResetMaterial != null )
		{
			StopCoroutine( _coro_ResetMaterial );
			GetComponent<MeshRenderer>().sharedMaterial = _originalMaterial;
		}

		_originalMaterial = GetComponent<MeshRenderer>().sharedMaterial;

		GetComponent<MeshRenderer>().sharedMaterial = almostCollidedMaterial;

		_coro_ResetMaterial = StartCoroutine( "ResetMaterialToOriginal" );
	}
}

Now you should find that cubes are blue when moving, flash red briefly when they collide, and stop and turn yellow when they MIGHT collide:


Wait – why are we seeing red? We should never see red!

Correct. There are three things that allow collisions to happen despite our precautions.

First: at the end of each frame we’re teleporting cubes that have gone off the edge of the playing field back on to the other side. Since we ignore this in the CD code, we expect to see a few random red flashes around the edges of the playing area. We could fix that, but … it’s useful. It’s a known error, and it proves that our “VISUALLY detect collision, even if the CD code has failed” code is actually running and working correctly. If we ever get a version of the demo where nothing is ever going red, we know we’ve accidentally broken something.

Secondly, there’s the problem of Unity physics approximating the CD. Each case where you get a flash of red is where there was a near-miss that wasn’t going to happen during the last physics step (which is where Unity’s CD approximation gets its data from), but did happen during the next physics step (which happens out of sync with our game-logic).

Finally … there’s a bug. I said at the start:

“I’m putting this into Unity’s LateUpdate() so that it happens AFTER all of our other game-logic, and we don’t have to worry about it interfering with collision detection (much).”

…that LateUpdate that’s doing the teleport is also overriding the on-screen position of cubes, but NOT changing the ECS’s version of the data. Over time, this gets more and more corrupt. We’ll delete that code from WanderingCube.LateUpdate(), and re-create it inside the (new, ECS version of) CubeMover.

When we port this identical code to an ECS, and stop using Unity physics, we should expect to see all the red flashes in the middle of the playing area go away…

Converting this demo into an ECS demo

With all the setup out of the way, on to the ECS stuff. There are some obvious parts to this, and some non-obvious ones. We’re NOT going to throw everything away – we’ll be keeping some of the legacy code (things that are specific to the game-engine, and work well living outside the ECS). In particular: the animation of cube collisions (blue, red for collided, yellow for blocked) is useful to keep outside the ECS – it lets us debug our ECS without worrying that the debugging code itself is broken.

Next…

Patreon backers can access part 2 and later articles immediately – link here: requires password.

In part 1, we created a test scene for doing collision detection and rendering in Unity, in a way that we can then port to an ECS. In part 2, we’ll do the actual conversion, and create a simple ECS that exists (and runs) independently inside Unity, but is managed by Unity classes.

#Gamification, #StackOverflow: How they create new sites from the community while blocking spam

If you’re not a StackOverflow user …

  • It’s rapidly become the go-to place for answers to precise technical questions
  • It has a bold, “points-based reputation controls everything”, moderation system (e.g. get upvoted enough and you become a moderator – there is no human intervention!)
  • It worked so well they expanded to infinitely many clone sites, for any topic you can think of, collectively termed “StackExchange”
  • …and the process for creating a new clone is itself points-based (not human moderated)

Historically this kind of setup has been a recipe for disaster – too easily gamed (taken advantage of), both by selfish users and purely malicious griefers. SO has had, and still has, many problems – some of the design choices that worked early on caused more problems than they solved as it scaled up in size. But overall it worked, and continues to work, very well.

The main site is easy to understand – by the time you’ve earned 10,000 reputation you are so enmeshed in the community that it’s probably safe to give you moderator tools. (Note “probably” – it causes serious damage in a large minority of cases, but overall it works well, and it’s very cheap. That generally means no corporate sponsors/advertisers/subscriptions are needed.)

But the process for creating new sites is less obvious, more convoluted, and – for the brand – potentially a lot more dangerous. I’m involved in two sites going through the process right now, and it’s interesting to compare them.

Site creation overview

  1. A special site – Area51 – allows you to create new sites, with this FAQ
  2. Your new site goes into phase 1: “Definition”
  3. If it passes, it goes to phase 2: “Commitment”
  4. Once it’s live, it’s monitored for a while, and if the site turns out to be a bad idea / disaster / failure, it gets shut down
  5. If it passes the probationary period, the site is live

SO is worried about a bunch of things. From a game-design and community-management perspective, I’d expect them to focus on e.g.

  • Will anyone use the site? (ask questions)
  • Are there any experts around to answer the questions?
  • Is the community large enough to be self-sustaining?

“Definition”

This checks the first worry – will anyone ask any questions?

To prove this programmatically, they force you to ask 40 “Good” questions.

To determine if a question is good, it has to receive 10 more upvotes than downvotes. The threshold is arbitrary, but it means that e.g. 10 different people felt it was a good question and no-one thought it bad (or more voters on both sides, so long as the net score is at least +10).

Riiight … so you write 40 questions, get 9 friends to upvote all of them, and away you go … right? Easily gamed/abused.

Wrong. Area51 uses a modified version of SO’s voting. Each user is limited to asking 5 questions – which is fair and harsh. Fair because: if there’s really a community ready to go, it will have questions from many people (at least 8 people will have to pose good questions to get the 40 needed). Harsh because: most people will struggle to think up more than 1 or 2 good questions.

Still easily gameable, but now enough to dissuade idle / bored people.

Similarly each user can only upvote 5 questions. Choosing to up/downvote is very easy, so this isn’t “harsh” at all. It’s (almost) equally gameable: other users (anyone, anywhere) can register and counter-game by downvoting bad questions, forcing the collaborators to work harder.
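As a sanity-check, the rules above are easy to state in code. Here’s a minimal sketch (illustrative only – not StackExchange’s actual implementation; all names are mine) of the Definition-phase pass condition:

```python
# Area51 "Definition" phase, as described above:
#  - a proposal needs 40 "good" questions
#  - a question is "good" once (upvotes - downvotes) >= 10
#  - each user may ask at most 5 questions, and spend at most 5 upvotes

ASK_LIMIT = 5
UPVOTE_LIMIT = 5
NET_SCORE_NEEDED = 10
GOOD_QUESTIONS_NEEDED = 40

def definition_passes(questions):
    """questions: list of (author, upvoters, downvote_count) tuples."""
    asked = {}   # questions posed per author
    spent = {}   # upvotes spent per voter
    good = 0
    for author, upvoters, downvotes in questions:
        asked[author] = asked.get(author, 0) + 1
        if asked[author] > ASK_LIMIT:
            continue  # over the 5-question limit: question disallowed
        ups = 0
        for voter in upvoters:
            spent[voter] = spent.get(voter, 0) + 1
            if spent[voter] <= UPVOTE_LIMIT:
                ups += 1  # a voter's 6th-and-later upvotes are wasted
        if ups - downvotes >= NET_SCORE_NEEDED:
            good += 1
    return good >= GOOD_QUESTIONS_NEEDED
```

Note what the numbers imply: with no downvotes, a proposal needs at least 8 distinct askers (40 ÷ 5) and at least 80 distinct upvoters (40 × 10 ÷ 5) – a genuinely high bar for a fake “community” to manufacture.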

Sadly, this scheme has a non-obvious element – the need to get 40 questions to 10 net upvotes each – that MOST users fail to understand, and SO has done nothing to fix. On SO.com, more upvotes is always better; on Area51 the 11th upvote (and every one after it) is not only worthless, but actively delays the proposal, because it squanders upvotes the voter could have spent on other questions.

So, for instance, the Computer Science Educators proposal was popular but spent many months failing to pass this phase, because people arrived, upvoted the top questions, felt they’d helped … and left. Because of the design flaw, not only were they “not helping”, but the surge of highly-voted questions above the fold encouraged the next wave of newcomers to do the same. #facepalm.

By comparison, the IMHO less valuable Microbit proposal, for a politically driven educational tool that distracts from CS education, appears to have been supported by people with better understanding of the rules, and got through much more quickly. I don’t mind a microbit SE site, but … not if something so niche and political gets through at the cost of the CS Educators site (because it diverts attention away).

Solution: SO should change Area51’s visual design so that any question with 11 or more upvotes is displayed as “accepted” instead of a number, and the total number for each question is shown in smaller type somewhere else on the page.

“Commitment”

Guarding against the second concern is phase 2. With 80 people signed up as “committed” to microbit, and only 65 “committed” to CS Educators, microbit has now overtaken the older proposal. Right? Wrong.

(Screenshots: commitment panels for the Microbit and CS Educators proposals)

Clicking the link at the bottom right of each info panel shows that SO takes a different approach at this stage: they use not one but three measurements – it’s got harder.

Microbit would be winning here, except … SO judges you on the weakest link in the chain. And two of the three criteria are biased against people who don’t use SO much:

(Screenshots: the three pass/fail measurements for the Microbit and CS Educators proposals)

Measure 1: Total number of committers

Raw score of “how many people have clicked a button to say they believe this is a good site worth adding to the web”. Very much in the vein of SO.

EXCEPT: I see no “anti-commitment”; true SO ideals would suggest that we have a way to say “no, I don’t think this deserves a site” – and have it cost you some of your positive influence on committing to sites you do like. This is the essence of what made SO successful, and it’s intriguing that they’ve dropped it here.

I suspect (guess) that the relative infrequency of people proposing/committing to new sites (a few a year) vs voting/asking/answering SO questions (hundreds a year) means that the danger of people putting in negative votes at no effective personal cost was considered too likely. Or the cost to individuals of gaining enough positive commitments to “earn” the right to anti-commit was too high.

Measure 2: Require committers with > 200 reputation elsewhere

This is the classic gating strategy SO doubled down on when they became too successful / too big: a requirement for a minimum amount of positive reputation before enabling basic features that spammers tried to (ab)use.

Earning 200 rep on “any” site is much too easy, and I think they’ve made a mistake here. It ought to be something like “earn 200 rep on 2 different sites, or 500 rep on one site”. I say this because over time SO rep has continual inflation problems, just like real-world currencies in live economies. A 200 rep barrier on one site is no barrier to people gaming the system – there are now so many obscure SE sites that it’s easy to find a gameable one. But finding multiple gameable ones would be substantially harder.
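My suggested replacement is easy to state precisely. As a sketch (this is my proposed rule, not SE’s actual policy):

```python
# Proposed stricter gate: 200+ rep on two different sites,
# OR 500+ rep on a single site.
def clears_rep_gate(rep_by_site):
    sites_with_200 = sum(1 for rep in rep_by_site.values() if rep >= 200)
    return sites_with_200 >= 2 or any(rep >= 500 for rep in rep_by_site.values())
```

Gaming one obscure site up to 200 no longer suffices; you’d need either sustained activity on a second site, or 2.5× more effort on the first.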

Measure 3: fudge factor

…this is the solution to the problems with Measure 2. I’d prefer a better Measure 2, but I can see value in having Measure 2 be very simple to describe, and then to fix the problems later.

Official stance on measure 3

The one piece of data we have that tells us a lot and is hard to game is a user’s reputation on the existing sites.

If you have a lot of reputation, you’re much more likely to actively use the site, because you’ve shown that you actively use similar sites

If you have a significant amount of reputation across multiple sites, you’re even more likely to actively use the site, because you’ve shown that you actively use many such sites

On the other hand, if you’re some random person off the internet with no reputation, you’re very hard to quantify but there’s a good chance that you won’t contribute very much

Here’s the formula we have right now. It’s almost certainly wrong and we’ll be tweaking it as we go:

Correct code to add a mouseover/mouse-hover/pointer-enter to #unity3d

This should be a 1-line feature, but Unity screwed it up. You can do it in approx. 12 lines of code, but I couldn’t find anywhere showing how, so I’ve written it up and you can copy/paste.

This is missing from the Unity docs (as of summer 2016).

This is missing from the Unity API’s and SDK (they implemented the code for “onClick” but failed to implement the code for the other GUI events – onHover / onEnter / onExit – etc).

The most-popular code on the internet is much too long and over-complicated, and requires creating new classes (that you don’t need) which pollute your code-base.

The less-popular but mostly correct code only works for Unity 4.6, and has some bugs that will prevent it working in most cases.

So I fixed it…

How it should work – but doesn’t

This is Unity’s code for adding a “click” handler to a button in their new (post-4.5) GUI:

Button b = ... // your button, from your code
b.onClick.AddListener( () => { YOUR_CODE_HERE } );

So the code for adding a “hover” handler (or, in 99% of SDK’s and platforms, an “enter/exit” pair of handlers) should be:

Button b = ... // your button, from your code
b.onEnter.AddListener( () => { YOUR_CODE_HERE } );
b.onExit.AddListener( () => { YOUR_CODE_HERE } );

Or, if they wanted to go with the 1% solution, that is only used by CSS:

b.onHover.AddListener( () => { YOUR_CODE_HERE } );

None of these work, because Unity’s API is incomplete.

Correct solution, Unity v5.0 – v5.4

Instead, you have to implement the missing code from Unity yourself. You can use this to do manual click detection, and to distinguish between mouse down and mouse up (a “click” is traditionally a pair of events: down followed by up. This is standard in all windowing systems and GUI API’s).

The most commonly needed case is hovering, so you’ll need two pieces of code, one for “mouse moves over” (enter) and one for “mouse moves away” (exit).

/**
 * replace the code "YOUR_CODE_HERE_1" and "YOUR_CODE_HERE_2"
 */
Button b = ... // your button, from your code

EventTrigger trigger = b.GetComponentInParent<EventTrigger>();
if( trigger == null ) trigger = b.gameObject.AddComponent<EventTrigger>();

EventTrigger.Entry entryEnter = new EventTrigger.Entry();
entryEnter.eventID = EventTriggerType.PointerEnter;
entryEnter.callback.AddListener( (eventData) => { YOUR_CODE_HERE_1(); } );
trigger.triggers.Add(entryEnter);

EventTrigger.Entry entryExit = new EventTrigger.Entry();
entryExit.eventID = EventTriggerType.PointerExit;
entryExit.callback.AddListener( (eventData) => { YOUR_CODE_HERE_2(); } );
trigger.triggers.Add(entryExit);

#unity3d #missingdocs: CanvasRenderer.SetMesh() – making it work (mostly)

This once-obscure method – which, I guess, is the low-level call used by most of the new Unity GUI – is now the only way of drawing meshes in GUIs. The previous options have been removed, with brief comments telling you to use .SetMesh instead. Half of the DrawMesh / DrawMeshNow methods have also been removed (no explanation given in the docs), which were my other go-to approach.

Unfortunately, no-one has documented SetMesh; it has significant bugs, and it breaks core Unity conventions. This makes it rather difficult to use. Here are the docs I’ve worked out by trial and error…

Continue reading

What one #unity3d private class should be made public? @shawnwhite

Shawn asked on Twitter:

We only get to pick ONE? :).

How do we decide?

There are two ways to slice this. There’s “private API's that would hugely benefit us / our projects / everyone’s projects / 3rd party code that I use (e.g. other people’s Asset Store plugins I buy/use)”. We can judge that largely by looking at which private API’s I’ve hacked access to, or decompiled, or rewritten from scratch.

Then there’s “what CAN’T I access / hack / replace?”. That’s a harder question, but leads to the truly massive wins, I suspect.

Stuff I’ve hacked access to

The Project/Hierarchy/Scene/Inspector panels

So, for instance, I made this (free) little editor extension that lets you create new things (scripts, materials, … folders) from the keyboard, instead of having to click tiny buttons every time.

There are no public API’s for this; that’s a tragedy. Most of these Unity panels haven’t been improved for many years, and are a long way behind the standard set by Unity’s other improvements. They “work”, but don’t “shine”.

What could I do with this?

Well … a few studios I know have completely rewritten the Scene Hierarchy panel, so that:

  • it does colour-coding of the names of each gameobject
  • clicking a prefab selects both the prefab and any related prefabs, or vice versa, or highlights them
  • added (obvious) new right-click options that are missing from default Unity Editor
  • automated some of the major problems in Unity’s idea of “parenting” (parenting isn’t always safe to do; you can enforce / protect this with a custom scene hierarchy)
  • made it put an “error” icon next to each gameobject that is affected by a current error.
  • …etc

All massively useful stuff that helps hour-to-hour development, reducing dev time and cost.

It’s all “possible” right now by writing lots of horribly ugly and long-winded boilerplate code, and using the antiquated Editor GUI API.

But to make it play nicely with the rest of Unity requires also hacking Unity API’s for the various panels/windows, and detecting popups (and adding your own popup classes, since Unity keeps most of theirs private), and detecting drags that started in one panel but moved to another, detecting context-sensitive stuff that is not exposed by current API’s, … etc.

A better List editor

The built-in sub-editor (like a PropertyDrawer – see below) is very basic – really a “version 0.1” interface.

There is a much nicer one, that does what most Unity developers need – but it’s private and buggy (last time I tried it, it corrupted the underlying data. That’s presumably why it’s still private?)

Editor co-routines

Co-routines work perfectly in the Editor. (EDIT: thanks to Shawn White for the info): Unity doesn’t use co-routines outside of runtime; what appears to use them is OS-provided multi-threading. Strangely, when using that, I haven’t seen any of the ERRORs that Unity usually triggers when its not-threadsafe code is accessed from other threads – something weird happening in the OS?

Why doesn’t Unity support co-routines in the Editor?

I’ve no idea. There are many people who’ve re-implemented co-routines in the Editor, exactly as per Unity’s runtime co-routines. As a bonus, you end up with a much better co-routine: you can fix the missing features. But there are some strange edge-cases, e.g. when Unity is reloading assemblies (which it does every time you save any source file): for a few seconds it presents a corrupt data view to any running code, and if you start running a co-routine in that time, it will do some very odd things.

Unity recently exposed some API’s to detect if Unity was in the middle of those reloads, but last time I tried it I couldn’t 100% reliably avoid them. An official implementation of Unity’s own co-routine code, that was automatically paused by Unity’s own reload-script code, would neatly fix this.

Until we have something like that, we’re forced to write two copies of every algorithm (C# doesn’t allow co-routine code to be run as a non-co-routine) so we can test in the Editor, do level editing, debug and improve runtime features, etc … which is silly.

Stuff I CANNOT hack into/around

Serialization

Unity is the only engine I’ve worked with where the core data structures and transformations are opaque, hidden, can’t be extended, can’t be debugged. Tragically: it also has many missing features, bugs, and serious performance issues.

There are good reasons why this remains in such a bad state (it’s hard to fix; meanwhile … it sort-of works, well enough to write games in – you just occasionally have to write a lot of bad code, rewrite some ported libraries, know a lot of Unity-specific voodoo, etc).

But if it were exposed – we could (I would start on it tomorrow!) fix most of the problems. I’ve done proof-of-concepts with some terrifying hackery that show it’s possible – and much of the architecture is well explored; there are other ways it could be implemented, which could be given to developers as options (some would work better for your game, others might not; you could pick and choose).

It’s too much to ask for (it intersects so much of the engine, and it would unleash a horror of potential bugs and crashes), but my number 1:

Callbacks for ALL core Unity methods

This sounds small, but would have a positive impact on a lot of projects.

c.f. my reverse-engineered callback diagram for EditorWindow in Unity:

…but we have the same problems for MonoBehaviour, GameObject, etc. Not only are the lifecycles poorly documented, they’re inconsistent and – in multiple places (c.f. the diagram above: “Open Different Scene”) – not even deterministic! It’s random which methods the Editor will call at all, let alone when.

Under the hood there must be reliable points for doing these callbacks … somewhere.

Undo

Undo has never worked in Unity. The worst cases I narrowed down to ultra-simple demos proving that Unity’s own code was broken; I logged bugs, and Unity fixed them – but the current system is a horrible mess, much too hard to use. Many methods only randomly do what they’re supposed to, and there’s no way to debug it, because the internals are hidden.

If Unity exposed the actual, genuine, underlying state-change points, we could correctly implement editor extensions that support Undo 100%. I’d be happy to also use them to write an Asset that implements “easy to use Undo”, based on how other platforms have implemented it (e.g. Apple’s design of NSDocument is pretty clear and sensible, based on lists of Change Events).

Unity could then make “Undo that works” a mandatory requirement on the Asset Store. Currently it’s listed as mandatory, but no Asset has ever been checked for it (so far as I can tell).

Not least because Unity’s own code has had such problems supporting it!

PropertyDrawer: doesn’t quite do what it claims to (yet)

Recall what I said above: most of the Editor GUI/UX itself “hasn’t been improved for many years”. Unity made it user-extensible/replaceable many years ago – so in theory you could update / replace whatever you want. There’s a huge amount we’ve been able to update and customise (although it’s very expensive in coding time, due to a lack of modern GUI API’s, sometimes it’s well worth it).

But you can only replace the Inspector for a particular Component/MonoBehaviour. You cannot say “I want to replace the Inspector for GameObjects that have Components X, Y, Z”.

Worse, if you wanted to replace e.g. the part of the Inspector that automatically draws a Vector … you can’t.

Unity had a great idea to solve one of these: Property Drawers. These would let you customise the rendering of sub-parts of an Inspector – the rendering of individual labels for member variables, list items etc.

IN THEORY this would let you write your own list-renderer that would work everywhere, and make lists very easy to use in the Editor – but only write the code once.

IN PRACTICE it was only implemented in a very basic way, and most of the things you want to use it for are blocked / inactive. There is NO WAY to fix this in user code.

(well, actually there is … c.f. . But this is a horrendous amount of work – its author did a Herculean task! – and it means you’ll never get the benefit of future Unity UX / GUI updates, if there are any).

So: big upvote for exposing more of PropertyDrawer

How to fix: upgrading Apache to 2.4 / PHP 7 breaks WordPress

WordPress had a critical update recently, and I got tonnes of emails (one from each blog I run) demanding I upgrade NOW. So I did, and upgraded Apache to latest while I was at it.

Oh dear. All sites offline. First:

Unable to connect

…then, when I fixed Apache, I got:

“Your PHP installation appears to be missing the MySQL extension which is required by WordPress.”

What happened, and how do I fix it?

Apache 2.4 upgrade is a bit dodgy in Debian

The Powers That Be decided to mess around with core parts of the config files. The right thing to do would have been to add an interactive step to the upgrade script that said: “By the way, I’ve made all your websites broken and inaccessible, because they need to be in a new subfolder. Shall I move them for you?”

Here’s the reason and the quick-fix too

Apache 2.4 brings in PHP 7.0, replacing PHP 5

PHP 5 is old, very old. Historically, PHP has also been managed in a fairly shoddy manner, very cavalier with regards to upgrades, compatibility, safety, security.

So … the standard way to run PHP is to have a separate folder on your server for each “version” of PHP. Everyone does this; PHP is so crappy that you have little alternative.

But this also means that when Debian “upgrades” to PHP 7, there is no warning that a new config file – specific to PHP 7 – has been created, and that it ignores your existing config file.

This is wrong in all ways, but it’s forced upon Linux users by the crapness of PHP. If PHP weren’t so crap, we’d have a single global PHP config file – /etc/php/config.ini – and maybe small override files per version. But nooooooo – can’t do that! PHP is far too crap.

(did I say PHP is crap yet? Decent language, great for what it was meant for – but the (mis)management over the years is truckloads of #facepalm)

So, instead, you need to copy your PHP 5 ini over the top of your PHP 7 ini – or at least “diff” them and find the settings that are “off by default” in PHP 7 but must be “on” … e.g. MySQL!

Enable them, e.g. change this:

;extension=mysqli.so

to this:

extension=mysqli.so

…and restart Apache. Suddenly WordPress is back online!

/etc/init.d/apache2 restart

WordPress plugin: insert link to latest post (in category) on your menu

Instructions:

  1. Copy/paste this into your functions.php (TODO: convert it to a standalone PHP file, and make it into a plugin you can activate/deactivate)
  2. Create a new menu item of type “custom URL”
  3. Make your URL “http://#latestpost:category_name”
    • where “category_name” is the name of the category whose latest post you want to link to
  4. Make the name whatever you want to appear on the menu
  5. Profit!

Based on an idea (with some upgrading + bugfixes for latest WordPress in 2016) from http://www.viper007bond.com/2011/09/20/code-snippet-add-a-link-to-latest-post-to-wordpress-nav-menu/

/** Adam: add support for putting 'latest post in category X' to menu: */
// Front end only, don't hack on the settings page
if ( ! is_admin() ) {
    // Hook in early to modify the menu
    // This is before the CSS "selected" classes are calculated
    add_filter( 'wp_get_nav_menu_items', 'replace_placeholder_nav_menu_item_with_latest_post', 10, 3 );
}

// Replaces a custom URL placeholder with the URL to the latest post
function replace_placeholder_nav_menu_item_with_latest_post( $items, $menu, $args ) {

        $key = 'http://#latestpost:';

    // Loop through the menu items looking for placeholder(s)
    foreach ( $items as $item ) {
 
        // Is this the placeholder we're looking for?
        if ( 0 === strpos( $item->url, $key ) )
        {
 
        $catname = substr( $item->url, strlen($key) );
        // Get the latest post
        $latestpost = get_posts( array(
            'posts_per_page' => 1,
                'category_name' => $catname
        ) );

        if ( empty( $latestpost ) )
            continue;

        // Replace the placeholder with the real URL
        $item->url = get_permalink( $latestpost[0]->ID );
        }
    }

    // Return the modified (or maybe unmodified) menu items array
    return $items;
}

Better than Civ6? Bookmarkable links for Civ4 mod: Master Of Mana sources

Master of Mana was a great game – much better than Civ5, and from what we’ve seen of Civ6, Firaxis is still playing catch-up in a few areas :).

The author has disappeared, and his website has been taken over by scammers (not even going to link it), but the community has kept the SourceForge-hosted copy of the source going, and continues to update it. The files are organised confusingly (inherited from previous projects, and from Civ4 itself, which shipped mainly as a commercial game, not as a moddable one!). Here are a few key links to interesting / useful game-design gems:

The age-old question of Civ games: Roads and rivers in center of tiles, or edges?

(Screenshots: roads/rivers drawn through the centers of tiles vs. along the edges of tiles)

Pros and cons

  • Centers gives you STRAIGHT things (on a hex grid, it’s the only way to get straights!)
    • Roman Roads
    • Canals
    • Large rivers
  • Edges gives you meandering things (on a hex grid, centers only give wiggles at very large scale)
    • River valleys
    • Realistic medieval roads
    • Modern roads in mountains and hills (tend to wiggle crazily)
  • Movement is simplified with centers: If you’re on the tile, you’re on the road/river
  • Inhibition of movement is simplified with edges: Civilization games have traditionally given a move penalty AND a combat penalty to any tile-to-tile move that crosses an edge containing a river
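To make the edge-based option concrete, here’s a tiny sketch (illustrative – not from any Civ codebase) of the bookkeeping: a river is stored on the shared edge between two hexes, and crossing that edge taxes the move:

```python
# Rivers live on edges, not tiles. An edge is identified by the
# (unordered) pair of hexes it separates, so A->B and B->A agree.

RIVER_CROSSING_PENALTY = 1

def edge(a, b):
    # frozenset makes the edge order-independent and hashable
    return frozenset((a, b))

def move_cost(a, b, base_cost, river_edges):
    """Cost of moving between adjacent hexes a and b; crossing a
    river edge adds the traditional Civ-style movement penalty."""
    cost = base_cost
    if edge(a, b) in river_edges:
        cost += RIVER_CROSSING_PENALTY
    return cost
```

The same set-of-edges structure would hold the combat penalty for attacking across a river.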

My leanings…

One thing in particular that struck me from looking at the pictures:

Straight roads look so terrible that every single Civilization game since Civ1 has artificially wiggled them when rendering!

In particular, with 3D games (Civ4, and especially Civ5) this actively damages gameplay – it’s much too hard for the player to see at a glance which tiles are connected by roads, and to what extent. So much so that players cry out for a “disable the wiggling effect on road-rendering” setting.

Also: I’m happy to solve the “movement” problem by saying that if you’re in a tile that borders a road or a river, you are assumed to be “on” that road/river, with special-case handling under the hood for cases where two roads/rivers border the same tile. It increases the connectedness “for free” – but that’s how Civ games tend to do it anyway: encourage the player to put roads everywhere!

Thoughts on a postcard…

#unity3d remove yellow warnings you don’t need #unitytips


Warnings are very, very important in any compiled language: they tell you that the computer has checked your code and realised you “probably” created a bug; they even tell you something about what the bug might be.

…but the computer isn’t sure – if it could be sure, it would be a compiler Error (in red). So (in Unity) it’s yellow, and “optional”. But in those cases where it’s not a bug – and you know it! – it’s very annoying. Most IDE’s let you turn warnings on and off; Unity doesn’t … here’s how to fix it.
Continue reading

Simple #civ5 clone in Unity: hexes, movement, unit selection

Current features

commit 26eafb7865965fd5ef5ee3ad4863f00acf8d10a2

  • Generates hex landscapes, with heights (Civ5 bored me by being flat-McFlat-in-flatland)
  • Every hex is selectable, using custom fix for Unity’s broken mouse-click handler (see below)
  • Any object sitting on landscape is selectable (ditto)
  • Selected units move if you click any of the adjacent hexes (shown using f-ugly green arrows on screenshot)

The green “you can move here” arrows look like spider-legs at the moment. #TotalFail. Next build I’m going to delete them (despite having spent ages tweaking the procgen mesh generation for them, sigh) and do something based on wireframe cages, I think.


Techniques

Hexes

I started with simple prototyping around hexes, but soon found that it’s worth investing the time to implement all the primitives in Amit’s page on Hexagon grids for games: http://www.redblobgames.com/grids/hexagons/

In practice, the ability to have a class that lets you do “setHex( HexCoord location, GameObject[] items )”, “getContentsOfHex( HexCoord location )”, and things like “getNeighboursOf” … very rapidly becomes essential.
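For reference, these primitives boil down to very little code once you adopt the axial coordinates Amit’s guide recommends. A minimal sketch (names like HexGrid / set_hex are illustrative, not from my actual project):

```python
# Axial hex coordinates (q, r), per Amit's hexagon-grid guide.
# The six neighbour offsets of any hex on a pointy/flat axial grid:
AXIAL_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

class HexGrid:
    def __init__(self):
        self._contents = {}  # (q, r) -> list of objects sitting on that hex

    def set_hex(self, coord, items):
        self._contents[coord] = list(items)

    def contents_of(self, coord):
        return self._contents.get(coord, [])

    def neighbours(self, coord):
        q, r = coord
        return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

def hex_distance(a, b):
    """Distance in hexes between two axial coordinates (Amit's formula)."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2
```

The dictionary-backed grid also gives you sparse, unbounded maps for free – handy while prototyping map generation.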

Mouse clicks in Unity

IMHO: they work pretty badly. They require the physics engine, which – by definition – returns the WRONG answer when you ask “what did I click on?” (it randomises the answer every click!). They also fundamentally oppose Unity’s own core design (in the Editor: when you click any element of a prefab, it selects the prefab).

So I wrote my own “better mouse handler” that fixes all that. When you click in scene, it automatically propagates up the tree, finds any listeners, informs them what was clicked, and lets you write good, clean code. Unlike the Unity built-in version.
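The core idea is engine-agnostic: walk up the parent chain from whatever the raycast hit, until you find something that registered interest. Stripped of Unity specifics, a sketch (Node stands in for Transform; the names are mine):

```python
class Node:
    """Stand-in for a scene-graph node (Unity's Transform)."""
    def __init__(self, parent=None, on_click=None):
        self.parent = parent
        self.on_click = on_click  # optional listener callback

def dispatch_click(clicked):
    """Propagate a click up the tree to the nearest registered listener,
    telling that listener which node was originally clicked."""
    node = clicked
    while node is not None:
        if node.on_click is not None:
            node.on_click(clicked)
            return True
        node = node.parent
    return False
```

This also matches the prefab behaviour: put the listener on the prefab root, and clicking any child still informs the root.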

Procedural meshes for arrows

With hindsight, I should have just modelled these in Blender. But I thought: I want a sinusoidal curved arrow; how hard can it be? I may want to animate it later, by destroying/adding points – that would be a lot of work with Unity’s animation system (it’s great for humanoids, less great for arbitrary geometry) – but animating the points of a mesh from C# code is super-easy.
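For reference, “animating points in a mesh from C#” really is a few lines. A minimal sketch (the component name and wave constants are my own, not from the post):

```csharp
using UnityEngine;

// Offsets each vertex of a mesh with a travelling sine wave, every frame.
[RequireComponent(typeof(MeshFilter))]
public class WavingArrow : MonoBehaviour
{
    Mesh mesh;
    Vector3[] baseVertices;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        baseVertices = mesh.vertices; // keep the undeformed positions
    }

    void Update()
    {
        Vector3[] vertices = new Vector3[baseVertices.Length];
        for (int i = 0; i < baseVertices.Length; i++)
        {
            Vector3 v = baseVertices[i];
            // Sinusoid travelling along the arrow's local X axis
            v.y += 0.1f * Mathf.Sin(v.x * 4f + Time.time * 2f);
            vertices[i] = v;
        }
        mesh.vertices = vertices;
        mesh.RecalculateNormals();
    }
}
```

Note that assigning `mesh.vertices` replaces the whole array each frame; deforming a copy of the original vertices (rather than the current ones) keeps the wave from compounding.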

In the end, I spent way too long tweaking the look, and on 2-sided polygons that broke the Unity5 Standard shader by being too thin (on the plus side: I now know what that mistake looks like, and I’ll recognize it in future Unity projects. It has a very peculiar, unexpected look to it).

I should have just made them in Blender, and – if I got as far as wanting to animate them – re-modelled them in source code later (or found a “convert Blender file to C vertices array” script, of which I’m sure there are hundreds on the web). Doh!

#lessonLearned.

Office suites (Word, Excel, Apple, Google) in 2016: Power-user experience

Every week, I have to use six different Office Software Suites:

  1. At school: Microsoft Office 2013
  2. At university: Microsoft Office 365
  3. At work: OpenOffice
  4. At home: LibreOffice
  5. Everywhere: Apple Keynote
  6. Everywhere: Google Docs

As an expert computer user (former SysAdmin), I’m often asked for help by people with non-computing backgrounds. When they see how many different suites I’m using, they’re … surprised, to say the least. Here’s a quick snapshot of what and why.
Continue reading

What makes a great #Unity3d asset? Which do you recommend?

Unity is still the only major game-engine with an effective, established Asset Store. This is an enormous benefit to game developers – but do you feel you’re making full use of it?

I’ve bought and used hundreds of Unity plugins, models, scripts, etc from 3rd parties. I’ve found some amazing things that transformed my development.

TL;DR: please share your recommended assets using this form: http://goo.gl/forms/G3vddOdRL3

Things we want to improve

This is a shortlist; if you’ve got areas you want to improve, please add a comment.
Continue reading

Which languages need Entity Systems libraries right now?

A few months ago I ran a survey to find out which programming-languages people were using with Entity Systems:

https://docs.google.com/forms/d/18JF6uCHI0nZ1-Yel76uZzL1UfFMI21QvDlcnXSGXSHo/viewform

I’m about to publish a Patreon article on Entity Systems (here if you want to support me), but I wanted to put something up on my blog at the same time, so here’s a quick look at the stats.
Continue reading

LFG: I’m looking for CTO/TechDirector/Head of Mobile/Consulting roles in CA, TX, London, and Asia

TL;DR: experienced CEO/CTO/TechDirector with long background in programming, sales, and business management (Corporate, iPhone/Android, Games, Education) looking for strategic roles in USA, UK, and Asia.

After a year out to do a post-graduate degree in Education, I’m looking for something new and exciting to do next. My primary goal is to boost a company or team rapidly and show significant outcomes – increased revenue or other KPIs – either through Consulting or full/part-time senior leadership.
Continue reading