A Simple Mind-Reading Machine

Just for fun, I decided to implement a "Mind Reading Machine" based on the work of Claude Shannon and D.W. Hagelbarger. If you’re not familiar with it, there’s a good summary of Shannon’s idea here (together with descriptions of some of his other inventions).

Essentially, it’s an algorithm that allows a computer to play a simple game against a human opponent. In each round of the game, the player has a free choice between two options – left/right, or heads/tails, for example. The algorithm attempts to "guess" which the player will choose. And, more often than not, the computer guesses correctly.

How does it work? Not by mind-reading, obviously, but by exploiting the fact that humans do not behave as "randomly" as they think they do. The computer maintains a very simple memory that records the pattern of results of the last two rounds – whether the player won or lost, whether they switched strategy, and whether they then won or lost the following round. The computer then chooses its own best strategy based on how the player behaved the last time the current pattern occurred. If the computer loses twice in a row using the current strategy, it picks a random response in the next round.
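For reference, the core update rule can be sketched in a few lines of Python (a hypothetical distillation of the same idea, not Shannon's original implementation – the names here are my own):

```python
import random

# memory[prev2][prev1] = [last_choice_seen_in_this_state, repeated_flag]
# prev2/prev1 encode the two most recent plays, giving 2x2 = 4 states.
memory = [[[0, 0] for _ in range(2)] for _ in range(2)]
prev2 = prev1 = 0
losing_streak = 0
prediction = random.randint(0, 1)  # no prior knowledge: start with a coin flip

def take_turn(player_choice):
    """Record the player's choice (0 or 1) and form the next prediction."""
    global prev2, prev1, losing_streak, prediction
    if player_choice == prediction:
        losing_streak = 0        # the computer guessed correctly
    else:
        losing_streak += 1       # the computer guessed wrongly
    state = memory[prev2][prev1]
    state[1] = 1 if state[0] == player_choice else 0  # did the player repeat?
    state[0] = player_choice
    prev2, prev1 = prev1, player_choice
    # Trust the remembered behaviour for the new state, unless the
    # computer has just lost twice in a row - then guess randomly.
    nxt = memory[prev2][prev1]
    prediction = nxt[0] if nxt[1] == 1 and losing_streak < 2 else random.randint(0, 1)
```

A player who naively keeps pressing the same key is punished almost immediately: after a single round of constant choices, the prediction locks onto that choice.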

There are ways to beat the algorithm (by ensuring that, each time a pattern of plays comes up, you behave the opposite way to the way you did last time), and I suspect there are many ways in which it could be improved (perhaps using Context-Tree Weighting, for example) – but it’s still an interesting experiment in its current form…

Here’s some C# code with a simple implementation of the algorithm – simply attach it to an empty GameObject in a Unity scene (or the camera will do), and have a go at outwitting the computer by creating a "random" pattern using the left/right arrow keys:

using UnityEngine;
using System.Collections;

public class MindReader : MonoBehaviour {
     
    // Used to record play history
    int[,,] v;
    int lw1, lw2;
    int losingstreak;
    
    // The prediction of the player's next turn
    int prediction;
    
    // Keep track of scores
    int cpuScore, playerScore;

    string outText;
    
    // Use this for initialization
    void Start () {
        
        v = new int[2,2,2] { {{0,0},{0,0}}, {{0,0},{0,0}} };
        
        // No prior knowledge, so set initial prediction based on random coin flip
        prediction = flip ();
        
        // Set scores to 0
        cpuScore = playerScore = 0;
        losingstreak = 0;
        
        // The player's choices in the last play and the play before last
        lw2 = lw1 = 0;
        
        // Output
        outText = "";
        
    }
    
    void TakeTurn(int playerChoice) {

        // Display each player and computer's choices
        string outTextString = "You pressed " + playerChoice + ", " + "I guessed " + prediction + "\n";
         
        // Computer correctly guessed
        if(playerChoice == prediction) {
            cpuScore++;
            losingstreak = 0;
            outTextString += " I WIN!\n";
        }
         // Computer incorrectly guessed
        else {
            playerScore++;
            losingstreak++;
            outTextString += " YOU WIN!\n";
        }
        
        outText = outTextString;
         
        // Record the player's latest play in the matrix
        v[lw2,lw1,1] = (v[lw2,lw1,0] == playerChoice ? 1 : 0);
        v[lw2,lw1,0] = playerChoice;
        lw2 = lw1;
        lw1 = playerChoice;
        
        // If the player repeated their behaviour last time this state occurred
        // (and we haven't just lost twice in a row), predict the same choice again;
        // otherwise, choose a random prediction
        prediction = v[lw2,lw1,1]==1 && losingstreak < 2 ? v[lw2,lw1,0] : flip();
    }
     
    // Simulate a coin flip to produce 50:50 chance of {0,1}
    int flip() {
        return Random.Range(0,2);
    }
    
    
    // Update is called once per frame
    void Update () {
    
         if(Input.GetKeyDown(KeyCode.LeftArrow)){
            TakeTurn(0);
        }
        if(Input.GetKeyDown(KeyCode.RightArrow)){
            TakeTurn(1);
        }    
    }
    
    void OnGUI() {
         GUIStyle style = new GUIStyle();
        style.fontSize = 36;
        
        GUI.Label(new Rect(0,0,Screen.width,100), outText, style);    
        GUI.Label(new Rect(0,100,Screen.width,200), "Player: " + playerScore + "  CPU: " + cpuScore, style);
     }
    
}

There’s also a Java implementation of the same algorithm which you can try online at http://seed.ucsd.edu/~mindreader/

Posted in AI

Project Spark : First Impressions

Project Spark is an upcoming game/game creation environment/educational coding tool/interactive story creator/all/none of the above. Actually, I’m not quite sure what it is (nor exactly who it’s aimed at), but recently I’ve been doing a lot of computing with kids (both my own and others!) and, amongst the relative dirge of other latest-gen releases, it really stood out as something of interest to both me and them. It’s prettier to look at than, say, Scratch (which is excellent in many other respects), and it’s more educational and programming-like than other creative games such as LittleBigPlanet or Minecraft.

It’s currently available only in private beta (due for release on Windows, Xbox 360, and Xbox One sometime later this year, I think), but I was offered a key, so I thought I’d take it up and give it a go.

You can play Project Spark games that others have created, but my interest was in creating my own project, so I started “Creative” mode. The tools to sculpt and paint your game world are pretty intuitive to use – my 6-year-old had no difficulty dragging up mountains, creating rivers and grasslands, and placing trees. It’s essentially just a 3D painting program, but the results certainly do look pretty.

sculpt

However, what I was more interested in was the approach to creating game logic. The difficulty with any tool like this is getting the right balance of freedom versus complexity. Oversimplify and, while the tool becomes easy to use, it’s frustratingly limiting when you try to implement a feature not foreseen by the developers. Give the user more power and choice, however, and they become overwhelmed by complexity.

And that’s where Project Spark seems to be rather well-designed: there’s a gallery of game characters and other props that you can simply drag into your game world and, out-of-the-box, they pretty much behave as you’d expect in a game: “Hero” characters swing their sword with the X button on a gamepad, the camera can be moved with the mouse, doors can be opened by standing next to them and pressing space, goblins chase the hero, etc. etc. All without any additional effort.

But what’s smart is that all those behaviours exist only because those are the default rules specified in those objects’ “brains”. Those brains can be edited: individual brain “tiles” can be swapped, rules added or deleted, or the whole brain completely rewritten from scratch. Here, for example, is part of the brain that comes with the default player character:

brain

The code tiles are arranged to read pretty much like English: “When A pressed, do jump”, “When X pressed or left mouse button pressed, do attack”, “When an interactable object is detected in front, highlight it in yellow”. Changing or extending behaviour can be done by simply dragging alternative tiles into the brain.
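As a mental model, each brain behaves like an ordered list of “when condition, do action” rules evaluated every frame. Here’s a toy illustration of that idea in Python (purely an analogy for the tile system described above – this is not Project Spark’s actual scripting API):

```python
# A toy "brain": an ordered list of (condition, action) rules checked each frame.

def make_brain():
    return []

def add_rule(brain, condition, action):
    brain.append((condition, action))

def tick(brain, world):
    """Evaluate every rule against the current world state; return what fired."""
    fired = []
    for condition, action in brain:
        if condition(world):
            fired.append(action(world))
    return fired

# Example: a goblin that chases the hero on sight and falls over at zero health
brain = make_brain()
add_rule(brain, lambda w: w["hero_visible"], lambda w: "chase hero")
add_rule(brain, lambda w: w["health"] <= 0, lambda w: "fall over")
```

Swapping a “tile” then corresponds to replacing the condition or the action in one of the rules.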

Not only does that mean that it’s easy for novice programmers to create a brain that produces desired behaviour but, as a learning tool it also provides the reverse: having dropped a goblin (or, rather, a horde of them) into the game and witnessed their default behaviour, my son could go into their brains and see the set of rules that produced the behaviour in question. That, I think, is an incredibly powerful way to learn programming logic.

Technologically and aesthetically, I’ve been very impressed, and if you get the chance to try Project Spark out for yourself, I’d highly recommend it. My only concern relates to its financing/distribution model: although this might be premature, since the game hasn’t even been released yet, it appears that the game itself will be free and will come with a base set of in-game objects from the gallery. Additional objects to make your games more varied and exciting can then be purchased through in-game microtransactions. Perhaps I’m just stuck in my ways, but I am not a fan of IAP, especially not in games that have such obvious appeal to kids as Project Spark does. I, for one, hope there will be an option to pay £40 (heck, maybe £50) for the game up-front, replete with a full complement of game objects, and then let my children loose to create their own imagined game world without fear of being barraged by “Pay 1,000 credits to use this enchanted stick of awesome power?”-style messages…

Posted in Game Dev

Glass Shader

I’ve seen a lot of posts recently from folks trying to create shaders that recreate the appearance of glass. I guess the reason is that glass is a pretty common material, but modelling its appearance is surprisingly complex – requiring transparency, reflection, specular highlights, texture maps, and more.

For what it’s worth, here’s my attempt at a transparent specular shader suitable for use in a glass material. It’s pretty self-explanatory, I think:

Shader "Custom/Transparent Specular" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		
		// Colour property is used only to set influence of alpha, i.e. transparency
		_Colour ("Transparency (A only)", Color) = (0.5, 0.5, 0.5, 1)
		// Reflection map
		_Cube ("Reflection Cubemap", Cube) = "_Skybox" { TexGen CubeReflect }
		// Reflection Tint - leave as white to display reflection texture exactly as in cubemap
		_ReflectColour ("Reflection Colour", Color) = (1,1,1,0.5)
		// Reflection brightness
		_ReflectBrightness ("Reflection Brightness", Float) = 1.0
		// Specular
		_SpecularMap ("Specular / Reflection Map", 2D) = "white" {}
		// Rim
		_RimColour ("Rim Colour", Color) = (0.26,0.19,0.16,0.0)
	}
	SubShader {
		// This will be a transparent material
		Tags {
			"Queue"="Transparent"
			"IgnoreProjector"="True"
			"RenderType"="Transparent"
		}
		CGPROGRAM
		// Use surface shader with function called "surf"
		// Use the inbuilt BlinnPhong lighting model
		// Use alpha blending
		#pragma surface surf BlinnPhong alpha

		// Access the Shaderlab properties
		sampler2D _MainTex;
		sampler2D _SpecularMap;
		samplerCUBE _Cube;
		fixed4 _ReflectColour;
		fixed _ReflectBrightness;
		fixed4 _Colour;
		float4 _RimColour;

		struct Input {
			float2 uv_MainTex;
			float3 worldRefl; // Used for reflection map
			float3 viewDir; // Used for rim lighting
		};

		// Surface shader
		void surf (Input IN, inout SurfaceOutput o) {
		
			// Diffuse texture
			half4 c = tex2D (_MainTex, IN.uv_MainTex);
			o.Albedo = c.rgb;
			
			// How transparent is the surface?
			o.Alpha = _Colour.a * c.a;
			
			// Specular map
			half specular = tex2D(_SpecularMap, IN.uv_MainTex).r;
			o.Specular = specular;
			
			// Reflection map
			fixed4 reflcol = texCUBE (_Cube, IN.worldRefl);
			reflcol *= c.a;
			o.Emission = reflcol.rgb * _ReflectColour.rgb * _ReflectBrightness;
			o.Alpha = o.Alpha * _ReflectColour.a;
			
			//Rim
			// Rim lighting is emissive lighting based on angle between surface normal and view direction.
			// You get more reflection at glancing angles
			half intensity = 1.0 - saturate(dot (normalize(IN.viewDir), o.Normal));
			o.Emission += intensity * _RimColour.rgb;
		}
		ENDCG
	} 
	FallBack "Diffuse"
}

The Colour property is used only to set the overall transparency of the surface. The MainTex can be used to show grime, fingerprints, smudges, and smears on the surface. The SpecularMap should be set to match this, adjusting the degree to which specular highlights are shown. The Cube map contains the reflected environment image (which can be created using the script in my previous post). Here are example textures for the MainTex and specular map:

glass glass_s

 

When rendered in a scene, this gives a result as follows:

image

Posted in Game Dev

Keeping Track of Targets in a Trigger

Unity provides the OnTriggerEnter() and OnTriggerExit() events, which fire whenever a collider enters or exits the designated trigger, and also OnTriggerStay(), which fires in every frame in which the collider is within the trigger. However, it doesn’t provide a one-off “what objects are in this trigger?” function, which is what I needed earlier in order to create a “lock-on” targeting system. (Note that you can emulate this functionality somewhat with Physics.Raycast / Physics.SphereCast, but you’d have to call it every frame to keep the list updated.)

image

I could have sworn that OnTriggerExit() was buggy when my code wasn’t behaving the way I thought it should, but it turns out my logic was faulty. For my own reference, here’s a description of the problem and the solution:

The “lock-on” target system is as follows: if the player presses a button while an enemy is within the target trigger zone, they lock-on to that enemy (by setting their own transform.parent to the transform of the enemy). Seemed simple enough, and my first thought was simply to keep track of the object currently in the target trigger like this:

GameObject enemyInTarget;

void OnTriggerEnter(Collider other) {
  if(other.CompareTag("Enemy")){
    enemyInTarget = other.gameObject;
  }
}

void OnTriggerExit(Collider other){
  if(other.CompareTag("Enemy")){
    enemyInTarget = null;
  }
}

void Update() {
  if(Input.GetButton("LockOn") && enemyInTarget != null){
    // Lock on by parenting to the enemy transform
    transform.parent = enemyInTarget.transform;
  }
}

 

The problem with this is that I have many enemies in my scene, so it’s perfectly possible, after the player has already locked onto one enemy, for another enemy to enter the trigger. The code above would immediately change the target to whichever enemy most recently entered the trigger (i.e. the one that last caused OnTriggerEnter to fire), causing the target to jump around, which wasn’t what I wanted. This is a pretty simple fix: add an extra enemyInTarget == null condition to the OnTriggerEnter function to make sure we haven’t already got a target:

GameObject enemyInTarget;

void OnTriggerEnter(Collider other) {
  if(other.CompareTag("Enemy") && enemyInTarget == null){
    enemyInTarget = other.gameObject;
  }
}

void OnTriggerExit(Collider other){
  if(other.CompareTag("Enemy")){
    enemyInTarget = null;
  }
}

void Update() {
  if(Input.GetButton("LockOn") && enemyInTarget != null){
    // Lock on by parenting to the enemy transform
    transform.parent = enemyInTarget.transform;
  }
}

 

But there’s still a problem. Although a second enemy entering the trigger will no longer cause the target to switch to it, when that enemy exits it will still clear the target. So enemies simply “passing through” the trigger caused the player to lose their target focus. Another fix – this time to OnTriggerExit:

GameObject enemyInTarget;

void OnTriggerEnter(Collider other) {
  if(other.CompareTag("Enemy") && enemyInTarget == null){
    enemyInTarget = other.gameObject;
  }
}

void OnTriggerExit(Collider other){
  if(other.CompareTag("Enemy") && other.gameObject == enemyInTarget){
    enemyInTarget = null;
  }
}

void Update() {
  if(Input.GetButton("LockOn") && enemyInTarget != null){
    // Lock on by parenting to the enemy transform
    transform.parent = enemyInTarget.transform;
  }
}

 

Getting better, but still not right. This broke in the following scenario (and this is the bug in my logic that took me a while to realise…):

  • Scene start. enemyInTarget = null; transform.parent = null;
  • EnemyA enters the trigger. enemyInTarget = EnemyA; transform.parent = null;
  • Player presses lock on button. enemyInTarget = EnemyA; transform.parent = EnemyA;
  • EnemyB also enters the trigger. Code fixes above mean still enemyInTarget = EnemyA; transform.parent = EnemyA;
  • Player stops locking onto EnemyA. enemyInTarget = EnemyA; transform.parent = null;
  • EnemyA leaves the trigger. enemyInTarget = null; transform.parent = null;

At the end of this series of events, the game state is identical to that at the start, except that EnemyB is already in the trigger, and pressing the “LockOn” button will have no effect because there is no record of that fact (since, at the point they entered the trigger, we were locking onto something else).
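The failure is easy to reproduce with a short simulation of that single-variable logic (a hypothetical Python model of the trigger events, not Unity code):

```python
# Model of the single-variable tracking logic from the snippet above.
enemy_in_target = None

def on_trigger_enter(enemy):
    global enemy_in_target
    if enemy_in_target is None:       # only track if we have no target yet
        enemy_in_target = enemy

def on_trigger_exit(enemy):
    global enemy_in_target
    if enemy == enemy_in_target:      # only clear if *our* target left
        enemy_in_target = None

# Replay the event sequence from the bullet list:
on_trigger_enter("EnemyA")   # EnemyA enters; becomes the target
locked = enemy_in_target     # player locks on to EnemyA
on_trigger_enter("EnemyB")   # EnemyB enters; ignored, target already set
locked = None                # player releases the lock
on_trigger_exit("EnemyA")    # EnemyA leaves; target cleared
print(enemy_in_target)       # → None, even though EnemyB is still inside
```

Pressing “LockOn” at this point finds no target, exactly as described.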

Sigh. At this point I did consider changing the logic completely – ditching triggers and instead using a raycast hit test to find the best target at the moment the player presses the lock-on button. However, I also want to be able to highlight enemies in the target zone, and doing this with the raycast method would require an expensive raycast every frame (the PhysX trigger calls are pretty well optimised in contrast).

And then the strikingly simple solution hit me – I simply needed to maintain a local list of targets as they enter or leave the trigger and, at the point the player attempts to lock on, choose the item at the top of the list (which, in the case of multiple enemies, is the one that has been there longest). Note that I’ve also changed the LockOn button to toggle between being locked and unlocked to a parent target.

// Requires: using System.Collections.Generic;

// Set up a list to keep track of targets
public List<GameObject> targets = new List<GameObject>();

// If a new enemy enters the trigger, add it to the list of targets
void OnTriggerEnter(Collider other){
    if (other.CompareTag("Enemy")) {
        GameObject go = other.gameObject;
        if(!targets.Contains(go)){
            targets.Add(go);
        }
    }
}

// When an enemy exits the trigger, remove it from the list
void OnTriggerExit(Collider other){
    if (other.CompareTag("Enemy")) {
      GameObject go = other.gameObject;
      targets.Remove(go);
    }
}

// When the player attempts to lock on, toggle the top target from the list
void Update() {
    if(Input.GetButtonDown("LockOn")){
        if(targets.Count > 0 && transform.parent == null){
            transform.parent = targets[0].transform;
        }
        else {
            transform.parent = null;
        }
    }
}

 

This solution is somewhat specific to my game, but I wanted to document it for my own personal reference; if you want a more general, reusable class to keep track of GameObjects in a Unity trigger, I highly recommend checking out http://technology.blurst.com/unity-physics-trigger-collider-examples/

Posted in Game Dev

Custom Unity Terrain Material / Shaders

I love Unity – it’s one of those environments that’s actually a pleasure to program in – but its terrain system seems to lag somewhat behind its other features, requiring quite a lot of hacks and workarounds to get the desired appearance and functionality (while still maintaining an acceptable level of performance).

For example, at first glance the inbuilt terrain appears to have only one shader, which supports only diffuse textures. And when you search the internet for “custom Unity terrain shader”, say, you find a variety of conflicting information suggesting that it’s either not supported, requires exporting the raw terrain to a modelling program, or involves creating a new terrain shader with the same name as (and therefore overriding) the hidden internal terrain shaders (typically "Hidden/TerrainEngine/Splatmap/Lightmap-FirstPass" or something similarly arcane) – hardly intuitive.

Many of the previous descriptions are outdated and inaccurate but, unfortunately, still appear as top hits in search result pages, and it seems hard to find reliable up-to-date information instead. So, for reference, this blog post will describe my experience of creating custom terrain shaders as of today (Dec 2013), using Unity 4.2.

 

Normal-Mapped/Specular Terrain

Since Unity 4.0, creating bump-mapped/specular terrain is actually pretty straightforward as Unity already comes supplied with a suitable shader. The steps required to apply it are as follows:

1. Create a new material and assign the Nature/Terrain/Bumped Specular shader to it. Note that there won’t be any texture slots on this material – it will use the textures assigned to the terrain itself in the next step.

image

2. Highlight the terrain and, in the inspector, click on the Paintbrush icon. Click Edit Textures –> Add Texture.

image

Add main textures and normal maps for each texture to be used on the terrain (they won’t take effect just yet):

image

You can then use the range of splat tools to paint the various textures onto the terrain.

3. Note the information text in the previous step says you need to apply a normal-mapped material to the terrain, so let’s do that: click on the cog icon on the right hand side and drag the material created in Step One into the “material” slot:

You should notice that your flat, diffuse terrain changes from this:

image

to this:

image

 

Custom Terrain Shaders

Creating bumped / specular shaders as above is fairly straightforward because Unity already comes with a bumped specular terrain shader – you just need to specify a material that uses it in place of the default diffuse terrain shader.

To create an alternative custom terrain shader other than bumped/specular is slightly more complicated – it might help to first see the structure of the inbuilt terrain shaders. You can download the source of all Unity supplied shaders from http://unity3d.com/unity/download/archive

  • The default diffuse terrain shader – Nature/Terrain/Diffuse – can be found in \DefaultResourcesExtra\TerrainShaders\Splats\FirstPass.shader. It has a dependency on Hidden/TerrainEngine/Splatmap/Lightmap-AddPass, which can be found at \DefaultResourcesExtra\TerrainShaders\Splats\AddPass.shader
  • The bumped specular terrain shader – Nature/Terrain/Bumped Specular – can be found in \DefaultResourcesExtra\Nature\Terrain\TerrBumpFirstPass.shader. It has a dependency on Hidden/Nature/Terrain/Bumped Specular AddPass, which can be found at \DefaultResourcesExtra\Nature\Terrain\TerrBumpAddPass.shader

The first thing to notice is the Properties block, which contains definitions for the splat textures and the splat map control texture. These are marked [HideInInspector], since they are passed in automatically by the terrain engine rather than set in the material inspector. Also note that the control texture is given a default value of “red”, which is the channel used to indicate the influence of the Layer 0 texture (hence, by default, the entire terrain is coloured with the _Splat0 texture):

// Splat Map Control Texture
[HideInInspector] _Control ("Control (RGBA)", 2D) = "red" {}

// Textures
[HideInInspector] _Splat3 ("Layer 3 (A)", 2D) = "white" {}
[HideInInspector] _Splat2 ("Layer 2 (B)", 2D) = "white" {}
[HideInInspector] _Splat1 ("Layer 1 (G)", 2D) = "white" {}
[HideInInspector] _Splat0 ("Layer 0 (R)", 2D) = "white" {}

// Normal Maps
[HideInInspector] _Normal3 ("Normal 3 (A)", 2D) = "bump" {}
[HideInInspector] _Normal2 ("Normal 2 (B)", 2D) = "bump" {}
[HideInInspector] _Normal1 ("Normal 1 (G)", 2D) = "bump" {}
[HideInInspector] _Normal0 ("Normal 0 (R)", 2D) = "bump" {}

The body of the shader is fairly standard – blending each of the four splat textures based on the corresponding RGBA channel from the control texture and setting that to the material albedo:

fixed3 col;
col  = splat_control.r * tex2D (_Splat0, IN.uv_Splat0).rgb;
col += splat_control.g * tex2D (_Splat1, IN.uv_Splat1).rgb;
col += splat_control.b * tex2D (_Splat2, IN.uv_Splat2).rgb;
col += splat_control.a * tex2D (_Splat3, IN.uv_Splat3).rgb;
o.Albedo = col;

The last bit to be careful of is near the bottom: the shader includes two “dependencies” - additional shaders - as follows:

Dependency "AddPassShader" = "MyShaders/Terrain/Editor/AddPass"
Dependency "BaseMapShader" = "MyShaders/Terrain/Editor/BaseMap"

The “Dependency” property appears to be unique to terrain shaders and is not described anywhere in Unity’s ShaderLab documentation, but here are my findings:

AddPassShader

Examining the code of the shader specified in the “AddPassShader”, you’ll find the actual surface shader function is identical to that in the “FirstPassShader”. The differences between the two shaders are subtle:

  • FirstPassShader specifies “Queue” = “Geometry-100”, whereas AddPassShader is drawn in a later renderqueue: "Queue" = "Geometry-99"
  • AddPassShader specifies "IgnoreProjector"="True"
  • Rather than using the default blend mode, which would overwrite the output of the previous pass, the AddPassShader is an additive pass, specified using the decal:add parameter following the surface shader pragma directive.

Other than that, they’re pretty much identical. From empirical evidence and a bit of logical thinking, the reason for these two shaders and their slight differences can be explained – the “FirstPass” shader (i.e. the one used by the material assigned to the terrain) only has four texture slots available. If you assign more than four textures to the terrain, the “AddPassShader” is called by the terrain engine (and its _Control, _SplatX, and _NormalX textures assigned) for each additional batch of four textures assigned to the terrain, and its output is additively blended with the first pass. So:

  • Textures 1-4 rendered with FirstPassShader
  • Textures 5-8 rendered with AddPassShader (if necessary)
  • Textures 9-12 rendered with AddPassShader (if necessary)
  • etc...

If you specify four or fewer terrain textures, the Dependency "AddPassShader" directive is not required, as that shader will never be called.
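In other words, the number of rendering passes grows with the number of terrain textures in groups of four. A trivial sketch of the arithmetic (my own summary of the behaviour described above, not Unity code):

```python
import math

def terrain_passes(num_textures):
    """One FirstPass for textures 1-4, plus one AddPass per further group of 4."""
    if num_textures <= 0:
        return 0
    return math.ceil(num_textures / 4)

# 1-4 textures: FirstPassShader only; 5-8: one extra AddPass; and so on.
```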

BaseMapShader

The FirstPassShader and AddPassShader pass the terrain control texture and splat textures to the shader and blend them in the shader itself. The problem with this approach is that it is relatively expensive on the GPU, and is not necessarily supported by older graphics cards. To compensate for this, Unity also combines all terrain splat textures (based on the control texture) into a single combined base texture (a bit like “baking” a lightmap), with resolution specified by the "Base Texture Resolution" parameter in the terrain inspector. This base texture is then passed to the shader so it can be used as a fallback by old graphics cards that can't blend the textures in the FirstPassShader, and also for any terrain that is further away than "Base Map Dist." (also specified in the inspector). This single texture is rendered using the Dependency "BaseMapShader" shader.

It receives the base texture in its _MainTex parameter and an additional _Color parameter, both of which are automatically set from the FirstPassShader.
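Conceptually, the baking step just evaluates the same per-pixel weighted blend on the CPU once, instead of in the shader every frame. A simplified sketch (my own illustration – Unity’s actual baking also handles resolution and mipmapping):

```python
def bake_base_pixel(control, splats):
    """Blend one pixel of the base map.

    control: (r, g, b, a) weights sampled from the _Control texture.
    splats:  four (r, g, b) samples, one from each splat texture.
    """
    r = g = b = 0.0
    for weight, (sr, sg, sb) in zip(control, splats):
        r += weight * sr
        g += weight * sg
        b += weight * sb
    return (r, g, b)

# With control = (1, 0, 0, 0) the result is pure _Splat0, matching the
# "red" default of the _Control texture described earlier.
```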

 

Example Custom Terrain Shader : Toon Outlined Terrain

Here’s an example based on the above – it has a FirstPass and an AddPass shader that blend textures as normal, but with a ramped toon lighting model. They then use UsePass to apply a second pass that adds the outline from the Toon/Basic Outline shader. The BaseMap shader simply uses the Toon/Lighted Outline shader directly on the basemap texture.

ToonTerrainFirstPass.shader

Shader "Custom/ToonTerrainFirstPass" {
Properties {

	// Control Texture ("Splat Map")
	[HideInInspector] _Control ("Control (RGBA)", 2D) = "red" {}
	
	// Terrain textures - each weighted according to the corresponding colour
	// channel in the control texture
	[HideInInspector] _Splat3 ("Layer 3 (A)", 2D) = "white" {}
	[HideInInspector] _Splat2 ("Layer 2 (B)", 2D) = "white" {}
	[HideInInspector] _Splat1 ("Layer 1 (G)", 2D) = "white" {}
	[HideInInspector] _Splat0 ("Layer 0 (R)", 2D) = "white" {}
	
	// Used in fallback on old cards & also for distant base map
	[HideInInspector] _MainTex ("BaseMap (RGB)", 2D) = "white" {}
	[HideInInspector] _Color ("Main Color", Color) = (1,1,1,1)
	
	// Let the user assign a lighting ramp to be used for toon lighting
	_Ramp ("Toon Ramp (RGB)", 2D) = "gray" {}
	
	// Colour of toon outline
	_OutlineColor ("Outline Color", Color) = (0,0,0,1)
	_Outline ("Outline width", Range (.002, 0.03)) = .005
}
	
SubShader {
	Tags {
		"SplatCount" = "4"
		"Queue" = "Geometry-100"
		"RenderType" = "Opaque"
	}
	
	// TERRAIN PASS	
	CGPROGRAM
	#pragma surface surf ToonRamp exclude_path:prepass 

	// Access the Shaderlab properties
	uniform sampler2D _Control;
	uniform sampler2D _Splat0,_Splat1,_Splat2,_Splat3;
	uniform fixed4 _Color;
	uniform sampler2D _Ramp;

	// Custom lighting model that uses a texture ramp based
	// on angle between light direction and normal
	inline half4 LightingToonRamp (SurfaceOutput s, half3 lightDir, half atten)
	{
		#ifndef USING_DIRECTIONAL_LIGHT
		lightDir = normalize(lightDir);
		#endif
		// Wrapped lighting
		half d = dot (s.Normal, lightDir) * 0.5 + 0.5;
		// Applied through ramp
		half3 ramp = tex2D (_Ramp, float2(d,d)).rgb;
		half4 c;
		c.rgb = s.Albedo * _LightColor0.rgb * ramp * (atten * 2);
		c.a = 0;
		return c;
	}

	// Surface shader input structure
	struct Input {
		float2 uv_Control : TEXCOORD0;
		float2 uv_Splat0 : TEXCOORD1;
		float2 uv_Splat1 : TEXCOORD2;
		float2 uv_Splat2 : TEXCOORD3;
		float2 uv_Splat3 : TEXCOORD4;
	};

	// Surface Shader function
	void surf (Input IN, inout SurfaceOutput o) {
		fixed4 splat_control = tex2D (_Control, IN.uv_Control);
		fixed3 col;
		col  = splat_control.r * tex2D (_Splat0, IN.uv_Splat0).rgb;
		col += splat_control.g * tex2D (_Splat1, IN.uv_Splat1).rgb;
		col += splat_control.b * tex2D (_Splat2, IN.uv_Splat2).rgb;
		col += splat_control.a * tex2D (_Splat3, IN.uv_Splat3).rgb;
		o.Albedo = col * _Color;
		o.Alpha = 0.0;
	}
	ENDCG

	// Use the Outline Pass from the default Toon shader
	UsePass "Toon/Basic Outline/OUTLINE"

} // End SubShader

// Specify dependency shaders	
Dependency "AddPassShader" = "Custom/ToonTerrainAddPass"
Dependency "BaseMapShader" = "Toon/Lighted Outline"

// Fallback to Diffuse
Fallback "Diffuse"

} // End Shader

 

ToonTerrainAddPass.shader

Shader "Custom/ToonTerrainAddPass" {
Properties {

	// Control Texture ("Splat Map")
	[HideInInspector] _Control ("Control (RGBA)", 2D) = "red" {}
	
	// Terrain textures - each weighted according to the corresponding colour
	// channel in the control texture
	[HideInInspector] _Splat3 ("Layer 3 (A)", 2D) = "white" {}
	[HideInInspector] _Splat2 ("Layer 2 (B)", 2D) = "white" {}
	[HideInInspector] _Splat1 ("Layer 1 (G)", 2D) = "white" {}
	[HideInInspector] _Splat0 ("Layer 0 (R)", 2D) = "white" {}
	
	// Used in fallback on old cards & also for distant base map
	[HideInInspector] _MainTex ("BaseMap (RGB)", 2D) = "white" {}
	[HideInInspector] _Color ("Main Color", Color) = (1,1,1,1)
	
	// Let the user assign a lighting ramp to be used for toon lighting
	_Ramp ("Toon Ramp (RGB)", 2D) = "gray" {}
	
	// Colour of toon outline
	_OutlineColor ("Outline Color", Color) = (0,0,0,1)
	_Outline ("Outline width", Range (.002, 0.03)) = .005
}
	
SubShader {
	Tags {
		"SplatCount" = "4"
		"Queue" = "Geometry-99"
		"RenderType" = "Opaque"
		"IgnoreProjector"="True"
	}
	
	// TERRAIN PASS	
	CGPROGRAM
	#pragma surface surf ToonRamp decal:add

	// Access the Shaderlab properties
	uniform sampler2D _Control;
	uniform sampler2D _Splat0,_Splat1,_Splat2,_Splat3;
	uniform fixed4 _Color;
	uniform sampler2D _Ramp;

	// Custom lighting model that uses a texture ramp based
	// on angle between light direction and normal
	inline half4 LightingToonRamp (SurfaceOutput s, half3 lightDir, half atten)
	{
		#ifndef USING_DIRECTIONAL_LIGHT
		lightDir = normalize(lightDir);
		#endif
		// Wrapped lighting
		half d = dot (s.Normal, lightDir) * 0.5 + 0.5;
		// Applied through ramp
		half3 ramp = tex2D (_Ramp, float2(d,d)).rgb;
		half4 c;
		c.rgb = s.Albedo * _LightColor0.rgb * ramp * (atten * 2);
		c.a = 0;
		return c;
	}

	// Surface shader input structure
	struct Input {
		float2 uv_Control : TEXCOORD0;
		float2 uv_Splat0 : TEXCOORD1;
		float2 uv_Splat1 : TEXCOORD2;
		float2 uv_Splat2 : TEXCOORD3;
		float2 uv_Splat3 : TEXCOORD4;
	};

	// Surface Shader function
	void surf (Input IN, inout SurfaceOutput o) {
		fixed4 splat_control = tex2D (_Control, IN.uv_Control);
		fixed3 col;
		col  = splat_control.r * tex2D (_Splat0, IN.uv_Splat0).rgb;
		col += splat_control.g * tex2D (_Splat1, IN.uv_Splat1).rgb;
		col += splat_control.b * tex2D (_Splat2, IN.uv_Splat2).rgb;
		col += splat_control.a * tex2D (_Splat3, IN.uv_Splat3).rgb;
		o.Albedo = col * _Color;
		o.Alpha = 0.0;
	}
	ENDCG

	// Use the Outline Pass from the default Toon shader
	UsePass "Toon/Basic Outline/OUTLINE"

} // End SubShader

// Fallback to Diffuse
Fallback "Diffuse"

} // End Shader

 

And here’s the output:

image
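To see the arithmetic behind this shader without running Unity, here's an illustrative Python sketch (not part of the original shader, and with invented function names) of its two core calculations: the "wrapped" diffuse term used to index the toon ramp, and the splat-map blend performed in surf():

```python
# Illustrative sketch in Python of the shader's two core calculations.
# Function names are invented for clarity; the shader does this per-pixel in Cg.

def wrapped_diffuse(n_dot_l):
    """Map dot(normal, lightDir) from [-1, 1] into [0, 1] for the ramp lookup."""
    return n_dot_l * 0.5 + 0.5

def blend_splats(control_rgba, splat_rgbs):
    """Weight each splat layer's RGB by the matching control-texture channel."""
    col = [0.0, 0.0, 0.0]
    for weight, rgb in zip(control_rgba, splat_rgbs):
        for i in range(3):
            col[i] += weight * rgb[i]
    return tuple(col)

# A light hitting the surface edge-on (dot product 0) samples the middle of the ramp
print(wrapped_diffuse(0.0))   # -> 0.5

# A control pixel of (0.25, 0.25, 0.25, 0.25) averages the four layers equally
splats = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(blend_splats((0.25, 0.25, 0.25, 0.25), splats))  # -> (0.5, 0.5, 0.5)
```

The wrap (rather than a clamp) is what gives the soft toon falloff: surfaces facing directly away from the light still index the dark end of the ramp instead of going flat black.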

Posted in Game Dev | Tagged , , | 1 Comment

Sepia Shader

I found this TechRepublic page, which gives an algorithm for converting a standard RGB image to sepia tone as follows:

outputRed = (inputRed * .393) + (inputGreen *.769) + (inputBlue * .189)
outputGreen = (inputRed * .349) + (inputGreen *.686) + (inputBlue * .168)
outputBlue = (inputRed * .272) + (inputGreen *.534) + (inputBlue * .131)

You can obviously tweak these values if you like – there is no exact formula (sepia itself being a natural pigment originally derived from the ink sac of the cuttlefish, and now produced using a variety of chemicals).
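Before moving the formula into a shader, it may help to see the arithmetic in plain Python (an illustrative sketch, not from the TechRepublic article). One detail the formula leaves implicit: the weighted sums can exceed the maximum channel value for bright inputs, so the result needs clamping:

```python
# Illustrative sketch: the sepia weights applied to normalised (0.0-1.0) RGB.
# Note that bright inputs produce sums greater than 1.0, so we clamp.

def to_sepia(r, g, b):
    """Convert a normalised RGB triple to sepia tone."""
    out_r = r * 0.393 + g * 0.769 + b * 0.189
    out_g = r * 0.349 + g * 0.686 + b * 0.168
    out_b = r * 0.272 + g * 0.534 + b * 0.131
    # Clamp each channel: e.g. pure white sums to well over 1.0 in red and green
    return tuple(min(c, 1.0) for c in (out_r, out_g, out_b))

print(to_sepia(1.0, 0.0, 0.0))  # -> (0.393, 0.349, 0.272)
print(to_sepia(1.0, 1.0, 1.0))  # pure white: red and green clamp to 1.0
```

Notice that each output channel is just a dot product of the input colour with a fixed weight vector, which is exactly the form used in the shader below.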

You can apply this algorithm in the finalcolor modifier of a Unity surface shader as follows:

Shader "Custom/Sepia" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		#pragma surface surf Lambert finalcolor:Sepia

		sampler2D _MainTex;

		struct Input {
			float2 uv_MainTex;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			half4 c = tex2D (_MainTex, IN.uv_MainTex);
			o.Albedo = c.rgb;
			o.Alpha = c.a;
		}

		// finalcolor modifier: applies the sepia weights to the final lit colour
		void Sepia (Input IN, SurfaceOutput o, inout fixed4 color) {
			fixed3 sepia;
			sepia.r = dot(color.rgb, half3(0.393, 0.769, 0.189));
			sepia.g = dot(color.rgb, half3(0.349, 0.686, 0.168));
			sepia.b = dot(color.rgb, half3(0.272, 0.534, 0.131));
			color.rgb = sepia;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

Which gives:

image


Lighting Models and BRDF Maps

A Bi-directional Reflectance Distribution Function (BRDF) is a mathematical function that describes how light is reflected when it hits a surface. This largely corresponds to a lighting model in Unity-speak (although note that BRDFs are concerned only with reflected light, whereas lighting models can also account for emitted light and other lighting effects).

The “bi-directional” bit refers to the fact that the function depends on two directions:

  • the direction at which light hits the surface of the object (the direction of incidence, ωi)
  • the direction at which the reflected light is seen by the viewer (the direction of reflection, ωr).

These are typically both defined relative to the normal vector of the surface, n, as shown in the following diagram (rather than a simple angle, each direction is actually modelled in the BRDF using spherical coordinates (θ, φ), making the BRDF a four-dimensional function):

image
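For reference, the standard radiometric definition of the BRDF (a textbook formula, not one given in the original article) is the ratio of reflected radiance to incident irradiance:

```latex
f_r(\omega_i, \omega_r) = \frac{\mathrm{d}L_r(\omega_r)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
```

where θi is the angle between the incident direction ωi and the surface normal n.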

 

Given that our perception of a material is determined to a large extent by its reflectance properties, it’s understandable that several different BRDFs have been developed, with different effectiveness and efficiency at modelling different types of surfaces:

  • Lambert: Models perfectly diffuse smooth surfaces, in which apparent surface brightness is affected only by angle of incident light. The observer’s angle of view has no effect.
  • Phong and Blinn-Phong: Models specular reflections on smooth shiny surfaces by considering both the direction of incoming light and that of the viewer.
  • Oren-Nayar: Models diffuse reflection from rough opaque surfaces (considers surface to be made from many Lambertian micro-facets).
  • Torrance-Sparrow: Models specular reflection from rough opaque surfaces (considers surface to be made from many mirrored micro-facets).

In addition to the preceding links, there’s a good article explaining some of the maths behind these models at http://www.cs.princeton.edu/courses/archive/fall06/cos526/tmp/wynn.pdf
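The key practical difference between the first two models in the list can be shown numerically. Here's an illustrative (non-authoritative) Python sketch: Lambert depends only on the light direction, while Blinn-Phong adds a specular term that also depends on the viewer. All names and the sample vectors are invented for the example:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir):
    """Diffuse intensity: depends only on the light direction."""
    return max(dot(normal, light_dir), 0.0)

def blinn_phong(normal, light_dir, view_dir, shininess=32.0):
    """Specular intensity via the half-vector between light and view directions."""
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(dot(normal, half), 0.0) ** shininess

n = (0.0, 0.0, 1.0)
l = normalize((0.0, 1.0, 1.0))

# Lambert gives the same value no matter where the camera is...
print(lambert(n, l))

# ...but the Blinn-Phong highlight changes as the view direction moves
print(blinn_phong(n, l, (0.0, 0.0, 1.0)))
print(blinn_phong(n, l, normalize((1.0, 0.0, 1.0))))
```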

 

BRDF Maps

In the case of game development, it’s often not necessary to strive for physically-accurate BRDF models of how a surface reacts to light. Instead, it’s sufficient to aim for something that “looks” right. And that’s where BRDF maps come in (sometimes also called “Fake BRDF”).

A BRDF map is a two-dimensional texture. It's used in a similar way to a one-dimensional "ramp" texture, which is commonly used to look up replacement values for individual lighting coefficients. However, the BRDF map represents a different parameter on each of its two axes – the incoming light direction and the viewing direction – as shown below:

image

 

A shader can use a tex2D lookup based on these two parameters to retrieve the pixel colour value for any point on a surface as a very cheap way of modelling light reflection. Here’s an example Cg BRDF surface shader:

  Shader "Custom/BRDF Ramp" {
    Properties {
      _MainTex ("Texture", 2D) = "white" {}
      _BRDF ("BRDF Ramp", 2D) = "gray" {}
    }
    SubShader {
      Tags { "RenderType" = "Opaque" }
      CGPROGRAM
      #pragma surface surf Ramp

      sampler2D _BRDF;

      half4 LightingRamp (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten) {
          
          // Calculate dot product of light direction and surface normal
          // 1.0 = facing each other perfectly
          // 0.0 = right angles
          // -1.0 = parallel, facing same direction
          half NdotL = dot (s.Normal, lightDir);
          
          // NdotL lies in the range between -1.0 and 1.0
          // To use as a texture lookup we need to adjust to lie in the range 0.0 to 1.0
          // We could simply clamp it, but instead we'll apply softer "half" lighting
          // (which Unity calls "Diffuse Wrap")
          NdotL = NdotL * 0.5 + 0.5;
          
          // Calculate dot product of view direction and surface normal
          // Note that, since we only render front-facing normals, this will
          // always be positive
          half NdotV = dot(s.Normal, viewDir);
          
          // Lookup the corresponding colour from the BRDF texture map
          half3 brdf = tex2D (_BRDF, float2(NdotL, NdotV)).rgb;
          
          half4 c;
          
          // For illustrative purposes, let's set the pixel colour based entirely on the BRDF texture.
          // In practice, you'd normally also include Albedo and light colour terms here too.
          c.rgb = brdf * (atten * 2);
          c.a = s.Alpha;
          return c;
      }

      struct Input {
          float2 uv_MainTex;
      };
      sampler2D _MainTex;
      void surf (Input IN, inout SurfaceOutput o) {
          o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
      }
      ENDCG
    }
    Fallback "Diffuse"
  }
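The lookup at the heart of LightingRamp can be mimicked outside a shader. Here's a hedged Python sketch, using an invented 2×2 table in place of the BRDF texture and a nearest-neighbour sample in place of tex2D (which would normally filter bilinearly):

```python
# Illustrative sketch: NdotL (wrapped to 0..1) indexes the x axis and NdotV
# the y axis of a 2D table standing in for the BRDF texture.
# The 2x2 "texture" below is invented purely for illustration.

def brdf_lookup(brdf, n_dot_l, n_dot_v):
    """Nearest-neighbour sample of a 2D colour table over [0,1] x [0,1]."""
    u = n_dot_l * 0.5 + 0.5           # wrap from [-1, 1] into [0, 1]
    v = max(n_dot_v, 0.0)             # front faces give non-negative NdotV
    rows = len(brdf)
    cols = len(brdf[0])
    x = min(int(u * cols), cols - 1)
    y = min(int(v * rows), rows - 1)
    return brdf[y][x]

# Dark red when back-lit, yellow when facing both light and viewer
tiny_brdf = [
    [(0.3, 0.0, 0.0), (1.0, 0.0, 0.0)],   # grazing-view row
    [(0.5, 0.2, 0.0), (1.0, 1.0, 0.0)],   # head-on-view row
]

print(brdf_lookup(tiny_brdf, 1.0, 1.0))   # fully lit, facing viewer -> (1.0, 1.0, 0.0)
print(brdf_lookup(tiny_brdf, -1.0, 0.0))  # back-lit, grazing view -> (0.3, 0.0, 0.0)
```

Because the whole lighting response is baked into one texture fetch, an artist can repaint the map to restyle the material without touching any shader code.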

 

And here’s the image it produces – notice how the shading varies from red to yellow based on view direction, and from light to dark based on direction to the light source.

image

 

As a slightly less trivial example, here's another BRDF texture map that again uses light direction relative to the surface on the x axis, but this time uses curvature of the surface on the y axis rather than view direction (the gradient in the y axis is quite subtle, but you should be able to make out a reddish hue towards the top centre of the image, and a blueish tint at the top right):

This map can be used to generate convincing diffuse reflection for skin that varies across the surface of the model (so that, say, the falloff at the nose appears different from that at the forehead), as shown here:
