I found this techrepublic page, which gives a conversion algorithm from a standard RGB image to a Sepia image tone as follows:

```
outputRed = (inputRed * .393) + (inputGreen * .769) + (inputBlue * .189)
outputGreen = (inputRed * .349) + (inputGreen * .686) + (inputBlue * .168)
outputBlue = (inputRed * .272) + (inputGreen * .534) + (inputBlue * .131)
```

You can obviously tweak these values if you like – there is no exact formula (sepia itself being a natural material originally derived from the ink sac of a cuttlefish, and now produced using a variety of chemicals).
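As a quick sanity check, the same weighted sums can be written in plain Python. Note that each output channel can exceed 255 (the weights in each row sum to more than 1), so the results must be clamped:

```python
# Sketch of the sepia conversion above, with results clamped to the
# valid 0-255 channel range (the weighted sums can exceed 255).
def to_sepia(rgb):
    r, g, b = rgb
    out_r = min(255, round(r * 0.393 + g * 0.769 + b * 0.189))
    out_g = min(255, round(r * 0.349 + g * 0.686 + b * 0.168))
    out_b = min(255, round(r * 0.272 + g * 0.534 + b * 0.131))
    return (out_r, out_g, out_b)

print(to_sepia((255, 255, 255)))  # (255, 255, 239) - white picks up a warm tint
```

Notice that any grey input comes out with red boosted and blue reduced, which is exactly the warm brownish cast you expect from sepia.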

You can apply this algorithm in the finalcolor modifier of a Unity surface shader as follows:

```
Shader "Custom/Sepia" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf Lambert finalcolor:Sepia

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            half4 c = tex2D (_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }

        void Sepia (Input IN, SurfaceOutput o, inout fixed4 color) {
            fixed3 sepia;
            sepia.r = dot(color.rgb, half3(0.393, 0.769, 0.189));
            sepia.g = dot(color.rgb, half3(0.349, 0.686, 0.168));
            sepia.b = dot(color.rgb, half3(0.272, 0.534, 0.131));
            color.rgb = sepia;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

Which gives:

## Lighting Models and BRDF Maps

A Bi-directional Reflectance Distribution Function (BRDF) is a mathematical function that describes how light is reflected when it hits a surface. This largely corresponds to a lighting model in Unity-speak (although note that BRDFs are concerned only with reflected light, whereas lighting models can also account for emitted light and other lighting effects).

The “bi-directional” bit refers to the fact that the function depends on two directions:

• the direction at which light hits the surface of the object (the direction of incidence, ωi)
• the direction at which the reflected light is seen by the viewer (the direction of reflection, ωr).

These are typically both defined relative to the normal vector of the surface, n, as shown in the following diagram (rather than simple angles, each direction is actually modelled in the BRDF using spherical coordinates (θ, φ) making the BRDF a four-dimensional function):

Given that our perception of a material is determined to a large extent by its reflectance properties, it’s understandable that several different BRDFs have been developed, with different effectiveness and efficiency at modelling different types of surfaces:

• Lambert: Models perfectly diffuse smooth surfaces, in which apparent surface brightness is affected only by angle of incident light. The observer’s angle of view has no effect.
• Phong and Blinn-Phong: Models specular reflections on smooth shiny surfaces by considering both the direction of incoming light and that of the viewer.
• Oren-Nayar: Models diffuse reflection from rough opaque surfaces (considers the surface to be made from many Lambertian micro-facets).
• Torrance-Sparrow: Models specular reflection from rough opaque surfaces (considers the surface to be made from many mirrored micro-facets).

In addition to the preceding links, there’s a good article explaining some of the maths behind these models at http://www.cs.princeton.edu/courses/archive/fall06/cos526/tmp/wynn.pdf
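To make the first two models on the list concrete, here's a minimal Python sketch of the Lambert diffuse term and the Blinn-Phong specular term. Lambert depends only on the light direction, while Blinn-Phong also needs the view direction, via the "half vector" between them (all vectors are assumed normalised, and the shininess exponent is an arbitrary illustrative value):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# Lambert: diffuse brightness depends only on the angle of incident light
def lambert(normal, light_dir):
    return max(0.0, dot(normal, light_dir))

# Blinn-Phong: specular highlight depends on both light and view direction,
# via the "half vector" between them
def blinn_phong(normal, light_dir, view_dir, shininess=32):
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(0.0, dot(normal, half)) ** shininess

n = (0.0, 1.0, 0.0)             # surface facing straight up
l = normalize((0.0, 1.0, 1.0))  # light at 45 degrees
print(lambert(n, l))            # ~0.707: diffuse falls off with angle of incidence
print(blinn_phong(n, l, (0.0, 1.0, 0.0)))
```

Moving the view vector while keeping the light fixed changes the Blinn-Phong result but not the Lambert one, which is exactly the "bi-directional" distinction described above.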

## BRDF Maps

In the case of game development, it’s often not necessary to strive for physically-accurate BRDF models of how a surface reacts to light. Instead, it’s sufficient to aim for something that “looks” right. And that’s where BRDF maps come in (sometimes also called “Fake BRDF”).

A BRDF map is a two-dimensional texture. It's used in a similar way to a one-dimensional "ramp" texture, which is commonly used to look up replacement values for individual lighting coefficients. However, the BRDF map represents a different parameter on each of its two axes – the incoming light direction and the viewing direction – as shown below:

A shader can use a tex2D lookup based on these two parameters to retrieve the pixel colour value for any point on a surface as a very cheap way of modelling light reflection. Here’s an example Cg BRDF surface shader:

```
Shader "Custom/BRDF Ramp" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _BRDF ("BRDF Ramp", 2D) = "gray" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Ramp

        sampler2D _BRDF;

        half4 LightingRamp (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten) {
            // Calculate dot product of light direction and surface normal
            // 1.0 = facing each other perfectly
            // 0.0 = at right angles
            // -1.0 = parallel, facing the same direction
            half NdotL = dot (s.Normal, lightDir);

            // NdotL lies in the range -1.0 to 1.0
            // To use it as a texture lookup we need to adjust it to lie in the range 0.0 to 1.0
            // We could simply clamp it, but instead we'll apply softer "half" lighting
            // (which Unity calls "Diffuse Wrap")
            NdotL = NdotL * 0.5 + 0.5;

            // Calculate dot product of view direction and surface normal
            // Note that, since we only render front-facing normals, this will
            // always be positive
            half NdotV = dot(s.Normal, viewDir);

            // Look up the corresponding colour from the BRDF texture map
            half3 brdf = tex2D (_BRDF, float2(NdotL, NdotV)).rgb;

            half4 c;
            // For illustrative purposes, let's set the pixel colour based entirely on the BRDF texture
            // In practice, you'd normally also include Albedo and light colour terms here
            c.rgb = brdf * (atten * 2);
            c.a = s.Alpha;
            return c;
        }

        struct Input {
            float2 uv_MainTex;
        };

        sampler2D _MainTex;

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}
```
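Outside the shader, the two lookup coordinates computed by the lighting function above behave like this. The remap is the standard "diffuse wrap" (half-Lambert) trick, and the BRDF map itself is just a 2D table keyed on the two results (here a tiny 2x2 grid of labels stands in for the ramp image, with nearest-neighbour sampling for simplicity):

```python
# "Diffuse wrap" remap used in the lighting function above:
# NdotL in [-1, 1] becomes a texture coordinate in [0, 1]
def diffuse_wrap(n_dot_l):
    return n_dot_l * 0.5 + 0.5

# A BRDF map is then just a 2D lookup keyed on (NdotL', NdotV)
def sample_brdf(texture, u, v):
    size = len(texture)
    x = min(size - 1, int(u * size))
    y = min(size - 1, int(v * size))
    return texture[y][x]

ramp = [["dark", "red"],      # low NdotV row
        ["black", "yellow"]]  # high NdotV row

print(diffuse_wrap(-1.0))  # 0.0 - surface facing away from the light
print(diffuse_wrap(1.0))   # 1.0 - surface facing the light
print(sample_brdf(ramp, diffuse_wrap(1.0), 1.0))  # "yellow"
```

The appeal is that however elaborate the ramp image, the per-pixel cost in the shader stays at two dot products and one texture fetch.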

And here’s the image it produces – notice how the shading varies from red to yellow based on view direction, and from light to dark based on direction to the light source.

As a slightly less trivial example, here's another BRDF texture map that again uses light direction relative to the surface on the x axis, but this time, instead of view direction, uses curvature of the surface on the y axis (the gradient in the y axis is quite subtle, but you should be able to note a reddish hue towards the top centre of the image, and a blueish tint at the top right):

This map can be used to generate convincing diffuse reflection of skin that varies across the surface of the model (such that, say, the falloff at the nose appears different from the forehead), as shown here:

## Procedural Terrain Splatmapping

If you try searching for information on how to assign textures to Unity terrain through code (i.e. derive a “splatmap” of textures based on the terrain heightmap profile itself), you’ll probably end up being sent to this post on the Unity Answers site. Despite being over 3 years old, it still seems to be pretty much the only helpful demonstration of accessing Unity’s somewhat poorly-documented terrain functions through script.

However, while it's a good starting point, the script there suffers from several shortcomings (I'm not blaming the original author – I imagine that at the time he wrote it, he probably didn't expect it to become the authoritative source on the matter!):

• It only works if your terrain’s splatmap has the same dimensions as its heightmap.
• It allows for only three textures to be blended.
• The y/x axes are inverted.
• The method of normalisation is incorrect. (`Vector3.Normalize()` sets the magnitude of the vector representing the texture weights to 1 – i.e. two equal textures will each have a weight of 0.707. What is instead required is that the component weights sum to 1 – i.e. each has a weight of 0.5.)
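That last point is easy to demonstrate with a quick Python comparison of the two normalisations:

```python
import math

weights = [1.0, 1.0]  # two textures with equal influence

# Vector normalisation (what Vector3.Normalize does): magnitude becomes 1
magnitude = math.sqrt(sum(w * w for w in weights))
vector_normalised = [w / magnitude for w in weights]
print(vector_normalised)  # [0.707..., 0.707...] - together they sum to ~1.414

# Sum normalisation (what a splatmap needs): weights sum to exactly 1
total = sum(weights)
sum_normalised = [w / total for w in weights]
print(sum_normalised)     # [0.5, 0.5]
```

With vector normalisation the total opacity at each texel exceeds 1, so the blended terrain ends up over-bright wherever several textures overlap.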

Attached below is my version of an automated splatmap creation script which attempts to correct some of the issues in that earlier code. It allows for any number of terrain textures to be blended based on the height, normal, steepness, or any other rules you create for each part of the terrain. Just attach the AssignSplatMap C# script below onto any terrain gameobject in the scene (having first assigned the appropriate textures to the terrain) and hit play.

```
using UnityEngine;
using System.Collections;
using System.Linq; // used for Sum of array

public class AssignSplatMap : MonoBehaviour {

    void Start () {
        // Get the attached terrain component
        Terrain terrain = GetComponent<Terrain>();

        // Get a reference to the terrain data
        TerrainData terrainData = terrain.terrainData;

        // Splatmap data is stored internally as a 3d array of floats, so declare a new empty array ready for your custom splatmap data:
        float[, ,] splatmapData = new float[terrainData.alphamapWidth, terrainData.alphamapHeight, terrainData.alphamapLayers];

        for (int y = 0; y < terrainData.alphamapHeight; y++)
        {
            for (int x = 0; x < terrainData.alphamapWidth; x++)
            {
                // Normalise x/y coordinates to range 0-1
                float y_01 = (float)y / (float)terrainData.alphamapHeight;
                float x_01 = (float)x / (float)terrainData.alphamapWidth;

                // Sample the height at this location (note GetHeight expects int coordinates corresponding to locations in the heightmap array)
                float height = terrainData.GetHeight(Mathf.RoundToInt(y_01 * terrainData.heightmapHeight), Mathf.RoundToInt(x_01 * terrainData.heightmapWidth));

                // Calculate the normal of the terrain (note this is in normalised coordinates relative to the overall terrain dimensions)
                Vector3 normal = terrainData.GetInterpolatedNormal(y_01, x_01);

                // Calculate the steepness of the terrain
                float steepness = terrainData.GetSteepness(y_01, x_01);

                // Setup an array to record the mix of texture weights at this point
                float[] splatWeights = new float[terrainData.alphamapLayers];

                // CHANGE THE RULES BELOW TO SET THE WEIGHTS OF EACH TEXTURE ON WHATEVER RULES YOU WANT

                // Texture[0] has constant influence
                splatWeights[0] = 0.5f;

                // Texture[1] is stronger at lower altitudes
                splatWeights[1] = Mathf.Clamp01((terrainData.heightmapHeight - height));

                // Texture[2] is stronger on flatter terrain
                // Note "steepness" is unbounded, so we "normalise" it by dividing by the extent of heightmap height and a scale factor
                // Subtract the result from 1.0 to give greater weighting to flat surfaces
                splatWeights[2] = 1.0f - Mathf.Clamp01(steepness * steepness / (terrainData.heightmapHeight / 5.0f));

                // Texture[3] increases with height, but only on surfaces facing the positive Z axis
                splatWeights[3] = height * Mathf.Clamp01(normal.z);

                // The sum of all texture weights must add to 1, so calculate the normalisation factor from the sum of weights
                float z = splatWeights.Sum();

                // Loop through each terrain texture
                for (int i = 0; i < terrainData.alphamapLayers; i++) {
                    // Normalise so that the sum of all texture weights = 1
                    splatWeights[i] /= z;

                    // Assign this point to the splatmap array
                    splatmapData[x, y, i] = splatWeights[i];
                }
            }
        }

        // Finally, assign the new splatmap to the terrainData:
        terrainData.SetAlphamaps(0, 0, splatmapData);
    }
}
```

Here’s some examples of rules you might want to implement for various textures:

• Texture weight based on surface normal (useful for e.g. snow accumulating on one side of a mountain, or moss growing on the north side of a hill)
• Texture weight based on height (e.g. ice caps at high altitudes, sand near sea-level)
• Texture weight based on steepness (e.g. grass grows on relatively flat terrain)
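Rules like these translate directly into per-texel weight functions. Here's a hypothetical Python sketch – the function names, thresholds, and scale factors are purely illustrative, not taken from the script above:

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

# Snow accumulates on surfaces facing "north" (positive z component of the normal here)
def snow_weight(normal_z):
    return clamp01(normal_z)

# Ice caps: weight grows linearly with altitude up to some maximum
def ice_weight(height, max_height):
    return clamp01(height / max_height)

# Grass: favours flat terrain (steepness measured in degrees, 0-90)
def grass_weight(steepness_degrees):
    return 1.0 - clamp01(steepness_degrees / 90.0)

print(grass_weight(0.0))   # 1.0 on perfectly flat ground
print(grass_weight(90.0))  # 0.0 on a sheer cliff
print(ice_weight(950.0, 1000.0))
```

Whatever rules you choose, remember they all get divided by their sum afterwards, so only the relative sizes of the weights matter.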

Using this script with Unity’s standard terrain textures turns the following default grey terrain:

Into this slightly more attractive scene:

Sure, there's a lot more you could do to improve the mapping, and it doesn't compare to the output you get from professional world-modelling tools such as World Machine. But, then again, it's free, and it gives you a good start from which to refine your terrain further.


## Importing DEM Terrain Heightmaps for Unity using GDAL

I know that some folks reading my blog are from the spatial/mapping community, and may have been disappointed that my posts of late have been more influenced by game development and Unity than by spatial data and Bing Maps. Well, good news, spatial fans – this post is about mapping terrain from DEM data! (to create terrain for a Unity game… )

I’ve written a few posts in the past (such as here and here) that have made use of Digital Elevation Model data, such as collected by the SRTM or GTOPO30 datasets, via GDAL into Bing Maps, WPF etc. In this post I’ll be running through a similar workflow to that which I’ve used before, but this time with the target output being a Unity terrain object.

So, here goes:

1.) Get a GeoTIFF file of DEM data. For whole-world SRTM coverage, use the Google Earth browser at http://www.ambiotek.com/topoview, or download directly from e.g. http://srtm.geog.kcl.ac.uk/portal/srtm41/srtm_data_geotiff/srtm_36_02.zip (although I note that King's have disabled directory browsing on their server, so you'll have to know the name of the SRTM file you want to access).

If you want to preview the data in the DEM file, you can open it up in MicroDEM (File –> Open –> Open DEM –> Select GeoTiff file). It should appear something like this:

Note that although you can load up GeoTIFF files in image applications such as Photoshop or GIMP, they won’t visualise the geographic data encoded in the file, and you’ll probably just see this:

2.) (Optional) Use gdalwarp to transform the data to an alternative projection and/or crop it to a particular area of interest. Like all forms of spatial data, DEM data can be provided in various different projections. The SRTM data I'm using here is provided in WGS84 decimal degrees, but as it's data for Great Britain, I'd like to reproject it to the British National Grid (EPSG:27700) instead:

```
gdalwarp -multi -t_srs EPSG:27700 srtm_36_02.tif srtm_36_02_warped.tif
```

I'll then crop a 100km × 100km area from the warped image so that it covers the Ordnance Survey grid square "SH", which includes North Wales and Snowdonia National Park, as shown in the red square here:

Here’s the gdalwarp command to crop the image:

```
gdalwarp -te 200000 300000 300000 400000 srtm_36_02_warped.tif srtm_36_02_warped_cropped.tif
```

If you were to view the warped, cropped image in MicroDEM again now, it would look like:

3.) Use gdal_translate to convert the GeoTIFF file to the raw heightmap format expected by Unity

```
gdal_translate -ot UInt16 -scale -of ENVI -outsize 1025 1025 srtm_36_02_warped_cropped.tif heightmap.raw
```

The settings used here are as follows:

• -ot UInt16: Use a 16 bit channel. Most graphics applications use 32 bit colour images, consisting of four channels (R,G,B,A), each with 8 bits. That means that, in any given channel, you can only represent an integer value between 0 and 255. We'll be encoding the heightmap data in a single channel, so having only 256 unique values would not give very precise resolution. Instead, we'll specify a 16 bit channel, which gives 65,536 unique values.
• -scale: To make the most of our 16 bit channel, we need to scale the height values from their original range to fill the full range of values available (0 – 65535). Specifying -scale with no parameters does this automatically.
• -of ENVI: This outputs the result in the raw binary file format Unity expects.
• -outsize 1025 1025: Unity terrain heightmaps must have dimensions equal to a power of two plus one, i.e. 65x65, 129x129, 257x257, 513x513, 1025x1025, 2049x2049 etc.
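The effect of -scale can be sketched as a simple linear remap of each elevation value from the source range onto the full 16-bit range (this is an illustration of the idea, not GDAL's exact implementation):

```python
# Linear remap of a raw elevation value onto the full 16-bit range,
# roughly what gdal_translate's -scale option does when given no
# explicit source/destination bounds
def scale_to_uint16(value, src_min, src_max):
    t = (value - src_min) / (src_max - src_min)
    return round(t * 65535)

# e.g. a tile whose elevations run from sea level to 1085 m (the summit of Snowdon)
print(scale_to_uint16(0, 0, 1085))     # 0
print(scale_to_uint16(1085, 0, 1085))  # 65535
print(scale_to_uint16(542.5, 0, 1085))
```

This is also why the heights in the imported terrain are relative: the highest point in the tile always maps to 65535, so you set the real-world vertical extent separately via the terrain's size settings.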

4.) Import into Unity

Create a new terrain object and, from the settings tab of the Inspector, click the button to Import Raw heightmap. If you browse to select the heightmap.raw file created in the previous step, it should populate the correct settings automatically (note that, even when running under Windows, gdal appears to use Mac Byte Order). Note that Width and Height need to match the Heightmap Resolution field in the terrain inspector, not the Width and Height of the terrain itself:

If all goes well, your blank terrain should magically update to reflect the heightmap and you’ll see the following:

5.) Correct the orientation

The observant amongst you will notice one slight problem with the previous image – the heightmap has been rotated anti-clockwise by 90 degrees. It turns out that Unity treats heightmap coordinates with (0,0) at the bottom-left corner, whereas most other applications, including gdal, interpret (0,0) as the top-left corner. Fortunately the fix is quite simple. Just attach the following script to the terrain object and click play (the changes are saved to the terrain, so the script can be disabled or deleted after it has run once):

```
using UnityEngine;
using System.Collections;

public class RotateTerrain : MonoBehaviour {

    void Start () {
        Terrain terrain = GetComponent<Terrain>();

        // Get a reference to the terrain data
        TerrainData terrainData = terrain.terrainData;

        // Populate an array with the current height data
        float[,] orgHeightData = terrainData.GetHeights(0, 0, terrainData.heightmapWidth, terrainData.heightmapHeight);

        // Initialise a new array of the same size to hold the corrected data
        float[,] mirroredHeightData = new float[terrainData.heightmapWidth, terrainData.heightmapHeight];

        for (int y = 0; y < terrainData.heightmapHeight; y++)
        {
            for (int x = 0; x < terrainData.heightmapWidth; x++)
            {
                // Mirror each row vertically, swapping the top-left and bottom-left origins
                mirroredHeightData[y, x] = orgHeightData[terrainData.heightmapHeight - y - 1, x];
            }
        }

        // Finally, assign the corrected heightmap to the terrainData:
        terrainData.SetHeights(0, 0, mirroredHeightData);
    }
}
```
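The index transformation used by the script can be checked on a tiny array: each row is mirrored vertically, which swaps the top-left and bottom-left origin conventions:

```python
# Same transform as the script above: output[y][x] = input[(H - 1) - y][x]
def flip_vertically(heights):
    h = len(heights)
    return [[heights[h - 1 - y][x] for x in range(len(heights[0]))]
            for y in range(h)]

heights = [[1, 2],
           [3, 4]]
print(flip_vertically(heights))  # [[3, 4], [1, 2]]
```

Note the transform is its own inverse, so if you accidentally run the script twice you'll get the original (wrong) orientation back.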

And there you have it – DEM data of Wales imported and ready to play in Unity. Now, if you want, you can texture it using procedural splat mapping (or just paint textures on manually).


## Oooh… Shiny! Fun with Cube-Mapping

Cube mapping is a rendering technique used to create shiny surfaces that appear to reflect the environment around them. Note that this differs from specular highlights – bright spots caused by reflected lights in the scene. Instead, cube maps simulate the reflection of other objects in the scene. Like the gentleman in this kettle, for example (taken from the Snopes article on "Reflecto-Porn"):

Cube maps are pretty cheap and easy to produce – making them feasible even on modern mobile devices – and they can create a more convincing "base" lighting level than simple ambient lighting. So, how do you do it?

## Creating a Static Cube Map from Photographic Imagery

Cube maps are a set of six textures which, as the name suggests, can be assembled into the faces of a cube representing a full 360° view of a location. For example, the image below shows a cubemap of the interior of the Santa Maria Dei Miracoli Via Del Corso church in Rome, Italy, taken from this site:

You can create a simple cubemap texture of any real-world location using a camera and tripod (and perhaps a bit of image post-processing to correct any lens distortion and stitch together the image edges). If you’re using Unity, you don’t need to bother stitching together the images as shown above. Instead, simply select Create –> Cubemap and drag the six separate images into the appropriate texture slots:

To use the cubemap, create a new material that uses one of Unity’s “reflective” shaders, which have already implemented the necessary lighting calculations to apply the cubemap reflections. The result will look something like as follows:

The problem with the approach described so far might have already become apparent: the object is reflecting a static environment as seen by the photographer at a particular location in a particular church in Italy. Unless your game happens to feature a shiny object placed at that same exact location, the cubemap is not going to add much realism to its appearance…

• Looks like a shiny metal ball hovering in an Italian church
• Doesn't look like a shiny metal ball hovering in the tropics

… that’s not to say that static cubemaps can’t ever be of use. Particularly in outdoor scenes, it can still be beneficial to use a cubemap that reflects the general environment (a snow scene, a desert etc.), and use that to provide low level lighting on the object. In these situations, you can often reuse the same texture used to create the skybox, for example. It won’t reflect specific features, but that often doesn’t matter.

## Creating Dynamic Cubemaps

To create a more convincing reflection map, we need to sample the actual game scene in which the object is placed and then apply that as a cubemap texture onto the object. Here’s a script that does just that in Unity (works in free or Pro!):

• Create a new C# script and copy ‘n’ paste the below
• Attach the script to any game object in the scene at the location at which you want the cubemap to be sampled
• Click play
• The script will create a camera positioned at the centre of the object and point it sequentially up, right, forward, down, left, and back, saving a PNG texture of each view in your project's "Assets" folder. Output files are named with the name of the object followed by _PositiveX, _PositiveY, _NegativeX etc. You can use these images to create a new cubemap texture as outlined above.
• If the object on which the cube is placed has a reflective material (and the reflection map property is named “_Cube”, which it is in the inbuilt Unity reflective shaders), it will also automatically set the cubemap material on the object to allow you to preview it. Press R to update the preview if you move any of the objects in the scene. However note that this will be reset when you stop the player, so you’ll have to create a new cubemap material if you want the changes to be permanent.

```
using UnityEngine;
using System.Collections;
using System.IO; // Used for writing PNG textures to disk

public class CubemapProbe : MonoBehaviour {

// Cubemap resolution - should be power of 2: 64, 128, 256 etc.
public int resolution = 256;

#region Cubemap functions

// Return screencap as a Texture2D
// http://wiki.unity3d.com/index.php?title=RenderTexture_Free
private Texture2D CaptureScreen() {
Texture2D result;
Rect captureZone = new Rect( 0f, 0f, Screen.width, Screen.height );
result = new Texture2D( Mathf.RoundToInt(captureZone.width), Mathf.RoundToInt(captureZone.height), TextureFormat.RGB24, false);
// Read the current screen contents into the texture
// (only valid after WaitForEndOfFrame, which the calling coroutine ensures)
result.ReadPixels( captureZone, 0, 0 );
result.Apply();
return result;
}

// Save a Texture2D as a PNG file
private void SaveTextureToFile(Texture2D texture, string fileName) {
byte[] bytes = texture.EncodeToPNG();
FileStream file = File.Open(Application.dataPath + "/" + fileName,FileMode.Create);
BinaryWriter binary = new BinaryWriter(file);
binary.Write(bytes);
file.Close();
}

// Resize a Texture2D
// http://docs-jp.unity3d.com/Documentation/ScriptReference/Texture2D.GetPixelBilinear.html
Texture2D Resize(Texture2D sourceTex, int Width, int Height) {
Texture2D destTex = new Texture2D(Width, Height, sourceTex.format, true);
Color[] destPix = new Color[Width * Height];
int y = 0;
while (y < Height) {
int x = 0;
while (x < Width) {
float xFrac = x * 1.0F / (Width );
float yFrac = y * 1.0F / (Height);
destPix[y * Width + x] = sourceTex.GetPixelBilinear(xFrac, yFrac);
x++;
}
y++;
}
destTex.SetPixels(destPix);
destTex.Apply();
return destTex;
}

// Flip/Mirror the pixels in a Texture2D about the x axis
Texture2D Flip(Texture2D sourceTex) {
// Create a new Texture2D the same dimensions and format as the input
Texture2D Output = new Texture2D(sourceTex.width, sourceTex.height, sourceTex.format, true);
// Loop through pixels
for (int y = 0; y < sourceTex.height; y++)
{
for (int x = 0; x < sourceTex.width; x++)
{
// Retrieve pixels in source from left-to-right, bottom-to-top
Color pix = sourceTex.GetPixel(x, (sourceTex.height-1) - y);
// Write to output from left-to-right, top-to-bottom
Output.SetPixel(x, y, pix);
}
}
return Output;
}
#endregion

// Use this for initialization
void Start () {

// CreateCubeMap must be called in a co-routine, because we need it to wait for the end
// of each frame render before grabbing the screen
StartCoroutine(CreateCubeMap());
}

// This is the coroutine that creates the cubemap images
IEnumerator CreateCubeMap()
{
// Initialise a new cubemap
Cubemap cm = new Cubemap(resolution, TextureFormat.RGB24, true);

// Disable any renderers attached to this object which may get in the way of our camera
if(renderer) {
renderer.enabled = false;
}

// Face render order: Top, Right, Front, Bottom, Left, Back
Quaternion[] rotations = { Quaternion.Euler(-90,0,0), Quaternion.Euler(0,90,0), Quaternion.Euler(0,0,0), Quaternion.Euler(90,0,0), Quaternion.Euler(0,-90,0), Quaternion.Euler(0,180,0)};
CubemapFace[] faces = { CubemapFace.PositiveY, CubemapFace.PositiveX, CubemapFace.PositiveZ, CubemapFace.NegativeY, CubemapFace.NegativeX, CubemapFace.NegativeZ };

// Create a single face matching the settings of the cubemap itself
Texture2D face = new Texture2D(resolution, resolution, TextureFormat.RGB24, true);

// Use this to prevent white borders around edge of texture
face.wrapMode = TextureWrapMode.Clamp;

// Create a camera that will be used to render the faces
GameObject go = new GameObject("CubemapCamera", typeof(Camera));

// Place the camera on this object
go.transform.position = transform.position;

// Initialise the rotation - this will be changed for each texture grab
go.transform.rotation = Quaternion.identity;

// We need each face of the cube to cover exactly 90 degree FoV
go.camera.fieldOfView = 90;

// Ensure this camera renders above all others
go.camera.depth = float.MaxValue;

// Loop through and create each face
for(int i = 0; i < 6; i++) {
// Set the camera direction
go.transform.rotation = rotations[i];
// Important - wait the frame to be fully rendered before capturing
yield return new WaitForEndOfFrame();
// Capture the pixels to the texture
face = CaptureScreen();
// Resize to the chosen resolution - cubemap faces must be square!
face = Resize(face, resolution, resolution);
// Flip the image across the x axis
face = Flip(face);
// Retrieve the pixelarray of colours for the current face
Color[] faceColours = face.GetPixels();
// Set the current cubemap face
cm.SetPixels(faceColours, faces[i], 0);
// Save the texture
SaveTextureToFile(face, gameObject.name + "_" + faces[i].ToString() + ".png");
}

// Apply the SetPixel changes to the cubemap faces to make them take effect
cm.Apply();

// Assign the cubemap to the _Cube texture of this object's material
if(renderer.material.HasProperty("_Cube")) {
renderer.material.SetTexture("_Cube", cm);
}

// Cleanup
DestroyImmediate(face);
DestroyImmediate(go);

// Re-enable the renderer
if(renderer) {
renderer.enabled = true;
}
}

// Update is called once per frame
void Update () {

// Recalculate the cubemap if "R" is pressed)
if(Input.GetKeyDown(KeyCode.R)){
StartCoroutine(CreateCubeMap());
}
}
}

```

Note that there are still a few issues with this approach – you can currently only attach this script to one object in the scene at a time (because the camera rendering each face needs to be on top of all others). If you have Unity Pro you could get around this by rendering directly from the camera to a render texture. Also, the reflection is created from a single point at the "centre" of the object, which may be some distance from its exterior reflective surface. There are also some distortion issues at the edges of the field of view, but it's good enough for me. If you make any improvements to it, please let me know!

And here’s that shiny sphere again, this time using a cubemap created from the game scene in which it is placed, using the script above:

Rendering the faces of the cubemap textures takes a second or so, so you wouldn't want to do it in realtime in the game. If you have an object that moves around and you want it to reflect its current environment, one approach is to pre-render cubemaps for various "zones" in your game (e.g. in each room in an indoor scene, or wherever the scenery changes significantly in an outdoor environment), then swap in the cubemap texture closest to the object's current location. This approach is described in more detail at http://www.blog.radiator.debacle.us/2013/05/cubemapped-environment-probes-source.html
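A minimal sketch of that zone-swapping idea: pick the pre-rendered cubemap whose probe position is nearest the reflective object. The probe positions and asset names here are purely illustrative:

```python
import math

# Hypothetical pre-rendered probes: position -> cubemap asset name
probes = {
    (0.0, 0.0, 0.0):   "cubemap_lobby",
    (20.0, 0.0, 0.0):  "cubemap_corridor",
    (20.0, 0.0, 30.0): "cubemap_courtyard",
}

# Return the cubemap belonging to the probe closest to the given position
def nearest_cubemap(position):
    def dist(probe_pos):
        return math.dist(probe_pos, position)
    return probes[min(probes, key=dist)]

print(nearest_cubemap((18.0, 0.0, 2.0)))  # "cubemap_corridor"
```

In practice you'd also want to blend between the two nearest probes as the object crosses a zone boundary, to avoid a visible pop in the reflection.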

## Smooth Unity Camera Transitions with Animation Curves

I’m creating a game in which I want to transition between two camera views – a top-down, overhead camera that gives a zoomed-out world view, and an over-the-shoulder style forward-facing camera to give a close-up view. Let’s say for the sake of discussion that these positions are defined relative to the player, as follows:

• Overview: Position x/y/z: (0, 60, 0). Rotation x/y/z: (-90, 0, 0) (60 units above, rotated straight down)
• Close-up: Position x/y/z: (0, 0, -0.5). Rotation x/y/z: (0, 0, 0) (0.5 units behind, aligned with player direction)

Rather than simply cut between the two camera views, or present the top-down view in a second minimap camera alongside the main game view, I want to use a single camera and transition smoothly between the two view locations. Initially, I tried simply interpolating the position and rotation of the camera between the two locations (using Lerp() for linear interpolation of the position vectors and Slerp() for the rotation quaternions), based on a single zoom parameter:

```
transform.position = Vector3.Lerp(zoomstart.position, zoomend.position, zoom);
transform.rotation = Quaternion.Slerp(zoomstart.rotation, zoomend.rotation, zoom);
```

This gives:

However, this didn't create the effect I wanted – I wanted to remain largely looking down as the camera zoomed in, only pulling up to match the player's look vector at relatively close zoom levels, and also to ease the movement at both ends of the zoom.

My next thought was to ease in/out the camera motion using one of the functions described in this earlier post – perhaps a sigmoid curve, so that the rate of rotation was greatest at either extreme of the zoom spectrum. But I had trouble finding a function that exactly described the motion I wanted to create.

And then I discovered Unity’s AnimationCurves. I don’t know why they’re called animation curves, because they’re not specific to animations at all – they allow you to define custom curves based on an arbitrary number of control points (keys), and then evaluate the value of the function at any point along the curve.

To look up the value at any point along the curve, use the Evaluate() method. For example, in the curve above, the y value of the curve at x=0.5 is given by Curve.Evaluate(0.5f), which results in 0.692. Using this, you can smoothly adjust any variable based on a single-dimensional input.
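Under the hood, an AnimationCurve interpolates between its keys using cubic Hermite segments: each Keyframe stores a value plus in/out tangents. Evaluating one segment can be sketched like this (using the same keyframe values as the distance curve later in this post: 0.5 with out-tangent 0, and 60 with in-tangent 90):

```python
# Cubic Hermite interpolation between two keyframes (value + tangent),
# which is how one AnimationCurve segment is evaluated;
# dt is the time span of the segment (1.0 here, for keys at t=0 and t=1)
def hermite(t, v0, out_tangent, v1, in_tangent, dt=1.0):
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * v0
            + (t3 - 2 * t2 + t) * dt * out_tangent
            + (-2 * t3 + 3 * t2) * v1
            + (t3 - t2) * dt * in_tangent)

# The endpoints always hit the key values exactly
print(hermite(0.0, 0.5, 0.0, 60.0, 90.0))  # 0.5
print(hermite(1.0, 0.5, 0.0, 60.0, 90.0))  # 60.0
print(hermite(0.5, 0.5, 0.0, 60.0, 90.0))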

## Curves for Smooth Camera Movement

I have three camera transform parameters that change between my two camera locations – the offset position in the y and z axes, and the rotation in the x axis. Since each animation curve can express only one dimension, I thought I might need three animation curves, one for each parameter that varies as the camera zooms in. However, on further thought I decided on a better way: one curve to control the rotation around the x axis (i.e. adjusting the pitch of the camera), and a second curve to set the distance by which the camera should be offset back along its view direction at that pitch. Since the camera rotates about the x axis, translating back along its view direction means the second curve governs both the height (y) and offset (z) in a single curve.

You can create animation curves in the inspector, but I initialised them in code as follows:

```public class CameraScript : MonoBehaviour {

    // How the camera pitch (i.e. rotation about the x axis) should vary with zoom
    public AnimationCurve pitchCurve;
    // How far the camera should be placed back along the chosen pitch, based on zoom
    public AnimationCurve distanceCurve;

    // Use this for initialization
    void Start () {

        // Create an 'S'-shaped curve to adjust pitch:
        // varies from 0 (looking forward) at zoom=0, to 90 (looking straight down) at zoom=1
        pitchCurve = AnimationCurve.EaseInOut(0.0f, 0.0f, 1.0f, 90.0f);

        // Create an exponentially-shaped curve to adjust distance,
        // so zoom control is finer at close range and coarser further away
        Keyframe[] ks = new Keyframe[2];
        // At zoom=0, offset by 0.5 units
        ks[0] = new Keyframe(0, 0.5f);
        ks[0].outTangent = 0;
        // At zoom=1, offset by 60 units
        ks[1] = new Keyframe(1, 60);
        ks[1].inTangent = 90;
        distanceCurve = new AnimationCurve(ks);
    }
}

```
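These curves can be reproduced outside Unity too: between any two keyframes, AnimationCurve interpolates with a cubic Hermite spline driven by the keys' values and tangents. A Python sketch of both curves, assuming only that standard Hermite formula:

```python
def hermite(t, t0, v0, m0, t1, v1, m1):
    # Cubic Hermite segment between key (t0, v0) with out-tangent m0
    # and key (t1, v1) with in-tangent m1.
    dt = t1 - t0
    s = (t - t0) / dt
    s2, s3 = s * s, s * s * s
    return ((2*s3 - 3*s2 + 1) * v0 + (s3 - 2*s2 + s) * dt * m0
            + (-2*s3 + 3*s2) * v1 + (s3 - s2) * dt * m1)

def pitch(zoom):
    # EaseInOut(0, 0, 1, 90): zero tangents at both ends -> 'S' shape
    return hermite(zoom, 0.0, 0.0, 0.0, 1.0, 90.0, 0.0)

def distance(zoom):
    # Keys (0, 0.5) with outTangent 0, and (1, 60) with inTangent 90
    return hermite(zoom, 0.0, 0.5, 0.0, 1.0, 60.0, 90.0)

print(pitch(0.5))     # 45.0 -- halfway through the S-curve
print(distance(0.0))  # 0.5
print(distance(1.0))  # 60.0
```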

This generates the following curves:

The generated pitch curve (left) and distance curve (right)

I then evaluate the appropriate point from each curve based on the camera zoom level, as follows:

```// Calculate the appropriate pitch (x rotation) for the camera based on current zoom
float targetRotX = pitchCurve.Evaluate(zoom);

// The desired yaw (y rotation) is to match that of the target object
float targetRotY = target.rotation.eulerAngles.y;

// Create target rotation as quaternion
// Set z to 0 as we don't want to roll the camera
Quaternion targetRot = Quaternion.Euler(targetRotX, targetRotY, 0.0f);

// Calculate, in world-aligned axes, how far back we want the camera to be based on the current zoom
Vector3 offset = Vector3.forward * distanceCurve.Evaluate(zoom);

// Then subtract this offset based on the current camera rotation
Vector3 targetPos = target.position - targetRot * offset;

```
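In plain Python (a sketch of the same geometry, not the post's code), the target-position computation rotates a world-forward offset by pitch and then yaw, and subtracts it from the target position:

```python
import math

def target_camera_pos(target, yaw_deg, pitch_deg, dist):
    # Rotate the world-forward offset first by pitch (about x),
    # then by yaw (about y), then subtract it from the target --
    # mirroring "target.position - targetRot * offset" above.
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    # Pitch about x (Unity convention: positive pitch tilts the view down)
    fx, fy, fz = 0.0, -math.sin(p), math.cos(p)
    # Yaw about y
    ox = fx * math.cos(y) + fz * math.sin(y)
    oz = -fx * math.sin(y) + fz * math.cos(y)
    tx, ty, tz = target
    return (tx - ox * dist, ty - fy * dist, tz - oz * dist)

# Fully zoomed out (pitch 90, dist 60): camera ends up directly above the target
print(target_camera_pos((0.0, 0.0, 0.0), yaw_deg=0.0, pitch_deg=90.0, dist=60.0))
```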

Finally, rather than setting the camera position and rotation directly, I lerp/slerp from the current transform.position/rotation towards the target position/rotation each frame to smooth the movement. The result, which I'm pretty happy with, is shown below (scene view on the left, resulting camera view on the right):
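The smoothing itself is not shown in the post, but the idea is simple: each frame, move a fixed fraction of the remaining way towards the target. A scalar Python sketch (the 0.1 smoothing factor is an illustrative value of my own, not from the post):

```python
def lerp(a, b, t):
    # Linear interpolation -- the scalar analogue of Vector3.Lerp /
    # Quaternion.Slerp used to ease the camera towards its target.
    return a + (b - a) * t

# Each frame, move a fraction of the remaining distance to the target,
# giving fast initial movement that eases out as the camera approaches.
pos, target = 0.0, 10.0
for _ in range(50):
    pos = lerp(pos, target, 0.1)
print(pos)  # close to 10 after 50 frames
```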

## Hand-Drawn Shaders and creating Tonal Art Maps

I recently submitted my first asset to the Unity Asset Store – a set of shaders designed to produce a variety of real-time “hand-drawn” effects.

Drawing on a range of influences, including A-Ha, Murasaki Baby, and Okami, the shaders are designed to be flexible and customisable. Separate “outline” and “fill” styles can be combined in different ways to create a range of effects resembling various art styles. The following images show the result when applying some of the supplied example materials to the Unity ShadowGun sample project:

## Ballpoint Pen

You can play a web demo that demonstrates the effects here, and another here.

The “ballpoint pen” and “pencil” shaders both make use of a technique that calculates the apparent brightness of each part of the game scene, and replaces it with a texture interpolated from a sequence of images with progressively increasing density. This can be used to simulate artistic techniques such as hatching that represent different levels of tone intensity in an image. The basic method on which these shaders are based is described in this Microsoft Research article.

A Tonal Art Map representing increasing levels of hatching intensity
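As a sketch of that brightness-to-texture lookup (my own illustration — the shader pack's exact mapping may differ): the computed brightness selects a pair of adjacent TAM levels and a blend weight between them:

```python
def tam_blend(brightness, levels):
    # Map brightness in [0, 1] to two adjacent tonal-art-map levels
    # and the interpolation weight between them.
    # Level 0 = lightest texture, level (levels - 1) = densest hatching.
    # Darker pixels need denser hatching, so invert brightness to a tone.
    tone = (1.0 - brightness) * (levels - 1)
    lo = min(int(tone), levels - 2)
    weight = tone - lo   # 0 -> all of texture lo, 1 -> all of texture lo+1
    return lo, lo + 1, weight

print(tam_blend(1.0, 6))  # (0, 1, 0.0)  fully lit: lightest texture
print(tam_blend(0.0, 6))  # (4, 5, 1.0)  fully dark: densest texture
print(tam_blend(0.5, 6))  # (2, 3, 0.5)  midway between levels 2 and 3
```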

My shader pack already comes supplied with a set of hatching textures (referred to as the “Tonal Art Map”), but I thought I’d document here how I created them, so you can follow the process to create your own alternative TAMs. You can actually download the original set of 6 textures used in the Microsoft Research paper from here (together with an implementation of the hatching method in JavaScript). However, since those textures are licensed under the GPL, I couldn’t include them with the Asset Store download, so I created my own from scratch instead.

## Creating a Tonal Art Map

You can use pretty much any graphics program for this – I used Paint.NET, but you could use GIMP or Photoshop just as easily.

1. Create a blank image – for performance reasons you should always make sure its dimensions are a power of 2 – 256px x 256px should be fine for most purposes.

2. Add some random noise to the image. In Paint.NET this is Effects –> Noise –> Add Noise. My TAMs are all monochrome, so set Color Saturation to 0. We’ll create the least intense image first, so set coverage relatively low (5 should do it), and set intensity high:

This should create an image as follows:

3. Now apply a blur using Effects –> Blurs –> Motion Blur. Choose an angle and distance that look attractive to you.

This gives:

4. Notice how the effect of blurring has created the strokes we want, but lightened the stroke intensity too much in the process (in fact, you can barely see them in the image above). To correct that, we’ll use Adjustments –> Curves. Change the Luminosity curve to resemble something like this:

This will make the image look like this:

5. If you’re happy with this as a base image (if anything, the image above is actually too intense, but I’ll leave it for now), then save it as your first texture (e.g. FillTex_0.tif). Then add a new layer to the image (Layers –> Add New Layer).

NOTE: For some reason, the Paint.NET noise function doesn’t appear to work on layers with a transparent background, so the first thing to do is to use the fill tool to fill the new layer in White. This will obscure your background layer for now but don’t worry, we’ll sort that out later.

6. With the new layer selected, repeat steps 2-4 above, but rotate the angle of the motion blur by 90 degrees. This should make your top layer appear as follows:

7. Now bring up the properties for your top layer (Layer –> Layer Properties) and change the Blend Mode to Multiply:

This will cause the hatching of the second layer to be overlaid on the first, darkening it where strokes intersect in much the same manner as would occur if you were to use a pen on paper:

8. If you’re happy with that as your second texture, merge the layers down (Layers –> Merge Layers Down) and save as FillTex_1.tif.

9. Keep repeating the process of adding new layers, adding noise, blurring, adjusting luminosity, and multiply blending for each additional texture in your Tonal Art Map (my shader pack includes shaders for 4-texture and 6-texture TAMs). For cross-hatching you should alternate the blur angle with each layer, but you might also want to include a bit of random variation. The important feature of this technique is that each progressively more intense texture builds upon all the preceding textures.
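Numerically, multiply blending is just a per-pixel product, which is why each layer can only ever darken the result — a quick Python illustration with toy values (not real texture data):

```python
def multiply_blend(bottom, top):
    # Multiply blend mode: per-pixel product, with 1.0 = white paper.
    # White leaves the layer below untouched; any stroke darkens it.
    return [b * t for b, t in zip(bottom, top)]

# Toy 4-pixel rows: lower values = darker strokes.
layer0 = [1.0, 0.4, 1.0, 0.4]   # first hatching direction
layer1 = [1.0, 1.0, 0.4, 0.4]   # second direction, rotated 90 degrees
combined = multiply_blend(layer0, layer1)
print(combined)  # darkest (~0.16) where the two stroke directions cross
```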

Of course, you don’t have to stick to simple straight lines – try different noise functions, wavy lines, or textures in your TAMs to create interesting effects, or to simulate other materials such as pencil, chalk, or charcoal. Here’s another Microsoft Research paper that demonstrates some alternative textures:
