Learnings from Unite 2015

I spent the last three days having a lovely time in Amsterdam attending the Unite Europe 2015 conference. This is just a place for me to gather my notes/thoughts while they’re still in my head:

 

  • Iestyn Lloyd’s (@yezzer) Dropship demo and his other dioramas are built pretty much entirely with assets and effects from the Asset Store, including: Orbital reentry craft model by Andromeda, Winter shaders (for the frost effect), RTP and Terrain composer (terrain, obv.), Allsky (skyboxes), Colorful (colour grading/filters), SE natural bloom & dirty lens, SSAO Pro, Amplify motion (effects), and Allegorithmic Substance (texturing)

 

  • Static Sky is a nice-looking touch-controlled squad shooter for iPad, and the developers have worked out some smart techniques for intuitive camera control and touch input, such as dynamically stretching the screenspace hitboxes of interactive objects so that even fat-fingered players’ intentions can be interpreted correctly (tapping close to an enemy will likely be interpreted as an intention to attack that enemy, rather than to simply walk up and stand next to it).

 

  • Lots of new audio stuff in Unity 5 deserves playing with, including the ability to create hierarchical AudioMixer groups, transitions between AudioMixer snapshots, and the ability to create audio plugins for, e.g., realtime dynamic effects (vocoding of voices, or programmatic raindrops, for example) – there’s a small snapshot-transition sketch just after this list
  • Assets for the Blacksmith demo are interesting and worthy of a download to examine, particularly the custom hair and skin shaders

  • The Unity Roadmap is now public, so you can check out what features are coming up.
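
For reference, transitioning between AudioMixer snapshots from script is essentially a one-liner. Here’s a minimal sketch – the snapshot names, fade time, and the idea of triggering the blend on entering a building are placeholder assumptions, not anything from the conference talks:

using UnityEngine;
using UnityEngine.Audio;

public class SnapshotBlend : MonoBehaviour
{
    // Assign these in the inspector from an AudioMixer asset
    // (snapshot names "Outdoors"/"Indoors" are just examples)
    [SerializeField] private AudioMixerSnapshot m_Outdoors;
    [SerializeField] private AudioMixerSnapshot m_Indoors;

    // Call these from gameplay events, e.g. entering or leaving a building
    public void EnterBuilding()
    {
        // Interpolates all mixer parameters to the "Indoors" snapshot over 1.5 seconds
        m_Indoors.TransitionTo(1.5f);
    }

    public void ExitBuilding()
    {
        m_Outdoors.TransitionTo(1.5f);
    }
}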

Modding Mario

Video footage of the two finalists in the recent 2015 “Nintendo World Championship” competing in speedruns of customised Super Mario Bros levels has generated some excitement for the upcoming “Mario Maker” game.

 

It’s quite fun to watch the competitors react to the unexpected level features, and some thought has gone into their entertaining, challenging design, but I struggled to find anything new here: fan-made games have allowed you to customise Mario games for some time now, not just with new maps, but with completely novel gameplay features.

Super Mario Crossover, for example, lets you play through all the Super Mario Bros levels as classic Nintendo characters including Link, Samus Aran, Simon Belmont, and Megaman. And these aren’t just skins slapped in place of Mario – each character recreates their own unique skills and mechanics from their original games – Samus can roll into a ball and lay bombs to blow up Goombas, for example.

image

 

Then there’s the brilliant Mari0, which combines Mario with Portal mechanics:

https://i1.wp.com/stabyourself.net/images/mari0screens/mari0-bowser.png 

 

Super Mario War is a multiplayer, competitive, deathmatch-style game using Mario mechanics. Full source code is available.

https://i0.wp.com/wii-homebrew.com/images/whb/hb-games/smwscreen3.png

 

I even recently created my own Mario mod, “Super Mario Team”, which turns Super Mario Bros into a co-operative local multiplayer game that can be played by up to 8 players:

image

 

My worry is that, with Nintendo now seeking to monetise the modding of Mario through the “Mario Maker” product on Wii U, it seems likely that they will want to close down a lot of these fan-made projects that offer the same (or greater) functionality for free. Nintendo are notoriously protective of their IP, and only a few months ago sent a cease and desist request to the student-made “Super Mario 64 HD”, so if you want to play any of these unofficial Mario games I suggest you do so quickly!


Creating Windowless Unity applications

I remember a while back there was a trend for applications to seem integrated with your Windows desktop – animated cats that would walk along your taskbar, things like that… What about a windowless Unity game that uses a transparent background to reveal your regular desktop behind anything rendered by the camera?

You can do this using a combination of the OnRenderImage() event of a camera, some calls into the Windows native APIs, and a custom shader. Here’s how:

First, you’ll need to create a custom shader, as follows:

Shader "Custom/ChromakeyTransparent" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_TransparentColourKey ("Transparent Colour Key", Color) = (0,0,0,1)
		_TransparencyTolerance ("Transparency Tolerance", Float) = 0.01 
	}
	SubShader {
		Pass {
			Tags { "RenderType" = "Opaque" }
			LOD 200
		
			CGPROGRAM

			#pragma vertex vert
			#pragma fragment frag
			#include "UnityCG.cginc"

			struct a2v
			{
				float4 pos : POSITION;
				float2 uv : TEXCOORD0;
			};

			struct v2f
			{
				float4 pos : SV_POSITION;
				float2 uv : TEXCOORD0;
			};

			v2f vert(a2v input)
			{
				v2f output;
				output.pos = mul (UNITY_MATRIX_MVP, input.pos);
				output.uv = input.uv;
				return output;
			}
		
			sampler2D _MainTex;
			float3 _TransparentColourKey;
			float _TransparencyTolerance;

			float4 frag(v2f input) : SV_Target
			{
				// What is the colour that *would* be rendered here?
				float4 colour = tex2D(_MainTex, input.uv);
			
				// Calculate the difference in each component from the chosen transparency colour
				float deltaR = abs(colour.r - _TransparentColourKey.r);
				float deltaG = abs(colour.g - _TransparentColourKey.g);
				float deltaB = abs(colour.b - _TransparentColourKey.b);

				// If colour is within tolerance, write a transparent pixel
				if (deltaR < _TransparencyTolerance && deltaG < _TransparencyTolerance && deltaB < _TransparencyTolerance)
				{
					return float4(0.0f, 0.0f, 0.0f, 0.0f);
				}

				// Otherwise, return the regular colour
				return colour;
			}
			ENDCG
		}
	}
}


This code should be fairly self-explanatory – it looks at an input texture and, if a pixel colour is within a certain tolerance of the chosen key colour, it is made transparent (you may be familiar with this technique as “chromakey”, or “green/blue screen” used in film special effects).

Create a material using this shader, and choose the key colour that you want to replace (and change the tolerance if necessary). Note that you don’t need to assign the main texture property – we’ll use the output of a camera to supply this texture, which is done in the next step…

Now, create the following C# script and attach to your main camera:

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class TransparentWindow : MonoBehaviour
{
    [SerializeField]
    private Material m_Material;

    private struct MARGINS
    {
        public int cxLeftWidth;
        public int cxRightWidth;
        public int cyTopHeight;
        public int cyBottomHeight;
    }

    // Define function signatures to import from Windows APIs

    [DllImport("user32.dll")]
    private static extern IntPtr GetActiveWindow();

    [DllImport("user32.dll")]
    private static extern int SetWindowLong(IntPtr hWnd, int nIndex, uint dwNewLong);
    
    [DllImport("Dwmapi.dll")]
    private static extern uint DwmExtendFrameIntoClientArea(IntPtr hWnd, ref MARGINS margins);


    // Definitions of window styles
    const int GWL_STYLE = -16;
    const uint WS_POPUP = 0x80000000;
    const uint WS_VISIBLE = 0x10000000;

    void Start()
    {
        #if !UNITY_EDITOR
        var margins = new MARGINS() { cxLeftWidth = -1 };

        // Get a handle to the window
        var hwnd = GetActiveWindow();

        // Set properties of the window
        // See: https://msdn.microsoft.com/en-us/library/windows/desktop/ms633591%28v=vs.85%29.aspx
        SetWindowLong(hwnd, GWL_STYLE, WS_POPUP | WS_VISIBLE);
        
        // Extend the window into the client area
        // See: https://msdn.microsoft.com/en-us/library/windows/desktop/aa969512%28v=vs.85%29.aspx
        DwmExtendFrameIntoClientArea(hwnd, ref margins);
        #endif
    }

    // Pass the output of the camera to the custom material
    // for chroma replacement
    void OnRenderImage(RenderTexture from, RenderTexture to)
    {
        Graphics.Blit(from, to, m_Material);
    }
}

This code uses InteropServices to make some calls into the Windows native API that change the properties of the window in which Unity runs. It then uses the OnRenderImage() event to Blit the output of the camera through a material before it reaches the screen. Drag the material to which you assigned the custom transparency shader into the m_Material slot, so that the chromakey replacement is applied to the camera’s output.

Then, and this is important: change the background colour of the camera to match the _TransparentColourKey property of the transparent material.

image

This can be any colour you want, but you might find it easiest to use, say, lime green (0,255,0), or lurid pink (255,0,255).
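
If you’d rather not keep those two colours in sync by hand, you could read the key colour straight from the material when the game starts. Here’s a small optional sketch (it assumes the material uses the _TransparentColourKey property from the shader above, and that this component sits on the same camera as TransparentWindow):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class MatchChromaKey : MonoBehaviour
{
    // The same material assigned to TransparentWindow (using the ChromakeyTransparent shader)
    [SerializeField] private Material m_Material;

    void Awake()
    {
        // Force a solid-colour clear and copy the key colour from the material,
        // so the camera background always matches the shader's transparency key
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = m_Material.GetColor("_TransparentColourKey");
    }
}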

Then build and run your game (you could enable this in editor mode by commenting out the #if !UNITY_EDITOR condition above, but I really don’t recommend it!). This should work in either Unity 4.x (Pro only, since it uses rendertextures) or any version of Unity 5.x.

Here’s Ethan from Unity’s stealth demo walking across Notepad++ on my desktop…:

transparent


Creating a Unity game stretched over Two Monitors

I guess many developers, like me, use a dual-output graphics card to extend their Windows workspace over two monitors. I’ve been experimenting with whether it’s possible to make a Unity game that spans two monitors (and, perhaps more interestingly, to make a gameplay feature out of it). Here’s how:

a.) I have two monitors side-by-side, both set to 1280×1024. So, first I opened up the NVIDIA control panel and created a custom resolution that represents their combined area – 2560×1024.
customres

b.) You will now be able to select the custom resolution when launching a standalone Unity game.
startscreen

c.) However, there’s still a problem. If you launch the app as full-screen, it will fill only one monitor. If you choose windowed mode, however, you’ll get the title bar and window edging visible in the display (even after having set a resolution that exactly fills both displays).
dualwithborder

The solution is to use windowed mode, but to launch the application using the -popupwindow command line switch. This can easily be done by creating a batch file and launching from that instead of launching the .exe directly:
nameofyourgame.exe -popupwindow

The game will now run in a borderless window that fills both screens.

d.) Now, to create the level setup that makes a gameplay feature of this. I created two orthographic cameras – Left and Right – and positioned them so that there was a slight gap between their viewing frustums. This, in gameplay terms, represented the gap between the monitors. I then placed a large black cuboid in the gap. It would never be visible in-game, but allowed me to, for example, fire OnTriggerEnter() calls any time a collider crossed between monitors.
scene
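
If you want to see that idea in code, here’s a minimal sketch of the trigger script I’m describing – attach it to the black cuboid, with its collider marked as a trigger (the class name and log messages are purely illustrative):

using UnityEngine;

public class MonitorGap : MonoBehaviour
{
    // Fired whenever a rigidbody-carrying collider enters the "void" between the monitors
    void OnTriggerEnter(Collider other)
    {
        Debug.Log(other.name + " is crossing between the monitors");
    }

    // ...and fired when it leaves the gap on either side
    void OnTriggerExit(Collider other)
    {
        Debug.Log(other.name + " has left the gap");
    }
}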

Since my monitors have exactly the same resolution as each other, I want the left camera to render to the left-hand side of the game display (the left monitor), and the right camera to render to the right-hand side (the right monitor). This is easy to achieve by setting the Viewport Rect of the left camera to X:0 Y:0 W:0.5 H:1, and that of the right camera to X:0.5 Y:0 W:0.5 H:1. I then added a little trivial logic to determine which side of the central “void” cube the player was on, and to change the colour of the platforms accordingly:
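
In code, the camera setup and that trivial side-check look something like the following sketch – the field names and what you do with the result are placeholder assumptions, but the viewport rect values are exactly those described above:

using UnityEngine;

public class DualMonitorSetup : MonoBehaviour
{
    [SerializeField] private Camera m_LeftCamera;
    [SerializeField] private Camera m_RightCamera;
    [SerializeField] private Transform m_Player;
    [SerializeField] private Transform m_VoidCube;

    void Start()
    {
        // Left camera renders to the left half of the (2560x1024) window,
        // right camera to the right half - i.e. one monitor each
        m_LeftCamera.rect = new Rect(0f, 0f, 0.5f, 1f);
        m_RightCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f);
    }

    void Update()
    {
        // Trivial logic: which side of the central "void" cube is the player on?
        bool onLeftMonitor = m_Player.position.x < m_VoidCube.position.x;
        // ...recolour platforms etc. based on onLeftMonitor
    }
}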

And here’s what it looks like:


Creating “Hand-Drawn” Terrain in Unity

I’ve had a few questions recently about how to apply my hand-drawn shader pack on Unity terrain. So here’s a guide (written using Unity 5, but the instructions should remain almost identical in Unity 4 save for a few menu options being in different places).

To start, create a terrain just as you would do normally:

1.) Create a new terrain: GameObject -> 3D Object -> Terrain

image

2.) Use the terrain tools to sculpt it.

image

3.) Now paint on textures just as you would do normally. I’m using the terrain textures included with Unity’s standard assets package: Assets -> Import Package -> Environment

image

 

Switch to game view, and it looks something like this:

image

 

Now we’ve got our basic terrain, it’s time to apply a hand-drawn ink look.

4.) Create a new material and assign the Hand-Drawn/Fill+Outline/Overdrawn Outline + Smoothed Greyscale Fill shader from the Hand-Drawn shader pack (you can use the same settings as shown here if you want, but you might prefer to tweak them later anyway)

image

5.) Select your terrain object in the scene hierarchy and, on the settings tab of the inspector pane (the right-most tab, with the cog icon), change Material to "Custom", then select the Custom Material you just created in step 4.

image
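
(If you prefer to do this from code rather than through the inspector, something like the following sketch should work in Unity 5 – treat the exact API usage as an assumption to check against your Unity version:)

using UnityEngine;

[RequireComponent(typeof(Terrain))]
public class ApplyTerrainMaterial : MonoBehaviour
{
    // The material created in step 4 (using the hand-drawn shader)
    [SerializeField] private Material m_HandDrawnMaterial;

    void Start()
    {
        var terrain = GetComponent<Terrain>();
        terrain.materialType = Terrain.MaterialType.Custom;  // equivalent to choosing "Custom" in the settings tab
        terrain.materialTemplate = m_HandDrawnMaterial;       // equivalent to assigning the Custom Material slot
    }
}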

At this point, your screenshot probably looks like this – getting there but not quite right yet ;)

image

 

To give the effect that the ink terrain has been drawn onto paper, we need to replace that skybox with a background. There are a couple of ways of doing this – in the following steps I’ll put an image on a UI canvas (if you’re using a version of Unity earlier than 4.6, when the UI system was introduced, you could simply create a quad in the background instead).

6.) Add a new UI Image component via GameObject -> UI -> Image. Set the source image to be one of the background textures included with the shader pack (e.g. Japanese Paper, Ruled Paper, etc.). Then click the "Set Native Size" button to resize it to its normal size.

7.) Select the Canvas element in the hierarchy and change the Render Mode to “Screen Space – Camera”. Then set the render camera to "Main Camera". Finally set the Plane Distance to 999 (just inside the default far clipping plane of the camera which is 1000). This will place the background texture behind everything else in the view.

8.) On the Canvas Scaler component, set the Scale Mode to "Scale with Screen Size", set the reference resolution to 640×480 (or whatever size your background texture is), and set the Screen Match Mode to "Shrink". This will make the background texture always fill the camera view. Your settings on the Canvas should now be as follows:

image
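
Equivalently, if you’d rather configure the background canvas from a script, steps 7 and 8 look like this – a sketch only, where the 640×480 reference resolution is just the example value used above:

using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(Canvas), typeof(CanvasScaler))]
public class PaperBackground : MonoBehaviour
{
    void Start()
    {
        // Step 7: render the canvas in camera space, just inside the far clip plane
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = Camera.main;
        canvas.planeDistance = 999f; // default camera far clipping plane is 1000

        // Step 8: scale the background with the screen so it always fills the view
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(640f, 480f); // match your background texture size
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.Shrink;
    }
}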

 

And here’s what you should now see in the game view:

image

 

If you want the ink lines to have an animated effect, you can increase the scribbliness and redraw rate parameters of the shader, which gives results as follows:

 

castleanim

 

Or:

boatanim


Adding Shadows to a Unity Vertex/Fragment Shader in 7 Easy Steps

This was a question asked on the Unity Forums recently, so I thought I’d just write up the answer here.

Unity provides its own unique brand of “surface shaders”, which make dealing with lighting and shadows relatively simple. But there are still plenty of occasions in which you find yourself writing more traditional vert/frag CG shaders, and needing to deal with shadows in those too.

Suppose you had written a custom vertex/fragment CG shader, such as the following simple example:

Shader "Custom/SolidColor" {
    SubShader {
        Pass {
        
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
            };


            v2f vert(appdata_base v) {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            fixed4 frag(v2f i) : COLOR {
                return fixed4(1.0,0.0,0.0,1.0);
            }

            ENDCG
        }
    }
    Fallback "VertexLit"
}

 

This shader simply outputs the colour red for all fragments, as shown on the plane in the following image:

image

 

Now what if you wanted to add shadows to that surface? Unity already creates a shadowmap for you from all objects set to cast shadows, and defines several macros that make it easier to sample that shadowmap at the appropriate point. So here are the changes you need to make to a shader to make use of those built-in shadows:

Shader "Custom/SolidColor" {
    SubShader {
        Pass {
        
            // 1.) This will be the base forward rendering pass in which ambient, vertex, and
            // main directional light will be applied. Additional lights will need additional passes
            // using the "ForwardAdd" lightmode.
            // see: http://docs.unity3d.com/Manual/SL-PassTags.html
            Tags { "LightMode" = "ForwardBase" }
        
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // 2.) This matches the "forward base" of the LightMode tag to ensure the shader compiles
            // properly for the forward base pass. As with the LightMode tag, for any additional lights
            // this would be changed from _fwdbase to _fwdadd.
            #pragma multi_compile_fwdbase

            // 3.) Reference the Unity library that includes all the lighting shadow macros
            #include "AutoLight.cginc"


            struct v2f
            {
                float4 pos : SV_POSITION;
				
                // 4.) The LIGHTING_COORDS macro (defined in AutoLight.cginc) defines the parameters needed to sample 
                // the shadow map. The (0,1) specifies which unused TEXCOORD semantics to hold the sampled values - 
                // As I'm not using any texcoords in this shader, I can use TEXCOORD0 and TEXCOORD1 for the shadow 
                // sampling. If I was already using TEXCOORD for UV coordinates, say, I could specify
                // LIGHTING_COORDS(1,2) instead to use TEXCOORD1 and TEXCOORD2.
                LIGHTING_COORDS(0,1)
            };


            v2f vert(appdata_base v) {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                
                // 5.) The TRANSFER_VERTEX_TO_FRAGMENT macro populates the chosen LIGHTING_COORDS in the v2f structure
                // with appropriate values to sample from the shadow/lighting map
                TRANSFER_VERTEX_TO_FRAGMENT(o);
                
                return o;
            }

            fixed4 frag(v2f i) : COLOR {
            
                // 6.) The LIGHT_ATTENUATION macro samples the shadowmap (using the coordinates calculated by TRANSFER_VERTEX_TO_FRAGMENT
                // and stored in the structure defined by LIGHTING_COORDS), and returns the value as a float.
                float attenuation = LIGHT_ATTENUATION(i);
                return fixed4(1.0,0.0,0.0,1.0) * attenuation;
            }

            ENDCG
        }
    }
    
    // 7.) To receive or cast a shadow, shaders must implement the appropriate "Shadow Collector" or "Shadow Caster" pass.
    // Although we haven't explicitly done so in this shader, if these passes are missing they will be read from a fallback
    // shader instead, so specify one here to import the collector/caster passes used in that fallback.
    Fallback "VertexLit"
}

 

And that’s it!

image


Using the Stencil Buffer in Unity Free

News concerning the recent release of Unity 4.6 was largely dominated by its new UI system.

https://i0.wp.com/badgerheadgames.com/wp-content/uploads/2014/11/starcitizenhud.jpg

However, closer inspection of the 4.6 release notes reveals some other interesting changes and improvements, including the announcement that “Stencil buffer is now available in Unity Free”.

Stencil buffers are not exactly new technology – they’ve been around for at least 10 years, and available in Unity Pro since Version 4.2. They can be used in somewhat similar circumstances to RenderTextures (still a Pro-only feature) to create a range of nice effects, so I thought I’d have a play…

The stencil buffer is a general purpose buffer that allows you to store an additional 8-bit integer (i.e. a value from 0 to 255) for each pixel drawn to the screen. Just as shaders calculate RGB values to determine the colour of pixels on the screen, and z values for the depth of those pixels drawn to the depth buffer, they can also write an arbitrary value for each of those pixels to the stencil buffer. Those stencil values can then be queried and compared by subsequent shader passes to determine how pixels should be composited on the screen.

For example, adding the following block to a shader pass will cause it to write the value "1" to the stencil buffer for each pixel that would be drawn to the screen:

    // Write the value 1 to the stencil buffer
    Stencil
    {
        Ref 1
        Comp Always
        Pass Replace
    }


With a few more directives, we can turn this shader into a mask whose only function is to write to the stencil buffer – i.e. it does not draw anything to the screen itself, but sets values in the stencil buffer to control which pixels are drawn to the screen by other shaders:

    Tags { "Queue" = "Geometry-1" }  // Write to the stencil buffer before drawing any geometry to the screen
    ColorMask 0 // Don't write to any colour channels
    ZWrite Off // Don't write to the Depth buffer

 

So, now this shader won’t have any visible output – all it does is write the value 1 to the stencil buffer for those pixels it would otherwise have rendered. We can now make use of this stencil buffer to mask the output of another shader, by adding the following lines:

    // Only render pixels whose value in the stencil buffer equals 1.
    Stencil {
      Ref 1
      Comp Equal
    }

 

The effect created by these two shaders is illustrated by the two models in the following screenshot:

  • the picture frame contains an (invisible) quad which writes the value 1 to the stencil buffer for all pixels contained within the frame.
  • the character standing behind the frame has a shader that tests the values in the stencil buffer and only renders those pixels that have a value of 1 – i.e. those pixels that are seen through the picture frame.

 

stencil

 

You can see the full list of stencil mask operators at http://docs.unity3d.com/Manual/SL-Stencil.html
