## Guess Who Reloaded for the Rainbow Jam 2016

I recently enrolled to take part in the Rainbow Jam but, with somewhat of a lack of foresight, failed to realise I was going to be on holiday in France for most of the jam period, without either a computer or internet access. Determined not to be put off, I decided to spend some of my time sipping vino in a deckchair in the Loire valley designing a paper-and-pen-based game instead.

A common game design exercise is to consider an existing game that is “broken” in some way, and propose some changes to the game mechanics that address those issues. The jam theme of “identity” made me immediately think of the children’s game Guess Who?, and it was this that gave me the inspiration for the jam.

I’m sure you’re familiar with the game: two players each have a card depicting a character chosen from an identical grid of characters. They take it in turns to ask each other questions to determine the identity of the other person’s card.

### Why is Guess Who? broken?

Although I have fond memories of playing Guess Who? as a child, looking back at it now, the game is obviously broken in a number of different ways:

• The dominant winning strategy is always the same – if on every turn you ask a question that splits the remaining characters in half, you perform a binary search which you are guaranteed to win within 5 questions (since ⌈log₂ 24⌉ = 5)
• Because you’re trying to guess from a set of fictional characters, based only on a name and caricature of their face, the questions themselves are naturally limited to a crude (and often unkind) judgement of people’s appearance – “Do they have a big nose?”, “Are they old?”, “Are they ugly?”…
• The creators have made a set of cards in which each obvious physical characteristic appears in a 5:19 ratio. In the set of 24 characters, there are 5 with hats, 5 with glasses, 5 with blue eyes, and 5 with a bald head, for example. This is clever in that it makes it harder to formulate a question that produces the binary search strategy explained above. However, it also means that there are only 5 women characters depicted compared to 19 men. (At least, we are led to assume such gender roles based on name and stereotypical presentation, yet Claire can be a man’s name, while Sam, Alex and Joe are commonly used by both men and women….) And, to avoid ambiguity, each characteristic must be apparent and polarised.
• Although there have been various reprinted versions with updated art, to my knowledge, the same characters have been used in every version of Guess Who?. Yet, looking at them, you’d think I’d used an image of the “White Supremacy” edition – there’s no ethnic diversity whatsoever….
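The binary-search strategy in the first bullet is easy to verify with a few lines of Python (a hypothetical sketch, not part of the game): with 24 characters, a question that halves the remaining candidates on every turn always isolates the target within 5 questions.

```
import math

def questions_needed(num_characters):
    """How many perfectly-halving yes/no questions are needed to
    isolate one character from num_characters candidates."""
    questions = 0
    while num_characters > 1:
        # An ideal question eliminates half of the remaining candidates
        num_characters = math.ceil(num_characters / 2)
        questions += 1
    return questions

print(questions_needed(24))  # 5, i.e. the ceiling of log2(24)
```

(The 5:19 feature ratio described above is exactly what makes it hard to find questions that actually achieve this halving.)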

### Fixing the Problem(s)

So I tried experimenting with a number of things to make the game better:

• Coming up with a completely new set of characters, with a broader range of diversifying features: characters with a disability, different ethnicities and skin colours, and a less binary divide between male/female genders, for example. This added more interest to the characters, but otherwise had little mechanical effect on the game – players’ questions could still only be based on the purely physical appearance of the characters.
• Adding short bios or descriptive facts to the cards enabled questions that were significantly more varied – about the character’s personality, lifestyle, and identity (remember, that’s the jam theme 🙂) rather than just their appearance alone. I started off with “age” and “profession” but then tried to be a little more adventurous. With some more time, I could probably have come up with a set of facts that fitted the 5:19 rule for questions such as “Is this person trustworthy?” or “Does this person feel valued in their job?”. However, there would still be a fixed optimum strategy to eliminate the characters by asking appropriate questions to create the binary search each playthrough:

Then it struck me that the key problems with Guess Who? are all attributed to one fact – that it is played with a fixed set of fictional characters.

• For the game to work, there must be unambiguous agreement between the players as to the answer to any question, yet that means there is no room for “shades of grey” – a character must either be a woman or a man, old or young, friendly or grumpy – and with only limited information able to be presented on the card, that necessarily means that the fictional characters must meet the stereotypes that I’ve been criticising.
• The fact that it’s the same set of cards on every playthrough means that an optimum strategy can easily be developed (and remembered by a good player), reducing the skill and interest of repeat play.

### The Solution: Guess Who Reloaded

Given the problem identified above, the solution then becomes remarkably easy – use a variation of the game in which there is a different set of real-life characters for each playthrough. And, with that, I present my game jam submission:

Players: 2

Equipment Required: Paper, pens

Set-up: Both players should agree on a list of 24 people with whom they are both familiar – they could be mutual friends, family members, or celebrities, for example. To make the game more interesting, try to select a broad variety of people. Each player should write these names down on scraps of paper laid out, face up, in a grid in front of them. They then choose *one* of these people and write their name down on an additional scrap of paper, laid face down. This is the identity which the other player will try to guess.

Objective: The object of the game is to successfully guess the identity of the other player’s chosen person before they discover yours.

Gameplay: Players take it in turns to ask any question about the other player’s chosen person. Questions can only be answered with a single “Yes”/”No” response. Questions can take any form, from asking about the person’s physical appearance, to their history, personality, or any shared experience that they might have had with the player. Some examples might include:

• “Does this person dye their hair?”
• “Would you trust this person with your life?”
• “Does this person have a tattoo?”
• “Is this person embarrassing when drunk?”

After each response, the asking player should “eliminate” the candidates in front of them who do not match the response given by turning them face down. The winner is the first player to determine the identity of the other player’s chosen person.

The game is different on each playthrough, both due to the different selection of people and the players’ different personal experience and opinion of those people. If an optimal strategy exists, it is much harder to determine and execute due to the more subjective nature of characteristics present. And the game is made potentially more interesting and fun by drawing on shared, possibly asymmetric experiences of the players (“I didn’t know Rachel had a tattoo?!!!” etc.).

## Bulk-renaming files

I recently found myself needing to rename around 100 files. The reason was that I had saved a series of video fragments from the middle of a much larger streaming video – so the fragment number of each filename was a sequence that started from about 1000, whereas the script I described in a previous post to join Adobe streaming fragments together needs fragments to be numbered sequentially from 1.

I considered writing a script to do it, but after a quick search of the internet came across a piece of software called Bulk Rename Utility which seemed to do the job really nicely.
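For the record, such a script needn’t be long. Here’s a minimal Python sketch of what I would have written (the Seg1-FragNNNN filename pattern is an assumption based on how Adobe HDS fragments are typically named – adjust the regex to suit your files):

```
import os
import re

def renumber_fragments(directory, start=1):
    """Rename files like 'Seg1-Frag1000', 'Seg1-Frag1001', ... so that
    their fragment numbers run sequentially from `start`."""
    pattern = re.compile(r"^(Seg\d+-Frag)(\d+)$")
    # Sort by the existing fragment number so ordering is preserved
    matches = sorted(
        (m for m in (pattern.match(f) for f in os.listdir(directory)) if m),
        key=lambda m: int(m.group(2)),
    )
    for offset, m in enumerate(matches):
        old = os.path.join(directory, m.group(0))
        new = os.path.join(directory, "%s%d" % (m.group(1), start + offset))
        os.rename(old, new)
```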

The interface looks a bit scary at first, but it’s actually pretty logical once you get the hang of it. The bottom half of the screen shows various rules – numbered 1 to 14 – which you can enable individually. By combining them you can form quite complicated renaming rules – even more so if you conduct more than one pass over your filenames.

But the best feature is that you can click on any file in the file browser at the top half of the screen to preview instantly what the filename will be changed to after applying the current rules.  When you’re happy, just run it over all the selected files.

It’s not a task I need to do very often, so I just thought I’d note it here in the hope of reminding myself next time I need it, or in case anyone reading this ever needs to do the same.

If you’ve tried dabbling with shaders at all, you’ve probably come across ShaderToy – an online shader showcase with some pretty amazing examples of what’s possible in a few lines of shader code, inspired greatly by classic demoscene coding. Here are just two examples:

• “Seascape” by TDM
• “Elevated” by iq

It’s an amazing resource, not only for inspiration but for learning how to create shaders, since every example comes with full source code which you can edit and immediately test online in your browser, alter parameters, supply different inputs etc.

The shaders exhibited on ShaderToy are exclusively written in GLSL, and run in your browser using WebGL. I thought it might be fun to keep my shader skills up to scratch by converting some of them to Cg / HLSL for use in Unity. I’m using Unity 5 but, to the best of my knowledge, this should work with any version of Unity.

## GLSL -> HLSL conversion

It would probably be possible to write an automatic conversion tool to turn a GLSL shader into an HLSL shader, but I’m not aware of one. So I’ve been simply stepping through each line of the shader and replacing any GLSL-specific functions with the HLSL equivalent. Fortunately, many functions are the same, others have direct equivalents and, as a shader is typically less than 100 lines long, what sounds like a manual process only takes a few minutes in practice.

Microsoft have published a very useful reference guide here which details many of the general differences between GLSL and HLSL. Unity also have a useful page here. The following is a list of specific issues I’ve encountered when converting Shadertoy shaders to Unity:

• Replace iGlobalTime shader input (“shader playback time in seconds”) with _Time.y
• Replace iResolution.xy (“viewport resolution in pixels”) with _ScreenParams.xy
• Replace vec2 types with float2, mat2 with float2x2 etc.
• Replace vec3(1) shortcut constructors in which all elements have same value with explicit float3(1,1,1)
• Replace texture2D() lookups with tex2D()
• Replace atan(x,y) with atan2(y,x) <- Note parameter ordering!
• Replace mix() with lerp()
• Replace matrix multiplication using * or *= with mul()
• Remove third (bias) parameter from Texture2D lookups
• mainImage(out vec4 fragColor, in vec2 fragCoord) is the fragment shader function, equivalent to float4 mainImage(float2 fragCoord : SV_POSITION) : SV_Target
• UV coordinates in GLSL have 0 at the bottom and increase upwards, while in HLSL 0 is at the top and increases downwards, so you may need to use uv.y = 1 - uv.y at some point.
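Many of the renames in the list above are mechanical enough to automate with word-boundary search-and-replace. Here’s a rough Python sketch of that idea (it only covers the simple one-to-one renames, not structural changes like the atan parameter reordering, mul(), vec3(1) shortcuts, or the mainImage signature):

```
import re

# Simple one-to-one renames from the list above
RENAMES = {
    "iGlobalTime": "_Time.y",
    "iResolution": "_ScreenParams",
    "vec2": "float2",
    "vec3": "float3",
    "vec4": "float4",
    "mat2": "float2x2",
    "mat3": "float3x3",
    "texture2D": "tex2D",
    "mix": "lerp",
}

def glsl_to_hlsl(source):
    """Apply the mechanical GLSL -> HLSL renames; structural
    changes still need fixing by hand."""
    for old, new in RENAMES.items():
        # Whole-word replacement so names inside longer identifiers survive
        source = re.sub(r"\b%s\b" % re.escape(old), new, source)
    return source

print(glsl_to_hlsl("vec3 c = mix(a, b, t);"))  # float3 c = lerp(a, b, t);
```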

Note that ShaderToys don’t have a vertex shader function – they are effectively full-screen pixel shaders which calculate the value at each UV coordinate in screenspace. As such, they are most suitable for use in a full-screen image effect (or, you can just apply them to a plane/quad if you want) in which the UVs range from 0-1.

But running a pixel shader for every pixel at 1024×768 resolution (or higher) is *expensive*. One solution, if you want to achieve anything like a game-playable framerate, is to render the effect to a fixed-size rendertexture and then scale that up to fill the screen. Here’s a simple generic script to do that:

```
using System;
using UnityEngine;

[ExecuteInEditMode]
public class ScaledImageEffect : MonoBehaviour  // class name is arbitrary – attach this to the camera
{
// Material using the converted ShaderToy shader
public Material material;

public int horizontalResolution = 320;
public int verticalResolution = 240;

// Called by camera to apply image effect
void OnRenderImage (RenderTexture source, RenderTexture destination)
{
// To draw the shader at full resolution, use:
// Graphics.Blit (source, destination, material);

// To draw the shader at scaled down resolution, use:
RenderTexture scaled = RenderTexture.GetTemporary (horizontalResolution, verticalResolution);
Graphics.Blit (source, scaled, material);
Graphics.Blit (scaled, destination);
RenderTexture.ReleaseTemporary (scaled);
}
}
```

And finally, here’s an example of a ShaderToy shader converted to Unity. This one is “Bubbles” by Inigo Quilez:

```
// GLSL -> HLSL reference: https://msdn.microsoft.com/en-GB/library/windows/apps/dn166865.aspx

Shader "Custom/Bubbles" {
SubShader {
Pass {
CGPROGRAM

#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

struct v2f {
float4 position : SV_POSITION;
};

v2f vert(float4 v : POSITION) {
v2f o;
o.position = mul (UNITY_MATRIX_MVP, v);
return o;
}

fixed4 frag(v2f i) : SV_Target {

float2 uv = -1.0 + 2.0*i.position.xy / _ScreenParams.xy;
uv.x *= _ScreenParams.x / _ScreenParams.y;

// Background
fixed4 outColour = fixed4(0.8+0.2*uv.y, 0.8+0.2*uv.y, 0.8+0.2*uv.y, 1);

// Bubbles (loop variable is j so it doesn't shadow the v2f input i)
for (int j = 0; j < 40; j++) {

// Bubble seeds
float pha =      sin(float(j)*546.13+1.0)*0.5 + 0.5;
float siz = pow( sin(float(j)*651.74+5.0)*0.5 + 0.5, 4.0 );
float pox =      sin(float(j)*321.55+4.1) * _ScreenParams.x / _ScreenParams.y;

// Bubble size, position and colour
float rad = 0.1 + 0.5*siz;
float2 pos = float2(pox, -1.0-rad + (2.0+2.0*rad)*fmod(pha+0.1*_Time.y*(0.2+0.8*siz), 1.0));
float dis = length(uv - pos);
float3 col = lerp( float3(0.94,0.3,0.0), float3(0.1,0.4,0.8), 0.5+0.5*sin(float(j)*1.2+1.9));

// Add a black outline around each bubble
col += 8.0*smoothstep( rad*0.95, rad, dis );

// Render
float f = length(uv - pos) / rad;
f = sqrt(clamp(1.0-f*f, 0.0, 1.0));
outColour.rgb -= col.zyx*(1.0-smoothstep(rad*0.95, rad, dis))*f;
}

// Vignetting
outColour *= sqrt(1.5-0.5*length(uv));

return outColour;
}

ENDCG
}
}
}
```

Pretty, isn’t it?


## Multiple Mice Input in Unity

I’m a big fan of local multiplayer – as far as I’m concerned, being bundled up with your mates cajoling each other and screaming at the TV is the very essence of gaming. I had an absolute blast at the recent Hugs ‘n Uppercuts local multiplayer event in Norwich, playing the likes of Gang Beasts, Nidhogg, Friendship Club etc.

Multiplayer games where players use joypads for input are easy – just connect additional USB/wireless controllers and away you go (AFAIK, Unity supports up to 11, which is a strange limit, but there you go…). But what about local multiplayer games where players use a mouse for input? I’m not aware of any such games, but I couldn’t think of a good reason not to try. Perhaps it’s simply the logistics of having enough surface space on which to operate the mice… or perhaps it’s because it turns out to be a bit of a technical challenge….

Connecting additional USB mice is, at first glance, just as easy as connecting additional USB joysticks. And Windows will recognise them just fine too. The problem is that every mouse will control the same, single cursor. And, if you ask Unity’s Input class for the mousePosition, or to GetMouseButtonDown(), you’ll get the response from all connected mice, with no way to distinguish them.

Recognising the mice separately requires hooking into the Raw Input events at the OS level. This needs a native (non-managed) plugin, so this solution is very much Windows-specific. There’s probably an equivalent for Mac/Linux, but who cares about those? Not me.

While researching this problem, I came across a few relevant projects:

So, I rolled my sleeves up and hacked together various bits of the above into a Frankenstein-ish solution. It supports both 32 and 64 bit, and will work in the editor and in a standalone build, though not a web build. To capture the mouse input, the program needs to create a window handle via System.Windows.Forms. Unity doesn’t really support System.Windows.Forms, and it will throw some warnings in the editor (not in a build, though), but it should work just fine after that. I’ve tested it with up to 4 mice on Windows 8.1 64-bit.

To use, you need to copy both the following dlls into the /Plugins directory of your project.

And the following script shows basic usage to access 4 mice:

```
using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using RawMouseDriver;
using RawInputSharp;

public class MouseInput : MonoBehaviour {

RawMouseDriver.RawMouseDriver mousedriver;
private RawMouse[] mice;
private Vector2[] move;
private const int NUM_MICE = 4;

// Use this for initialization
void Start () {
mousedriver = new RawMouseDriver.RawMouseDriver ();
mice = new RawMouse[NUM_MICE];
move = new Vector2[NUM_MICE];
}

void Update() {
// Loop through all the connected mice
for(int i=0; i<mice.Length; i++){
try {
mousedriver.GetMouse(i, ref mice[i]);
// Cumulative movement
move[i] += new Vector2(mice[i].XDelta, -mice[i].YDelta);
}
catch {  }
}
}

void OnGUI(){
GUILayout.Label("Connected Mice:");
for(int i=0; i< mice.Length; i++){
if(mice[i] != null)
GUILayout.Label("Mouse[" + i.ToString() + "] : " + move[i] + mice[i].Buttons[0] + mice[i].Buttons[1]);
}
}

void OnApplicationQuit()
{
// Clean up
mousedriver.Dispose ();
}
}
```

And here it is in action:


## Learnings from Unite 2015

I spent the last three days having a lovely time in Amsterdam attending the Unite Europe 2015 conference. This is just a place for me to gather my notes/thoughts while they’re still in my head:

• Iestyn Lloyd’s (@yezzer) Dropship demo and his other dioramas are built pretty much entirely with assets and effects from the asset store, including: Orbital reentry craft model by Andromeda, Winter shaders (for the frost effect), RTP and Terrain composer (terrain, obv.), Allsky (skyboxes), Colorful (colour grading/filters), SE natural bloom & dirty lens, SSAO Pro, Amplify motion (effects), and Allegorithmic Substance (texturing)

• Static Sky is a nice-looking touch-controlled squad shooter for iPad, and the developers have worked out some smart techniques for intuitive camera control and touch input such as dynamically stretching hitboxes in screenspace for interactive objects so that even fat-fingered players’ intentions can be interpreted correctly (clicking close to an enemy will likely be interpreted as the player’s intention to attack that enemy, rather than simply walking up and standing next to them).

• Lots of new audio stuff in Unity 5 deserves playing with, including the ability to create hierarchical audiomixer groups, transitioning between audiomixer snapshots, and the ability to create audio plugins for, e.g., realtime dynamic effects (vocoding of voices, or programmatic raindrops, for example)
• Assets for the Blacksmith demo are interesting and worthy of a download to examine, particularly the custom hair and skin shaders

• The Unity Roadmap is now public, so you can check out what features are coming up.

## Modding Mario

Video footage of the two finalists in the recent 2015 “Nintendo World Championship” competing in speedruns of customised Super Mario Bros levels has generated some excitement for the upcoming “Mario Maker” game.

It’s quite fun to watch the competitors react to the unexpected level features, and some thought has gone into their entertaining, challenging design, but I struggled to find anything new here: fan-made games have allowed you to customise Mario games for some time now, not just with new maps, but with completely novel gameplay features.

Super Mario Crossover, for example, lets you play through all the Super Mario Bros levels as classic Nintendo characters including Link, Samus Aran, Simon Belmont, and Megaman. And these aren’t just skins slapped in place of Mario – each character recreates their own unique skills and mechanics from their original games – Samus can roll into a ball and lay bombs to blow up Goombas, for example.

Then there’s the brilliant Mari0, which combines Mario with Portal mechanics:

Super Mario War is a multiplayer, competitive deathmatch style game using Mario mechanics. Full source code is available.

I even recently created my own Mario mod, “Super Mario Team”, which turns Super Mario Bros into a co-operative local multiplayer game that can be played by up to 8 players:

My worry is that, with Nintendo now seeking to monetise the modding of Mario through the “Mario Maker” product on Wii U, it seems likely that they will want to close down a lot of these fan-made projects that offer the same (or greater) functionality for free. Nintendo are notoriously protective of their IP, and only a few months ago sent a cease and desist request to the student-made “Super Mario 64 HD”, so if you want to play any of these unofficial Mario games I suggest you do so quickly!

## Creating Windowless Unity applications

I remember some while back there was a trend for applications to seem integrated with your Windows desktop – animated cats that would walk along your taskbar – things like that… What about a windowless Unity game that used a transparent background to reveal your regular desktop behind anything rendered by the camera?

You can do this using a combination of the OnRenderImage() event of a camera, some calls into the Windows native APIs, and a custom shader. Here’s how:

First, you’ll need to create a custom shader, as follows:

```
Shader "Custom/ChromakeyTransparent" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_TransparentColourKey ("Transparent Colour Key", Color) = (0,0,0,1)
_TransparencyTolerance ("Transparency Tolerance", Float) = 0.01
}
SubShader {
Pass {
Tags { "RenderType" = "Opaque" }
LOD 200

CGPROGRAM

#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

struct a2v
{
float4 pos : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};

v2f vert(a2v input)
{
v2f output;
output.pos = mul (UNITY_MATRIX_MVP, input.pos);
output.uv = input.uv;
return output;
}

sampler2D _MainTex;
float3 _TransparentColourKey;
float _TransparencyTolerance;

float4 frag(v2f input) : SV_Target
{
// What is the colour that *would* be rendered here?
float4 colour = tex2D(_MainTex, input.uv);

// Calculate the difference in each component from the chosen transparency colour
float deltaR = abs(colour.r - _TransparentColourKey.r);
float deltaG = abs(colour.g - _TransparentColourKey.g);
float deltaB = abs(colour.b - _TransparentColourKey.b);

// If colour is within tolerance, write a transparent pixel
if (deltaR < _TransparencyTolerance && deltaG < _TransparencyTolerance && deltaB < _TransparencyTolerance)
{
return float4(0.0f, 0.0f, 0.0f, 0.0f);
}

// Otherwise, return the regular colour
return colour;
}
ENDCG
}
}
}

```

This code should be fairly self-explanatory – it looks at an input texture and, if a pixel colour is within a certain tolerance of the chosen key colour, it is made transparent (you may be familiar with this technique as “chromakey”, or “green/blue screen” used in film special effects).
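If it helps, here’s the same per-pixel decision expressed in plain Python (a standalone sketch of the logic only, with RGB components as floats in the 0–1 range, mirroring the shader’s per-component test):

```
def is_transparent(colour, key, tolerance=0.01):
    """Return True if `colour` is within `tolerance` of the chromakey
    colour `key` in every RGB component. Both arguments are (r, g, b)
    tuples of floats in 0..1."""
    return all(abs(c - k) < tolerance for c, k in zip(colour, key))

# With a lime-green key, pure green pixels become transparent
print(is_transparent((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # True
print(is_transparent((0.5, 0.5, 0.5), (0.0, 1.0, 0.0)))  # False
```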

Create a material using this shader, and choose the key colour that you want to replace (and change the tolerance if necessary). Note that you don’t need to assign the main texture property – we’ll use the output of a camera to supply this texture, which is done in the next step…

Now, create the following C# script and attach to your main camera:

```
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class TransparentWindow : MonoBehaviour
{
[SerializeField]
private Material m_Material;

private struct MARGINS
{
public int cxLeftWidth;
public int cxRightWidth;
public int cyTopHeight;
public int cyBottomHeight;
}

// Define function signatures to import from Windows APIs

[DllImport("user32.dll")]
private static extern IntPtr GetActiveWindow();

[DllImport("user32.dll")]
private static extern int SetWindowLong(IntPtr hWnd, int nIndex, uint dwNewLong);

[DllImport("Dwmapi.dll")]
private static extern uint DwmExtendFrameIntoClientArea(IntPtr hWnd, ref MARGINS margins);

// Definitions of window styles
const int GWL_STYLE = -16;
const uint WS_POPUP = 0x80000000;
const uint WS_VISIBLE = 0x10000000;

void Start()
{
#if !UNITY_EDITOR
var margins = new MARGINS() { cxLeftWidth = -1 };

// Get a handle to the window
var hwnd = GetActiveWindow();

// Set properties of the window
// See: https://msdn.microsoft.com/en-us/library/windows/desktop/ms633591%28v=vs.85%29.aspx
SetWindowLong(hwnd, GWL_STYLE, WS_POPUP | WS_VISIBLE);

// Extend the window into the client area
// See: https://msdn.microsoft.com/en-us/library/windows/desktop/aa969512%28v=vs.85%29.aspx
DwmExtendFrameIntoClientArea(hwnd, ref margins);
#endif
}

// Pass the output of the camera to the custom material
// for chroma replacement
void OnRenderImage(RenderTexture from, RenderTexture to)
{
Graphics.Blit(from, to, m_Material);
}
}

```

This code uses InterOpServices to make some calls into the Windows native API that change the properties of the window in which Unity runs. It then uses the OnRenderImage() event to send the output of the camera to a rendertexture. Drag the material to which you assigned the custom transparency shader into the m_Material slot, so that our chromakey replacement works on the output of the camera.

Then, and this is important: change the background colour of the camera to match the _TransparentColourKey property of the transparent material.

This can be any colour you want, but you might find it easiest to use, say, lime green (0,255,0), or lurid pink (255,0,255).

And then build and run your game (you could enable this in editor mode by commenting out the #if !UNITY_EDITOR condition above, but I really don’t recommend it!). This should work in either Unity 4.x (Pro, since it uses rendertextures) or any version of Unity 5.x.

Here’s Ethan from Unity’s stealth demo walking across Notepad++ on my desktop…:
