The 7DFPS Game Jam, Augmented Reality, and “Spectral Echoes”

For the recent 7-Day FPS Challenge, I decided to try writing an augmented reality game – blending in-game graphics with the rear-facing video feed of a phone/tablet, with game navigation achieved through physical manipulation of the device. After all, how much more "First Person" can you get than physically having to look around (and possibly move around) the real world?

I like to use game jams as a reason to learn new skills, and creating this game required familiarising myself with the gyro/compass/accelerometer sensors available in an Android/iOS/WinPhone device (and exposed through Unity), as well as working with video feeds. So that’s not too much to learn in 7 days, right…?!

This post gathers some ramblings about things I learned during the process:

 

Day 1: Initial Game Ideas, and Turning Technical Limitations into Design Decisions

There’s so much technology packed into modern mobile phone handsets that I felt it was a shame not to use it in some way to enhance the gameplay experience. A quick browse of the Google Play store reveals several existing “augmented reality” games, but they often seem poorly implemented. One “AR” game, for example, is a straightforward up-the-screen shooter that simply overlays the phone camera feed as the game background. Another looks quite interesting, but requires users to download and physically print out a specially-textured pattern onto which the in-game model is projected. While marker tracking drastically improves the ability to track position and orientation in real-world space, for me, one of the points of mobile games is that they should be, well, mobile, and not require additional physical peripherals to play.

Having decided not to make use of visual marker recognition, I instead had to rely on the internal compass and gyro for orientation, the linear accelerometer for movement, and the GPS for position. However, early tests on the various devices I had to hand showed that each of these sensors has its own limitations:

  • GPS gives position accurately to about 5 metres, but only works outdoors;
  • The accelerometer measures the acceleration applied to the device, but I found the readings were a bit jittery;
  • The gyro was responsive and accurate as a way to determine the rotation and orientation of the device over short movements, but suffered “drift” over a period of time. After each complete turn, I’d find that it had only measured a rotation of, say, 330 degrees. Like a slow clock that loses 5 minutes every day, these errors obviously add up over time;
  • While the compass usually gave a reading for the relative direction from magnetic north, it didn’t update very frequently and was occasionally wildly wrong.

I tried various ways of combining and smoothing results from the different sensors (for example, using the compass to correct the gyro, as suggested by George Foot in this thread), but it was clear that there would be limited accuracy with which I could align and track static objects between the “virtual” and “real” world. So that led to Design Decision #1:

“All virtual objects placed into the augmented reality world had to be dynamic – i.e. either moving, or capable of moving.”

By following this rule, I hoped that if the player were to look away from an object and then look back to find it was no longer in exactly the same position in the augmented world (because technical limitations had failed to track it accurately), this would not break the illusion for the player – they would simply assume that the object had moved itself.
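For what it’s worth, the gyro/compass blending I experimented with boiled down to something like the following sketch: a small complementary-filter-style correction, in which the gyro drives the rotation frame-to-frame and the camera’s yaw is nudged slowly towards the compass heading. The 0.02 blend factor here is an arbitrary assumption rather than a carefully tuned value.

void LateUpdate () {
    // Requires Input.compass.enabled = true (and Input.gyro.enabled = true) at startup

    // Heading according to the gyro-driven camera, and according to the compass
    float gyroHeading = transform.eulerAngles.y;
    float compassHeading = Input.compass.magneticHeading;

    // Nudge the camera's yaw a small fraction of the way towards the compass heading
    // each frame, so the gyro stays responsive but its drift is slowly cancelled out
    float correctedHeading = Mathf.LerpAngle(gyroHeading, compassHeading, 0.02f);

    Vector3 euler = transform.eulerAngles;
    transform.rotation = Quaternion.Euler(euler.x, correctedHeading, euler.z);
}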

The second technical limitation I faced was that, using only the video feed from a single camera, I had no means of depth perception or of determining other spatial relationships between real-world objects. The entire “real world” was portrayed in the game on a single flat plane an arbitrary distance from the viewer, and there was no way to know which “virtual” objects should occlude (or be occluded by) real-world objects. This led to Design Decision #2:

“All virtual objects would need to have an ambiguous physical form, and be visually semi-translucent.”

By following this rule, I hoped to mask the fact that virtual game objects could not be represented accurately according to their position in the real world – they could be sized relative to their distance from the viewer, but would always be drawn (partially) on top of all real objects. Giving the objects a somewhat “ethereal” nature makes this less jarring.

With these basic design principles in mind, I had several possible game ideas and themes, and set about coding.

 

Day 2: Accessing Sensor Data in Unity. A.K.A. “Fun with Quaternions”

Unity’s Input class contains members that expose data from device sensors including the gyro and the linear accelerometer; however, my first attempt to set the orientation in the virtual world directly from those values led to some problems – rotations not behaving as expected, axes being inverted, things working in portrait mode but not landscape mode, and so on. A quick internet search revealed that many others had faced similar difficulties, but there was some confusion as to the correct remedy. Suggestions included various combinations of negated quaternion components, axis swaps, and additional rotations.

While I don’t doubt that these various suggestions solved the problem for their respective authors at the time, none of them worked for me; they merely resulted in different combinations of rotated or mirrored axes. It took a lot of printing values to the screen, experimentation, and trying to think in 4-dimensional space to arrive at the correct solution (for me, at least), as follows (use this in a script attached to the camera object in Unity):

 

void Start () {
    // The gyro must be explicitly enabled before it will report an attitude
    Input.gyro.enabled = true;

    // Create a parent object containing the camera (you could do this manually in the
    // hierarchy if preferred, or here in code)
    GameObject camParent = new GameObject ("CamParent");
    camParent.transform.position = transform.position;
    transform.parent = camParent.transform;

    // Rotate the parent object by 90 degrees around the x axis
    camParent.transform.Rotate(Vector3.right, 90);
}

void Update () {
    // Invert the z and w of the gyro attitude
    Quaternion rotFix = new Quaternion(Input.gyro.attitude.x, Input.gyro.attitude.y, -Input.gyro.attitude.z, -Input.gyro.attitude.w);

    // Now set the local rotation of the child camera object
    transform.localRotation = rotFix;
}

 

This script means that tilting the physical device up, down, left, and right rotates the camera GameObject in the same manner. It works for me in Unity 4.2, with the device set to Landscape Left orientation, tested on Android 4.1.2 on a Sony Xperia S and Android 4.3 on a Nexus 10. I rather suspect that the required corrections may differ on iOS/WinPhone devices, but I currently have no way of testing on either of those, so I’ll optimise for Android for now.

I then created a shader that drew a simple grid as a visual indication of the “floor” of the virtual world (i.e. the plane for which y = 0) based on some code I found here:

Shader "Grid" {
    
    Properties {
        _GridThickness ("Grid Thickness", Float) = 0.02
        _GridSpacing ("Grid Spacing", Float) = 10.0
        _GridColor ("Grid Color", Color) = (0.5, 0.5, 1.0, 1.0)
        _OutsideColor ("Color Outside Grid", Color) = (0.0, 0.0, 0.0, 0.0)
    }
    
    SubShader {
        Tags {
            "Queue" = "Transparent"
            // draw after all opaque geometry has been drawn
        }
    
        Pass {
            ZWrite Off
            // don't write to depth buffer in order not to occlude other objects
            Blend SrcAlpha OneMinusSrcAlpha
            // use alpha blending
    
            CGPROGRAM
    
            #pragma vertex vert
            #pragma fragment frag
    
            uniform float _GridThickness;
            uniform float _GridSpacing;
            uniform float4 _GridColor;
            uniform float4 _OutsideColor;
    
            struct vertexInput {
                float4 vertex : POSITION;
            };
            struct vertexOutput {
                float4 pos : SV_POSITION;
                float4 worldPos : TEXCOORD0;
            };
    
            // Vertex function
            vertexOutput vert(vertexInput input) {
                vertexOutput output;

                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.worldPos = mul(_Object2World, input.vertex);
                // transformation of input.vertex from object coordinates to world coordinates
                return output;
            }

            // Fragment function
            float4 frag(vertexOutput input) : COLOR {
                if (frac(input.worldPos.x/_GridSpacing) < _GridThickness || frac(input.worldPos.z/_GridSpacing) < _GridThickness) {
                    float dist = distance(input.worldPos, float4(0.0, 0.0, 0.0, 1.0));

                    float4 grid = _GridColor;
                    grid.a = 1 - (dist/20);

                    return grid;
                }
                else {
                    return _OutsideColor;
                }
            }

            ENDCG
        }
    }
}
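To actually see the grid in the scene, I just needed a material using that shader applied to a large, flat object at y = 0. Something like this quick sketch does the job (the primitive type and sizes are arbitrary choices, and note that Shader.Find only works in a build if the shader is referenced somewhere or kept in a Resources folder):

// A large plane at the origin, textured with the grid shader above
GameObject floor = GameObject.CreatePrimitive(PrimitiveType.Plane);   // 10x10 units, facing up
floor.transform.position = Vector3.zero;
floor.transform.localScale = new Vector3(20f, 1f, 20f);               // roughly 200x200 units
floor.GetComponent<Renderer>().material = new Material(Shader.Find("Grid"));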

Then I placed some cubes at (1,0,0), (0,1,0), and (0,0,1) to mark the X/Y/Z axes, superimposed a placeholder gun model on the camera view, and added a public domain gunshot sound effect, to get a result looking a bit like this (beginning to look like a First-Person Shooter!):

 

Day 3: Displaying the Video Camera Feed. A.K.A. “Fun with Shaders”

With the ability to navigate the “virtual world” (mostly) sorted out, I turned my attention to displaying the “real world” video feed from the camera. There are plenty of examples of using Unity’s WebCamTexture class to display a video feed in a game, but every one I could find displayed the feed using a GUITexture element. This wasn’t going to work for me, because I wanted to apply custom shaders to the video image (you can’t set custom materials on a GUITexture), so the texture needed to be applied to a real game object instead – a Plane (or, because I was using Unity 4.2, the new Quad primitive).

Unfortunately, as was the case with the sensor data, I found that the texture obtained from the camera image was sometimes strangely rotated or mirrored. This, it seems, is a problem that has also been spotted by other users – here and here, for example. Fixing it required a certain amount of jiggling of the object onto which the texture was applied, the shader code itself, and the videoRotationAngle property in the code used to populate the WebCamTexture. In the end, I decided to force my application to run in landscape mode only, which seemed to mitigate the problem – if I tried to support portrait mode as well, my current code would break.
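For reference, the arrangement I ended up with looked roughly like the sketch below. It is not the exact jam code (the class name is a placeholder), and the rotation handling in particular may need tweaking per device:

using UnityEngine;

// Attach to the Quad (or Plane) that displays the camera feed
public class CameraFeedDisplay : MonoBehaviour {

    private WebCamTexture webcamTexture;

    void Start () {
        // Landscape-only, as described above
        Screen.orientation = ScreenOrientation.LandscapeLeft;

        // Apply the feed to this object's material rather than a GUITexture,
        // so that custom shaders can be used on it
        webcamTexture = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = webcamTexture;
        webcamTexture.Play();
    }

    void Update () {
        // Counter-rotate the quad by whatever rotation the platform reports
        // for the video image
        transform.localEulerAngles = new Vector3(0f, 0f, -webcamTexture.videoRotationAngle);
    }
}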

Having got the video feed correctly oriented, I then made another design decision influenced by technical limitations: my Nexus 10 has a 5MP rear-facing camera – fairly typical of modern tablets – while my Xperia S has an obscenely high-res 12MP camera. Trying to constantly refresh a live video feed at that resolution would make it very hard to achieve anything like an acceptable frame rate for the rest of the game. Fortunately, I also wanted to deliberately reduce the quality of the camera feed for other reasons – I felt that by adding a level of noise and imperfection to the overall scene, it would be easier to integrate the augmented objects seamlessly with the real-world display. Therefore, I deliberately requested a low-resolution image from the camera (using small values for requestedWidth and requestedHeight in the WebCamTexture constructor) and a relatively low frame rate, then scaled the image up to fill the screen. This created an attractive, slightly blurred image that suited my purposes well, in addition to being less of a strain on the processor.
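In terms of the sketch above, the low-resolution request just means passing hints to the WebCamTexture constructor instead of using the default, and then stretching the quad back up to fill the view. The specific numbers here are assumptions:

// Request a deliberately small image and a modest frame rate; these values are
// only hints, and the camera may still deliver something different
webcamTexture = new WebCamTexture(320, 240, 15);
GetComponent<Renderer>().material.mainTexture = webcamTexture;
webcamTexture.Play();

// Stretch the quad so that it fills the view at a fixed distance in front of the
// camera (here assumed to be parented to the camera, 10 units in front of it)
float dist = 10f;
float frustumHeight = 2f * dist * Mathf.Tan(Camera.main.fieldOfView * 0.5f * Mathf.Deg2Rad);
transform.localScale = new Vector3(frustumHeight * Camera.main.aspect, frustumHeight, 1f);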

image

Remembering my decision to make game objects in some way translucent or visually ambiguous, I decided that the video feed would feature “night vision” and “thermal vision” modes – not uncommon in FPSs – implemented as dedicated shaders on the material displaying the WebCamTexture. This was my first time writing my own custom shader from scratch in Unity but, with reference to this excellent wikibook, I was quite pleased with the results:

Here’s my “night vision” shader applied to a Plane with the WebCamTexture of the camera feed:

image

And here’s the “thermal vision” shader:

image
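In game, switching between the two modes just means swapping the material on the quad that displays the WebCamTexture and re-applying the feed texture. Something along these lines could be added to the feed-display script sketched earlier, with the two materials created from the custom shaders and assigned in the Inspector (all names here are placeholders):

public Material nightVisionMaterial;
public Material thermalVisionMaterial;

public void SetNightVision(bool nightVision) {
    // Swap the material on the quad displaying the feed, then re-apply the live
    // WebCamTexture, since the new material starts with its own default texture
    Renderer feedRenderer = GetComponent<Renderer>();
    feedRenderer.material = nightVision ? nightVisionMaterial : thermalVisionMaterial;
    feedRenderer.material.mainTexture = webcamTexture;
}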

 

Day 4: Creating the Gameplay

So now I had the basic game mechanics sorted – I could navigate around a real-world space displayed through the camera feed, with virtual objects superimposed on it and oriented based on the device’s sensor readings. It was time to come up with some gameplay.

While I liked the night vision and thermal vision shaders created the day before, I was having problems working out how to fit them into the game. Perhaps I would have some enemies that could only be seen in one mode or another? But how could “night vision” mode reveal something that couldn’t be seen by the viewer normally? Perhaps there was an enemy that could give off heat but had no physical form? That would fit nicely with the requirement to have an ambiguous physical form… and then I came upon the idea of making a supernatural game – one in which the enemies were ghosts or spectres of some kind that could only be detected by some special sort of camera mode. This, it seemed, was a perfect fit – ghosts are autonomous and mobile (so it wouldn’t matter if their movement couldn’t be tracked perfectly), they can move through objects (so would get around the problem of depth perception in the scene), and they are clearly recognisable to the player as “baddies”… and I could modify the night vision shader to make it more ghostly – similar to the camera style in “The Blair Witch Project”. Things seemed to be coming together nicely. At this point, I also settled on a title for the game – Spectral Echoes.

I downloaded a few placeholder 3D models of ghosts and scripted them to behave… well, in a ghostly manner – wandering around certain locations, but approaching the player and attacking if they came too close. I also decided that it was unconvincing for the player to be able to shoot a ghost with a regular weapon, so I disposed of the pistol asset I had been using up to that point. This meant that, for the time being, the player had no means of attack, but at least things were moving forward in other ways.
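The “ghostly manner” boiled down to a very simple wander-or-chase behaviour. The following is a stripped-down sketch of the idea rather than the actual script; all of the names and numbers are placeholder assumptions:

using UnityEngine;

public class GhostBehaviour : MonoBehaviour {

    public Transform player;          // assumed reference to the player/camera object
    public float wanderRadius = 5f;   // how far from "home" the ghost will drift
    public float aggroRange = 3f;     // how close the player can get before being chased
    public float speed = 1.5f;

    private Vector3 home;
    private Vector3 target;

    void Start () {
        home = transform.position;
        PickNewWanderTarget();
    }

    void Update () {
        // Chase the player when they come too close, otherwise keep wandering
        bool chasing = Vector3.Distance(transform.position, player.position) < aggroRange;
        Vector3 destination = chasing ? player.position : target;

        transform.position = Vector3.MoveTowards(transform.position, destination, speed * Time.deltaTime);

        // Pick somewhere new to drift towards once the current target is reached
        if (!chasing && Vector3.Distance(transform.position, target) < 0.2f) {
            PickNewWanderTarget();
        }
    }

    void PickNewWanderTarget () {
        Vector3 offset = Random.insideUnitSphere * wanderRadius;
        offset.y = Mathf.Abs(offset.y) * 0.5f;   // keep the ghost floating above the floor
        target = home + offset;
    }
}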

I also wrote another new shader (my third in two days!) for the ghost model, which combined several effects:

  • Opacity reduced with increasing distance from the camera.
  • Opacity also reduced as the y value decreased towards 0, and increased with increasing y values. I thought that, if the ghost were more transparent closer to the ground, it would enhance the appearance of it “floating”.
  • I also created a silhouette effect by calculating the dot product of each vertex normal with the view direction to determine which vertices lie on the “edge” of the model (as seen from the current view), and gave those a brightness boost.

Here’s the code:

Shader "Spectral Echoes/Ghost" {
      
    Properties {
        _TintColour ("Tint Colour", Color) = (1, 1, 1, 0.5)
        _OutlineColour ("Outline Colour", Color) = (1, 1, 1, .5)
        _MainTex ("Main Texture", 2D) = "white" {}
        _Alpha ("Alpha", float) = 0.4
    }
      
    SubShader {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
            Pass {
           
                ZWrite Off // Don't occlude other objects
                Blend SrcAlpha OneMinusSrcAlpha // Alpha blending

                CGPROGRAM
                  
                #pragma vertex vert
                #pragma fragment frag alpha
                #include "UnityCG.cginc"
               
                // This struct contains the variables that will be passed from
                // Unity into the vertex shader
                struct vertexInput {
                    float4 vertex : POSITION;
                    float3 normal : NORMAL;
                    float2 texcoord : TEXCOORD0;
                };
               
                // This struct contains the variables that will be output from
                // the vertex shader (and input into the fragment shader)
                struct vertexOutput {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float4 color : COLOR;
                    float4 worldPos : TEXCOORD1;
                };
                  
                // Access the shaderlab properties
                uniform float4 _OutlineColour;
                uniform sampler2D _MainTex;
                uniform float4 _TintColour;
                float _Alpha;
               
                // VERTEX SHADER
                vertexOutput vert (vertexInput v) {
                    // Declare the output structure
                    vertexOutput o;
                   
                    // Populate the position and texture coordinates
                    o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord;

                    // Calculate the degree by which the vertex lies on the edge of the model
                    // based on the direction of the vertex normal compared to the camera view
                    float3 viewDir = normalize(ObjSpaceViewDir(v.vertex));
                    float dotProduct = dot(v.normal, viewDir);

                    // Set the outline colour based on the dot product and the alpha of chosen outline colour
                    o.color = _OutlineColour * (1 - dotProduct) * _OutlineColour.a;

                    // Calculate world coordinates (will be used in fragment shader)
                    o.worldPos = mul(_Object2World, v.vertex);
                       
                    return o;
                }
                  
                // FRAGMENT SHADER
                float4 frag(vertexOutput i) : COLOR {
                   
                    // Assign the default texture
                    float4 texcol = tex2D(_MainTex, i.uv);
                       
                    // Now add the tint colour to the main texture
                    texcol *= _TintColour; //* _TintColour.a;
                       
                    // Add the outline glow
                    texcol.rgb += i.color;
                       
                    // Make lower coordinates more transparent
                    texcol.a = _Alpha * i.worldPos.y;
                       
                    // Make further away more transparent
                    float dist = distance(i.worldPos, float4(0.0, 0.0, 0.0, 1.0));
                    texcol.a /= max(dist/3.0f, 1.0f);
                       
                    return texcol;
                }
                  
                ENDCG
            }
    }
}

After the addition of a few particle effects to jazz things up a bit, and turning off the in-game grid I had been using up to now, I was quite pleased with how things were looking:

image

 

Day 5: “Real Life” Day

Unfortunately, I didn’t get anything done on the game today due to the demands of real life (i.e. having three children 😉).

 

Day 6: More Gameplay and Narrative, Audio, and Setbacks

I’m not sure if it was as a result of having a break the day before, but I found my rate of progress today to be somewhat impaired – for every feature I tried to implement or fix, it seemed two other things broke 😦

Having removed my original gun model, I tried to implement a more supernatural firing mechanism – tapping the screen triggers a “spectral energy discharge”, a forward-firing one-shot particle effect. Graphically, it worked, but I wasted a lot of time on a bug in the raycasting used to detect whether it hit an enemy or not (resolved later when I realised I was raycasting from the parent “player” object, and not the child “camera” object, which had been rotated to fix the problems described in Day 2…).
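In sketch form, the eventual fix was simply to cast from the camera itself (the rotated child) rather than from the player parent. Something like this, where the tag and the handler name are assumptions:

if (Input.GetMouseButtonDown(0)) {   // a screen tap on mobile
    // Cast from the gyro-rotated child camera, not from its parent
    Transform cam = Camera.main.transform;
    RaycastHit hit;
    if (Physics.Raycast(cam.position, cam.forward, out hit, 50f)) {
        if (hit.collider.CompareTag("Spectre")) {
            // Hypothetical handler on the spectre object
            hit.collider.SendMessage("OnHitByDischarge", SendMessageOptions.DontRequireReceiver);
        }
    }
}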

I also decided that, seeing as I knew the player would be playing the game on a mobile or tablet device, I wanted the game narrative to be revealed via a series of in-game emails or SMS text messages received by the player. I spent some time on the style guide of the Android developers website trying to implement a resolution-independent message GUI that resembles the default Android message notification system, and then carelessly lost that work when I failed to save the file…
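For the record, the resolution-independence part of that (now lost) GUI work went along these lines: lay the controls out against a fixed reference resolution and let GUI.matrix scale them to the actual screen. The reference size and message text below are placeholders:

// Lay the GUI out against a fixed reference resolution and let GUI.matrix
// scale it to whatever screen the game is actually running on
const float referenceWidth = 1280f;

void OnGUI () {
    float scale = Screen.width / referenceWidth;
    GUI.matrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, new Vector3(scale, scale, 1f));

    // An in-game "SMS" box, styled loosely on an Android notification
    GUI.Box(new Rect(20, 20, 600, 120), "New message\nThe spectral readings are getting stronger…");
}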

My spectre models are clothed in partially disintegrated cloaks, which I was keen to animate using Unity’s cloth physics. However, this also caused much trouble: it seems that the partial disintegration had made the cloak mesh non-manifold, which made it unusable for either texture UV mapping or cloth simulation. Fixes such as the one suggested here didn’t work, so I guess I’m going to have to completely remodel the cloak. I’m no 3D modeller, so I couldn’t face this right now, but I aim to come back to it later.

I did, however, acquire and implement some pretty nice public domain ghostly sound effects. I think I will use 3D audio as a fairly important aspect of the game, to help the player orient themselves relative to game objects (the spectres emit a kind of warped human shriek as they approach, so if you hear that getting louder but can’t see anything, you should really try looking behind you 😉).
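Wiring up a positional shriek amounts to something like this sketch, where the clip reference and the distance values are assumptions (and in Unity 4.x the clip itself also needs to be imported as a “3D Sound”):

// Attach a looping, positional shriek to a spectre GameObject
AudioSource shriek = spectre.AddComponent<AudioSource>();
shriek.clip = shriekClip;                            // assumed reference to the ghost sample
shriek.loop = true;
shriek.rolloffMode = AudioRolloffMode.Logarithmic;   // fades quickly as the player moves away
shriek.minDistance = 1f;                             // full volume within 1 metre
shriek.maxDistance = 25f;                            // distance beyond which it gets no quieter
shriek.Play();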

 

Day 7: The End. Or, the Future?

I had pretty much decided by now that, although the game was playable in a basic form, I wasn’t going to complete it to my satisfaction by the end of the day, and as such I wouldn’t formally submit it to the 7DFPS site. This was a shame but, in reality, I probably wouldn’t have got much feedback on it anyway: few users would be bothered to download and install an Android .APK compared with most other 7DFPS submissions, which were available for Windows or via cross-platform web players and so had far greater audience reach.

I did implement a title screen and basic win/loss conditions, but spent most of the rest of the day documenting what I would still like to develop in the game, namely:

  • Better game progression structure – gradually introducing the different elements of navigation and perception in the augmented world.
  • Cloth physics on the spectre’s cloak
  • More variety of enemy types and patterns. I now also have a rat in addition to the ghost above, but would still like more.
  • More clarity of game objectives (I have a basic premise that the player is using their device to collect “spectral data”, which requires moving around the real world, but the precise objectives they must meet to do that are unclear)
  • In-game radar and other on-screen GUI elements? Not sure if this would add to or detract from the overall augmented-reality experience – I’m tempted to leave it with just the video feed filling the screen, but might try experimenting.
  • Greater range of audio SFX, which I’m aiming to achieve by varying pitch/tempo and overlaying several base “ethereal” samples, combined with white/pink noise.

When I get some spare time, I’m hoping to work on those features, and maybe even aim for a release on the Google Play store. Halloween would seem to be a good target launch date 🙂 But I’m glad to have taken part in 7DFPS; even though I didn’t finish a game, it gave me some good motivation to try something new, and that’s the best way to learn.


3 Responses to The 7DFPS Game Jam, Augmented Reality, and “Spectral Echoes”

  1. Clou says:

    Sorry for interrupting you Sir. I’m really impressed by your work Sir. Right now I’m working for my thesis, and my thesis is similar with your project. I want to ask whether it is possible or not to make the controller like your project without gyro ? I’m just a poor collage student, and my current device only have accelerometer. Thank you very much, you can reply me by email if you would like too :).

  2. NGUPIL says:

    Can you attach with the assets folder for unity Android, please 😦
