The game design challenge
Create a playable Unity mobile game in under 72 hours.
Given the time limit, we worked in an Agile fashion, dividing our time into day-long sprints (the last day split in two), with short retrospectives and grooming exercises at key points each day.
We ran our testing sessions at the start of the second and third days, making them a central focus of each day's work.
As we would be operating in a mixture of in-person and remote sessions, we used a variety of tools to manage and produce our work:

- InVision (including Freehand)
- Trello
- Unity
- Principle
Setting a theme
We’d both recently been discussing Blade Runner 2049, Cyberpunk 2077 and a variety of other dystopian sci-fi, so decided to run with this as a theme. To keep the output achievable in our limited timeframe, we chose a simple game type that could fit the theme: an endless runner.

- Inputs and gameplay would be simple.
- The original Blade Runner established an iconic visual style, allowing us to draw parallels without too much heavy lifting.
We set up a simple four-column Kanban board in Trello to organise everything we knew we’d need to do upfront, or might discover we’d need to do as we went along.
Throughout the sprints we added new cards to the backlog, stacked them vertically to denote priority, and labelled them so we could quickly see who needed input on what.
Lastly, we flagged items that were a blocker to success at the end of the challenge.
I quickly mapped out a set of game states and the flow between them, to create alignment on the initial functionality and the UI components I’d need to visualise.
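Sketched in code, that flow amounts to a small state machine. A minimal sketch (Python standing in for the Unity C# we actually used; the state names here are illustrative reconstructions, not the exact ones from our board):

```python
# Illustrative only: a tiny state machine capturing the kind of game-state
# flow we aligned on. State names are hypothetical.

class GameStateMachine:
    # Allowed transitions between high-level game states.
    TRANSITIONS = {
        "MainMenu": {"Playing"},
        "Playing": {"Paused", "GameOver"},
        "Paused": {"Playing", "MainMenu"},
        "GameOver": {"Playing", "MainMenu"},
    }

    def __init__(self):
        self.state = "MainMenu"

    def transition(self, new_state):
        # Only move along edges we agreed on; anything else is a design bug.
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition: {self.state} -> {new_state}")
        self.state = new_state
```

Making the legal transitions explicit like this is what let us agree quickly on which screens and UI components each state would need.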
After agreeing this and the general idea of the game, Paul got on with setting up the codebase and main functionality in Unity, whilst I began drawing some initial assets.
I started with a background of colours and shapes to capture the mood, and kept the objects to silhouettes to simplify things. As the main character sprite would need animation and this would take the most time, we adapted a stock vector asset.
By the end of the first day we had added a variety of different cards to our backlog, sometimes ideas for cool visual elements or things we needed to test.
As we wanted to test our Minimum Viable Product at the start of Day 2, the build needed to:
- support the core gameplay of jumping between rooftops (without animated assets)
- convey the broad flavour of the theme and art style
- be usable on a mobile device
First play test
The first testable version of the game lacked polish but had the key job story working, namely that:
When I approach a gap, I want to jump/move my character to avoid death.
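The MVP jump rule behind that job story is deliberately simple: a tap launches the character only when grounded. A hypothetical sketch (Python standing in for the Unity C# we actually used, with an illustrative jump speed):

```python
# Hypothetical sketch of the MVP jump rule: a tap triggers a jump only
# when the character is grounded, so taps mid-air do nothing.

def handle_tap(is_grounded, velocity_y, jump_speed=8.0):
    """Return the character's new vertical velocity after a tap."""
    if is_grounded:
        return jump_speed  # launch the character upward
    return velocity_y      # ignore taps while airborne
```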
We conducted an in-person usability test with 5 random participants, followed by a quick survey after each.
Participants 1 and 2 said they regularly played games on their phones; participants 3–5 did not (and were apprehensive that they “wouldn’t know how to play”!).
First play test findings
Observing the players’ initial reactions and responses highlighted some interesting issues:
- As the game offered no prompts for when or where to tap, the inexperienced players felt anxious and took several attempts to feel they understood enough to be in control.
- The area of the screen participants tapped varied.
- The score counter was too visually prominent, proving a distraction whilst the player character was in motion.
- The dark colour scheme and silhouettes had poor contrast, so players found it difficult to see what mattered.
For the survey, we wanted to look at some simple gameplay heuristics, each paired with a survey question:
- The player feels in control (their input has a consistent, predictable response): “As you played the game, did you feel you were in control of the character’s actions?”
- The player finds the screen layout and aesthetic visually pleasing: “How visually appealing did you find the game?”
- The player feels the game offers an appropriate level of difficulty: “Did you feel the difficulty of the game was fair?”
Participants rated each of these on a five-point scale, from “Not at all” to “Completely agree”.
They suggested the gameplay itself wasn’t difficult, as it was just about timing the jumps, but the poor contrast made it feel hard, and the score counter was distracting enough to make it harder to concentrate.
Test 01 - Survey responses
Continuing to design
The learnings from our test gave us clear things to focus on for the day: fixing the visual contrast, adding better visuals for collectibles and obstacles, and adding some variance to the jump mechanic.
At this point, a friend of ours (Alan Ciccotelli) started working on a couple of music tracks for the game.
By the end of the second day, we wanted to see if we had improved some of the issues we’d found, as well as test the double jump feature we had introduced.
We’d implemented a crude contextual tutorial: a prompt that appeared right before each action became relevant, at the first gap in the level (“Jump”) and then again mid-air (“Double-jump”), ready for testing the next day.
In our infinite UX foresight, we decided these prompts should only display on the first run, with a toggle on the Game Over screen to trigger them again the next round.
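The first-run rule itself is a single persisted flag. A minimal sketch, assuming a plain dict standing in for a persistent store (in Unity this would typically live in something like PlayerPrefs):

```python
# Sketch of the "show tutorials only on the first run" rule, with the
# Game Over toggle that re-arms them. A plain dict stands in here for
# whatever persistent store the game actually uses.

def should_show_tutorials(prefs):
    # Default to showing tutorials if the flag has never been written.
    return not prefs.get("tutorials_seen", False)

def on_run_finished(prefs, replay_tutorials_toggled):
    # Mark tutorials as seen, unless the player asked to see them again.
    prefs["tutorials_seen"] = not replay_tutorials_toggled
```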
Second play test
We ran a similar test to the day before, this time looking at whether the double jump improved things.
Second play test findings
Our tutorials were a disaster.
Gameplay-wise, it was great! Players loved the double jump: it added variety and a new timing element to jumping. But our tutorials caused a lot of confusion, as they appeared mid-activity and were dismissed by the player’s own input.
This was worst with the double jump: players tapped mid-air out of panic, so the prompts interrupted the flow and couldn’t be read.
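The double-jump rule itself is simple: the player gets one extra jump while airborne, and the allowance resets on landing. A minimal sketch (Python standing in for the Unity C# we actually used; the jump cap is the real rule, the function names are illustrative):

```python
# Sketch of the double-jump rule: one jump from the ground, one mid-air
# (the "jetpack"), and further taps while airborne are ignored.

MAX_JUMPS = 2  # ground jump + one mid-air jump

def try_jump(jumps_used):
    """Return (did_jump, new_jumps_used) for a tap."""
    if jumps_used < MAX_JUMPS:
        return True, jumps_used + 1
    return False, jumps_used  # out of jumps until landing

def on_land(jumps_used):
    # Landing restores the full jump allowance.
    return 0
```

Note this is exactly why panic taps caused trouble in testing: the third tap mid-air does nothing, but it still counted as input that dismissed the tutorial prompt.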
A critical interjection: our test results are semi-flawed in that participants 1 and 2 were the same “experienced player” participants as the day before, so learnability/familiarity of the game was a factor.
However, as the tutorial feature was the main focus of this test, we concentrated on new/inexperienced players and sourced fresh participants for slots 3–5, so they had no pre-trained behaviours.
The new colours and high contrast “items of interest” (collectibles, obstacles) improved our aesthetic and difficulty scores, and players felt the double-jump (and bigger gaps) made difficulty feel more appropriate!
Test 02 - Survey responses
Immediate design priorities
Whilst we felt we could spend time tweaking when the first tutorial prompt appeared (i.e. earlier, before players approached the gap), the double jump was the feature that needed to be trained, and we knew we didn’t want to interrupt the player mid-jump.
I experimented with the idea of an artificial (short) loading screen, using Principle to quickly simulate a timer; this let me run a very quick comprehension test with inexperienced players.
How long did the timer need to be to give the player enough time to read and comprehend, versus causing frustration from waiting?
After trying this out and tweaking it a few times, we found that inexperienced players felt they understood the game controls after 3 seconds.
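In code, the fake loading screen is just a gate that holds back gameplay input until a timer elapses. A minimal sketch, assuming a plain monotonic clock rather than Unity’s timing APIs (the 3-second duration is the figure from our comprehension tests; the class and parameter names are hypothetical):

```python
import time

# Sketch of the artificial loading screen: gameplay is held back until
# the timer elapses, giving the player time to read the controls.

class FakeLoadingScreen:
    def __init__(self, duration=3.0, clock=time.monotonic):
        # The clock is injectable so the timing logic can be tested
        # without real waiting.
        self._clock = clock
        self._started_at = clock()
        self._duration = duration

    def is_done(self):
        # True once enough reading time has passed.
        return self._clock() - self._started_at >= self._duration
```

Injecting the clock is a small design choice that makes the timer trivially testable; in the real game the same gate would simply be driven by the engine’s frame time.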
Using the last of our time
We spent the last of the time ironing out critical bugs and adding some additional assets and aesthetic features to delight, such as:
Improvements to colour and lighting
Particle effects for player death and the “jetpack” used in double-jump
Building interiors that also offered a second platform/floor
Background flying cars/traffic
Vertical obstacle “walls”
The main menu screen (borrowed some concept art from a different project of mine, and adapted it to fit the theme)
A credits screen
And, because we simply couldn’t help ourselves:
An additional enemy type that fired at the player
Game design involves trusting your team, and always looking to complement each other’s workflow as much as possible. Small improvements, like me pushing assets directly into Unity using the built-in version control, helped reduce friction.
Designing just enough of a prototype to test a hypothesis was critical to saving us precious time. Sometimes this was a static UI artefact; other times (as with the fake loading screen) it meant using motion. In all cases, this meant Paul (the developer) was free to focus on aspects that could ONLY be tested as part of the playable game itself.
The survey question around difficulty could potentially have used a 3-point scale (yes/not sure/no) or even a boolean (yes/no), given that it asked participants whether they felt the game was “fair”. We chose the 5-point scale to keep the response format consistent with the other questions, so the jury is still out on that one.
Our self-imposed, fixed time limit was our biggest restriction. Knowing the design constraints sharpened our focus on ideas and features that were both viable AND contributed to the vision we had. This was extremely useful when we were forced to shelve or abandon ideas, as we both understood that these weren’t arbitrary cuts or egos getting in the way.
Player testing as soon as possible and together allowed both of us to have a shared understanding of what was working and what wasn’t, so when it came to prioritising during our retrospectives and stand-ups, we didn’t lose precious time explaining Trello cards or issues to each other.
Naming the game is probably something we should have spent more time on...
Interested in a conversation?
If you’d like to get in touch - be it for a copy of my CV, more samples of work, or to discuss Games UX - you can reach me on Linkedin.