Selected Works

My specialties are systems + UX design and gameplay programming. I work mainly with Unity3D + C# for games and React + Expo + AWS for web development.


Shipped Titles

Digital Games

Digital Research Projects

Analogue Games

Dreamscaper

Overview

Life's full of surprises. One of them is that my super talented brother, Ian, left his job to make his dream game, Dreamscaper! I was lucky enough to have some skills that could help, so I pitched in with development. I contributed to the design by brainstorming, playtesting, providing feedback, and submitting proposals for puzzle and combat design. I also contributed to the in-game dialogue by creating a pipeline to convert text documents into CSVs that Unreal could parse, then importing and implementing them in engine.

Dreamscaper shipped to acclaim on Steam (92% positive), Game Pass, and the Switch in 2021.

Check out the Steam page

Return

Animation Pipeline and Tools Engineering

Introduction

During the summer of 2018, I completed my second full-time internship with ETS out of their San Francisco office. I worked primarily alone but was supported during parts of the learning process by other team members. This section covers topics which build on work done during my first year's internship. You can read more about those topics on the HALEF page here.

Background

The team's goals had changed now that there was a working prototype from the previous summer, and my tasks reflected that. The new priorities were to support non-experts who wished to author new content for our system and to further explore increasing fidelity while lowering file size.

Authoring Content

Enabling non-experts to author content involved two kinds of animation pipelines: one for mouth animations and one for body animations. I figured we could use a Kinect for low-cost, simple DIY motion recording for the body animations. After experimentation, I was able to record and re-target animations from the Kinect onto different kinds of avatars. However, the complexity of the process, the fidelity of the output, and the size of the animation files proved prohibitive for the goals of the project, so we decided to continue with a traditional animation pipeline for the near future. With this knowledge I could focus on other goals for the project.

Automating Mouth Animations

For the automated lip-syncing, I set out to create a script which wouldn't reflect the source material with 100% accuracy, but would do a good enough job to be believable. I chose to write a Python script which takes audio as input, cuts it into small segments, and analyzes the amplitude of each segment. I then inserted a hook so that we could process that data any way we needed; in this case, normalizing and adjusting the values so there was a smooth transition from segment to segment. After all the processing finished, the script would translate the segment values into a .txt file (and show them visually on a graph).

A graph showing the output of an audio file.

These values were on a normalized scale of 0-1 and directly correlated to how "open" the avatar's mouth should be during that frame. For an animation of 200 frames, there would be 200 values in the document. These values would then be imported into Unity and read at run-time to produce realistic-looking mouth animations. The best part is that anyone who can drag, drop, and double-click files can now implement mouth animations for any avatar.

3D Asset Creation

As part of the initiative to create more content, I was tasked with creating an additional 3D avatar for use in certain scenarios. The avatar had the same restrictions as my previous work: it had to have a small file size and be web-friendly. It would likely be used alongside the previous model, so I used similar techniques to author content which wouldn't look out of place in a stylized context.

Alice will be used as an interactable avatar in a business setting.

Because my skills as a developer and artist had grown over the past year, I experimented with different techniques to see what gave the best results. I ended up hand-painting a small texture (about 1 MB) onto the avatar, allowing for gradients between shading values and higher overall fidelity while still maintaining a small file size.

Alice in an office setting.

I also experimented with using Unity-driven coroutines and IK targets to minimize animation file sizes for smaller gestures like raising arms or blinking. Finally, I experimented with implementing scenes using Three.JS, hoping for smaller file sizes, better web compatibility, and quicker loading times, but it didn't offer much advantage over Unity.

Conclusion

Through expertise in JavaScript and Python, I was able to create and implement tools which supported project goals. My experience with 3D art allowed me to create a stylized avatar for use in future projects. I explored different environments and techniques, including Three.JS, Unity, motion capture via the Kinect, and automated lip-syncing via code, to further research goals and produce deliverables.

Return

HALEF

Introduction

During the summer of 2017, I worked on an iteration of the HALEF system for ETS, as part of a full-time internship in San Francisco, California. I worked primarily alone but was supported during some parts of the learning process by other team members.

You can read more about the process in this research paper we published here.
You can read more about the experience in this interview with American University here.

Background

HALEF is a multi-modal dialogue system; put simply, it is a computer system with which users can interact through spoken dialogue. Utilizing HALEF, it is possible to have a unique conversation with a computer.

Previous iterations of HALEF suffered from multiple issues: unacceptably long load times (10+ minutes), audio and animations becoming out of sync, and poor lip animations. When starting work on this system, I was told that the priorities were to cut down on load times and keep the project open source (no proprietary software/techniques). Due to the incredibly long load times, ETS was considering moving from Unity to other engines/JavaScript libraries (such as Three.JS).

Previous iteration of the HALEF avatar.

Solution

To reduce the file size:

I had previously done research into mobile game applications and learned that web and mobile game development were very different from console or PC development. Every asset was created with its file size in mind to reduce load times, leading to low-poly models, 2D art, and low-resolution textures. With this in mind, I advocated for a low-poly 3D aesthetic, as 3D was central to the team’s future goals.

My texture palette for the project, a whopping 5kb on disk. A second palette was used for the office itself, which was also ~5kb.

I used Blender, an open source 3D modeling program, as well as open source libraries such as Socket.IO to keep the project 100% open source.

Additionally, I researched ways to reduce Unity WebGL file size and possible alternative solutions to using Unity. Luckily, I found that there was an upgrade to Unity’s WebGL functionality (the introduction of engine code stripping) that helped immensely to reduce file size/load time. With the addition of a low-poly workplace and avatar, which I built from scratch, we were good to go. I used Blender to model, texture, rig, and animate the avatar.

The low-poly feel, although unusual for a project of this nature, fit the goals and requirements very well. Flat shading helped pull off the look and increase performance.

To eliminate latency issues:

For the animation and sound problems, I decided that the best way to eliminate a discrepancy between audio and video would be to re-architect the way in which sound and animations were called.

Rather than embedding the audio in the webpage and calling for it using JavaScript, as the previous iteration had, I included the audio inside the Unity web blob. Rather than animating the avatar on the update loop (via blendshape normalized values called by code), as the previous iteration had, I controlled animations by creating them in Blender and exporting them into Unity. This allowed me to play an entire animation with a single message, which was useful for keeping audio and video in sync.

Mouths were animated using the Papagayo + JLipSync plugin pipeline in Blender, which was a perfect solution for our needs. It offered a robust way to account for new dialogue, while also becoming more efficient the longer we used the same models. This is because it works by setting up phoneme shapes, such as 'O' or 'L', and interpolating values between those mouth shapes (rather than hand-animating every mouth movement).
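The interpolation idea can be sketched in a few lines. This is a hypothetical Python illustration of phoneme keyframing, not the actual Papagayo/JLipSync implementation; the phoneme set and openness values are made up:

```python
# Hypothetical sketch of phoneme-keyframe interpolation; the phoneme
# table and values are illustrative, not Papagayo's actual data.

# mouth-openness value (0-1) for a few phoneme shapes
PHONEME_OPENNESS = {"rest": 0.0, "M": 0.05, "L": 0.4, "E": 0.6, "O": 0.9}

def mouth_curve(keyframes, total_frames):
    """keyframes: list of (frame, phoneme) pairs, sorted by frame.
    Returns one openness value per frame, linearly interpolated
    between the neighboring phoneme keyframes."""
    values = []
    for frame in range(total_frames):
        # find the surrounding keyframes for this frame
        prev = max((k for k in keyframes if k[0] <= frame), default=keyframes[0])
        nxt = min((k for k in keyframes if k[0] >= frame), default=keyframes[-1])
        a, b = PHONEME_OPENNESS[prev[1]], PHONEME_OPENNESS[nxt[1]]
        span = nxt[0] - prev[0]
        t = 0.0 if span == 0 else (frame - prev[0]) / span
        values.append(a + (b - a) * t)
    return values
```

Only the phoneme keyframes need to be placed by hand; every in-between frame falls out of the interpolation, which is why the pipeline gets cheaper the longer the same models are reused.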

Papagayo is used to insert keyframes for mouth movement via an intuitive GUI.

Because I fundamentally changed the way the HALEF system interacts with the Unity web blob (due to my flow re-architecture and Unity's updated WebGL format), I had to change how messages were passed between the different parts of the system.


The Unity web blob lives inside the client's browser page; via WebSocket messaging, it interacts with the server, which reads the scenario from a unique conversation file.

I first added hooks in the Unity web blob which could be accessed via JavaScript run on the HTML page, then had the system pass a message to Unity, depending on its status, through that JavaScript.


The system feedback loop is illustrated here. HALEF (the system) outputs phrases to Unity to represent its decisions to the user. The user's response is carried back to HALEF via the FreeSWITCH platform, and so on.

By doing so, I could tell HALEF to pass a message to Unity, which would fire the necessary animation and play the necessary audio together, eliminating any possibility of desynchronization.

Result

Doing the above, I developed a working, low load-time application, complete with a decent-looking 3D model. It was an incredible experience to immerse myself in a pipeline that was almost entirely new to me while thinking critically about how each part affected the final product and what I could do to eliminate the bugs of the previous iteration.

Return

4 Floors

Concept

4 Floors was conceived as a way to adapt a side-scroller with RPG elements to a tabletop format. Some core goals of the game were accessibility to non-gamers and cooperative multiplayer gameplay.

4 Floors was developed in 3 weeks in 2016 with a team of 2 people: myself on systems design and my teammate on art and graphic design.

Execution

Because I was designing for tabletop, simplicity was a core guiding principle. The game had to be usable and the mechanics learnable. Analogue games tend to be more successful with simpler and fewer rules (computers aren’t there to walk players through the game and calculate all the different equations for them). Additionally, this game was meant to be accessible to newer players or players who weren’t too interested in heavy calculations and obscure rules.

In light of this, I had only a few rules: players started the game on the left side of the board, and if they made it to the right side, they won. Simple, right?

First iteration of 4 Floors. The board setup afforded some fantastic design opportunities, but also created problems.

Players would take on two columns of enemies at a time. Ranged units from the players’ party or the monsters’ party could attack from the second column, but melee units could only hit the column directly in front of them. By using skills smartly, such as attacks that spanned two columns, it was possible to damage groups that were otherwise out of reach.

By making it to the last row players could fight and defeat a boss monster to win the game.

First iteration of 4 Floors with zones defined. The red divider indicates max interaction range (without unique skill exceptions).

Dilemma

Feedback indicated that our system for randomly generating columns (through dice rolls) was imbalanced, leading to unnecessarily difficult situations. For example, it was very difficult to deal with a column of 5 units in front of a column of 5 ranged units.

Another core piece of feedback was that there was not much variation in combat. There were 2, perhaps 3, types of enemies which acted similarly. People were interested in an expanded bestiary.

Solution

Enemy Group Size

To deal with these issues, I decided to expand the narrative of the game and introduce new enemies. I had been inspired by ‘A Pattern Language’ by Christopher Alexander, a book about how architecture and intentional design can improve our everyday lives, which got me thinking about how spaces in games affect gameplay. In this case, it led me to introduce a room-based combat system.

Players now draw a random 'room' card from the 'room' deck. This will tell players how many monsters will be in the room, as well as any additional effects or bonuses the room provides.

Room-based exploration was important in a few ways. First, a room’s size limits the number of creatures contained within it: instead of every encounter potentially containing a brutal number of enemies, staging encounters inside a limited space let me dictate the range of enemies players would face at a time. This was important for normalizing difficulty across all encounters. The change also helped the learnability and usability of the system, since players could see only what they could interact with at that moment.

Using rooms helped to teach the game to players, as well as clearly define the difficulty curve of the game.

Combat Additions

Players wanted different kinds of enemies, which would require different approaches to combat. To address this feedback, I decided to include a more robust progression system, giving players more tools to work with, as well as an increased variety of enemies to facilitate the use of those tools.

New, varied enemy types. Some deal damage per turn, some require being taken out from full HP in a single turn or they heal, etc.

New types of enemies were introduced which required different methods to overcome. This tied into difficulty and progression through a 'floor' system: players clear a certain number of rooms per floor, fight a boss, then advance to the next floor. Later floors were more difficult and more rewarding, allowing me to design obstacles for different stages of character progression. For simplicity, I designed all special monster abilities to activate on rolls of 5 or 6; sharing this across all enemy types helped learnability and ease of use.

Player Progression

To deal with more powerful enemies and increase player engagement, I updated the character designs with unique abilities and mechanics per class. Every player has 'skill cards' which grant powerful, single-use abilities. Depending on which skill card was drawn, players would have access to different abilities. This was done to limit player options and to encourage spontaneous, explorative play.

New, re-designed character sheets and progression system. The skill cards show which skills are gained upon increasing in level.

In addition, I included items which helped customize characters, shoring up weaknesses or emphasizing strengths differently in each playthrough. Players would draw a random item from the item deck when certain conditions were met, such as defeating a boss, and could trade items to support cooperative play.

Items were gained from accomplishing feats and had many uses.

Conclusion

By adding more content and refining existing mechanics, I shaped player options and increased both the depth and breadth of the options available to players.

Personally, I feel the game successfully conveyed an interesting take on a cooperative side-scroller experience while also containing dungeon-crawling and RPG elements. I would love to revisit the horizontal nature of the initial iteration at some point; however, I feel that many problems, such as learnability and clarity, were improved with the move towards a room-based system. It was definitely fun to design a dungeon-crawling tabletop RPG.

Return

Strife!

Introduction

Strife! was my capstone project for my MA in Game Design at American University. We had three and a half months to create a project from scratch, and I worked as a team of one.

Background

I’d been playing a lot of card games like Hearthstone and The Elder Scrolls: Legends at that point in my life, so I decided to create a CCG-like game. However, I didn’t think the world needed another Magic clone(!), so I purposefully made it mechanically distinct. Strife! is at heart an experimental mix of a board game and a CCG: I combined the strategy game and card game genres to create something new and exciting.

Process

The main inspiration for Strife! came from the RTS and CCG genres. To start, I decided to map out a list of elements in both of these genres, then try to find commonalities. From there I chose which of those elements were most important, and which were less so.

One early document, among many, to refine the design. Feel free to open it in a new tab for a larger image.

With a core direction defined, I could start brainstorming ways to represent those elements mechanically. I ended up focusing on a spatial element of multiple ‘zones’. Additionally, I decided to reduce uncertainty by giving players constant access to any units they’ve unlocked throughout the game, which also acted as an homage to the tech trees in strategy games.

A playtest for an early version of Strife!

Finally, I created some units and abilities to populate the game and playtested. After each playtest I would use notes to refine and adjust the balance of the game until it felt like a stable, dynamic, and exciting experience.

Getting to play with cards which had the stats and abilities printed on them was a milestone for me, and fantastic for people trying to pick up the game.

Rules

To get a feel for the game, I’ll give an overview of the rules. Each player can construct buildings, and these buildings can spawn units. Buildings and units need one turn to get ready before they can be used. Units can use abilities, attack, and move. Possible targets for offensive actions are the other player’s buildings, their units, and the other player themselves. Destroying buildings is a way to damage your opponent’s economy. There’s also a deck of powerful abilities which are obtained by producing structures.

A sample building. Front side is on the left, the back side is on the right.

Dilemmas

Throughout all phases of development, Strife! suffered from pacing issues. Many of the earlier iterations’ games would last multiple hours, with lulls where players wouldn’t meaningfully interact with each other. There were also huge issues with the amount of complexity present in both the rules of the game and the unit abilities.

Solutions for Pacing

The slow pace was caused by the game's mechanics. Although I liked the core ideas of representing space via multiple zones and giving players a tech tree to progress along, their implementation left the game feeling stale and unengaging. There were a large number of zones players had to progress through before they could damage a player’s health or economy. Additionally, because moving used up a unit’s actions for the turn, the defending player always had the choice of what to trade for what. This made defensive play incredibly strong.

An early playtest. Playing with index cards and spreadsheets did not help games to move quickly. Many times players would spend minutes consulting the spreadsheet before taking their move.

Through playtesting, I settled on three zones, rather than 4, 6, or 8. Additionally, I allowed units to both use an ability and move to a different zone each turn, letting the aggressive player choose which trades to take. I also capped each zone at 6 units, to stop players from stockpiling units into an impenetrable wall of bodies. Finally, I linked board options to economic strength by giving players a strong, randomized ability with every building constructed.

A sample ability card which can be drawn from the Ansari deck. Spiritual Assault allows players to target high value, difficult to engage targets and take them out of the game.

In terms of economics, the game originally had ways to invest in your economy by building structures which boosted gold gain per turn. I wanted to let players play greedy and take risks for a greater payout later. However, due to the defensive nature of the game, building these structures was always the correct move. I eventually settled on removing them, giving players a free structure each turn and the ability to build a second one for gold. This let players invest in an immediate advantage, in the form of units on the board, or an economic advantage, in the form of buildings.

A sample ability card which can be drawn from the Union deck. Detonate Mine allows players to defend against aggressive plays. This is especially key for the Union, who are generally weaker in the early game.

Solutions for Complexity

For most of playtesting, users played with torn-up index cards and spreadsheets off to the side. Many complained that it was hard to understand which units could do what, and with tech trees in the game, many didn’t know which units they were progressing towards.

This is part of a larger tech tree sheet for the Union faction. By presenting this information in an easy to parse way, I helped keep players engaged and increased the pace of the game for newer players.

During the last two weeks of development, I spent my time looking at the pieces from a UX perspective. I conducted interviews to figure out what kinds of things people wanted on cards, the problems they faced, what they liked about the game, etc. Using this information, I created multiple mock-ups, focusing both on which information should be included on the pieces and on how it’s presented. I tested different icons, text, and shared terminology to help users onboard quickly.

An example of how the board looks with custom art and directions printed on the cards themselves.

Conclusion

I learned a great deal by developing Strife! First off, one of the ideas I wanted to play with was removing randomness from the game. Doing so, in the way that I did, actually made the game static and unengaging. When players can always perform any action, the designer loses the ability to make those actions exciting and powerful. The more situational and undependable a play is, the more liberty we have, as designers, to make it strong and exciting.

There were many other lessons, but ultimately this was the most important to me. I feel Strife! failed in many ways: I had to reintroduce chance, and games take too long. However, I also succeeded in making a unique, deep, and fun game. Because of this, and because of what I’ve learned from the project, I consider the game a success.

The current iteration of Strife! Players are enjoying a round of speed Strife!, which pleases me greatly!
You can find the trailer here.

Return

Grid Combat

Introduction

This project was an exercise in quickly prototyping an idea in code while using design patterns and C# best practices to write strong, extensible logic. I introduced appropriate patterns wherever possible while keeping the codebase as light as possible. It took me about 3-4 months at 10 hours a week. Please feel free to check out the codebase here.

Performance

Because of the small scope of this project, I chose to use LINQ freely for the majority of queries. Although LINQ is slower than the imperative alternative (for/if statements), I felt that the benefit of LINQ's clear, declarative style outweighed any performance drawbacks. I also did some testing with object pooling where it made sense, but after checking the 'high-stress' scenarios with the profiler, I saw no performance benefit; the game was just too lightweight. I removed those systems, as they added complexity to the codebase without any benefit.

Achievements/Overview

There's a lot to say about this prototype; for brevity's sake, I'll just detail the pathfinding system and the state system, as they respectively motivated me to start the project and became one of my favorite takeaways.

Before we jump in, a brief mention of the major systems I won't be covering but which are included in the codebase:

- A persistent world which remembers character stats, world state (progression), and area state beyond instance destruction, achieved by storing/reading data from scriptable objects
- A tool for building, loading, and saving levels in Unity
- A modular ability system accessed through an AbilityComponent wrapper
- Tooltips/UI populated by current player data
- Multiple enemy types with unique AI
- An entire hierarchy generated entirely by dependency injection
- An event queue system which funnels low-level events to the high-level classes that care about them
- Multiple playable characters

Pathfinding System

This game features real-time, grid-based combat with multiple units. This design required a specialized pathfinding implementation, which differentiates it from the systems used in turn-based strategy games or action games. I started with a great template found here and modified it to meet my needs.

Dilemma

In turn-based strategy games, such as Fire Emblem, there can only ever be one request at a time. Because of this, it is completely safe, and desirable, to store the results of the request on the tiles themselves. We can calculate everything once and, until a new type of request is issued, just keep referencing the same results. However, in a game where multiple units need to simultaneously issue requests to the pathfinding system, we cannot store our results directly on the nodes. Think of this: one unit requests data and starts to move, then another unit requests a different path; all the data the first unit is using to traverse the map is now replaced with the second request's results. This could let all kinds of bugs seep in due to the subtleties of sharing data between requests.
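The hazard can be sketched in miniature. This Python toy (illustrative, not the project's C#) stores per-request results directly on shared tiles, so a second search silently clobbers the first; returning a per-request result object avoids this:

```python
class Tile:
    def __init__(self, name):
        self.name = name
        self.distance = None  # per-request data stored on the shared tile

def search(tiles, request_id):
    # naive approach: write results directly onto the shared tiles
    for d, tile in enumerate(tiles):
        tile.distance = (request_id, d)

def search_with_shadows(tiles, request_id):
    # "shadow" approach: each request gets its own result objects,
    # leaving the shared tiles untouched
    return {tile: (request_id, d) for d, tile in enumerate(tiles)}

tiles = [Tile("a"), Tile("b")]

search(tiles, request_id=1)
first_result = [t.distance for t in tiles]   # unit 1 starts moving with this
search(tiles, request_id=2)                  # unit 2 issues its own request
# unit 1's data has been silently replaced:
assert [t.distance for t in tiles] != first_result

r1 = search_with_shadows(tiles, 1)
r2 = search_with_shadows(tiles, 2)
assert r1[tiles[0]] == (1, 0)  # unaffected by the second request
assert r2[tiles[0]] == (2, 0)
```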

Classes

My solution was to keep a reference to the actual board, but encapsulate all data related to the pathfinding algorithm in its own class, called a ShadowTile. Requests then return both the tile and its shadow, which contains instanced results of a search of the board. By separating the results of the request from the actual objects in the world (the shadow tile/tile separation), we can be sure that requests are sufficiently insular.

Tile and ShadowTile UML Class diagrams

Requests are initiated by classes via accessing a board's pathfinding component. The method used to initiate a search accepts a function as an argument which is used to determine whether or not a tile should be considered a valid target.

// BoardPathfinding.cs
// The search method we query
public List<ShadowTile> Search (Tile start, Func<ShadowTile, Tile, bool> addTile) {
    ...
}

// ConstantRange.cs
// An example query
public override List<Tile> GetTilesInRange () {
    var retValue = pathfinding.Search (board.TileAt (Owner.transform.position.ToPoint ()), ExpandSearch);
    ...
}

For example, a LinearRange component’s ExpandSearch method looks like this:

// LinearRange.cs
// Only add tiles which share the same X or Y as the owner object
bool ExpandSearch (ShadowTile from, Tile to) {
    var ownerPos = Owner.transform.position.ToPoint ();

    return (ownerPos.y == to.Position.y ||
            ownerPos.x == to.Position.x) &&
        (from.distance + 1) <= range;
}

while a ConstantRange component’s ExpandSearch method looks like this:

// ConstantRange.cs
// Add all walkable tiles in range
bool ExpandSearch (ShadowTile from, Tile to) {
    return (from.Distance + 1) <= range && to.isWalkable;
}

Passing in functions which determine the logic of the algorithm at run time is known as the strategy pattern. It keeps the algorithm generic and re-usable by encapsulating specific requirements in functions which can be swapped for one another. By using inheritance smartly and using the strategy pattern to determine which tiles are valid, we can easily create new types of RangeComponent with little effort.

RangeComponent UML Class diagrams

I’ve also included builder methods so that I can update old RangeComponents where/when needed, rather than instantiating whole new classes/objects. This comes in particularly useful for a RangeUtil helper class I’ve made for miscellaneous common scenarios, such as calculating the area of effect of an impact or finding the range of all tiles from a certain position; these scenarios are useful for area-of-effect abilities and AI, respectively.

As an aside, we keep the Owner property as a GameObject rather than a Unit to allow for instantiating empty GameObjects as a base from which to calculate range, for things like an ability's area-of-effect radius upon impact at an arbitrary location.

Algorithm

I chose a pathfinding algorithm which doesn’t use a destination; it just moves away from the starting tile, grabbing every tile it can until there are no valid moves left. Because I need to show players which tiles are valid moves, I won't know the destination tile when calculating the request. It would be wasteful to use something like an A* algorithm, because I’d have to dispatch x requests, where x is the number of tiles theoretically in range which could act as a destination. With this implementation I only need to dispatch one request.

// Board.cs
// Dirs definition used by BoardPathfinding.cs
public Point[] Dirs => new Point[4] {
    new Point (0, 1),
    new Point (0, -1),
    new Point (1, 0),
    new Point (-1, 0)
};
// BoardPathfinding.cs
public List<ShadowTile> Search (Tile start, Func<ShadowTile, Tile, bool> addTile) {
    Queue<ShadowTile> checkNext = new Queue<ShadowTile> ();
    Queue<ShadowTile> checkNow = new Queue<ShadowTile> ();
    ...

    // while there are still 'reachable' tiles which haven't been checked
    while (checkNow.Count > 0) {
        ShadowTile currentShadow = checkNow.Dequeue ();

        // add the next 'ring' of tiles to search, expanding out from
        // whatever tile we are currently checking
        for (int i = 0; i < 4; ++i) {
            Tile nextTile = GetTile (currentShadow.position + board.Dirs[i]);
            if (nextTile == null) {
                continue;
            }

            // skip the addTile logic if this tile has been checked already
            ShadowTile oldShadow = shadows.Find (shadow => shadow.tile == nextTile);
            if (oldShadow != null) continue;

            // use strategy pattern to define which adjacent tiles are valid targets
            if (addTile (currentShadow, nextTile)) {
                var checkedShadow = new ShadowTile (currentShadow.distance + 1, nextTile.Position, currentShadow, nextTile);
                checkNext.Enqueue (checkedShadow);
                shadows.Add (checkedShadow);
            }
        }

        ...
    }
}

Basically, we run a 'flood-fill' style algorithm, checking each valid tile only once and creating a linked list of the path taken to reach each tile from the starting position. In this way we always get the optimal path, and we can calculate every tile's distance without supplying a pre-defined target to the algorithm.
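The flood fill and linked-list path reconstruction can be sketched compactly. This is an illustrative Python version of the idea, not the project's C#; `flood_fill`, `path_to`, and the tuple-based tiles are my own names:

```python
from collections import deque

def flood_fill(start, walkable, max_range):
    """Return {tile: (distance, previous)} for every tile reachable
    from `start` within `max_range` steps. Tiles are (x, y) points."""
    shadows = {start: (0, None)}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        dist, _ = shadows[(x, y)]
        # expand out from whatever tile we are currently checking
        for nxt in ((x, y + 1), (x, y - 1), (x + 1, y), (x - 1, y)):
            if nxt in shadows or nxt not in walkable:
                continue  # already checked, or not a valid tile
            if dist + 1 <= max_range:
                # record the predecessor, forming the linked list
                shadows[nxt] = (dist + 1, (x, y))
                frontier.append(nxt)
    return shadows

def path_to(shadows, goal):
    """Walk the predecessor links back from `goal` to the start."""
    path = []
    tile = goal
    while tile is not None:
        path.append(tile)
        tile = shadows[tile][1]
    return list(reversed(path))
```

Because the fill is breadth-first, each tile is first reached at its minimum distance, so the recorded predecessor chain is always an optimal path.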

Below is the flow for a standard request, where a unit wants to see all of the valid moves available for a single ability. Abilities are referenced through a level of abstraction, the AbilityComponent, which provides a generic interface through which units can set, update, and activate any abilities they are equipped with.

Pathfinding request UML collaboration diagram
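As a rough illustration of that abstraction layer, here is a hypothetical Python sketch of such a component. All names here are made up; only the shapes of `SetCurrentAbility` (returns a bool) and `ActivateWithCallback` (reports a cooldown when done) are mirrored from the C# snippets elsewhere on this page.

```python
class AbilityComponent:
    """Hypothetical sketch of a generic ability interface.

    Units never touch ability objects directly; they index into
    whatever abilities they are equipped with through this component.
    """
    def __init__(self, abilities):
        self.abilities = list(abilities)   # e.g. [("slash", 1.5), ...]
        self.current = None

    def set_current_ability(self, index):
        # mirrors SetCurrentAbility(i): False if the slot is invalid
        if 0 <= index < len(self.abilities):
            self.current = self.abilities[index]
            return True
        return False

    def activate_with_callback(self, on_finished):
        # mirrors ActivateWithCallback: run the ability, then hand
        # the cooldown duration to the caller's callback
        name, cooldown = self.current
        # ...play animation, apply effects...
        on_finished(cooldown)
```

The payoff is that units and states only ever deal with slot indices and callbacks, so equipping a different ability set never changes the calling code.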

State System

Overview

Taking inspiration from here, I implemented a state pattern for some of the more… stateful logic of the game. This pattern restricts what kinds of logic or input govern an actor at a given point in time. For example, when a unit is idle in a strategy game, the player can click on it to select it, but cannot issue commands to that unit without selecting it first. The idle state and the selected state can each be their own class, encapsulating all the commands and limitations of that state.
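A toy Python sketch of that idle/selected example (class and command names are invented for illustration; the real C# classes appear below). Each `handle_input` returns either `None` to stay in the current state or a new state object to transition to:

```python
class UnitState:
    """Base state: handle_input returns a new state, or None to stay."""
    def enter(self):
        pass
    def handle_input(self, command):
        return None

class IdleState(UnitState):
    """Only responds to being selected; all other input is ignored."""
    def handle_input(self, command):
        if command == "select":
            return SelectedState()
        return None

class SelectedState(UnitState):
    """Accepts commands; deselecting returns the unit to idle."""
    def handle_input(self, command):
        if command == "deselect":
            return IdleState()
        return None

class Unit:
    """Owns the current state and runs the transition loop."""
    def __init__(self):
        self.state = IdleState()
    def update(self, command):
        new_state = self.state.handle_input(command)
        if new_state is not None:
            self.state = new_state
            self.state.enter()
```

Because invalid commands simply return `None`, "what this unit can do right now" is defined entirely by which state class is currently installed.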

UnitState Classes

I implemented a state pattern to handle similar logic for both my playable character and AI units. I would have liked to use a single UnitState class, but the needs of the two were so different that I ended up splitting AI and player state into two separate child classes. Below is a class diagram depicting the parent UnitState and a child, PlayerUpdateState.

UnitState UML class diagram

There really isn't much boilerplate, which makes this pattern simple to implement. And as you can see, because each child doesn't need to extend much logic, each state class can stay fairly specialized, handling whatever it needs to with events or a few added functions.

Flow

Every Unit contains a very simple update loop shown below:

// Unit.cs
protected virtual void Update () {
    ...
    // if we returned a new state switch to it
    UnitState state = UnitState?.HandleInput (controller);
    if (state == null)
        return;

    // if we're in a new state, call the enter method before any HandleInput calls
    UnitState = state;
    UnitState.Enter ();
}

Every frame we call the HandleInput() method on our UnitState property. This function returns either null, in which case we do nothing, or a new UnitState, in which case we point our Unit's property at the new object. The beauty of this loop is that each state manages its own transitions.

Below is a concrete implementation of this, the PlayerIdle state:

// PlayerIdleState.cs
public override UnitState HandleInput (Controller controller) {
    // define our list of acceptable input for this frame and state
    bool[] pressed = new bool[] {
        controller.DetectInputFor (ControlTypes.ABILITY_ONE),
        controller.DetectInputFor (ControlTypes.ABILITY_TWO),
        controller.DetectInputFor (ControlTypes.ABILITY_THREE),
    };

    // loop through them and see if any have been pressed...
    for (int i = 0; i < pressed.Length; i++) {
        if (!pressed[i])
            continue;

        // transition to the next state with that data
        if (abilityComponent.SetCurrentAbility (i)) {
            onAbilitySet (Owner, i);
            return new PlayerPrepState (Owner);
        }
    }

    return null;
}

The entire class is fairly small, just 30 or so lines total, which is another benefit of this pattern: it breaks larger systems into manageable chunks. In this game's context, the state flow ends up looking like this:

The complete flow diagram of the UnitState system.

I love the elegance and simplicity of this pattern. It turns complex systems like handling player input from a web of if statements to a series of self-contained, flexible classes.

For states which rely on external systems, like the ActingState, we use a callback to notify the class when it should return a CooldownState object rather than null.

//PlayerActingState.cs
...
UnitState state;
...

public override void Enter () {
    // activate the selected ability
    abilityComponent.ActivateWithCallback ((cooldownDuration) => this.UpdateState (cooldownDuration));
    ...
}

// stay in ActingState (return null) until the ability finishes and UpdateState is called
public override UnitState HandleInput (Controller controller) { return state; }

// transition to next state
public void UpdateState (float cooldownDuration) { state = new PlayerCooldownState (Owner, cooldownDuration); }

AreaState Classes

I found this pattern useful in other cases as well, and applied it to the Area class itself:

// SetupState.cs
public override void Enter () {
    ...

    // Initialize resources and handle loading...
    InitializeResources (entrances, min, max);
    SetPlayerPosition (entrances, min, max);
    SetPlayerData ();
    FinishLoading ();
}

public void AdvanceAreaState () {
    this.shouldAdvanceState = true;
}

public override AreaState HandleUpdate () {
    if (shouldAdvanceState) return new ActiveState (area, tracker);

    return null;
}

Because the class doesn't respond to input, I moved all of the logic into the Enter() method and used an event system to trigger a state change based on the player's actions. Below is the Enter() method of the PlayerActing state; it advances the area's state to ActiveState the first time the player commits an action.

//PlayerActingState.cs
public override void Enter () {
    // call the ability's start method
    abilityComponent.ActivateWithCallback ((cooldownDuration) => this.UpdateState (cooldownDuration));
    if (Owner is Hero) {
        EventQueue.AddEvent (new AreaStateChangeEventArgs (Owner, null, AreaStateTypes.Active));
    }
}

We do this to delay all AI actions until the player has acted, helping the player adjust to the new area without being immediately rushed down by enemies. With a state-based system in the area, this is easily achieved. Below is the corresponding receiver of the AreaStateChangeEventArgs:

// Area.cs
public void HandleIncomingEvent (InfoEventArgs curEvent) {
    switch (curEvent.type.eventType) {
        case EventTypes.StateChangeEvent:
            if (state is SetupState) {
                SetupState curState = (SetupState) state;
                curState.AdvanceAreaState ();
            }
            break;
        default:
            break;
    }
}

And the enter method of the active state:

// ActiveState.cs
public override void Enter () {
    // hero is initialized on the setup state, we initialize every other type here
    var enemies = area.Board.Units.Where (unit => unit.Value.TypeReference != UnitTypes.HERO);

    foreach (var unit in enemies) {
        area.Board.UnitFactory.ActivateEnemyAt (unit.Key);
    }
}

Conclusion

By implementing classic design patterns, I was able to manage complexity and create a scalable prototype for a unique blend of genres. I'd recommend the state and strategy patterns where applicable; they helped me keep a lean codebase for the traditionally messy tasks of handling player input and re-using a specialized algorithm.

Return

HALEF

Some work done for ETS to model, texture, rig, and animate 3D models.

Cave

Made in a short time for a self-study with Unreal Engine in 2018. Thanks to Unreal and user altermind on the Unreal forums for the free assets.

Bike

A bike from a post-apocalyptic future. Made with Blender and Quixel, imported into Unity3D -- 2016.

Japanese Garden

Made using Blender, Photoshop, and Unity3D -- 2016.

Abandoned Highway

Made in collaboration with a classmate. I was responsible for the scene composition, terrain, gas station, and highway. Made using Blender, Photoshop, Quixel, and Unity3D -- 3 weeks, 2016.

About

I'm a coder, designer, and creative with a love for games and media. I spend my time creating new experiences and exploring what's already out there. I like to go hiking, and from time to time I'll learn a new song to play on my Electromatic. You can find my github here.

I'm currently working as a Growth Engineer at Expo. I get to work with amazing people and help create exciting new products and technologies that enable a wide audience to develop mobile apps.

Previously, I worked as a Software Developer and Researcher for Educational Testing Service in San Francisco. I was hired into the role after completing my MA in Game Design (2018) at American University in Washington, DC. During my time in DC, I helped run local game events (SAAM Arcade, GGJ @ AU, P3M5 @ the Goethe Institute) and served as deputy chair of the local IGDA chapter (DC) until I moved in the spring of 2018. I earned my BA in Philosophy from SUNY Purchase in 2013.

Cheers!

Contact

Email

Resume