Diversity, Inclusion, and Representation in the Games Industry

The Issue

Header note: this is a refactor of a university-based research task that I believe is important to consider when developing a game. I am by no means an expert on this topic; however, everyone plays a part in facilitating a safe space for all people to comfortably share their heritage, identity, and agency. In the indie games industry, we have the power to change the stigma and accurately represent the core values of any person. Stand united and strong – be loving and kind. Spread knowledge, squash ignorance, and be a voice for those who want it.

While it is improving slowly, there is a lack of diversity and inclusion in the video games industry (Dealessandri, 2020). This often means an overemphasis on white male protagonists (Wirtz, 2021), the use of women as narrative rewards, often through sexual objectification (Liu, 2018), and the absence of accurate representation of other races.

The video to the right has a very interesting way of describing misrepresentation due to observing other cultures through a fixed lens, stating that collaboration and communication is the only way to accurately represent one another.

“A Creative Skillset report shows that BAME (black, Asian and minority ethnic) industry representation stood at 4% in 2015, down from 4.7% the previous year. This is lower than the UK average of 10% and significantly lower than the London average of 40% (2011 Census data). Considering that 37% of the UK industry is located in London, this highlights the level of under-representation for ethnic minority groups.” (Ramanan, 2017).

By comparing the data provided in the UK report by The Guardian with the IGDA statistics (below), we can make an educated assumption that there is a severe lack of diversity in the international games industry. “61% of the population was white, 18% was Hispanic, 13% was black, and 6% was Asian.” (Wirtz, 2021). It is clear the problem is improving slowly, as the 4% diversity reported in the 2017 Guardian article sits at a rough average of 12% in the 2021 article by Game Designing.


Potential Benefits of More Diversity – Mentally and Financially

This improvement, while encouraging, is still not good enough. There needs to be a heavy emphasis on inclusion to prevent misrepresentation and to encourage further progress.

As a result of misrepresentation, “The same long-term effects of depression, detachment, disengagement, low self-worth are present as outcomes, as you would see in every day, daily racism”. (University of Saskatchewan, 2018).

With this in mind, ensuring diversity in the workforce allows games to include more cultures with accurate representations, helping to improve the self-esteem of minorities (Wirtz, 2021).

Not only does the accurate representation of minority groups help with mental wellbeing, but it can also help the games industry in terms of profit and exposure. By being more inclusive, games can represent global cultures in a way those cultures can relate to, increasing potential global exposure (Dealessandri, 2020). More exposure means more profit. Inclusion is not only the ethically correct way to progress but also a financially profitable investment for the business.

If a project targets many cultures, a complicated process of localisation, censorship, and respect for cultural restrictions will need to be considered for each instance of the project. This is another topic altogether; however, I feel it is an important factor to at least touch on. An example of this is the Chinese censorship of morality, where players should not have an option to be good or bad. Regulators believe that games should represent “the correct set of morals”, otherwise they cannot exist in their media streams (Kerr, 2021). While China is notorious for media restrictions, these kinds of cultural differences can only be understood if proper inclusion and collaboration with the relevant communities are achieved.


Solutions provided

While I touched on some solutions above, here is a list of further solutions provided by the articles I have read.

  1. Maximise global reach as a result of multicultural inclusion, “So what we need to try and do is maximize the global reach of our games by considering the expectations of a much broader demographic. To better anticipate what those expectations are, we need to have companies that are more diverse and have people from different backgrounds.” | “they were working in a system that was biased and that he wasn’t taking responsibility. So he just decided as the CEO: we’re going to change this. And he made that happen. And really that’s all it takes. It’s a matter of will, making a decision that this is going to be important to us, and we’re going to make this a priority.” (Dealessandri, 2020).
  2. Create Support groups that actively push for change in representation, “Girls Who Code is a group ‘founded with a single mission: to close the gender gap in technology’. They’ve already reached nearly 90,000 girls from every state in the U.S. That’s some serious progress. Reshma, the founder, writes ‘We’ve reached a moment unmatched in our history, a moment as full of anger and anguish as it is promise and potential. Women and girls across the country are coming together to correct centuries-long power imbalances across lines of gender, race, sexuality, and more.’ Girls Who Code offers after-school club programs, summer campus programs, and longer summer immersion programs.” (Wirtz, 2021).
  3. Provide more opportunity for computer science in early education, “Peter Kemp, senior lecturer and head of the research project underlines the importance of early access and uptake of computing. ‘The GCSE will naturally lead into the A-Level and also into degree level because not all places will offer computing at A-Level. So, if you don’t get the GCSE intake right, then you’re going to see a very skewed intake into computing careers because of that.’” (Ramanan, 2017).

When Rivers Were Trails – Case Study

There are already games that give Indigenous communities autonomy over their own representation, and this is something I could become involved in to help push accurate representation. The examples provided all seem to be based in the US (Beer, 2020); however, we can still draw inspiration from them.

One example of this is When Rivers Were Trails by Elizabeth LaPensée, trailer to the right. In order to accurately represent the community LaPensée was targeting, they ensured that the narrative was written by over 20 Indigenous writers, all of whom were from the tribes located along the trail the narrative is set on (Beer, 2020).

Furthermore, the design process involved in the development cycle included the Indigenous community to ensure accurate musical and artistic representation. “LaPensée’s goal isn’t just to facilitate Indigenous representation — she also wants to facilitate Indigenous self-determination. This means not just getting Indigenous people involved in games, but giving them meaningful control.” (Beer, 2020). This goes to show that the best way to ensure accurate representation is to include team members who come from the culture you are trying to represent.


What we can do

In regards to what we can do to reduce the minority gap in the games industry in Australia, we can potentially reach out to local Indigenous communities in order to represent them within our projects. While you may not currently be working on projects that represent anyone in particular, I believe this would be a fantastic inclusion to strive for. The PDFs below outline how to get in contact with local communities, how to communicate with them respectfully, and provide resources to help better understand the communities before reaching out. “Through the unique immersive interactivity it offers, gaming enables Indigenous people to share and reflect on their experiences in a culture that generally distorts or silences them.” (Beer, 2020).


Reference List

Beer, M. (2020). The next chapter of Indigenous representation in video games. Retrieved from https://www.polygon.com/features/2020/2/25/21150973/indigenous-representation-in-video-games

Dealessandri, M. (2020). What’s wrong with the games industry, and how to fix it. Retrieved from https://www.gamesindustry.biz/articles/2020-09-04-whats-wrong-with-the-games-industry-and-how-to-fix-it#section-2

Kerr, C. (2021). Chinese regulators warn devs over depictions of morality, gender, and history. Retrieved from https://www.gamedeveloper.com/business/chinese-regulators-warn-devs-over-depictions-of-morality-gender-and-history

Liu, J. (2018). Gender sexualization in digital games: Exploring female character changes in Tomb Raider (Order No. 10975189). Available from Publicly Available Content Database. (2167142408). Retrieved from https://saeezproxy.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/gender-sexualization-digital-games-exploring/docview/2167142408/se-2

Ramanan, C. (2017). The video game industry has a diversity problem – but it can be fixed. Retrieved from https://www.theguardian.com/technology/2017/mar/15/video-game-industry-diversity-problem-women-non-white-people

University of Saskatchewan. (2018). Negative Effect from Lack of Diversity in Video Games. Retrieved from https://www.cs.usask.ca/news/2018/negative-effect-from-lack-of-diversity-in-video-games.php

Wirtz, B. (2021). The Issue of Diversity in Gaming & Changes the Game Industry Is Making To Address It. Retrieved from https://www.gamedesigning.org/gaming/diversity/

Modular Code Architecture

Why should I consider Modular Code?

After working on Stellar Winds (a university final project) for over 6 months, I have managed to build an intense amount of technical debt as a result of my own negligence, in the form of spaghetti code. If you’re reading this then maybe you can avoid the same mistakes that I have made by considering modular code.

Unity offers a very easy way to remove instance reference dependencies and facilitates a space for modular architecture to shine. In this blog, we will explore what Modular Code Architecture is within the context of Unity, how to design your systems in a scalable and reusable way, and define some limitations of the architecture.

If you want access to the source code, check out my Patreon where you can get it for just $1. It takes a lot of time to research and put these together so I’d appreciate the support, thanks!  


What is Modular Code?

Put simply, it is code that does one very specific thing really well. It is code that can be used by anything in any given context. Think of your classes as specialists: we wouldn’t want a plumber installing our roof, just as we wouldn’t want a level designer programming our AI.

Don’t worry, we’ll be going over some practical examples that you can pull from and use. By the end of this blog, you’ll have a great understanding of the concept and be able to do it yourself.

“Modular code architecture (enables) code to be split into many independent packages that are easily developed”

(Tayar, 2021).

First things first, make a plan

I get that you want to jump straight in and get it done. Programming and game dev is really fun, but that’s what I did with Stellar Winds and as a result, its codebase is brittle. If you jump straight in without a plan you’ll end up building insane technical debt and waste a lot of time on “unforeseen circumstances.” Just look at this health class that handles shields and health, ridiculous.

Depending on how you currently work or were previously taught, you may have been required to write a Technical Specifications Document. It’s pretty commonly disliked amongst my peers, including myself, however, in the context of modular code, it is the most important step.

Tech Spec 1

Let’s explore my process using an abstracted health system as an example. First, write out everything the SYSTEM should do. Refer to Tech Spec 1.

Next, critically analyse this system: what individual elements can you identify within it, and what is its core function? Simply put, its core function is to track progress towards the ‘fail’ state. It also needs audio, particle, camera shake, UI updates, and game event triggers.

One thing you may notice is that each element can be separated into two categories: Function and Feedback. With this outlook, we can then define every CLASS that we will need to create in order to develop this system. While planning these out, remember to specify a specialty for each class. Refer to Tech Spec 2.

Tech Spec 2

Finally, if you are more of a visual person like me, you can put together a flow chart of sorts that clearly defines the connections and each class’s specialty. I use Figma for this, but a great alternative is draw.io, which can be connected to Google Drive.


Unity Events

We will be primarily using UnityEvents in order to make everything work and reduce the overall code we have to write, so let’s explore what we can do with them for clarity’s sake. For starters, we can call public methods from a class within the inspector. We can also pass strings and floats if a method has a parameter. Note that only methods with a single parameter can be used like this.

We can also send dynamic variables by declaring the events a little differently. This is important to understand as we will be using it frequently. You write them as follows:

// Declaration – make it public (or [SerializeField]) so it shows up in the inspector
public UnityEvent<float, int, bool> eventToCall;

// Invoking
eventToCall.Invoke(0.2f, 1, true);

Now when we hook up a class that has a method with one of the same parameters, a ‘dynamic’ option will appear in the response of the UnityEvent. Neat, right? Note that you can still call normal methods when you’ve marked an event as dynamic; however, you still need to give Invoke() a value when invoking the event. That pretty much sums up UnityEvents – if there’s anything I may have missed, let me know.
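For example, hooking a UnityEvent<float> up to any class that exposes a public method with a single float parameter will surface that method under the dynamic section. Here’s a minimal, hypothetical receiver – the class and field names are purely illustrative:

using UnityEngine;

public class VolumeReceiver : MonoBehaviour
{
    [SerializeField] private AudioSource source;

    // Single float parameter, so it can receive the dynamic value from a UnityEvent<float>
    public void SetVolume(float value)
    {
        source.volume = value;
    }
}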


Execute your bad habits

Now that we have a plan written out and understand how UnityEvents work, we can very easily program all of our classes. Let’s begin with the Health class: we’re simply going to have a float to represent health and three events.

using UnityEngine;
using UnityEngine.Events;

public class Health : MonoBehaviour
{
    [Header("Values")]
    [Range(0, 1)] public float health;

    [Header("Responses")]
    public UnityEvent<float> onDamaged;
    public UnityEvent<float> onHealed;
    public UnityEvent onDied;

    public void ProcessDamage(float dmg)
    {
        health -= dmg;
        if (health <= 0)
            OnDied();

        OnDamaged();
    }

    public void ProcessHeal(float amount)
    {
        health += amount;
        OnHealed();
    }

    private void OnDamaged() { onDamaged.Invoke(health); }
    private void OnHealed() { onHealed.Invoke(health); }
    private void OnDied() { onDied.Invoke(); }
}
Add ProcessDamage() and ProcessHeal() methods that accept a float parameter, then add three private methods that invoke each relative UnityEvent so it’s very clear what we’re doing. For now, I suggest either hooking up two simple inputs or using EasyButtons so that we can test it in isolation.

Now that we have our health class set up, it’s time to move on to the individual classes: Audio Controller, Animation Controller, Slider Controller, Camera Shake Controller, and Game Event Controller. These are all really straightforward, with the exception of the animation controller – so I won’t go into detail for each class. Instead, this is a perfect opportunity for you to do it yourself – using your technical specifications as your guide. Try to ensure that a single public method can be called to invoke the desired feedback.
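As an illustration, here’s a minimal sketch of what the Slider Controller could look like – one specialty and a single public entry point. The field layout is an assumption; UpdateSliderValue is the method that gets hooked up in the lists further down:

using UnityEngine;
using UnityEngine.UI;

public class SliderController : MonoBehaviour
{
    [SerializeField] private Slider slider;

    // Hooked up as the dynamic float response of onDamaged/onHealed
    public void UpdateSliderValue(float value)
    {
        slider.value = value;
    }
}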

Let’s look into the Animation Controller. As you may know, the Animator uses parameters for its state machine to function. With this in mind, you can create simple methods that take a string as a parameter to control the animator. We can invoke a Trigger, Set a bool, and Set a float value for blend trees.

animator.SetTrigger(parameter)
animator.SetBool(parameter, value)
animator.SetFloat(parameter, value)
// Fields assumed by the methods below
[SerializeField] private Animator animator;
[SerializeField] private List<string> parameterNames; // Animator parameter names (needs System.Collections.Generic)
private string targetBlendParam;

public void SetAnimationTrigger(string animationTrigger)
{
    animator.SetTrigger(GetParamName(animationTrigger));
}

public void SwitchAnimationBool(string animationTrigger)
{
    string param = GetParamName(animationTrigger);
    animator.SetBool(param, !animator.GetBool(param));
}

public void SetBlendTarget(string blendParam)
{
    targetBlendParam = blendParam;
}

public void SetBlendValue(float value)
{
    animator.SetFloat(targetBlendParam, value);
}

// The handler mentioned below – checks the requested name against the known parameter list
private string GetParamName(string requested)
{
    return parameterNames.Contains(requested) ? requested : string.Empty;
}

There are any number of ways to do this; however, the way I did it was to create a list of strings that act as identifiers for each of the parameters in the Animator. I then have four methods to interface with the Animator and a handler for checking the parameter name.

Pretty easy right? I just took inventory of what the animator can do and put together methods that facilitate UnityEvent interaction.

Now of course you could declare a UnityEvent that takes in both a string and float to skip the need to store a reference to the blend parameter – if that’s how you want to do it. However, I wanted to ensure that I didn’t have anything other than health variables in the health class. Again, it all depends on your system and the plan that you created.

If blend trees are a core function of your system then maybe this would be the way to go, however, I don’t mind the extra step of storing a reference to the target parameter. 

With the hardest class explained, you’ll be able to easily put together every other controller class without instruction. Give it a go before moving on.

// Declaration – the alternative mentioned above: send the blend parameter and the value together
string blendTreeParam = "objectScale";
float health;

public UnityEvent<string, float> onChangeBlendEvent;

// Invoking
public void ProcessDamage(float amount)
{
    health -= amount;
    onChangeBlendEvent.Invoke(blendTreeParam, health);
}

Putting it together

Right so now that we have all of our classes, let’s set up our player object. I have a very specific way I like to name and structure my objects (which I’ll make another post on in the future), so I’ll quickly put it together and show you where I have put each class.

Now, by referring to the first instance of our tech spec, we can clearly see what methods we need to call through our UnityEvents. So let’s quickly do that. Note: objects highlighted in red are objects that the UnityEvents reference.

On Damaged

  • Slider Controller – UpdateSliderValue (dynamic float)
  • Animation Controller – SetAnimationTrigger(Damage Anim param)
  • Animation Controller – SetBlendTarget(Blend param)
  • Animation Controller – SetBlendValue (dynamic float)
  • Audio Controller – Play
  • Particle System – Play
  • Camera Shake – LightShake

On Healed

  • Slider Controller – UpdateSliderValue (dynamic float)
  • Audio Controller – Play
  • Animation Controller – SetBlendTarget(Blend param)
  • Animation Controller – SetBlendValue (dynamic float)

On Died

  • Game Event Controller – Trigger

Notice how quick that was? Now if we don’t have an audio source for some reason, don’t worry – the core of your system will still work. This is because we separated everything into modular parts that don’t need a lot of messy instance references. You can also save this as a prefab, calling it “Object With Health” or something like that, knowing full well that everything will work straight out of the box. You can then adapt it to an enemy, follower, destructible box – literally anything that has health. The power shines here with the destructible box example: we don’t want UI to show health on a box, so we just remove the UI controller. Everything still works, and you didn’t have to refactor a hardcoded system to remove the UI. Great!


Increasing Scope

So now we’ve managed to create a nicely organised modular system, in definite contrast to the spaghetti code you may have been writing up until now. It’s refreshing, isn’t it? However, overall, this class is much simpler than the health class in Stellar Winds (about 250 lines simpler). One thing to consider here is that Stellar Winds features shields and this example system does not. “Is that why it’s 250 lines shorter?” Aha, no… The next step is to scale the systems up further, so let’s start by adding shields.

Now you could just add another float, a set of UnityEvents, and their relative methods to the Health class and call it a day. However, that’s not very modular, is it? What if we wanted to add an ability that gave the player another temporary layer of protection in the form of an overshield? That means we need to add, again, another float, a set of UnityEvents, and their relative methods. Just as this paragraph is getting longer, so will your code.

Tech Spec 3

“Okay I get it, what do we do then?” I hear you ask. Well, this is where I remind you that it’s important to plan this stuff out in the technical specifications (as I have done – refer to Tech Spec 3), but I won’t get you to do that; instead, just follow these really simple steps.

First, refactor our current health controller class into a normal C# Serializable class called Stat. Change the float, all relative events, and methods from “health” to “stat.” We’ll also want to change “OnDied” to “OnStatZero” and “ProcessDamage/ProcessHeal” to “DecreaseStat/IncreaseStat” for clarity.

Let’s also add logic for recharging the stat over time, with a bool (or enum if you like) controlling whether it can recharge or not. Finally, put that logic into a Tick() method that we can call from a MonoBehaviour’s Update().
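Here’s a minimal sketch of what that Stat class could look like – the recharge rate and the event names other than onStatZero are my own assumptions, so adapt them to your plan:

using System;
using UnityEngine;
using UnityEngine.Events;

[Serializable]
public class Stat
{
    [Range(0, 1)] public float value = 1f;

    [Header("Recharge")]
    public bool canRecharge;
    public float rechargeRate = 0.1f; // value regained per second (example value)

    [Header("Responses")]
    public UnityEvent<float> onDecreased;
    public UnityEvent<float> onIncreased;
    public UnityEvent onStatZero;

    public void DecreaseStat(float amount)
    {
        value = Mathf.Max(0f, value - amount);
        onDecreased.Invoke(value);
        if (value <= 0f)
            onStatZero.Invoke();
    }

    public void IncreaseStat(float amount)
    {
        value = Mathf.Min(1f, value + amount);
        onIncreased.Invoke(value);
    }

    // Called from a MonoBehaviour's Update() so the stat can recharge over time
    public void Tick()
    {
        if (canRecharge && value < 1f)
            IncreaseStat(rechargeRate * Time.deltaTime);
    }
}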

Next, let’s make a new Health Controller class that has two Stat variables: one replaces our health float, and the other acts as our new shield. Just as before, we’ll need to add ProcessDamage() and ProcessHeal() methods. Let’s also add onDamaged and onDied UnityEvents so we have a higher level of control, and remember to call each stat’s Tick() method from Update().
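And a sketch of that new controller using the Stat class from above. Whether the shield soaks damage before health is an assumption on my part – that ordering is a design choice for your own spec:

using UnityEngine;
using UnityEngine.Events;

public class HealthController : MonoBehaviour
{
    [Header("Stats")]
    public Stat health;
    public Stat shield;

    [Header("Responses")]
    public UnityEvent<float> onDamaged;
    public UnityEvent onDied;

    private void Update()
    {
        health.Tick();
        shield.Tick();
    }

    public void ProcessDamage(float dmg)
    {
        // Assumption: the shield soaks damage first, the leftover hits health
        float leftover = Mathf.Max(0f, dmg - shield.value);
        shield.DecreaseStat(dmg);
        if (leftover > 0f)
            health.DecreaseStat(leftover);

        onDamaged.Invoke(health.value);
        if (health.value <= 0f)
            onDied.Invoke();
    }

    public void ProcessHeal(float amount)
    {
        health.IncreaseStat(amount);
    }
}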

Okay great, so now we’re pretty much back to where we were, except this time we have two stats to work with. The best thing is, we can add as many stats as makes sense. Setting this system up is exactly the same as before; however, now we have control over individual stat changes nested within the new Health class.

Now you may also be thinking, “But what if as you said, we want to add a temporary overshield? Would we have to declare a new stat for each temporary item we want to add? If so, would that not just be spaghetti code creeping back into our development patterns?” You can probably guess what I’ll say at this point. Did you plan ahead and allow for temporary stats? If the answer is yes then you probably didn’t ask that question.

Luckily, because of this modular architecture, it is really easy to add a list of stats for temporary buffs. These could dynamically change through play or you could replace your health and shield stats with a list of stats and use ints to identify each one. 
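As a rough sketch of that last idea, something this small inside the health controller would cover identifying stats by an int – purely illustrative:

// Inside the health controller (List<> requires System.Collections.Generic)
[SerializeField] private List<Stat> stats = new List<Stat>();

public void AddTemporaryStat(Stat buff) { stats.Add(buff); }

public void DecreaseStat(int index, float amount)
{
    if (index >= 0 && index < stats.Count)
        stats[index].DecreaseStat(amount);
}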

Now I couldn’t possibly answer every question you may have in a blog written before you read it, but hopefully, you can adopt a modular mindset and put together a system that fits your needs.


Advanced Simulated Health System

Now, as part of increasing the scale of our modular system, let’s explore a simulated health system. I will be using Fallout’s limb damage as an example here and focus purely on the legs.

I will again remind you that this is something that needs to be planned in advance, however, due to the holistic approach of this blog, we can explore how to do this without a solid plan.

Here’s a quick and dirty movement class that I wouldn’t recommend using in an actual project, but will work well for this example.

using UnityEngine;

public class MovementExample : MonoBehaviour
{
    public Vector2 speed = new Vector2(0.5f, 5f); // x = minimum speed, y = maximum speed
    float currentSpeed;

    private void Start()
    {
        SetSpeed(1);
    }

    private void Update()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
        transform.Translate(input * currentSpeed * Time.deltaTime);
    }

    // Expects a 0-1 percent and lerps between the minimum and maximum speed
    public void SetSpeed(float percent)
    {
        currentSpeed = Mathf.Lerp(speed.x, speed.y, percent);
    }
}

Now let’s create a new health class similar to the other, allowing for individual limb damage. Create a UnityEvent called onLimbsDamaged that uses a float as a parameter. This is going to send the percentage of total leg health to our movement class. The calculation looks like:

float percent = (leftLeg.value + rightLeg.value) / 2f;
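Pulled together, a limb health class along these lines could work – I’m reusing the Stat class from earlier, and the damage method names are placeholders:

using UnityEngine;
using UnityEngine.Events;

public class LimbHealth : MonoBehaviour
{
    public Stat leftLeg;
    public Stat rightLeg;

    [Header("Responses")]
    public UnityEvent<float> onLimbsDamaged; // sends the combined leg health percent

    public void DamageLeftLeg(float dmg)  { leftLeg.DecreaseStat(dmg);  SendLegPercent(); }
    public void DamageRightLeg(float dmg) { rightLeg.DecreaseStat(dmg); SendLegPercent(); }

    private void SendLegPercent()
    {
        float percent = (leftLeg.value + rightLeg.value) / 2f;
        onLimbsDamaged.Invoke(percent);
    }
}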

Next, let’s hook up the movement class’s SetSpeed() method to the onLimbsDamaged responses in the inspector. Let’s also set up some UI to represent limb damage.

Easy, now when each limb takes damage, the overall speed of the player is reduced linearly. You can make this as complex or as simple as you want, but notice how straightforward it is to create more advanced systems with this modular method.

You can easily isolate function and all feedback works as intended while you make changes to your existing systems. No more spaghetti code and no more wasted time.

I said I was going to discuss some of the limitations of this system and I will, however, this should be enough for you to go ahead and start planning out and executing your own systems. If you want to go ahead and do your thing then go for it, but there are some limitations to be aware of. 

Before I get into that though, if you want access to the source code, check out my Patreon where you can get it for just $1. It takes a lot of time to produce videos and write out blogs so if you can support me then I’ll be able to make more resources for you to use and I would be forever grateful. Thanks for your time, it means a lot.


Limitations, Considerations, and Possible Solutions

Limitation 1: As designers gain more control over game logic, programmers lose control. By moving all logic into UnityEvents, you effectively make instance references redundant in some cases.

Potential Solution: Clearly define what systems REQUIRE programming to be the main form of control while allowing designers to interface with it in a similar and easy fashion.

Limitation 2: If you are experiencing a logic error, it could potentially take more time to dig through your nested prefabs to find where the error is located.

Potential Solution: Take advantage of the modularity and test everything in isolation before marking the asset as ready. Create prefabs that have preset feedback and create variations to preserve the original working copy.

Limitation 3: You cannot interface with static classes, or with manager classes that are instantiated at run-time, the way you can with the feedback classes. This is because inspector-configured UnityEvents need an instance reference that exists in the scene.

Potential Solution: Use a Game Event system similar to or exactly the same as defined in this video. I have created a controller class for this in the package on Patreon or you could make your own.
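For reference, a bare-bones version of that kind of Game Event – a simplified sketch in the spirit of the linked video, not the exact contents of the package:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Events;

[CreateAssetMenu(menuName = "Events/Game Event")]
public class GameEvent : ScriptableObject
{
    private readonly List<GameEventListener> listeners = new List<GameEventListener>();

    // Anything with a reference to this asset can raise it, no scene reference needed
    public void Raise()
    {
        for (int i = listeners.Count - 1; i >= 0; i--)
            listeners[i].OnEventRaised();
    }

    public void Register(GameEventListener listener)   { listeners.Add(listener); }
    public void Unregister(GameEventListener listener) { listeners.Remove(listener); }
}

public class GameEventListener : MonoBehaviour
{
    public GameEvent gameEvent;
    public UnityEvent response;

    private void OnEnable()  { gameEvent.Register(this); }
    private void OnDisable() { gameEvent.Unregister(this); }

    public void OnEventRaised() { response.Invoke(); }
}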

There may be more that I haven’t come across yet, but if you keep these in mind, I am positive that you’ll have a great system that is easy to debug, expand, and maintain. Thanks again!


FWD Level Design – Unity 3D Custom Level Editor Tool

What is this tool?

Custom Unity3D Automated Level Editor and Generation Process

In this blog, I will be going over a tool that I made that turned a roughly 2 and ½ hour process into a 1-hour process, how and why I created it, some of the issues I came across, and some final thoughts. To begin with, though, let me give you a brief description of the tool I created…

I have managed to remove the need to painstakingly scale and place individual objects in order to create levels. I did this by creating classes that use an image to instantiate 3D objects and generate optimised meshes, create and save NavMesh data and scenes automatically, and set up my back-end level selection (seen right).


What is FWD?

You’re probably reading this and thinking, “okay that’s a pretty cool tool you got there magic man, but what the h*ll is FWD?”

FWD – short for ‘forward’ – is a mobile game that was made for a university brief at SAE Institute, Brisbane, in 6 weeks by one person. I see potential in the game as a polished portfolio piece and a commercially viable product, and I am actively working on its completion.

To the right is the (rushed and placeholder) gameplay trailer that should give you a good enough grasp of the concept. I am aiming to have the product commercially available on Android by the end of January 2022, with future updates already planned.

Anyway, enough about the game and let’s get back to why you’re actually here…


How did the tool come about? First Iteration

Unity3D tedious scaling and repositioning.

To begin with, I was creating my levels by spawning in default cubes, scaling them, and positioning them – manually. For anyone who has used Unity 3D before, you will be able to empathise with me when I say that this was an extremely tedious process of repetitive tweaking.

As a response to this, I knew I had to make something that would speed up my process significantly. In retrospect, I could have used Unity’s ProBuilder tool to rapidly create my levels; however, there are some very specific requirements that need to be met for my use case – we’ll get into these later.

This segues into the first iteration of my custom level editor, which was far from perfect; however, it was a step in the right direction.

Unity 3D custom level editor tool first iteration

This iteration of the tool required me to have a class called WallBuilder attached to an object. This class had a custom inspector with buttons that allowed me to instantiate a cube in the forward, left, right, and backward directions. In addition to this, I included an offset and count variable that allowed me to easily create a specific array of blocks – which I used to create the map boundaries.

These functions were also mapped to the Numpad so I could press shortcut keys (8, 4, 6, 2) for a more intuitive interface. I also had an undo and clear button, which would simply either remove the last instantiated cube from the list or clear the entire list.

And now you’re saying, “that’s cool and all, but what’s wrong with it?”

Well, here’s the source code – download it, give it a whirl, and tell me what you notice… Can’t be bothered? Don’t blame you, I’ll just tell you in list form to save both of us time. I’m sure there are a number of smaller things that were also a problem; however, these were the main issues that frustrated me.

  • The wall instantiation directions (8, 4, 6, 2) are in global space.
    • This means scene camera positioning can cause user confusion, increasing cognitive load.
  • Walls can spawn inside of each other.
    • This increases rendering overhead and causes z-fighting.
  • The wall instance reference was not persistent.
    • This meant that duplicating the WallBuilder object or reloading the scene would cause any new additions to start at the object’s origin.
  • You could not use CTRL+Z to undo a change.
    • So you better hope you don’t accidentally press the clear button (I did).

While this list of issues may not seem all that bad in the grand scheme of things, there is one flaw in this process that a few of you keen developers may have already spotted. Each instantiated cube has its own instance of a transform, collider, mesh renderer, mesh filter, and wall component. Now multiply that by about 100-300 to make up just the walls of the level and you’ve got yourself a huge overhead of unnecessary processing.

CPU Usage with old system

The most obvious process that I would like to bring to your attention, is the rendering. Each wall has 6 faces (12 tris) that are being rendered, with 1-3 of them never being seen by the player.

These extra faces are using an unnecessary amount of the device CPU. Now, on a computer, this may not be that big of a deal, however, on a lower-end mobile device, it’s the difference between 30 and 2 fps. The provided screenshot of Unity’s profiler shows how drastic this is.

Without reference, this may not mean much to you, however, let’s just say I managed to get more than a 65% increase in CPU performance with my newer tool – we’ll do a comparison when we get to the final iteration…


Like a two-faced friend, we need to cut them out. Second Iteration

Minecraft rendering

Have you ever gone into spectator mode in Minecraft and flown around under the world? Or what about accidentally falling through the world in literally any game ever made?

If you have, one thing you would have noticed is that the objects are completely transparent on the other side. This is to address the exact issue that I was experiencing here. SAVE. CPU. PERFORMANCE.

With this in mind, I have taken a page out of game development 101 and changed the way I was building the levels. (Image source)

So how do we make levels more efficiently and mobile friendly?

Firstly, I took a step back and did a little research into different level editing processes. One video that I found by Brackeys (surprise) featured a ‘Colour to Prefab’ system. Simply put, the code would read an image and instantiate an object based on the colour of each pixel it read. I thought this was brilliant and, as my levels are all on the same Y axis, I could simply lay out the level in image editing software from a bird’s-eye perspective and generate it in Unity.

Image for level editing

This is a much more efficient process as being able to draw the level is easier than trying to place objects in 3D space. Not only that, but because it’s using colour to instantiate objects, I can create levels on the fly without Unity. (Anyone have a spare laptop they’d like to donate?)

So, all I have to do is create a handful of wall mesh objects in Blender that only render specific faces. Then I create a look-up dictionary of sorts to specify a colour that is associated with the correct orientation of the optimised wall.
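The lookup itself can be as simple as a list of colour–prefab pairs. Here’s a rough sketch of the idea – the asset and field names are illustrative rather than my actual implementation:

using System;
using UnityEngine;

[CreateAssetMenu(menuName = "Level/Colour To Prefab Lookup")]
public class ColourToPrefabLookup : ScriptableObject
{
    [Serializable]
    public struct Entry
    {
        public Color colour;      // pixel colour in the level image
        public GameObject prefab; // wall prefab with only the needed faces
    }

    public Entry[] entries;

    public GameObject GetPrefab(Color pixel)
    {
        foreach (var entry in entries)
            if (entry.colour == pixel) // exact match; a tolerance check may be safer in practice
                return entry.prefab;
        return null;
    }
}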

Great! Problem solved right? – Not quite…

While, yes, there are fewer faces being rendered, there still remains the extra overhead of individual transforms, colliders, etc. Not only that, but it requires an excessive cognitive load to remember which colour is associated with which orientation, greatly slowing down the level creation process. Thankfully, there is always a better way, and I looked to the internet to understand how mesh generation works in Unity.


Arrays, Vectors, Triangles, mish-mash-mesh generation. Final Iteration

After reading through the Unity Documentation on mesh generation and experimenting with it for a little while, I felt out of my depth. As someone who is nearly finished with their Bachelor of Games DESIGN, this was (for me) a programming challenge like no other. Thankfully, though, one of the skills we are taught is cognitive outsourcing, and that I am comfortable with.

I hear you, “so you’re saying you didn’t program it from scratch?”

No, of course not – that would be a waste of energy and time, considering there are free examples out there. Not to mention I’m already over my time budget for this week’s development and I haven’t even made a dent in the long list of features that need polishing. Don’t get me wrong though, it’s not like there was a solution that did everything I needed it to do straight out of the box. I still needed to understand what was going on and how it was working in order to implement a consistent and flawless system (okay, small designer rant over).

I found a resource online that was endeavouring to remake Minecraft within Unity. This was accompanied by a YouTube tutorial series that explained everything in a relatively easy to understand way. I specifically used the first two videos to understand how to generate mesh and render only what needed to be rendered.

Instead of writing everything that I did to make it work in detail (I did and it was boring to read), here’s a GIF and a list of things that the new system generates.

  • Reads through two images.
    • One for object position, the other for object rotation.
  • Sets level (chunk) size to the image dimensions.
  • Generates floor mesh.
  • Generates wall mesh where black pixels are present.
  • Renders only the faces that will be seen.
  • Uses the generated mesh as a collider reference.
  • Uses a colour look-up Scriptable Object to instantiate objects.
  • Applies a Level Theme material set to the floor and wall mesh renderers.
  • Bakes the Navmesh for AI.
Level generation
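To give a flavour of the image-reading and face-culling points in the list above, here’s a heavily simplified sketch of reading the position image into a grid and deciding which faces are worth generating. Names are illustrative, the texture needs Read/Write enabled, and the real tool builds the combined mesh from this data:

using UnityEngine;

public static class LevelGridSketch
{
    // Black pixels mark walls (see the list above)
    public static bool[,] ReadGrid(Texture2D positionMap)
    {
        var solid = new bool[positionMap.width, positionMap.height];
        for (int x = 0; x < positionMap.width; x++)
            for (int y = 0; y < positionMap.height; y++)
                solid[x, y] = positionMap.GetPixel(x, y) == Color.black;
        return solid;
    }

    // A wall face is only worth generating if the neighbouring cell is empty or out of bounds
    public static bool IsFaceVisible(bool[,] solid, int x, int y, int dx, int dy)
    {
        int nx = x + dx, ny = y + dy;
        bool outOfBounds = nx < 0 || ny < 0 || nx >= solid.GetLength(0) || ny >= solid.GetLength(1);
        return outOfBounds || !solid[nx, ny];
    }
}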

Fantastic! But what’s the difference in performance now?

If you get as excited as I do to compare numbers, then I’m sure you’ve been looking forward to seeing the improvement. Now, I’m no software engineer and I’m not exactly sure what numbers I should be going for or what everything actually means; however, there is a clear improvement. It is also worth noting that I am hyper-focusing purely on the rendering. One thing I noticed is that while the level generator script was active in the scene, it would increase the script overhead by about 0.2 – 0.4ms. (The profiler was used on the exact same level layout.)

  • CPU difference: 2.29ms faster
  • Dynamic Batch Triangles difference: 300 fewer
  • Dynamic Batch Vertices difference: 202 fewer
  • Static Batch Triangles difference: 2.2k fewer
  • Static Batch Vertices difference: 53.2k fewer
Old performance
Old Level Layout
Improved Performance
New Level Generator

Efficiency is the lock and automation is the key. Process Automation

Unity3D Custom Level Editor Window

Now that I have an optimised and efficient level generator, the next logical step is to put it in a package that allows me to set up levels without thinking. I did this simply by creating a Scene Generator script (yes lots of generation) and a custom Unity Editor Window.

Put simply, the Scene Generator creates a new scene, instantiates all of the necessary prefabs, generates the level, saves the scene, and sets up all my Scriptable Objects. I also included the option to create a new Level Theme Scriptable Object directly within this editor window, just for efficiency… That’s all I have to say, you get the point.
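For anyone curious, the scene-creation side boils down to a few editor API calls. A stripped-back sketch (the prefab array and save path here are placeholders, not my actual setup) looks something like this:

#if UNITY_EDITOR
using UnityEditor;
using UnityEditor.SceneManagement;
using UnityEngine;

public static class SceneGeneratorSketch
{
    public static void CreateLevelScene(string levelName, GameObject[] requiredPrefabs)
    {
        // Create and switch to a fresh, empty scene
        var scene = EditorSceneManager.NewScene(NewSceneSetup.EmptyScene, NewSceneMode.Single);

        // Drop in the prefabs every level needs (player, level generator, UI, etc.)
        foreach (var prefab in requiredPrefabs)
            PrefabUtility.InstantiatePrefab(prefab, scene);

        // Save the scene so it can be added to the build and the level select data
        EditorSceneManager.SaveScene(scene, $"Assets/Scenes/Levels/{levelName}.unity");
    }
}
#endif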


Final Thoughts

First of all, if you’re still here, thank you for your time. If you liked what you read, why not check out some of my social media accounts linked below. I plan on being much more active in releasing Game Development content, so if you’re interested in supporting me and following along with my projects, please consider subscribing and following.

I have learnt a lot while working on this tool, and while it may not seem like much to some people, it is a massive achievement for me. Completing this with only my toolbag of skills and research capabilities is something I am proud of, and I spent many hours doing it. With this in mind, I am working on creating a Patreon page where I will be posting my source code for useful tools and smaller prototypes for those who would like to use or contribute to them. If supporting me is something you would be interested in doing, get in contact with me and I’ll move quicker in setting everything up.

Again, thank you for your time and I hope you’re having/have had a fantastic day!

In Stability

Brief / Overview

InStability was a solo project made in 3 weeks for a university brief. The brief required us to create an interactive autobiographical experience based on a chosen aspect of our lives. I chose to explore the internal struggle of finding balance and how it affects the world around me. After going through a handful of iterations, I began to take inspiration from “The Beginner’s Guide” by Davey Wreden. The player is tasked with interacting with objects in the environment, which alters a stability level that directly controls the narrative’s direction.

I would consider this project a failure as it did not live up to the expectations I had for it. However, I learned a valuable lesson about project scope and the importance of understanding the core of an experience. Through this self-proclaimed failure, I have been able to walk away with a greater understanding of my own capabilities and where I need to improve.

Main Learning Outcomes

Over-scoping and Focus Shift

The original concept was overambitious for a handful of reasons. There were supposed to be two different environments featuring upwards of 40 unique furniture assets, a time and date system, and a range of audiovisual effects. The plan was to source all models; however, I underestimated how long it would take to find and import each one. This, combined with the plethora of systems and documentation, proved to be an impossible feat for a one-person team in 3 weeks.

In the last week of development, I realised that I would not be able to finish everything in time. After consulting my facilitators, I better understood the core of the experience, and the project underwent a focus shift. Instead of fleshing out all of the desired effects and changing environments, I decided to create a narrative outlining my thoughts on over-indulging and overworking.

I’ve learnt that a solid understanding of what the core of the project is correlates directly with its success. It’s important to create multiple milestones for the project that encompass differing levels of complexity and importance. An example of this can be seen in the diagram below.

Feature Overview with Estimates

Another extremely important lesson I learnt is to understand the team’s capabilities and limitations and factor them into the project scope. Originally, I looked at scope as only the final output of the project; however, the team working on the project should be considered and factored in, as they are the input. For example, you can’t expect an artist with some programming experience to output a complex AI behaviour with over 30 unique animations in one week. Of course this example is extreme, but as humans we can only do so much work, and it’s important to understand this.

The importance of asking for help

As someone who is mostly self-taught, I have found it particularly difficult to swallow my pride and ask for help. This became painfully obvious in this project, as I spent a lot of time debugging back-end systems and tools for the player activities. I am a hard-working person who is very determined and therefore did not want to bother my ‘higher-ups.’ I let my ego get in the way of progress and the project suffered because of it.

This experience has humbled me, and I learnt the value of taking advantage of the opportunities afforded to me. I learnt how to ask for help and that it is important to do so when I become stuck, for both my own sanity and the credibility of the project.

I’ve found that I can do a handful of things when I get stuck in order to progress. First, I can recognise that I am stuck and take a break; most of the time a solution can be found in a cup of coffee. Secondly, I can move on to another task to continue making progress on the project, giving myself time to subconsciously come up with solutions. Finally, I can ask for help. I believe that finding solutions on your own is a great way to learn and feel accomplished; however, if the task is time-sensitive or the first two steps didn’t work, I need to request assistance.

Hellscape Learning Outcomes

Brief/Overview

Hellscape was developed by a team of 3 first-year SAE students over 12 weeks and was released September 02, 2020. Hellscape is a Luftrausers-inspired roguelike game, filled with environmental hazards and demons armed to the teeth with attacks unique to each enemy type. Your primary objective is to eliminate the hivemind Demon Brain; survival is optional…

This brief was an introduction to teamwork, mixing designers and programmers together in order to familiarise ourselves with the processes involved in collaborative work. This was the first project I had worked on in a team setting, and it facilitated a scenario similar to that of studio production.

Contributions and Intention

Team Position

I took the role of team leader within this project, ensuring that work was being completed up to standard and on time. I was in charge of project-specific deadlines and encouraging good work, and I spent a large amount of time developing the game alongside my team. I worked on a range of different features and visuals throughout the project, including AI, UI, weapons, modular vehicle systems, objective systems, and so on. Due to the large amount of work I did, I will only cover the features and systems that taught me important lessons in game design and project workflow.

AI

I created two unique AI, a standard enemy type that shoots out X shaped lasers (Fire Sigil) and a boss enemy type that switches between three separate attacks (Demon Brain).

The Fire Sigil is the second enemy type the player will encounter and will always spawn in pairs. The pairs are important for this enemy type, as the X-shaped lasers are designed to trap the player and limit their movement. One Sigil is easy to escape; however, two on either side of the player prove challenging.

While the Fire Sigil is moving, it will keep up with the player no matter their speed, making it the hardest enemy to escape. Once they fire, however, they will become immobile and easy to target. A more skilled player will be able to easily predict when they will fire and just as easily counter their attacks.

Fire Sigil Attack

The Demon Brain was the final enemy and proves the hardest for the player to combat. The first two of the Demon Brain attacks are similar to the standard enemy types in the game. This was done intentionally as the player would have already developed techniques to counter these types of attacks. By tweaking them slightly, the Demon Brain encounter facilitates a challenge for the player to test their mastery.

The attacks that the Demon Brain can use increase as its health decreases. This gives the player time to learn each attack and acts as a subtle indicator of its health. The attacks are also weighted so that the more powerful ones are used less frequently, to give the player a chance. Each attack has a unique charge-up sound effect, animation, and particle effect to clearly inform the player which attack is coming next.

The first attack is a fireball barrage that shoots 2-3 oversized fireballs in the direction of the player. This attack is similar to the third enemy type the player encounters (Fallen Angels). The second is a laser that quickly follows the player wherever they go, similar to the Fire Sigil’s. The final attack pulls in any nearby objects before blasting them away and temporarily disabling control. This attack is unique to the Demon Brain and is unforgiving, as the player has a high chance of landing in the lava below.

Demon Brain Attacks

Game Systems

I worked on a Modular Vehicle System that used Scriptable Objects (SOs) to store the unique audio-visual representation and functionality of the weapon, body, and thruster parts. By using SOs, I was able to easily create variants of parts within the project files and add them to a global list. The Player Controller and any relevant UI would read from this list and update dynamically. The below screenshots show the SO and UI counterparts.
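To give a rough idea of the shape of those assets, a weapon part SO might have looked something like this (field names are illustrative rather than pulled from the project):

using UnityEngine;

[CreateAssetMenu(menuName = "Parts/Weapon Part")]
public class WeaponPartSO : ScriptableObject
{
    [Header("Audio-visual representation")]
    public Sprite icon;
    public GameObject partPrefab;
    public AudioClip fireSound;

    [Header("Functionality")]
    public float damage;
    public float fireRate;
    public float projectileSpeed;
}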

Weapons

I created two unique weapons with the intention of giving the player a chance to try different play styles. While creating these, it was important to ensure that each weapon’s damage, speed, and attack radius (if applicable) were balanced. Each weapon strove to be different, not more powerful.

The second weapon the player can use (Target Painter) shoots three projectiles, two small warheads and a large penetrating marker. The marker would set any enemy it hit as a target and the other two would rapidly chase it, exploding on impact.

I designed this weapon to give players a chance to attack from longer ranges, supporting the sniper/mid-range playstyle. The player needs to have good aim, as it is easy to miss the target; however, if a target is hit, the warheads deal enough damage to take it out in 2 hits. This allows the player to strategically take out enemies from a safer distance.

Target Painter

The third weapon the player can use (Grav Bomb) sticks to any hostile it hits and pulls in nearby enemies, exploding after a few seconds.

I designed this weapon to support the rusher playstyle. Due to the slower projectile speed, it is easier to quickly fly by and attach the bomb to an enemy, letting the gravitational pull do the rest. It also allows the player to group enemies together and deal devastating AOE damage.

Grav Bomb

Other Features

  • Missions and Unlocks
  • Score Tracker
  • Part Functionality and Passive Effects
  • Persistent Data
  • Boundaries and Player Redirection
  • Environmental Hazards
  • All UI / UX
  • Background Art
  • Range of SFX
  • Range of Particle Systems

Main Learning Outcomes

Leadership Skills

During the course of the project, I was able to learn and exercise a range of leadership skills. I used Discord as the main platform for communication with my team and Hackn’Plan to track and assign tasks.

Throughout the 12 weeks, I was consistently involved with the project and communicative with each team member. On multiple occasions, I would hold one on one meetings with each member in order to assist them with any task that they needed help with.

Through direct instruction from my facilitator, I was able to apply techniques that encouraged consistent work and open communication. These techniques included holding mid-week meetings, showcasing progress, giving deadlines for each task, providing and updating priority lists, and providing assistance when required.

Communication Example

Scriptable Object Architecture

In a research activity that I had to undertake while working on the project, I came across Scriptable Object (SO) Architecture. As discussed previously, this workflow was used for the Modular Part System and Mission Tracking System, and it has proved valuable in later projects as well. I watched the below GDC talk by Ryan Hipple and was able to apply the concept to this project.

Below you can find my research document, summarising the points in the above video and providing examples on how I can apply what I have learnt.

AI Development

While I improved in a vast range of skills, AI programming and design was the area I improved in the most. Before creating the Fire Sigil and Demon Brain (detailed above), I did a good deal of research into AI. Through my research, I familiarised myself with a couple of concepts: Goal-Oriented Action Planning (GOAP) and Combat Behaviour / Racial Personalities.

I used a range of resources in my research, however, the main two pieces of literature that had the most impact on my findings can be seen below.

A talk covering Combat Behaviour / Racial Personalities by Bungie employees, Chris Butcher and Jaime Griesemer: The Illusion of Intelligence.

A talk covering GOAP by an AI Programmer at Monolith Productions, Peter Higley.

Below is a quick overview of the theory I learnt

  • AI should be predictable
  • AI should communicate their state to the player using ‘barks’
  • AI should interact with the environment
  • Allow the player to do anything the AI can
  • AI should react to the player’s actions
  • AI are worth remembering if they have meaningful goals / personalities
  • Allow AI personality to drive behaviour

With both AI, I applied some of this theory, mainly predictability and ‘barks.’ By using SFX, animations, and particle systems, I was able to ensure that the player could always predict when the enemy would attack. The time between the start of the ‘bark’ and the output of the attack would always remain the same, giving the player ample opportunity to react.

I mentioned previously that the Fire Sigils work best in pairs, as they can trap the player between their lasers. In hindsight, I had an opportunity to add Racial Personality to these enemy types. If I designed them to work as pairs rather than individuals, they would always attempt to trap the player, and if one of the pair is defeated, the other could go berserk.

By doing this, the Fire Sigils would have a unique dynamic and reaction to the player’s actions. This in turn would make them more interesting and memorable. Through analysis of other AI and my own work, as demonstrated here, I aim to constantly improve and develop my skills.

Finally, I learnt about Finite State Machines (FSM) and how to apply them in C# and the Unity3D engine. I used Unity’s Finite State Machine Tutorial to introduce me to the practical side of this concept, applying it to the attack class of the Demon Brain.
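As a simple illustration of the pattern (not the actual Demon Brain code), the attack logic can be boiled down to states that each return the next state every tick; names and timings here are placeholders:

using UnityEngine;

public abstract class BossState
{
    public abstract BossState Tick(BossController boss);
}

public class ChargeUpState : BossState
{
    private float timer;

    public override BossState Tick(BossController boss)
    {
        timer += Time.deltaTime;
        // Play the 'bark' (SFX, animation, particles) here while charging
        if (timer >= boss.chargeTime)
            return new AttackState();
        return this;
    }
}

public class AttackState : BossState
{
    public override BossState Tick(BossController boss)
    {
        boss.PerformWeightedAttack(); // pick an attack weighted by remaining health
        return new ChargeUpState();
    }
}

public class BossController : MonoBehaviour
{
    public float chargeTime = 1.5f;
    private BossState state = new ChargeUpState();

    private void Update() { state = state.Tick(this); }

    public void PerformWeightedAttack() { /* choose and fire an attack here */ }
}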

Crank UI Redesign

Brief / Overview

I was tasked with re-designing the UI/UX for an existing incremental game called Crank, designing it as a mobile application within Figma. The original UI/UX has no visual interest and a generally confusing layout, which gave me space to think about good UI/UX practices. I completed this task to a high standard within 4 weeks.

Main Learning Outcomes

Using Inspiration while Concepting

While drawing concepts on paper, I found that taking inspiration from other artworks and concepts was imperative to creating visually appealing designs. While the layout of the background and UI elements was completely original, I needed some form of existing example to guide my overarching design. The below images were used as the main inspiration. Please note, I do not own these images and they are being used solely as reference.

By engaging in this process, I am able to find specific designs that work well and adapt them into something new. You can see some elements of the artwork I used as inspiration within the drawn concepts below. Examples include the curvature of the spaceship, the diagonal points on the screens, and the form of the UI buttons.

Prototyping

Much like greyboxing, prototyping UX flow is imperative to a great final design. I found that I was able to focus purely on how the user would be interacting with the elements on screen. My main focus was considering where the user’s hand would be and placing interactive UI elements close to the user’s thumb. This can be clearly seen in the final design, with most of the buttons at or below the midpoint of the screen.

After passing my original design off for others to test, I was able to iterate upon the prototype shown below. The main change I had to make was spreading the navigation buttons (bottom right) across the whole width of the screen. I also needed to double their size, as they were too small and challenging to tap.

Interactive Layout

— Click here if the interactive layout does not display —

Tesseract Level Design

Brief / Overview

I was tasked with creating a Capture the Flag (CTF) map using the open-source game engine Tesseract for a university brief. This project was completed to a high standard within 6 weeks. I had to engage in extensive documentation, concept sketches, modelling, playtesting, and iteration.

The level I chose to create was set on a plane mid-flight, where one team has breached the hull and the other team is defending important files, until the inevitable crash…

Due to the limitations of the Tesseract Engine, I couldn’t emulate a crash or any particular objectives, so the actual gameplay is more of a traditional CTF experience.

Main Learning Outcomes

Greyboxing

Greyboxing is one of the most important steps in the development cycle, as it allows you to test level flow and functionality before polishing sections that may be cut. By using placeholder assets or simple geometry, you can quickly set up the concept and ultimately save time if aspects need to change.

I found this to be useful while developing my level, as testing revealed map imbalances that favoured one team over the other. Some imbalances included spawn camping and unfair sight-lines favouring red team. For a more in-depth breakdown, refer to the reflection video found at the end.

Importance of Lighting

Lighting can add a lot to a scene, and I found that it is important to place lights in logical places. In the image comparison below, you can see the difference between one large light source and a handful of purposefully placed lights.

It’s a subtle change overall; however, it adds much more depth and realism to the level. It allows for shadow casting that more closely emulates real life, helping the player immerse themselves in the environment. If light sources are placed with no purpose, the player will notice, potentially taking them out of the experience.

Lights can also be used as a level design tool. By making lights vary in brightness, colour and placement, they can potentially be used as guides. Lighting areas of interest can lead the player off the beaten path to allow for exploration, or even as the main source of level progression and guidance. You can see how I have incorporated this theory in the below ‘after’ image, highlighting an underground passage.

Move the slider left or right.

Environmental Story Telling

Through directed learning, I applied environmental storytelling within the level in the form of destruction. In the screenshots below you can see the forced entry into the level, which shows that one team is breaching the plane.

Environmental storytelling can come in many forms, from decals of bullet holes resembling a past firefight to a scrape on the ground signifying the frequent movement of furniture. Understanding this is important, as it allows me to add visual interest to levels, letting the player immerse themselves in the environment as they piece together past events that could have happened.

I can also use this technique to potentially show the player game mechanics without using a tutorial pop-up screen. While I could not do this due to the limitations of the Tesseract engine, an example can explain my understanding. Picture a narrow hallway in an old abandoned mansion: the player is walking forward and notices a large crack in the wooden floor ahead. Suddenly a raccoon skitters past them and the floor collapses. In this scenario, the player now knows that some areas are unstable and that they should look out for large cracks, finding alternative routes when necessary.

Video Reflection

An in-depth reflection on my process from start to finish