Modular Code Architecture

Why should I consider Modular Code?

After working on Stellar Winds (a university final project) for over 6 months, I have managed to build up an immense amount of technical debt, in the form of spaghetti code, as a result of my own negligence. If you’re reading this, then maybe you can avoid the mistakes I made by considering modular code.

Unity offers a very easy way to remove instance reference dependencies and provides a space for modular architecture to shine. In this blog, we will explore what Modular Code Architecture is within the context of Unity, how to design your systems in a scalable and reusable way, and some limitations of the architecture.

If you want access to the source code, check out my Patreon, where you can get it for just $1. It takes a lot of time to research and put these together, so I’d appreciate the support. Thanks!


What is Modular Code?

Put simply, it is code that does one very specific thing really well; code that can be used by anything in any given context. Think of your classes as specialists: we wouldn’t want a plumber to install our roof any more than we’d want a level designer to program our AI.

Don’t worry, we’ll be going over some practical examples that you can pull from and use. By the end of this blog, you’ll have a great understanding of the concept and be able to do it yourself.

“Modular code architecture (enables) code to be split into many independent packages that are easily developed”

(Tayar, 2021).

First things first, make a plan

I get that you want to jump straight in and get it done. Programming and game dev are really fun, but that’s what I did with Stellar Winds, and as a result its codebase is brittle. If you jump straight in without a plan, you’ll end up building insane technical debt and wasting a lot of time on “unforeseen circumstances.” Just look at this health class that handles shields and health. Ridiculous.

Depending on how you currently work or were previously taught, you may have been required to write a Technical Specifications Document. It’s pretty commonly disliked amongst my peers, myself included; however, in the context of modular code, it is the most important step.

Tech Spec 1

Let’s explore my process using an abstracted health system as an example. First, write out everything the SYSTEM should do. Refer to Tech Spec 1.

Next, critically analyse this system: what individual elements can you identify within it, and what is its core function? Put simply, its core function is to track the progress to the ‘fail’ state. It also needs audio, particles, camera shake, UI updates, and game event triggers.

One thing you may notice is that each element falls into one of two categories: Function and Feedback. With this outlook, we can then define every CLASS that we will need to create in order to develop this system. Remember, while planning these out, to specify a specialty for each class. Refer to Tech Spec 2.

Tech Spec 2

Finally, if you are more of a visual person like me, you can put together a flow chart of sorts that clearly defines the connections and each class’s specialty. I use Figma for this, but a great alternative is draw.io, which can be connected to Google Drive.


Unity Events

We will be primarily using UnityEvents to make everything work and reduce the overall code we have to write, so let’s explore what we can do with them for clarity’s sake. For starters, we can call public methods from a class within the inspector. We can also pass strings and floats if a method has a parameter. Note that only methods with a single parameter can be assigned like this.
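
For example, any public method with a single parameter of a supported type can be wired up in the inspector with a fixed value. Here’s a hypothetical feedback class purely for illustration:

using UnityEngine;

public class ExampleFeedback : MonoBehaviour
{
    // One parameter of a supported type (string, float, int, bool, etc.),
    // so it can be assigned as a UnityEvent response with a fixed value.
    public void PlaySound(string clipName)
    {
        Debug.Log("Playing " + clipName);
    }
}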

We can also send dynamic variables by declaring the events a little differently. This is important to understand as we will be using it frequently. You write them as follows:

// Declaration (public so it shows in the inspector)
public UnityEvent<float, int, bool> eventToCall;

// Invoking
eventToCall.Invoke(0.2f, 1, true);

Now, when we hook up a class that has a method with one of the same parameters, a ‘dynamic’ option will appear in the response of the UnityEvent. Neat, right? Note that you can still call normal methods when you’ve marked an event as dynamic; however, you still need to give Invoke() a value when invoking the event. That pretty much sums up UnityEvents; if there’s anything I may have missed, let me know.


Execute your bad habits

Now that we have a plan written out and understand how UnityEvents work, we can very easily program all of our classes. Let’s begin with the Health Controller class: we’re simply going to have a float to represent health and three events.

using UnityEngine;
using UnityEngine.Events;

public class Health : MonoBehaviour
{
    [Header("Values")]
    [Range(0, 1)] public float health;

    [Header("Responses")]
    public UnityEvent<float> onDamaged;
    public UnityEvent<float> onHealed;
    public UnityEvent onDied;

    public void ProcessDamage(float dmg)
    {
        health -= dmg;
        if (health <= 0)
            OnDied();

        OnDamaged();
    }

    public void ProcessHeal(float amount)
    {
        health += amount;
        OnHealed();
    }

    private void OnDamaged() { onDamaged.Invoke(health); }
    private void OnHealed() { onHealed.Invoke(health); }
    private void OnDied() { onDied.Invoke(); }
}

Now add ProcessDamage() and ProcessHeal() methods that accept a float parameter. Then, add three methods that invoke each relative UnityEvent, so it’s very clear what we’re doing. For now, I suggest either hooking up two simple inputs or using EasyButtons so that we can test it in isolation.

Now that we have our health class set up, it’s time to move on to the individual classes: Audio Controller, Animation Controller, Slider Controller, Camera Shake Controller, and Game Event Controller. These are all really straightforward, with the exception of the animation controller – so I won’t go into detail for each class. Instead, this is a perfect opportunity for you to do it yourself – using your technical specifications as your guide. Try to ensure that a single public method can be called to invoke the desired feedback.
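
To give you a starting point, here’s a minimal sketch of the Slider Controller, assuming the UpdateSliderValue name that appears in the wiring lists further down. The single public method is the whole interface:

using UnityEngine;
using UnityEngine.UI;

public class SliderController : MonoBehaviour
{
    [SerializeField] private Slider slider;

    // Hooked up to the dynamic float of onDamaged/onHealed in the inspector.
    public void UpdateSliderValue(float value)
    {
        slider.value = value;
    }
}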

Let’s look into the Animation Controller. As you may know, the Animator uses parameters for its state machine to function. With this in mind, you can create simple methods that take a string as a parameter to control the Animator. We can invoke a trigger, set a bool, and set a float value for blend trees:

animator.SetTrigger(param);
animator.SetBool(param, value);
animator.SetFloat(param, value);
using UnityEngine;

public class AnimationController : MonoBehaviour
{
    [SerializeField] private Animator animator;
    [SerializeField] private string[] animatorParams; // identifiers for each Animator parameter
    private string targetBlendParam;

    public void SetAnimationTrigger(string animationTrigger)
    {
        animator.SetTrigger(GetParamName(animationTrigger));
    }

    public void SwitchAnimationBool(string animationTrigger)
    {
        string param = GetParamName(animationTrigger);
        animator.SetBool(param, !animator.GetBool(param));
    }

    public void SetBlendTarget(string blendParam)
    {
        targetBlendParam = blendParam;
    }

    public void SetBlendValue(float value)
    {
        animator.SetFloat(targetBlendParam, value);
    }

    // Handler that checks the requested name against the identifier list.
    private string GetParamName(string requested)
    {
        if (System.Array.IndexOf(animatorParams, requested) < 0)
            Debug.LogWarning("Animator parameter '" + requested + "' is not registered.");
        return requested;
    }
}

There are any number of ways to do this; how I did it was to create a list of strings that act as identifiers for each of the parameters in the Animator. I then have four methods to interface with the Animator and a handler for checking the parameter name.

Pretty easy right? I just took inventory of what the animator can do and put together methods that facilitate UnityEvent interaction.

Now of course you could declare a UnityEvent that takes in both a string and float to skip the need to store a reference to the blend parameter – if that’s how you want to do it. However, I wanted to ensure that I didn’t have anything other than health variables in the health class. Again, it all depends on your system and the plan that you created.

If blend trees are a core function of your system then maybe this would be the way to go, however, I don’t mind the extra step of storing a reference to the target parameter. 

With the hardest class explained, you’ll be able to easily put together every other controller class without instruction. Give it a go before moving on. For reference, here’s what that string-and-float event from earlier would look like:

// Declaration
string blendTreeParam = "objectScale";
float health;

public UnityEvent<string, float> onChangeBlendEvent;

// Invoking
public void ProcessDamage(float amount)
{
    health -= amount;
    onChangeBlendEvent.Invoke(blendTreeParam, health);
}

Putting it together

Right so now that we have all of our classes, let’s set up our player object. I have a very specific way I like to name and structure my objects (which I’ll make another post on in the future), so I’ll quickly put it together and show you where I have put each class.

Now, by referring to the first instance of our tech spec, we can clearly see what methods we need to call through our UnityEvents. So let’s quickly do that. Note: objects highlighted in red are objects that the UnityEvents reference.

On Damaged

  • Slider Controller – UpdateSliderValue (dynamic float)
  • Animation Controller – SetAnimationTrigger(Damage Anim param)
  • Animation Controller – SetBlendTarget(Blend param)
  • Animation Controller – SetBlendValue (dynamic float)
  • Audio Controller – Play
  • Particle System – Play
  • Camera Shake – LightShake

On Healed

  • Slider Controller – UpdateSliderValue (dynamic float)
  • Audio Controller – Play
  • Animation Controller – SetBlendTarget(Blend param)
  • Animation Controller – SetBlendValue (dynamic float)

On Died

  • Game Event Controller – Trigger

Notice how quick that was? And if we don’t have an audio source for some reason, don’t worry: the core of your system will still work. This is because we separated everything into modular parts that don’t need a lot of messy instance references. You can also save this as a prefab called “Object With Health,” or something like that, knowing full well that everything will work straight out of the box. You can then adapt it to an enemy, a follower, a destructible box – literally anything that has health. The power really shines with the destructible box example: we don’t want UI showing health on a box, so we just remove the UI controller. Everything still works, and you didn’t have to refactor a hardcoded system to remove the UI. Great!


Increasing Scope

So now we’ve managed to create a nicely organised modular system, in stark contrast to the spaghetti code you may have been writing up until now. It’s refreshing, isn’t it? However, this class is much simpler than the health class in Stellar Winds (about 250 lines simpler). One thing to consider here is that Stellar Winds features shields and this example system does not. “Is that why it’s 250 lines shorter?” Haha, no… The next step is to scale the system up further, so let’s start by adding shields.

Now, you could just add another float, a set of UnityEvents, and their relative methods to the Health class and call it a day. However, that’s not very modular, is it? What if we wanted to add an ability that gave the player another temporary layer of protection in the form of an overshield? That means we need to add, again, another float, a set of UnityEvents, and their relative methods. Just as this paragraph is getting longer, so will your code.

Tech Spec 3

“Okay, I get it, what do we do then?” I hear you ask. Well, this is where I remind you that it’s important to plan this stuff out in the technical specifications (as I have done – refer to Tech Spec 3), but I won’t make you do that; instead, just follow these really simple steps.

First, refactor our current health controller class into a plain C# Serializable class called Stat. Change the float, all relative events, and methods from “health” to “stat.” We’ll also want to change “OnDied” to “OnStatZero” and “ProcessDamage/ProcessHeal” to “DecreaseStat/IncreaseStat” for clarity.

Let’s also add logic for recharging the stat over time with a bool (or enum if you like) controlling whether it can recharge or not. Finally, put the logic into a Tick() method that we can call from another update method.
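
Here’s a minimal sketch of that Stat class. The field and event names are my best guess at the renames described above, so adjust them to match your own spec:

using UnityEngine;
using UnityEngine.Events;

[System.Serializable]
public class Stat
{
    [Range(0, 1)] public float value;

    public bool canRecharge;
    public float rechargeRate = 0.1f; // assumed: value regained per second

    public UnityEvent<float> onDecreased;
    public UnityEvent<float> onIncreased;
    public UnityEvent onStatZero;

    public void DecreaseStat(float amount)
    {
        value -= amount;
        if (value <= 0)
            OnStatZero();

        onDecreased.Invoke(value);
    }

    public void IncreaseStat(float amount)
    {
        value = Mathf.Min(value + amount, 1f);
        onIncreased.Invoke(value);
    }

    // Called from another class's Update() so the stat can recharge over time.
    public void Tick()
    {
        if (canRecharge && value < 1f)
            IncreaseStat(rechargeRate * Time.deltaTime);
    }

    private void OnStatZero() { onStatZero.Invoke(); }
}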

Next, let’s make a new Health Controller class that has two Stat variables: one replaces our health float and the other acts as our new shield. Just as before, we’ll need to add ProcessDamage() and ProcessHeal() methods. Let’s also add onDamaged and onDied UnityEvents so we have a higher level of control, and remember to call each stat’s Tick() method from Update().
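
As a sketch, that new controller might look like the below. How damage is routed between the shield and health is an assumption on my part, so adapt it to your plan:

using UnityEngine;
using UnityEngine.Events;

public class HealthController : MonoBehaviour
{
    public Stat health;
    public Stat shield;

    public UnityEvent<float> onDamaged;
    public UnityEvent onDied;

    private void Awake()
    {
        // Forward the health stat hitting zero to the higher-level event.
        health.onStatZero.AddListener(onDied.Invoke);
    }

    private void Update()
    {
        health.Tick();
        shield.Tick();
    }

    public void ProcessDamage(float dmg)
    {
        // Assumed routing: drain the shield first, then health.
        if (shield.value > 0)
            shield.DecreaseStat(dmg);
        else
            health.DecreaseStat(dmg);

        onDamaged.Invoke(health.value);
    }

    public void ProcessHeal(float amount)
    {
        health.IncreaseStat(amount);
    }
}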

Okay, great, so now we’re pretty much back to where we were, except this time we have two stats to work with. The best thing is, we can add as many as makes sense. Setting this system up is exactly the same as the previous one; however, now we have control over individual stat changes nested within the new Health class.

Now you may also be thinking, “But what if as you said, we want to add a temporary overshield? Would we have to declare a new stat for each temporary item we want to add? If so, would that not just be spaghetti code creeping back into our development patterns?” You can probably guess what I’ll say at this point. Did you plan ahead and allow for temporary stats? If the answer is yes then you probably didn’t ask that question.

Luckily, because of this modular architecture, it is really easy to add a list of stats for temporary buffs. These could dynamically change through play or you could replace your health and shield stats with a list of stats and use ints to identify each one. 
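
As a quick sketch of that list variant (the int-index scheme is just one option, and this fragment assumes using System.Collections.Generic;):

// Inside the Health Controller: replace the named stats with a list and
// address each one by index, e.g. 0 = health, 1 = shield, 2+ = temporary buffs.
public List<Stat> stats = new List<Stat>();

public void DecreaseStat(int statIndex, float amount)
{
    stats[statIndex].DecreaseStat(amount);
}

public void AddTemporaryStat(Stat overshield)
{
    stats.Add(overshield);
}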

Now I couldn’t possibly answer every question you may have in a blog written before you read it, but hopefully, you can adopt a modular mindset and put together a system that fits your needs.


Advanced Simulated Health System

Now, as part of increasing the scale of our modular system, let’s explore a simulated health system. I will be using Fallout’s limb damage as an example here, focusing purely on the legs.

I will again remind you that this is something that needs to be planned in advance, however, due to the holistic approach of this blog, we can explore how to do this without a solid plan.

Here’s a quick and dirty movement class that I wouldn’t recommend using in an actual project, but will work well for this example.

using UnityEngine;

public class MovementExample : MonoBehaviour
{
    // x = minimum speed, y = maximum speed.
    public Vector2 speed = new Vector2(0.5f, 5f);
    float currentSpeed;

    private void Start()
    {
        SetSpeed(1);
    }

    private void Update()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
        transform.Translate(input * currentSpeed * Time.deltaTime);
    }

    // Percent is 0-1 and lerps between the minimum and maximum speed.
    public void SetSpeed(float percent)
    {
        currentSpeed = Mathf.Lerp(speed.x, speed.y, percent);
    }
}

Now let’s create a new health class, similar to the first, that allows for individual limb damage. Create a UnityEvent called onLimbsDamaged that takes a float as a parameter. This is going to send the percentage of total leg health to our movement class. That looks like:

float percent = (leftLeg.value + rightLeg.value) / 2f;
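
Pulling that together, here’s a minimal sketch of the limb health class, assuming each leg reuses the Stat class from earlier:

using UnityEngine;
using UnityEngine.Events;

public class LimbHealth : MonoBehaviour
{
    public Stat leftLeg;
    public Stat rightLeg;

    public UnityEvent<float> onLimbsDamaged;

    public void DamageLeftLeg(float dmg)  { leftLeg.DecreaseStat(dmg);  OnLimbsDamaged(); }
    public void DamageRightLeg(float dmg) { rightLeg.DecreaseStat(dmg); OnLimbsDamaged(); }

    private void OnLimbsDamaged()
    {
        // Send the percentage of total leg health to the movement class.
        onLimbsDamaged.Invoke((leftLeg.value + rightLeg.value) / 2f);
    }
}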

Next, let’s hook up the movement class’s SetSpeed() method to the onLimbsDamaged responses in the inspector. Let’s also set up some UI to represent limb damage.

Easy, now when each limb takes damage, the overall speed of the player is reduced linearly. You can make this as complex or as simple as you want, but notice how straightforward it is to create more advanced systems with this modular method.

You can easily isolate function and all feedback works as intended while you make changes to your existing systems. No more spaghetti code and no more wasted time.

I said I was going to discuss some of the limitations of this system, and I will; however, this should be enough for you to go ahead and start planning out and executing your own systems. If you want to go and do your own thing, then go for it, but there are some limitations to be aware of.

Before I get into that, though: if you want access to the source code, check out my Patreon, where you can get it for just $1. It takes a lot of time to produce videos and write these blogs, so if you can support me, I’ll be able to make more resources for you to use, and I would be forever grateful. Thanks for your time, it means a lot.


Limitations, Considerations, and Possible Solutions

Limitation 1: As designers gain more control over game logic, programmers lose control. By moving all logic into Unity Events, you effectively make instance references useless in some cases.

Potential Solution: Clearly define what systems REQUIRE programming to be the main form of control while allowing designers to interface with it in a similar and easy fashion.

Limitation 2: If you are experiencing a logic error, it could potentially take more time to dig through your nested prefabs to find where the error is located.

Potential Solution: Take advantage of the modularity and test everything in isolation before marking the asset as ready. Create prefabs that have preset feedback and create variations to preserve the original working copy.

Limitation 3: You cannot interface with static classes, or with manager classes that are instantiated at run-time, the way you can with feedback classes. This is because UnityEvent responses need an instance reference.

Potential Solution: Use a Game Event system similar to or exactly the same as defined in this video. I have created a controller class for this in the package on Patreon or you could make your own.
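
If you do make your own, here’s a hedged sketch of the idea: a ScriptableObject event asset exists at edit time, so anything (including objects instantiated at run-time) can register against it without a scene reference. This is a minimal version in the spirit of that approach, not the Patreon package:

using System.Collections.Generic;
using UnityEngine;

[CreateAssetMenu(menuName = "Events/Game Event")]
public class GameEvent : ScriptableObject
{
    private readonly List<System.Action> listeners = new List<System.Action>();

    public void Raise()
    {
        // Iterate backwards so listeners can safely unregister during Raise().
        for (int i = listeners.Count - 1; i >= 0; i--)
            listeners[i].Invoke();
    }

    public void Register(System.Action listener)   { listeners.Add(listener); }
    public void Unregister(System.Action listener) { listeners.Remove(listener); }
}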

There may be more that I haven’t come across yet, but if you keep these in mind, I am positive that you’ll have a great system that is easy to debug, expand, and maintain. Thanks again!


FWD Level Design – Unity 3D Custom Level Editor Tool

What is this tool?

Custom Unity3D Automated Level Editor and Generation Process

In this blog, I will be going over a tool I made that turned a roughly two-and-a-half-hour process into a one-hour process: how and why I created it, some of the issues I came across, and some final thoughts. To begin with, though, let me give you a brief description of the tool I created…

I have managed to remove the need to painstakingly scale and place individual objects in order to create levels. I did this by creating classes that use an image to instantiate 3D objects and generate an optimised mesh, create and save NavMesh data and scenes automatically, and set up my back-end level selection. (Seen right)


What is FWD?

You’re probably reading this and thinking, “okay that’s a pretty cool tool you got there magic man, but what the h*ll is FWD?”

FWD – short for ‘forward’ – is a mobile game made in 6 weeks, by one person, for a university brief at SAE Institute, Brisbane. I see potential in the game as both a polished portfolio piece and a commercially viable product, and I am actively working on its completion.

To the right is the (rushed and placeholder) gameplay trailer, which should give you a good enough grasp of the concept. I am aiming to have the product commercially available on Android by the end of January 2022, with future updates already planned.

Anyway, enough about the game and let’s get back to why you’re actually here…


How did the tool come about? First Iteration

Unity3D tedious scaling and repositioning.

To begin with, I was creating my levels by spawning in default cubes, scaling them, and positioning them – manually. For anyone who has used Unity 3D before, you will be able to empathise with me when I say that this was an extremely tedious process of repetitive tweaking.

As a response to this, I knew I had to make something that would speed up my process significantly. In retrospect, I could have used Unity’s ProBuilder tool to rapidly create my levels; however, there are some very specific requirements that need to be met for my use case – we’ll get into these later.

This segues into the first iteration of my custom level editor, which was far from perfect but was a step in the right direction.

Unity 3D custom level editor tool first iteration

This iteration of the tool required me to have a class called WallBuilder attached to an object. This class had a custom inspector with buttons that allowed me to instantiate a cube in the forward, left, right, and backward directions. In addition to this, I included offset and count variables that allowed me to easily create a specific array of blocks – which I used to create the map boundaries.

These functions were also mapped to the Numpad so I could press shortcut keys (8, 4, 6, 2) for a more intuitive interface. I also had an undo and a clear button, which would either remove the last instantiated cube from the list or clear the entire list.

And now you’re saying, “that’s cool and all, but what’s wrong with it?”

Well, here’s the source code – download it, give it a whirl, and tell me what you notice… Can’t be bothered? Don’t blame you; I’ll just tell you in list form to save both of us time. I’m sure there are a number of smaller things that were also a problem; however, these were the main issues that frustrated me.

  • The wall instantiation direction (8, 4, 6, 2) is in global space.
    • This means scene camera positioning can cause user confusion, increasing cognitive load.
  • Walls can spawn inside of each other.
    • This increases rendering overhead and causes z-fighting.
  • The wall instance reference was not persistent.
    • This meant that duplicating the WallBuilder object or reloading the scene would cause any new additions to start at the object’s origin.
  • You could not use CTRL+Z to undo a change.
    • So you better hope you don’t accidentally press the clear button (I did).

While this list of issues may not seem all that bad in the grand scheme of things, there is one flaw in this process that a few of you keen developers may have already spotted. Each instantiated cube has its own instance of a transform, collider, mesh renderer, mesh filter, and wall component. Now multiply that by the 100-300 cubes that make up just the walls of the level and you’ve got yourself a huge amount of unnecessary processing overhead.

CPU Usage with old system

The most obvious process I would like to bring to your attention is the rendering. Each wall has 6 faces (12 tris) being rendered, with 1-3 of them never seen by the player.

These extra faces use an unnecessary amount of the device’s CPU. On a computer, this may not be that big of a deal; however, on a lower-end mobile device, it’s the difference between 30 and 2 fps. The provided screenshot of Unity’s profiler shows how drastic this is.

Without reference, this may not mean much to you, however, let’s just say I managed to get more than a 65% increase in CPU performance with my newer tool – we’ll do a comparison when we get to the final iteration…


Like a two-faced friend, we need to cut them out. Second Iteration

Minecraft rendering

Have you ever gone into spectator mode in Minecraft and flown around under the world? Or what about accidentally falling through the world in literally any game ever made?

If you have, one thing you would have noticed is that the objects are completely transparent on the other side. This is to address the exact issue that I was experiencing here. SAVE. CPU. PERFORMANCE.

With this in mind, I have taken a page out of game development 101 and changed the way I was building the levels. (Image source)

So how do we make levels more efficiently and mobile friendly?

Firstly, I took a step back and did a little research into different level editing processes. One video that I found, by Brackeys (surprise), featured a ‘Colour to Prefab’ system. Simply put, the code would read an image and instantiate an object based on the colour of each pixel it read. I thought this was brilliant, and as my levels sit on the same Y axis, I could simply lay out the level in image editing software from a bird’s-eye perspective and generate it in Unity.

Image for level editing

This is a much more efficient process as being able to draw the level is easier than trying to place objects in 3D space. Not only that, but because it’s using colour to instantiate objects, I can create levels on the fly without Unity. (Anyone have a spare laptop they’d like to donate?)

So, all I have to do is create a handful of wall mesh objects in Blender that only render specific faces. Then I create a look-up dictionary of sorts to specify a colour that is associated with the correct orientation of the optimised wall.
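
To make that concrete, here’s a minimal sketch of a colour-to-prefab reader in the spirit of that Brackeys video. The names, the transparent-pixel rule, and the flat colour comparison are my own assumptions (not FWD’s actual code), and the texture needs to be uncompressed with Read/Write enabled:

using System.Collections.Generic;
using UnityEngine;

public class ColourToPrefab : MonoBehaviour
{
    [System.Serializable]
    public struct ColourMapping
    {
        public Color32 colour;    // pixel colour to match
        public GameObject prefab; // wall prefab for that colour/orientation
    }

    public Texture2D levelMap;
    public List<ColourMapping> mappings;

    public void Generate()
    {
        for (int x = 0; x < levelMap.width; x++)
        {
            for (int y = 0; y < levelMap.height; y++)
            {
                Color32 pixel = levelMap.GetPixel(x, y);
                if (pixel.a == 0) continue; // transparent pixel = empty cell

                foreach (var map in mappings)
                {
                    if (map.colour.r == pixel.r && map.colour.g == pixel.g && map.colour.b == pixel.b)
                    {
                        // The image is a bird's-eye view, so x/y map onto the XZ plane.
                        Instantiate(map.prefab, new Vector3(x, 0f, y), Quaternion.identity, transform);
                        break;
                    }
                }
            }
        }
    }
}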

Great! Problem solved right? – Not quite…

While, yes, there are fewer faces being rendered, there still remains the extra overhead of individual transforms, colliders, etc. Not only that, but it requires excessive cognitive load to remember which colour is associated with which orientation, greatly slowing down the level creation process. Thankfully, there is always a better way, and I looked to the internet to understand how mesh generation works in Unity.


Arrays, Vectors, Triangles, mish-mash-mesh generation. Final Iteration

After reading through the Unity documentation on mesh generation and experimenting with it for a little while, I felt out of my depth. As someone who is nearly finished with a Bachelor of Games DESIGN, this was (for me) a programming challenge like no other. Thankfully, though, one of the skills we are taught is cognitive outsourcing, and that I am comfortable with.

I hear you, “so you’re saying you didn’t program it from scratch?”

No, of course not – that would be a waste of energy and time, considering there are free examples out there. Not to mention I’m already over my time budget for this week’s development and I haven’t even made a dent in the long list of features that need polishing. Don’t get me wrong though, it’s not like there was a solution that did everything I needed it to do straight out of the box. I still needed to understand what was going on and how it was working in order to implement a consistent and flawless system (okay, small designer rant over).

I found a resource online that was endeavouring to remake Minecraft within Unity, accompanied by a YouTube tutorial series that explained everything in a relatively easy-to-understand way. I specifically used the first two videos to understand how to generate mesh and render only what needed to be rendered.
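
The core trick from those videos, in miniature: before emitting a face, check whether the neighbouring cell is solid; if it is, the face can never be seen and is never generated. Here’s a toy version of the rule (the grid and names are hypothetical, not the actual FWD generator):

using UnityEngine;

public class FaceCullingSketch : MonoBehaviour
{
    public bool[,] cells = new bool[16, 16]; // true = wall

    private bool IsWall(int x, int y)
    {
        return x >= 0 && y >= 0 &&
               x < cells.GetLength(0) && y < cells.GetLength(1) &&
               cells[x, y];
    }

    // Counts the side faces a mesh generator would actually need to emit.
    public int CountVisibleSideFaces()
    {
        int faces = 0;
        for (int x = 0; x < cells.GetLength(0); x++)
        {
            for (int y = 0; y < cells.GetLength(1); y++)
            {
                if (!cells[x, y]) continue;
                if (!IsWall(x - 1, y)) faces++; // west face is exposed
                if (!IsWall(x + 1, y)) faces++; // east
                if (!IsWall(x, y - 1)) faces++; // south
                if (!IsWall(x, y + 1)) faces++; // north
            }
        }
        return faces;
    }
}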

Instead of writing everything that I did to make it work in detail (I did and it was boring to read), here’s a GIF and a list of things that the new system generates.

  • Reads through two images.
    • One for object position, the other for object rotation.
  • Sets level (chunk) size to the image dimensions.
  • Generates floor mesh.
  • Generates wall mesh where black pixels are present.
  • Renders only the faces that will be seen.
  • Uses the generated mesh as a collider reference.
  • Uses a colour look-up Scriptable Object to instantiate objects.
  • Applies a Level Theme material set to the floor and wall mesh renderers.
  • Bakes the Navmesh for AI.
Level generation

Fantastic! But what’s the difference in performance now?

If you get as excited as I do comparing numbers, then I’m sure you’ve been looking forward to seeing the improvement. Now, I’m no software engineer and I’m not exactly sure what numbers I should be aiming for or what everything actually means; however, there is a clear improvement. It is also worth noting that I am hyper-focusing purely on the rendering; one thing I noticed is that while the level generator script was active in the scene, it would increase the script overhead by about 0.2-0.4ms. (The profiler was used on the exact same level layout.)

  • CPU difference: 2.29ms faster
  • Dynamic Batch Triangles difference: 300 fewer
  • Dynamic Batch Vertices difference: 202 fewer
  • Static Batch Triangles difference: 2.2k fewer
  • Static Batch Vertices difference: 53.2k fewer
Old performance
Old Level Layout
Improved Performance
New Level Generator

Efficiency is the lock and automation is the key. Process Automation

Unity3D Custom Level Editor Window

Now that I have an optimised and efficient level generator, the next logical step is to put it in a package that allows me to set up levels without thinking. I did this simply by creating a Scene Generator script (yes, lots of generation) and a custom Unity Editor Window.

Put simply, the Scene Generator creates a new scene, instantiates all of the necessary prefabs, generates the level, saves the scene, and sets up all my Scriptable Objects. I also included the option to create a new Level Theme Scriptable Object directly within this editor window, just for efficiency… That’s all I have to say, you get the point.


Final Thoughts

First of all, if you’re still here, thank you for your time. If you liked what you read, why not check out some of my social media accounts linked below. I plan on being much more active in releasing Game Development content, so if you’re interested in supporting me and following along with my projects, please consider subscribing and following.

I have learnt a lot while working on this tool, and while it may not seem like much to some people, it is a massive achievement for me. Completing this with only my toolbag of skills and research capabilities is something I am proud of, and I spent many hours doing so. With this in mind, I am working on creating a Patreon page where I will post my source code for useful tools and smaller prototypes for those who would like to use or contribute to them. If supporting me is something you would be interested in doing, get in contact with me and I’ll move quicker in setting everything up.

Again, thank you for your time and I hope you’re having/have had a fantastic day!

Hellscape Learning Outcomes

Brief/Overview

Hellscape was developed by a team of 3 first-year SAE students over 12 weeks and was released September 02, 2020. Hellscape is a Luftrausers-inspired roguelike, filled with environmental hazards and Demons armed to the teeth with attacks unique to each enemy type. Your primary objective is to eliminate the hivemind Demon Brain; survival is optional…

This brief was an introduction to teamwork, mixing designers and programmers together to familiarise ourselves with the processes involved in collaborative work. This was the first project I had worked on in a team setting, and it facilitated a scenario similar to that of studio production.

Contributions and Intention

Team Position

I took on the role of team leader for this project, ensuring that work was completed to standard and on time. I was in charge of project-specific deadlines, encouraged good work, and spent a large amount of time developing the game alongside my team. I worked on a range of features and visuals throughout the project, including AI, UI, Weapons, Modular Vehicle Systems, Objective Systems, and so on. Due to the large amount of work I did, I will only cover features and systems that taught me important lessons in Game Design and Project Workflow.

AI

I created two unique AI: a standard enemy type that shoots out X-shaped lasers (the Fire Sigil) and a boss enemy type that switches between three separate attacks (the Demon Brain).

The Fire Sigil is the second enemy type the player will encounter and will always spawn in pairs. The pairs are important for this enemy type, as the X-shaped lasers are designed to trap the player and limit their movement. One Sigil is easy to escape; however, two on either side of the player proves challenging.

While the Fire Sigil is moving, it will keep up with the player no matter their speed, making it the hardest enemy to escape. Once they fire, however, they will become immobile and easy to target. A more skilled player will be able to easily predict when they will fire and just as easily counter their attacks.

Fire Sigil Attack

The Demon Brain was the final enemy and proved the hardest for the player to combat. The first two of the Demon Brain’s attacks are similar to the standard enemy types in the game. This was done intentionally, as the player would have already developed techniques to counter these types of attacks. By tweaking them slightly, the Demon Brain encounter facilitates a challenge that tests the player’s mastery.

The attacks the Demon Brain can use increase as its health decreases. This gives the player time to learn each attack and acts as a subtle indicator of its health. The attacks are also weighted so that the more powerful ones are used less frequently, giving the player a chance. Each attack has a unique charge-up sound effect, animation, and particle effect to clearly inform the player which attack is coming next.

The first attack is a fireball barrage that shoots 2-3 oversized fireballs in the direction of the player. This attack is similar to the third enemy type the player encounters (Fallen Angels). The second is a laser that quickly follows the player wherever they go, similar to the Fire Sigil’s. The final attack pulls in any nearby objects before blasting them away and temporarily disabling control. This attack is unique to the Demon Brain and is unforgiving, as the player has a high chance of landing in the lava below.

Demon Brain Attacks

Game Systems

I worked on a Modular Vehicle System that used Scriptable Objects (SOs) to store the unique audio-visual representations and functionality of the weapon, body, and thruster parts. By using SOs, I was able to easily create variants of parts within the project files and add them to a global list. The Player Controller and any relevant UI read from this list and update dynamically. The below screenshots show the SO and UI counterparts.
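
For illustration, a part SO might look something like the sketch below; the field names are hypothetical, not the actual Hellscape data:

using UnityEngine;

[CreateAssetMenu(menuName = "Vehicle/Part")]
public class VehiclePart : ScriptableObject
{
    public string partName;
    public Sprite icon;             // read by the UI counterpart
    public GameObject visualPrefab; // audio-visual representation
    public AudioClip equipSfx;
    public float statModifier;      // functionality hook for the Player Controller
}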

Weapons

I created two unique weapons with the intention of giving the player a chance to try different play styles. It was important while creating these to ensure that each weapon’s damage, speed, and attack radius (if applicable) were balanced. Each weapon strived to be different, not more powerful.

The second weapon the player can use (Target Painter) shoots three projectiles: two small warheads and a large penetrating marker. The marker sets any enemy it hits as a target, and the other two rapidly chase it, exploding on impact.

I designed this weapon to give players a chance to attack from longer ranges, supporting the sniper/mid-range playstyle. The player needs to have good aim, as it is easy to miss your target; however, if a target is hit, the warheads deal enough damage to take it out in 2 hits. This allows the player to strategically take out enemies from a safer distance.

Target Painter

The third weapon the player can use (Grav Bomb) sticks to any hostile it hits and pulls in nearby enemies, exploding after a few seconds.

I designed this weapon to support the rusher playstyle. Due to the slower projectile speed, it is easier to quickly fly by and attach the bomb to an enemy, letting the gravitational pull do the rest. It also allows the player to group enemies together and deal devastating AOE damage.

Grav Bomb

Other Features

  • Missions and Unlocks
  • Score Tracker
  • Part Functionality and Passive Effects
  • Persistent Data
  • Boundaries and Player Redirection
  • Environmental Hazards
  • All UI / UX
  • Background Art
  • Range of SFX
  • Range of Particle Systems

Main Learning Outcomes

Leadership Skills

During the course of the project, I was able to learn and exercise a range of leadership skills. I used Discord as the main platform for communication with my team and Hackn’Plan to track and assign tasks.

Throughout the 12 weeks, I was consistently involved with the project and communicative with each team member. On multiple occasions, I would hold one-on-one meetings with each member to assist them with any task they needed help with.

Through direct instruction from my facilitator, I was able to apply techniques that encouraged consistent work and open communication. These techniques included holding mid-week meetings, showcasing progress, giving deadlines on each task, providing and updating priority lists, and providing assistance when required.

Communication Example

Scriptable Object Architecture

In a research activity I undertook while working on the project, I came across Scriptable Object (SO) Architecture. As discussed previously, this workflow was used for the Modular Part System and Mission Tracking System, and it has proven valuable in subsequent projects as well. I watched the below GDC talk by Ryan Hipple and was able to apply the concept to this project.

Below you can find my research document, summarising the points in the above video and providing examples of how I can apply what I have learnt.

AI Development

While I improved in a vast range of skills, AI programming and design was the area I improved in the most. Before creating the Fire Sigil and Demon Brain (detailed above), I did a good amount of research into AI. Through my research, I familiarised myself with a couple of concepts: Goal Oriented Action Planning (GOAP) and Combat Behaviour / Racial Personalities.

I used a range of resources in my research, however, the main two pieces of literature that had the most impact on my findings can be seen below.

A talk covering Combat Behaviour / Racial Personalities by Bungie employees, Chris Butcher and Jaime Griesemer: The Illusion of Intelligence.

A talk covering GOAP by an AI Programmer at Monolith Productions, Peter Higley.

Below is a quick overview of the theory I learnt:

  • AI should be predictable
  • AI should communicate their state to the player using ‘barks’
  • AI should interact with the environment
  • Allow the player to do anything the AI can
  • AI should react to the player’s actions
  • AI are worth remembering if they have meaningful goals / personalities
  • Allow AI personality to drive behaviour

With both AI, I applied some of this theory, mainly predictability and ‘barks.’ By using SFX, animations, and particle systems, I was able to ensure that the player could always predict when the enemy would attack. The time between the start of the ‘bark’ and the attack itself always remained the same, giving the player ample opportunity to react.

I mentioned previously that Fire Sigils work best in pairs, as they can trap the player between their lasers. In hindsight, I had an opportunity to add Racial Personality to this enemy type. If I had designed them to work as pairs rather than as individuals, they would always attempt to trap the player, and if one of the pair were defeated, the other could go berserk.

By doing this, the Fire Sigils would have a unique dynamic and reaction to the player’s actions. This in turn would make them more interesting and memorable. Through analysis of other AI and my own work, as demonstrated here, I aim to constantly improve and develop my skills.

Finally, I learnt about Finite State Machines (FSMs) and how to apply them in C# and the Unity3D engine. I used Unity’s Finite State Machine tutorial to introduce me to the practical side of this concept, applying it to the attack class of the Demon Brain.
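
As a minimal illustration of the pattern (not the actual Demon Brain code): each state runs its logic and hands back the next state.

public abstract class BrainState
{
    public abstract BrainState Tick();
}

public class ChargeUpState : BrainState
{
    public override BrainState Tick()
    {
        // Play the 'bark' (SFX, animation, particles) so the attack is
        // predictable, then hand over to the attack itself.
        return new FireballBarrageState();
    }
}

public class FireballBarrageState : BrainState
{
    public override BrainState Tick()
    {
        // Fire the weighted attack, then go back to charging the next one.
        return new ChargeUpState();
    }
}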

Tesseract Level Design

Brief / Overview

I was tasked with creating a Capture the Flag (CTF) map using the open-source game engine Tesseract for a university brief. The project was completed to a high standard within 6 weeks and involved extensive documentation, concept sketches, modelling, playtesting, and iteration.

The level I chose to create is set on a plane mid-flight, where one team has breached the hull and the other team is defending important files until the inevitable crash…

Due to the limitations of the Tesseract Engine, I couldn’t emulate a crash or any particular objectives, so the actual gameplay is more of a traditional CTF experience.

Main Learning Outcomes

Greyboxing

Greyboxing is one of the most important steps in the development cycle, as it allows you to test level flow and functionality before polishing sections that may be cut. By using placeholder assets or simple geometry, you can quickly set up the concept and ultimately save time if aspects need to change.

I found this to be useful while developing my level, as testing revealed map imbalances that favoured one team over the other. Some imbalances included spawn camping and unfair sightlines favouring the red team. For a more in-depth breakdown, refer to the reflection video found at the end.

Importance of Lighting

Lighting can add a lot to a scene, and I found that it is important to place lights in logical places. In the below image comparison, you can see the difference between one large light source and a handful of purposefully placed lights.

It’s a subtle change overall; however, it adds much more depth and realism to the level. It allows for shadow casting that more closely emulates real life, helping the player immerse themselves in the environment. If light sources are placed with no purpose, it will be noticed by the player, potentially taking them out of the experience.

Lights can also be used as a level design tool. By varying lights in brightness, colour, and placement, they can be used as guides. Lighting areas of interest can lead the player off the beaten path to allow for exploration, or even act as the main source of level progression and guidance. You can see how I have incorporated this theory in the below ‘after’ image, highlighting an underground passage.


Environmental Story Telling

Through directed learning, I applied environmental storytelling within the level in the form of destruction. In the screenshots below, you can see the forced entry into the level, which shows that one team is breaching the plane.

Environmental storytelling can come in many forms, from decals of bullet holes resembling a past firefight to a scrape on the ground signifying the frequent movement of furniture. Understanding this is important, as I can add visual interest to levels that allows the player to immerse themselves in the environment as they piece together past events.

I can also use this technique to show the player game mechanics without using a tutorial pop-up screen. While I could not do this due to the limitations of the Tesseract engine, an example can explain my understanding. Picture a narrow hallway in an old abandoned mansion: the player walks forward, noticing a large crack in the wooden floor ahead. Suddenly, a raccoon skitters past them and the floor collapses. The player now knows that some areas are unstable and that they should look out for large cracks, finding alternative routes when necessary.

Video Reflection

An in-depth reflection on my process from start to finish