
How to Create a Retro Game Boy in 3D: Part 2

Final product image
What You’ll Be Creating

Follow this tutorial step-by-step to create a retro Nintendo Game Boy that you can use in video games, graphic design and illustration projects whilst learning how to model in Cinema 4D.

Some of the skills you’ll learn in this tutorial include creating basic 3D modelling, importing reference images and basic rendering techniques.

In this, the second part of the two-part tutorial, I’ll show you how to:

  • Create Materials
  • Colour the Model
  • Render the Game Boy 

10. How to Create a New Material

Step 1

Located near the bottom of the screen, there should be a material selection window. Click on Create > New Material to create a new material that you can edit. I’ll use materials to colour in the various objects and parts of our Nintendo Game Boy.

Create a new material

Step 2

Double click on the new material to bring up the Material Editor. This is where you can adjust the various properties of your material, such as the colour, transparency and reflectance. 

For this particular style we will only be interested in flat colour.

The material editor

Step 3

Untick the boxes for Colour and Reflectance and tick the box for Luminance. This gives us the flat colour look for our final image.

Edit the material editor properties

Step 4

To add a material to the object, drag and drop the material to the object using the mouse. 

You can also drag and drop the material onto the object listed on the right of the screen.

The same material can be used for multiple parts of the model. For example, you can use this one material for the front, back and middle parts of our Nintendo Game Boy model.

Drag and drop the material onto an object

11. How to Adjust the Material Properties

Step 1

In the Material Editor, click on the arrow button next to Texture and select Sketch and Toon > Cel from the menu.

Select sketch and toon

Step 2

Click on the new Cel button that appears next to Texture to bring up a new menu.

Select Cel

Step 3

In this new menu, adjust the settings as follows:

  1. Change the Diffuse to match the image below. It should be greyscale, with the colour getting lighter from left to right.
  2. Untick Camera.
  3. Tick Lights.
  4. Tick Shadows.
  5. Select Multiply from the drop-down menu.
Editing the sketch and toon properties

Step 4

Select a colour by double clicking on the colour option box. 

The Material Editor gives you many different options for choosing your colour, including RGB and HSV. Use whichever option you are most familiar with to pick a flat colour. Because of the Multiply shading mode, the different colour shades will also appear in the material.

If the material has already been applied to the model, the colour of the model will automatically be adjusted to the new material colour.

Choosing a colour

12. How to Adjust the Shading

Step 1

To get a better idea of the shading and how our render will eventually turn out, click on the small black arrow on the bottom right of the Render Button and select Interactive Render Region from the menu. 

Select interactive render region

Step 2

Adjust the window so that it fits around the Game Boy model. Look for the very small arrow on the right hand side of the window and use the mouse to drag it to the top (this increases the resolution). Decrease the resolution by moving the arrow down if you need to.

Increase the resolution

Step 3

To change the way the light source is facing, go to Options > Default Light.

Add a light object to the scene if you are more comfortable working that way.  

Select default light

Step 4

In the Default Light window, click on the sphere where you would like the light source to come from, and it will sync with the viewport. Experiment with different lighting positions to come up with a lighting setup that you like. 

This can be adjusted at any time.

Editing the default light

13. Other Lighting Solutions

Step 1

To set up the basic lighting, go to the Floor button located in the top menu bar. Left click and hold, and then select Physical Sky from the menu.

Select physical sky

Step 2

With Physical Sky selected in the menu on the right, a new menu will appear at the bottom right of the screen. Select the Time and Location tab and choose a time using the left and right arrow keys. This will change the lighting of the scene; choose a suitable time to light it up.

Editing the physical sky

Step 3

To add lighting to the scene, select the Add Light button in the menu bar at the top of the screen.

This will create a Light Object which you can move around the scene. Move the light object using the Move Tool to a suitable position.

Select add light

Step 4

To customise the lighting further, experiment with the light object options in the bottom right of the screen. Here you can adjust the Intensity, Light Type and even Colour.

Editing the light properties

14. How to Colour the Screen

Step 1

Duplicate the material by holding CTRL on the keyboard and then using the mouse to click and drag the material to the side.

Duplicating the material

Step 2

Double click on the new material to open up the Material Editor and select Luminance. From there we can change the colour to the following:

  • R: 79
  • G: 222
  • B: 183

We can also click on Cel to change the shading properties of this material if needed.

Choosing a green colour

Step 3

Duplicate the first material again (hold CTRL and click and drag with the mouse).

Duplicating the material

Step 4

Double click on the new material to open up the Material Editor and select Luminance. From there you can change the colour to the following:

  • R: 111
  • G: 126
  • B: 135
Choosing the grey colour

Step 5

To create a flat colour, select Cel and change the properties of Diffuse to one colour.

Editing the shading options

Step 6

To apply the new materials, use the mouse to drag and drop each material onto the object that you wish to colour, or onto the object list on the right side of the screen.

Drag and drop the screen material onto the Game Boy screen.

Applying the green material to the screen

Step 7

Drag and drop the dark grey material onto the screen border of the Game Boy.

Applying the grey material to the screen border

15. How to Colour the Buttons

Step 1

Duplicate a new material from the first material again (hold CTRL and click and drag with the mouse). Double click on the new material to open up the Material Editor and select Luminance. Change the colour to the following:

  • R: 232
  • G: 96
  • B: 32
Creating the red colour material

Step 2

Duplicate a new material. Double click on the new material to open up the Material Editor and select Luminance. From there we can change the colour to the following:

  • R: 165
  • G: 199
  • B: 219
Creating the light grey colour material

Step 3

Drag and drop the light grey material onto the button borders of the Game Boy.

Applying the light grey colour material

Step 4

Drag and drop the bright red material onto the main buttons of the Game Boy.

Applying the red colour material to the buttons

16. How to Colour the D-Pad & Small Buttons

Step 1

Duplicate a new material as you did before. Double click on the new material to open up the Material Editor and select Luminance. From there change the colour to the following:

  • R: 89
  • G: 98
  • B: 106
Creating the dark grey colour material

Step 2

Drag and drop the new material onto the D-Pad, Start and Select buttons of the Game Boy.

Applying the dark grey colour material to the buttons

17. How to Render the Game Boy

Step 1

Click on the Render Settings button located in the top menu bar.

Selecting the render settings button

Step 2

In the Save settings, select the file location where you want to save your image, and choose the format (PNG).

Tick Alpha Channel if you want to continue editing the image in another program such as Adobe Photoshop.

Choosing the save settings

Step 3

Under the Output settings, choose the width, height and resolution of the image. This tutorial uses 1920×1200 at 72 dpi.

Choosing the output settings

Step 4

In the Anti-Aliasing settings, select Cubic (Still Image) which can be found under Filter. This will ensure that your render is nice and sharp.

Choosing the Anti-Aliasing settings

Step 5

Click on the Render button to render your final image. Remember to make sure that you have selected the right view in your viewport (front, right, left or perspective etc.) and that you are happy with the camera angle. 

Choose a suitable angle in the viewport by navigating around the scene.

Click on the render button

Step 6

You can also create a camera if you would like greater control over the camera and render options.

Click on the camera button

The End Result

Wait for the final render to finish. Once the render is complete, you can repeat the steps above and change the camera or the viewport to create a new image from a different angle. 

Throughout the course of the project you have successfully learnt how to:

  • Import Reference Images
  • Prepare for Modelling
  • Model the retro Game Boy
  • Create Materials
  • Colour the Model
  • Render the Game Boy 

Feel free to share your own creations below. You can also export your image into Adobe Photoshop to enhance it further or to use it as part of a larger image.

Final Nintendo Game Boy Image


How to Create a Retro Game Boy in 3D: Part 1

Final product image
What You’ll Be Creating

Follow this tutorial step-by-step to create a retro Nintendo Game Boy that you can use in video games, graphic design and illustration projects whilst learning how to model in Cinema 4D.

Some of the skills you’ll learn in this tutorial include creating basic 3D modelling, importing reference images and basic rendering techniques.

In this, the first part of the two-part tutorial, I’ll show you:

  • How to Import Reference Images
  • How to Prepare for Modelling
  • How to Model the retro Game Boy

1. How to Import Reference Images

Step 1

Use the middle mouse button to click anywhere on the viewport. This will display all four views. From there, use the middle mouse button to select the Front view.

Four different camera angles to choose from

Step 2

In the Attributes tab select Mode > View Settings.

Select View Settings

Step 3

In Viewport [Right] select the Back button and click on the button next to Image. 

Select the Back button

Step 4

Select your reference image from the finder and open it. In this tutorial I’ll use the front view of a retro Nintendo Game Boy to help me.

Reference image of a Nintendo Game Boy

Step 5

Adjust the image size and transparency to your liking in the properties window.

Adjust size in the properties window

2. How to Adjust the Cube Shape

Step 1

In the top menu bar select the Cube to spawn a cube into the scene.

Spawn a cube into the scene

Step 2

In the properties window, adjust the size so that the shape of the cube fits the shape of the Game Boy background image.

Adjust the size of the cube

Step 3

The shape of the cube should roughly fit the shape of the Game Boy background image at this stage. 

Ensure that you check the shape using the other camera views as well (perspective, side and top etc).

The cube should fit the size of the reference image

Step 4

The size of the shape in the Z axis should be roughly half of what the final Game Boy will be.

View in perspective

Step 5

Select the Make Editable button to make the shape editable. Next select the Edge Selection button.

Select Make Editable and Edge Selection buttons

Step 6

Select the bottom right corner of the shape. This is highlighted in orange.

Select the right corner of the shape with Edge Selection

Step 7

Once selected, choose the Bevel Tool by using the mouse to Right Click > Bevel. Adjust the bevel by changing the settings in the properties window.

Use the Bevel Tool to curve the corner

Step 8

Select the remaining three corners of the shape. These are highlighted in orange. Use the Bevel Tool to curve the corners slightly. The curvature of the remaining three corners should be less than that of the bottom right corner.

Adjust the remaining corners using Edge Selection

3. How to Create the Game Boy Shape

Step 1

Select the Move Tool. To duplicate this shape, hold the CTRL button on the keyboard and then click and drag the Blue Arrow using the mouse. Leave a small gap between both shapes.

Duplicate the shape

Step 2

Duplicate the shape again and place the new shape in between the other two (in the middle). With the middle shape still selected, use the Scale Tool to shrink it to about 95% of its original size.

Duplicate and scale the object

Step 3

Click on the small black arrow on the Cube button and select Cylinder from the list of options. This will spawn a Cylinder in the scene.

Select Cylinder

Step 4

Adjust the properties of the size and orientation of the cylinder using the properties window. The thickness of the cylinder should be roughly the same as the small gap created between the two larger shapes.

Adjust the Cylinder

Step 5

Once you’re happy with the size and shape of the cylinder, place it in between the two shapes in the gap.

Place the Cylinder

4. How to Create the Screen

Step 1

Duplicate one of the larger shapes and then use the Scale Tool to reduce the thickness of the shape so that it becomes quite thin.

Duplicate and scale the object

Step 2

Use the Move Tool and combine it with the Points Selection Tool to move the rounded corners of the shape so that it fits the size of the screen border (as shown in the background reference image).

Combine the points button with the move and selection tools

Step 3

Select the points you want to move and then move them into place using the Move Tool.

Adjust the points of the object

Step 4

Create a new cube and change the shape of it so that it fits the main screen of the Game Boy.

Create a new cube object and fit it to the screen

Step 5

Once you’re happy with the size of the shapes, use the Move Tool to place them on to the body of the Game Boy.

Place the new objects to the Game Boy

5. How to Create the Buttons

Step 1

Click on the small black arrow on the Cube button and select Cylinder from the list of options. This will spawn a cylinder in the scene.

Select the Cylinder button

Step 2

Adjust the properties of the cylinder so that it is facing the correct way, has the correct size and has the desired thickness for the button. Then use the Move Tool to move the button into the correct place using the background image as your reference.

Scale the cylinder and place it correctly

Step 3

Duplicate the button and use the Scale Tool to increase the size slightly. You will want to increase the size uniformly so that the centre point of each object is the same (seen from the front view). 

Duplicate and scale new cylinder

Step 4

Make sure that the duplicate cylinder is selected and click on the Make Editable Button. 

Select the Make Editable button

Step 5

Now that the object is editable, we can edit the points. Make sure that the cylinder object is selected and click on the Points button. 

Click on the Rectangle Selection Tool so that you can select the points you wish to edit. Once you’ve selected the points, you can move them using the Move Tool.

Combine the points button with the move and selection tools

Step 6

Use the Rectangle Selection Tool to select half of the cylinder. Then use the Move Tool to move the points to the right.

Adjust the points of the shape

Step 7

Duplicate the button and use the Move Tool to move the button to the right.

Duplicate and place buttons

Step 8

When moving a whole object, make sure the Model button is selected. If it is not, you may find that you are moving points, edges or faces instead.

Select the Model button

Step 9

Making sure that you have the Model button turned on, select all three cylinder shapes which now make up your Game Boy buttons. 

Once all three have been selected, press Alt-G on the keyboard to group them up. You can then rename the Null by double clicking on it.

Group the cylinders

Step 10

Select the whole group and use the Rotate Tool to rotate the buttons so that they match the reference image in the front view.

Rotate the buttons

Step 11

Use the other camera views to ensure that the buttons are placed correctly and moved to the right area.

Place the buttons on to the Game Boy

6. How to Create the D-Pad

Step 1

To create the D-Pad, use a similar technique to creating the Game Boy buttons. 

First, create the background cylinder. To do this, click on the small black arrow on the Cube button and select Cylinder from the list of options. This will spawn a cylinder in the scene.

Select the Cylinder button

Step 2

Resize the cylinder using the shape parameters window so that it fits roughly around the size of the D-Pad. Then use the Move Tool to move the cylinder shape in place. 

Place cylinder correctly onto Game Boy

Step 3

With the cylinder shape selected you can use the parameters window to adjust the radius, height and rotation segments of the cylinder. 

The more segments you create, the smoother the circle will become. We also want to make sure that the height is not too large, as this shape will only be used on the surface of the Game Boy.

Adjust the cylinder

Step 4

Create a cube and use the parameters window again to adjust the shape so that it is roughly the same shape as half of the D-Pad.

Create a new Cube and adjust it

Step 5

With the new cube shape selected, you can use the parameters window to adjust the size of the D-Pad. This object will need to be duplicated in order to create the second half of the D-Pad.

Adjust the new object

Step 6

To duplicate this shape, select the cube from the object list. Hold the CTRL button on the keyboard and click and drag the cube object. 

Drop the duplicate cube in the object list. Duplicating an object this way means that both objects are exactly the same, in exactly the same position.

Duplicate the new cube

Step 7

Using the Rotate Tool, rotate the new D-Pad object by exactly 90 degrees. 

Rotate the object 90 degrees

Step 8

Use the other camera views to ensure that the D-Pad is placed correctly and moved to the right area.

Place the D-Pad onto the surface of the Game Boy

7. How to Create the Start & Select Buttons

Step 1

Because the start and select buttons have a very similar shape to the background shape created for the main buttons, duplicate that shape. 

Do this by pressing and holding the CTRL button on the keyboard and clicking and dragging the object using the Move Tool. 

Duplicate shape

Step 2

With the new object selected, use the Scale Tool to make the shape around the same size as the start and select buttons in the reference image. 

Ensure that you click on the background instead of the coloured icons when using the tool if you want to scale uniformly.

Scale shape

Step 3

Use the Move Tool and combine it with the Points Selection Tool to move the rounded corners of the shape so that it fits the length of the start and select button (as shown in the background reference image).

Combine the points button with the move and selection tools

Step 4

Ensure that you have the object and the Move Tool selected. Hold the CTRL button on the keyboard and click and drag the button to duplicate it.

Duplicate and space the shapes out

Step 5

Use the other camera views to ensure that the start and select buttons are placed correctly and moved to the right area.

Place buttons on to the surface of the Game Boy

8. How to Create the Speakers

Step 1

Because the speakers have a very similar shape to the shape that was created for the start and select buttons, we will be duplicating the same shape. 

Do this by pressing and holding the CTRL button on the keyboard and clicking and dragging the object using the Move Tool. 

Duplicate button

Step 2

Use the Rotate Tool to rotate the shape so that it fits the angle shown in the reference image.

Rotate button

Step 3

Use the Scale Tool so that the size roughly fits that of the speakers in the reference image.

Scale button

Step 4

Use the Move Tool and combine it with the Points Selection Tool to move the rounded corners of the shape so that it fits the length of the speakers (as shown in the background reference image).

Combine the points button with the move and selection tools

Step 5

Ensure that the length of the shape roughly fits that of the speakers in the reference image. 

Lengthen the button

Step 6

Ensure that you have the object and the Move Tool selected. Hold the CTRL button on the keyboard and click and drag the button to duplicate it.

Duplicate the shape

Step 7

Duplicate the object several times until it looks roughly the same as the shape of the Game Boy speakers.

Duplicate the shape many times until you are satisfied

Step 8

Use the other camera views to ensure that the placement is correct and moved to the right area.

Place shapes onto the surface of the Game Boy

9. How to use Boole

Step 1

Select Boole by clicking on the small arrow on the bottom right of the Array button at the top of the screen.

Select Boole from menu

Step 2

Select the objects that you want the Boole to affect. This will be the front part of the Game Boy and the speakers (which have been grouped together using Alt-G). 

Put both objects into the Boole and make sure that the speakers group is placed below the Game Boy body.

Group objects under Boole

Step 3

The Boole uses the shapes you created to cut them out of the Game Boy body. 

Cut out the speakers

Step 4

Use the different camera angles to double check the Game Boy model to make sure that it looks correct.

Final Game Boy model

Coming Next

In the second part of the tutorial series, I’ll show you how to:

  • Create Materials
  • Colour in the Game Boy Model
  • Render the Game Boy Model

How to Create a Low Poly Sword in 3DS Max: Part 2

Final product image
What You’ll Be Creating

Follow this tutorial step-by-step to create a low poly sword model that you can use in video games, graphic design and illustration projects whilst learning 3D Studio Max quickly. 

Some of the skills you’ll learn in this tutorial include creating basic 3D shapes, modelling techniques and preparing topology for texturing.

In the first part of the two-part tutorial, I showed you how to:

  • Model the Handle of the Sword
  • Model the Hand Guard of the Sword
  • Model the Blade of the Sword

In this, the second part of the tutorial series, I’ll show you how to:

  • Unwrap the UVs
  • Create a UV Map
  • Create a Texture Map in Photoshop

05. UVW Remove

Step 1

In this section of the tutorial I’ll go through how to prepare the model so that I can unwrap it. Both UVW Remove and Reset Xform must be performed before attempting to unwrap any of the sword parts.

Ensure you have selected the sword part that you want to unwrap (blade, handle, hand guard etc.). The object must now be converted into an editable mesh. To do this, Right Click on the object and select Convert to Editable Mesh.

Convert to Editable Mesh

Step 2

Then go to the Utilities Tab and click on the More button. Scroll down the menu and select UVW Remove.

Select UVW Remove

Step 3

Click on the UVW button that appears under Parameters to apply this to the selected object.

Click on the UVW Button

Step 4

Once UVW Remove has been applied, Right Click on the object and select Convert to Editable Poly.

Convert to Editable Poly

06. Reset XForm

Step 1

Ensure that you’ve selected the sword part that you want to unwrap (blade, handle, hand guard etc.). Go to the Utilities Tab and click on the Reset XForm button and then click on the Reset Selected button.

Click on Reset Xform

Step 2

Click on the Modify Tab and right click on XForm > Collapse All to apply this to your object.

Secondary click on XForm and then Collapse All

07. Unwrapping the Blade

Step 1

To begin unwrapping the blade, ensure you’ve selected it and ensure you’ve followed both sections five and six of this tutorial—UVW Remove and Reset XForm.

To make the next step easier ensure that you’re viewing the blade from the side view.

Turn the Blade model

Step 2

Delete one half of the blade using the Vertex Selection tool.

Delete half of the blade

Step 3

Make sure that the blade is an editable poly by right clicking on it and then selecting Convert to Editable Poly.

Convert to Editable Poly

Step 4

Go to the Modifier List drop down menu and select Unwrap UVW.

Select Unwrap UVW

Step 5

In the Modify Tab under the Edit UVs section select Open UV Editor.

Select Open UV Editor

Step 6

The Edit UVWs window appears. Use the Move Tool and the Scale Tool located in the top right of the window to move the object within the square. Once the object is within the square, click on the Reset Peel button.

Select Reset Peel

Step 7

After clicking on the Reset Peel button, use the Move Tool to move the blade inside the border of the square.

Move the blade in the UV Editor

Step 8

Use the tools on the top left of the Edit UVWs window (Move, Rotate, Scale) to place the blade on the right side of the square, within the border.

Edit the UVs in the UV Editor

Step 9

Close the Edit UVWs window. Ensure the blade is selected and go to the Modify tab. Right Click on the Unwrap Modifier and select Collapse All.

If it becomes an Editable Mesh, convert it into an Editable Poly. Secondary Click > Convert to Editable Poly.

Right click and Collapse All

08. Duplicate the Blade UVs

Step 1

Use the Edge Selection Tool and select one of the outermost edges of the blade.

Select an edge of the sword

Step 2

In the Modifier List, scroll down and select Symmetry.

Select Symmetry from the list

Step 3

Under the Parameters section, select the following:

  • Mirror Axis: Y
  • Flip
Edit the mirror parameters

Step 4

Once the parameters have been selected, your blade should be whole again.

Mirror the blade

Step 5

Convert the blade back into an Editable Poly by selecting the blade with Secondary Click > Convert to Editable Poly.

Convert to Editable Poly

Step 6

To check that the UVWs have been duplicated correctly, apply the Unwrap UVW modifier on the blade again by selecting it from the Modifier List.

Select Unwrap UVW

Step 7

Move the UVWs in the Edit UVWs window to ensure that the blade shapes sit on top of each other nicely.

Check the UVs

09. Unwrapping the Hand Guard

Step 1

Select the hand guard. The aim of this section is to remove half of each shape, so that they can be mirrored and so that they take up less space on the UVW Map.

Delete half of the hand guard

Step 2

Select the middle tube shape and, using the Vertex Selection tool, select and delete half of the shape.

Delete half of the shape

Step 3

Select the middle shape and, using the Vertex Selection tool, select and delete half of the shape.

Delete half of the shape

Step 4

Repeat the same process for the horns. However, I’ll only need one half, as it will be duplicated four times; only one half is needed for the texture map.

Delete ¾ of the horns

Step 5

Select all the hand guard shapes and select Unwrap UVW from the Modifier List.

Select Unwrap UVW

Step 6

Using the same process as for the blade, Reset Peel and arrange the shapes within the square. Ensure all the objects are attached to one another.

Arrange UVs in the UVW Editor

Step 7

Convert the shapes back into an Editable Poly by selecting them and going Right Click > Convert to Editable Poly. Using the same process as with the blade, use the Symmetry modifier to make the hand guard whole again.

Select symmetry from the menu

Step 8

Ensure that the Symmetry modifier gives you the correct results by choosing the right parameters. You can also apply the Unwrap UVW modifier again to check that the UVWs are placed on top of each other, as with the blade.

Apply the symmetry modifier to the hand guard

10. Unwrapping the Handle

Step 1

Select the middle section of the handle.

Select the middle section

Step 2

Using the Vertex Selection tool, delete half of the handle.

Delete half of the handle

Step 3

Repeat the same steps as above to remove half of each of the handle objects.

Delete half of all the handle objects

Step 4

Ensure all the objects are attached to each other and apply the Unwrap UVW modifier.

Select Unwrap UVW from the menu

Step 5

Using the same process as for the blade and the hand guard, Reset Peel and arrange the shapes within the square. 

Arrange the UVs in the UVW Editor

Step 6

Convert the shapes back into an Editable Poly by selecting them and going Secondary Click > Convert to Editable Poly.

Using the same process as what was done with the blade and hand guard, use the Symmetry modifier to make the handle whole again.

Select Symmetry from the menu

Step 7

Ensure the Symmetry modifier gives you the correct results by choosing the right parameters. You can also apply the Unwrap UVW modifier again to check that the UVWs are placed on top of each other, as with the blade.

Apply the symmetry modifier to the handle

11. Render UVW Map

Step 1

Attach all the objects together with Secondary Click > Attach, then apply the Unwrap UVW modifier.

Select Unwrap UVW from the menu

Step 2

Open the UV Editor and ensure all the shapes are arranged within the square and are not overlapping each other. You can still edit them at this stage using the Move, Scale or Rotate tool.

Arrange the UVs in the UVW Editor

Step 3

Once you are happy with the arrangement of the UVs, go to Tools > Render UVW Template.

Select Render UVW Template

Step 4

Select the dimensions you would like to render the UV Map to and then click the Render UV Template button.

Edit dimensions to render the UV Map

Step 5

Once the map has been generated, save it to a location using the save button in the top left-hand corner of the window. 

Save the UV Map

12. Create Texture Map

Step 1

Import the UV map into Photoshop and create a new background layer below the UVs.

Import UV Map to Photoshop

Step 2

You can make the UVs easier to see by changing the colour of the lines. To do this, right click on the UV layer and select Blending Options.

Select Blending Options

Step 3

Select Colour Overlay and choose a white colour.

Select Colour Overlay and choose white

Step 4

Create a new layer underneath the UVs and create a colour map so that you can select the shapes efficiently.

Create a colour map

Step 5

Using the UVs and the colour map, create a new layer and create some base colours for the blade.

Create the base colours for the blade

Step 6

On a separate layer, create some base colours for the handle.

Create the base colours for the handle

Step 7

You can create a new layer on top of your base colours to add some details to the sword. Add some highlights, cracks and battle damage to make it look more interesting.

Create some details for the texture map

Step 8

Once you are happy with the results, save your Texture Map as an image file.

Save the texture map

13. Apply the Texture Map

Step 1

Ensure you have the sword selected and then click on the Material Editor button on the top menu bar.

Select the Material Editor

Step 2

Once the Material Editor window appears go to Mode > Compact Material Editor.

Select the Compact Material Editor

Step 3

Select one of the spheres that you want the texture map to appear on and then click on the empty box next to Diffuse.

Select the Diffuse

Step 4

Scroll up to the top of the Material/Map Browser and select Bitmap.

Choose bitmap from the menu

Step 5

Select the Texture Map image that was created in Adobe Photoshop.

Select the texture map

Step 6

Ensure the sword model is selected, then click on Assign Material to Selection, followed by Show Shaded Material in Viewport.

Apply the texture map

Step 7

This should apply the texture map to the 3D Sword Model.

The texture map has been applied to the sword model

Conclusion

And with that, the 3D Low Poly Sword is complete. Feel free to share your own creations below. Explore different objects, shapes and colours to find out what works best for your model.

You can also render the model and export it to Adobe Photoshop to create an image for your portfolio.

Final sword model including the texture map


Creating Playing Cards Dynamically Using Code for Game Jams

Final product image
What You’ll Be Creating

This tutorial is different from my earlier tutorials as this one is oriented towards game jams and game prototyping, specifically card games. We are going to create a 2D playing card deck in Unity without using any art—purely with code.

1. Components of a Playing Card Deck

A playing card deck has a total of 52 cards with 13 cards each of 4 different symbols. In order to create one using code, we will need to create these 4 symbols, the rounded rectangular base for the card, and the design on the back of the card.

The design on the back of the card can be any abstract pattern, and there are numerous ways to create one. We will be creating a simple tileable pattern which will then be tiled to create the design. We won’t have any special design for the A, K, Q, and J cards.

2. Alternative Solutions

Before we start, I have to mention that there are easier solutions out there which we can use to create a deck of cards. Some of those are listed below.

  1. The obvious one is to use pre-rendered art for all the designs.
  2. The less obvious one is to use a font which contains all the necessary symbols. We can also turn the said font into a bitmap font to reduce draw calls and increase performance.

The font-based solution is the fastest and easiest one if you want to do quick prototypes.

3. Creating Textures During Runtime

The first step is to learn how to create a Texture2D using code which can then be used to create a Sprite in Unity. The following code shows the creation of a 256×256 blank texture.
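A minimal sketch of that step, using Unity's standard Texture2D constructor:

    // Create a blank 256x256 texture. Mipmaps are disabled since we'll be
    // drawing into it pixel by pixel.
    Texture2D texture = new Texture2D(256, 256, TextureFormat.ARGB32, false);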

The idea is to draw all the designs onto the texture before we use the Apply method. We can draw designs onto the texture pixel by pixel using the SetPixel method, as shown below.
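For instance, to paint a single pixel (x and y are the pixel coordinates):

    // Set one pixel, then upload the change to the GPU.
    texture.SetPixel(x, y, Color.black);
    texture.Apply();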

For example, if we wanted to fill out the entire texture with a color, we could use a method like this.
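One possible version of such a method:

    // Fill every pixel of the texture with a single colour.
    void FillTexture(Texture2D texture, Color color)
    {
        for (int x = 0; x < texture.width; x++)
        {
            for (int y = 0; y < texture.height; y++)
            {
                texture.SetPixel(x, y, color);
            }
        }
        texture.Apply(); // upload the pixel changes to the GPU
    }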

Once we have a Texture2D created, we can use it to create a Sprite to be displayed on screen.
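For example, using Sprite.Create with the pivot at the centre:

    // Wrap the whole texture in a Sprite.
    Sprite sprite = Sprite.Create(
        texture,
        new Rect(0, 0, texture.width, texture.height),
        new Vector2(0.5f, 0.5f));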

The complicated part in all this is the creation of the necessary designs on the texture.

4. Creating the Heart Shape

When it comes to the creation of the heart shape, there are many different approaches which we could use, among which are some complicated equations as well as simple mixing of shapes. We will use the mixing of shapes method as shown below, specifically the one with the triangle.

heart shape combining primitive shapes

As you can see, we can use two circles and a square or a triangle to create the basic heart shape. This means it will miss those extra beautiful curves, but it fits our purpose perfectly.

Painting a Circle

Let’s brush up on some equations to paint a circle. For a circle with its centre at the origin and radius r, a point (x, y) lies on the circle when x² + y² = r². If the centre of the circle is at (h, k), then the equation becomes (x − h)² + (y − k)² = r². So if we have a square bounding box, we can loop through all the points within that box and determine which points fall inside the circle and which do not. We can easily create our PaintCircle method based on this understanding, as shown below.
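A sketch of PaintCircle along those lines (the parameter names are illustrative):

    // Paint every pixel whose distance from the centre (h, k) is within radius r.
    void PaintCircle(Texture2D texture, float h, float k, float r, Color color)
    {
        for (int x = 0; x < texture.width; x++)
        {
            for (int y = 0; y < texture.height; y++)
            {
                // Inside the circle when (x-h)^2 + (y-k)^2 <= r^2.
                if ((x - h) * (x - h) + (y - k) * (y - k) <= r * r)
                {
                    texture.SetPixel(x, y, color);
                }
            }
        }
    }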

Once we have the PaintCircle method, we can proceed to create our heart shape as shown below.
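A sketch of that idea; all the proportions here are rough tuning values:

    // Two circles for the lobes plus a downward-pointing triangle for the tip.
    void PaintHeart(Texture2D texture, Color color)
    {
        float r = resolution * 0.25f;
        PaintCircle(texture, resolution * 0.3f, resolution * 0.6f, r, color);
        PaintCircle(texture, resolution * 0.7f, resolution * 0.6f, r, color);

        float tipY = resolution * 0.1f;  // bottom tip of the heart
        float topY = resolution * 0.6f;  // level of the circle centres
        for (int y = (int)tipY; y < (int)topY; y++)
        {
            // The triangle's half-width grows linearly from the tip upwards.
            float t = (y - tipY) / (topY - tipY);
            float halfWidth = t * resolution * 0.45f;
            for (int x = (int)(resolution * 0.5f - halfWidth); x <= (int)(resolution * 0.5f + halfWidth); x++)
            {
                texture.SetPixel(x, y, color);
            }
        }
    }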

The variable resolution is the width and height of the texture.

5. Creating the Diamond Shape

We will discuss two ways to draw the diamond shape.

Painting a Simple Diamond

The easiest one is to extend the code used for the triangle and add an inverted triangle on the top to create the necessary shape, as shown below.
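A sketch of that idea, drawing both triangles in one pass:

    // Two mirrored triangles: widest at the vertical centre, tapering to points.
    void PaintSimpleDiamond(Texture2D texture, Color color)
    {
        float mid = resolution * 0.5f;
        for (int y = 0; y < resolution; y++)
        {
            // Half-width is largest at the centre row and zero at the tips.
            float halfWidth = (1f - Mathf.Abs(y - mid) / mid) * resolution * 0.4f;
            for (int x = (int)(mid - halfWidth); x <= (int)(mid + halfWidth); x++)
            {
                texture.SetPixel(x, y, color);
            }
        }
    }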

Painting a Curvy Diamond

The second one is to use another equation to create a better, curvy version of our diamond shape. We will be using this one to create the tiling design for the back side of our card. The equation for a circle is a special case of the equation of an ellipse, which is (x/a)² + (y/b)² = r².

This equation is the same as that of the circle when the variables a and b are both 1. The ellipse equation can then be extended into a superellipse equation for similar shapes just by changing the power: (x/a)ⁿ + (y/b)ⁿ = rⁿ. So when n is 2 we have the ellipse, and for other values of n we get different shapes, one of which is our diamond. We can follow the approach used to arrive at the PaintCircle method to arrive at our new PaintDiamond method.
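A sketch of PaintDiamond using the same bounding-box approach. With n below 1 the sides curve inwards, which gives the card-diamond look; the exact value is a tuning choice:

    // Paint every pixel satisfying |(x-h)/a|^n + |(y-k)/b|^n <= r^n.
    void PaintDiamond(Texture2D texture, float h, float k,
                      float a, float b, float r, float n, Color color)
    {
        float threshold = Mathf.Pow(r, n);
        for (int x = 0; x < texture.width; x++)
        {
            for (int y = 0; y < texture.height; y++)
            {
                float value = Mathf.Pow(Mathf.Abs((x - h) / a), n)
                            + Mathf.Pow(Mathf.Abs((y - k) / b), n);
                if (value <= threshold)
                {
                    texture.SetPixel(x, y, color);
                }
            }
        }
    }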

Painting a Rounded Rectangle

The same equation can be used to create our rounded rectangle card base shape by varying the value of n.

Painting a Tiling Design

Using this PaintDiamond method, we can draw five diamonds to create the tiling texture for the design on the back of our card.

tiling design and tiled back side of the card

The code for drawing the tiling design is as below.
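A sketch of it: one diamond in the centre and one at each corner, so the quarters join into whole diamonds when the texture repeats (all proportions are tuning values):

    void PaintTilingDesign(Texture2D texture, Color color)
    {
        float half = resolution * 0.5f;
        float r = resolution * 0.2f;
        float n = 0.8f; // below 1 for the curvy sides

        // Centre diamond, slightly taller than wide (b > a).
        PaintDiamond(texture, half, half, 1f, 1.6f, r, n, color);

        // Corner diamonds, clipped by the texture edge; they line up when tiled.
        PaintDiamond(texture, 0f, 0f, 1f, 1.6f, r, n, color);
        PaintDiamond(texture, resolution, 0f, 1f, 1.6f, r, n, color);
        PaintDiamond(texture, 0f, resolution, 1f, 1.6f, r, n, color);
        PaintDiamond(texture, resolution, resolution, 1f, 1.6f, r, n, color);
    }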

6. Creating the Spades Shape

The spades shape is just the vertical flip of our heart shape along with a base shape. This base shape will be the same for the clubs shape as well. The figure below illustrates how we can use two circles to create this base shape.

primitive shapes used to define spades shape

The PaintSpades method will be as shown below.
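A sketch of PaintSpades. Here the flipped heart is drawn inline, and the stem's concave flare approximates the two-circle construction from the figure; all proportions are rough tuning values:

    void PaintSpades(Texture2D texture, Color color)
    {
        float cx = resolution * 0.5f;

        // Vertically flipped heart: lobes in the middle, tip pointing up.
        float r = resolution * 0.22f;
        PaintCircle(texture, resolution * 0.3f, resolution * 0.5f, r, color);
        PaintCircle(texture, resolution * 0.7f, resolution * 0.5f, r, color);

        float tipY = resolution * 0.95f;
        float baseY = resolution * 0.5f;
        for (int y = (int)baseY; y < (int)tipY; y++)
        {
            float t = (tipY - y) / (tipY - baseY); // 1 at the lobes, 0 at the tip
            float halfWidth = t * resolution * 0.45f;
            for (int x = (int)(cx - halfWidth); x <= (int)(cx + halfWidth); x++)
            {
                texture.SetPixel(x, y, color);
            }
        }

        PaintStem(texture, color);
    }

    // The stem shared by the spades and clubs shapes: it flares out towards
    // the bottom with concave sides.
    void PaintStem(Texture2D texture, Color color)
    {
        float cx = resolution * 0.5f;
        float stemHeight = resolution * 0.3f;
        for (int y = 0; y < (int)stemHeight; y++)
        {
            float t = 1f - y / stemHeight;                // 1 at the bottom, 0 at the top
            float halfWidth = resolution * 0.16f * t * t  // concave flare
                            + resolution * 0.02f;         // minimum width
            for (int x = (int)(cx - halfWidth); x <= (int)(cx + halfWidth); x++)
            {
                texture.SetPixel(x, y, color);
            }
        }
    }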

7. Creating the Clubs Shape

At this point, I am sure that you can figure out how easy it has become to create the clubs shape. All we need are two circles and the base shape we created for the spades shape.

primitive shapes used to define clubs shape

The PaintClubs method will be as shown below.
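A sketch of PaintClubs, reusing PaintCircle and the PaintStem helper from the spades sketch (the leaves are drawn here as three overlapping circles; the exact arrangement is a tuning choice):

    void PaintClubs(Texture2D texture, Color color)
    {
        float r = resolution * 0.2f;

        // Three overlapping circles form the leaves.
        PaintCircle(texture, resolution * 0.5f, resolution * 0.72f, r, color);
        PaintCircle(texture, resolution * 0.32f, resolution * 0.45f, r, color);
        PaintCircle(texture, resolution * 0.68f, resolution * 0.45f, r, color);

        // The same stem as the spades shape.
        PaintStem(texture, color);
    }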

8. Packing Textures

If you explore the Unity source files for this project, you’ll find a TextureManager class which does all the heavy lifting. Once we have created all the necessary textures, the TextureManager class uses the PackTextures method to combine them into a single texture, thereby reducing the number of draw calls required when we use these shapes.

Using the packedAssets array, we can retrieve the bounding boxes of individual textures from the master texture named packedTexture.
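A sketch of how that might look (packedTexture and packedAssets follow the names above; allTextures and heartIndex are hypothetical):

    // Pack the individual shape textures into one atlas.
    Texture2D packedTexture = new Texture2D(1024, 1024);
    Rect[] packedAssets = packedTexture.PackTextures(allTextures, 2, 1024);

    // PackTextures returns normalized UV rects; convert one back to pixels
    // to create a sprite from the atlas.
    Rect uv = packedAssets[heartIndex];
    Rect pixelRect = new Rect(
        uv.x * packedTexture.width, uv.y * packedTexture.height,
        uv.width * packedTexture.width, uv.height * packedTexture.height);
    Sprite heart = Sprite.Create(packedTexture, pixelRect, new Vector2(0.5f, 0.5f));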

Conclusion

With all the necessary components created, we can proceed to create our deck of cards as it is just a matter of properly laying out the shapes. We can either use the Unity UI to composite cards or we can create the cards as individual textures. You can explore the sample code to understand how I have used the first method to create card layouts.

We can follow the same method for creating any kind of dynamic art at runtime in Unity. Creating art at runtime is a performance-hungry operation, but it only needs to be done once if we save and reuse those textures efficiently. By packing the dynamically created assets into a single texture, we also gain the advantages of using a texture atlas.

Now that we have our playing card deck, let me know what games you are planning to create with it.


Interactive Storytelling: Non-Linear

In this final part of our series about interactive storytelling, we’ll talk about the future of storytelling in video games.

Non-Linear Interactive Storytelling

Or the Philosopher’s Stone

Non-linear interactive storytelling is similar to the philosopher’s stone: everybody talks about it, everybody wants it, but no one has found it yet.

Let’s start with the definition: what is non-linear interactive storytelling? 

It’s simple: this is storytelling that changes based on the player’s choices. In the previous article, we discussed linear interactive storytelling and how it gives the player only the illusion of choice. Of course, there are some really sophisticated games that give a better illusion about freedom and choices and even the chance to really change the course of the story. But still, it is an illusion.

Bioshock gameplay
Bioshock gives players some interesting choices, but they are gameplay and narrative related, not story related.

So the best definition of non-linear interactive storytelling is a way to break this illusion and give the player real choices. However, this requires some advanced technology: Artificial Intelligence.

That’s because true non-linear interactive storytelling requires an AI capable of reacting to the player’s actions. As in real life. The theory is quite simple, on paper. The player does something in the game’s world, and the world and everybody inside it will react.

But, of course, creating a system like that is nearly impossible with the current technology, because of the complex calculations needed. We’re talking about totally removing the scripted part from a game! And right now at least 90% of a game is scripted.

In the second part, we talked about Zelda Breath of the Wild. That, I think, is a starting point: a game where the developers set out rules about the world, and the player can play with them freely.

Extend this idea to all elements, and you’ll have the illusion broken.

Again: this has never been done, but I’m sure somebody will do it in the future. Maybe with the next console generation, when the calculation power increases.

Okay, that’s the future. But what about today?

Today, there are some games that are trying to create a non-linear experience. I’ll talk about two of them, as examples.

The first is an AAA game. Everybody knows it: Detroit Become Human. In his most recent game, David Cage is trying really hard to give the player a lot of forks and choices. Yes, it’s still an illusion, but it’s the game that I know with the highest number of narrative forks. While playing this game, you have the feeling that every choice matters. Even the small ones.

Detroit Become Human narrative forks
This flowchart shows all the choices in one scene of Detroit Become Human

Don’t get me wrong: it’s all scripted. And I think that’s the wrong way to achieve real non-linear storytelling. But it’s one of the games that comes closest. It’s great to play without trying to discover the hidden script, just to “live” the experience.

Of course, the game itself, chapter after chapter, will show you a flowchart about the forks. And you will know where the narrative joints are. But, really: if you can, try it without thinking about that, and you’ll have a better illusion of choice in the game.

The second game is an indie experiment: Avery. It’s an experimental game based on an AI. It’s free, it’s for both iOS and Android, and you must try it. It’s a game where the AI will respond to the player dynamically. And that’s the right way, I’m sure, to achieve true non-linear interactive storytelling.

Avery gameplay
Avery in all its splendor

Of course, keep in mind that it’s an indie game and it’s free. But it’s amazing. It’s an “Artificial Intelligence Conversation”. Those among you who are a little older (like me) will surely remember Eliza, the first chatterbot. Avery is an evolution of that. You’ll talk with an AI that has lost its memory and is scared because something is wrong. Again: try it because, playing Avery, you can see one of the first steps towards our philosopher’s stone.

Direct and Indirect Mode

As I said at the start of the article, we have the theory. More theory than we really need, probably—that’s because we can’t work on the real part, so we’re talking, writing and thinking too much about it.

But a good part of this theory is found in some interesting papers and books. In those you will find two main definitions: direct mode and indirect mode. These are the ways in which the storytelling should react to the player’s actions.

Direct mode is quite simple: the player does something, and the story will change in response. Action -> reaction. This is the way, for example, most tabletop role-playing games work.

The Game Master explains a situation -> the player makes a choice -> the Game Master tells how the story reacts to that choice.

The two games that I gave as examples before also work in this way. And when we have a full non-linear interactive storytelling game, I guess this mode will be the most common.

Also note that the majority of linear storytelling works this way: there is a setting, with a conflict, the character does something, and the story (the ambient environment, the villain, or some other character) reacts.

But there is a more sophisticated way to tell a non-linear story: indirect mode.

This is more like how the real world works. You do something, which causes a small direct reaction, which engages a chain reaction that can go on to have effects in distant places.

This is the so-called “butterfly effect”. You will discover that this type of storytelling works only if there is not a real “main character”. Because, in the real world, everyone is the main character of his or her own story. But there are billions of stories told every second around the world. And each story, somehow, is influenced by all the other stories.

Back to gaming, there are already games that use this concept: MMOs. Think about World of Warcraft: there is no main character, and the “total story” (the sum of all stories about all characters) is a complex web that links all individual stories. So actually, in the first part of this article, I lied: there is already a way to create non-linear interactive storytelling, and that’s to put the domain of the story in the players’ hands!

World of Warcraft gameplay
World of Warcraft is a place where the stories are told between players.

Of course, in World of Warcraft, there are still scripted parts (the enemies, the quests, the NPCs, etc.), and that’s why WoW is not an example of true non-linear interactive storytelling. But when the players have the ability to create their own story, there is not only non-linear storytelling, but also it’s told in indirect mode.

So think about this: some day, in the near future, we’ll have a game where the AI will be so advanced that we’ll play with it in the same way we play with the other humans in WoW.

That’s the goal. That’s true non-linear interactive storytelling.

Conclusion

I started writing this series of articles almost six months ago. It’s been a labour of love, and I’m thankful to Envato Tuts+ for encouraging me to pursue it. This is a topic I really care about, and there are a lot of things that I had to cut to keep the series to only three parts. 

If you are interested, though, there are lots of articles and videos on this topic. For example, I could have talked about ludonarrative dissonance (look it up!). I also had to cut a big part about Florence (a really great linear interactive storytelling game—again, try it if you can). And so on.

However, I’m happy to have this series wrapped up, and I hope you’ve enjoyed the articles and will find them useful.

Interactive storytelling is, in my opinion, one of the big challenges that the industry will face in its next step. In the last two console generations, we saw incredible advances in graphics and gameplay. Now it’s time to think about the story. Because, you know, stories matter.


Creating Toon Water for the Web: Part 3

Welcome back to this three-part series on creating stylized toon water in PlayCanvas using vertex shaders. In Part 2 we covered buoyancy & foam lines. In this final part, we’re going to apply the underwater distortion as a post-process effect.

Refraction & Post-Process Effects

Our goal is to visually communicate the refraction of light through water. We’ve already covered how to create this sort of distortion in a fragment shader in a previous tutorial for a 2D scene. The only difference here is that we’ll need to figure out which area of the screen is underwater and only apply the distortion there. 

Post-Processing

In general, a post-process effect is anything applied to the whole scene after it is rendered, such as a colored tint or an old CRT screen effect. Instead of rendering your scene directly to the screen, you first render it to a buffer or texture, and then render that to the screen, passing through a custom shader.

In PlayCanvas, you can set up a post-process effect by creating a new script. Call it Refraction.js, and copy this template to start with:
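A sketch of what that template might look like, following PlayCanvas's usual post-effect pattern (pc.inherits, pc.extend and pc.drawFullscreenQuad reflect the engine API of the time; treat the details as assumptions):

    //--------------- POST EFFECT DEFINITION ---------------//
    var RefractionPostEffect = function (graphicsDevice, vs, fs) {
        // Compile the shader from the vertex and fragment source we pass in.
        this.shader = new pc.Shader(graphicsDevice, {
            attributes: { aPosition: pc.SEMANTIC_POSITION },
            vshader: vs,
            fshader: fs
        });
    };

    // Our effect extends pc.PostEffect.
    RefractionPostEffect = pc.inherits(RefractionPostEffect, pc.PostEffect);

    RefractionPostEffect.prototype = pc.extend(RefractionPostEffect.prototype, {
        // Called every frame: re-render the input target through our shader.
        render: function (inputTarget, outputTarget, rect) {
            var device = this.device;
            device.scope.resolve("uColorBuffer").setValue(inputTarget.colorBuffer);
            pc.drawFullscreenQuad(device, outputTarget, this.vertexBuffer, this.shader, rect);
        }
    });

    //--------------- SCRIPT DEFINITION ---------------//
    var Refraction = pc.createScript('refraction');

    Refraction.attributes.add('vs', { type: 'asset', assetType: 'shader', title: 'Vertex Shader' });
    Refraction.attributes.add('fs', { type: 'asset', assetType: 'shader', title: 'Fragment Shader' });

    Refraction.prototype.initialize = function () {
        var effect = new RefractionPostEffect(this.app.graphicsDevice, this.vs.resource, this.fs.resource);

        // Add the effect to this entity's camera queue.
        this.entity.camera.postEffects.addEffect(effect);
    };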

This is just like a normal script, but we define a RefractionPostEffect class that can be applied to the camera. This needs a vertex and a fragment shader to render. The attributes are already set up, so let’s create Refraction.frag with this content:
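A minimal pass-through version might look like this:

    precision mediump float;

    uniform sampler2D uColorBuffer; // the rendered scene
    varying vec2 vUv0;

    void main() {
        // Sample the scene and output it unchanged.
        vec4 color = texture2D(uColorBuffer, vUv0);
        gl_FragColor = color;
    }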

And Refraction.vert with a basic vertex shader:
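For example:

    attribute vec2 aPosition;
    varying vec2 vUv0;

    void main() {
        // Map the full-screen quad from clip space [-1, 1] into UV space [0, 1].
        vUv0 = (aPosition.xy + 1.0) * 0.5;
        gl_Position = vec4(aPosition, 0.0, 1.0);
    }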

Now attach the Refraction.js script to the camera, and assign the shaders to the appropriate attributes. When you launch the game, you should see the scene exactly as it was before. This is a blank post effect that simply re-renders the scene. To verify that this is working, try giving the scene a red tint.

In Refraction.frag, instead of simply returning the color, try setting the red component to 1.0, which should look like the image below.
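That is, inside main():

    vec4 color = texture2D(uColorBuffer, vUv0);
    color.r = 1.0; // force the red channel to full to confirm the effect is running
    gl_FragColor = color;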

Scene rendered with a red tint

Distortion Shader

We need to add a time uniform for the animated distortion, so go ahead and create one in Refraction.js, inside this constructor for the post effect:
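For example:

    // In the RefractionPostEffect constructor:
    this.time = 0;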

Now, inside this render function, we pass it to our shader and increment it:
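A sketch of those two lines (the increment is an assumed fixed step, since the render callback receives no delta time):

    // In the render function, before drawing the full-screen quad:
    device.scope.resolve("uTime").setValue(this.time);
    this.time += 1 / 60;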

Now we can use the same shader code from the water distortion tutorial, making our full fragment shader look like this:
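A sketch in that spirit; the wave constants are tuning values:

    precision mediump float;

    uniform sampler2D uColorBuffer;
    uniform float uTime;
    varying vec2 vUv0;

    void main() {
        vec2 pos = vUv0;

        // Offset the sample position with animated sine waves.
        float X = pos.x * 15.0 + uTime * 0.5;
        float Y = pos.y * 15.0 + uTime * 0.5;
        pos.y += cos(X + Y) * 0.01 * cos(Y);
        pos.x += sin(X - Y) * 0.01 * sin(Y);

        vec4 color = texture2D(uColorBuffer, pos);
        gl_FragColor = color;
    }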

If it all worked out, everything should now look as if it’s underwater, as below.

Underwater distortion applied to the whole scene

Challenge #1: Make the distortion only apply to the bottom half of the screen.

Camera Masks

We’re almost there. All we need to do now is to apply this distortion effect just on the underwater part of the screen. The most straightforward way I’ve come up with to do this is to re-render the scene with the water surface rendered as a solid white, as shown below.

Water surface rendered as a solid white to act as a mask

This would be rendered to a texture that would act as a mask. We would then pass this texture to our refraction shader, which would only distort a pixel in the final image if the corresponding pixel in the mask is white.

Let’s add a boolean attribute on the water surface to know if it’s being used as a mask. Add this to Water.js:
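For example, using PlayCanvas's script attribute API:

    Water.attributes.add('isMask', { type: 'boolean', default: false, title: 'Is Mask?' });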

We can then pass it to the shader with material.setParameter('isMask',this.isMask); as usual. Then declare it in Water.frag and set the color to white if it’s true.
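A sketch of the shader side:

    uniform bool isMask;

    // ...at the end of main(), after the final colour has been computed:
    if (isMask) {
        gl_FragColor = vec4(1.0); // render solid white for the mask pass
    }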

Confirm that this works by toggling the “Is Mask?” property in the editor and relaunching the game. It should look white, as in the earlier image.

Now, to re-render the scene, we need a second camera. Create a new camera in the editor and call it CameraMask. Duplicate the Water entity in the editor as well, and call it WaterMask. Make sure “Is Mask?” is false for the Water entity but true for the WaterMask.

To tell the new camera to render to a texture instead of the screen, create a new script called CameraMask.js and attach it to the new camera. We create a RenderTarget to capture this camera’s output like this:
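A sketch of that initialize, assuming the engine API of the time:

    var CameraMask = pc.createScript('cameraMask');

    CameraMask.prototype.initialize = function () {
        var device = this.app.graphicsDevice;

        // A colour texture the size of the screen.
        var colorBuffer = new pc.Texture(device, {
            width: device.width,
            height: device.height,
            format: pc.PIXELFORMAT_R8_G8_B8_A8
        });
        colorBuffer.minFilter = pc.FILTER_LINEAR;
        colorBuffer.magFilter = pc.FILTER_LINEAR;

        // Point this camera at the texture instead of the screen.
        this.entity.camera.renderTarget = new pc.RenderTarget(device, colorBuffer, { depth: true });
    };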

Now, if you launch, you’ll see this camera is no longer rendering to the screen. We can grab the output of its render target in Refraction.js like this:
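One way this might look in Refraction.js's initialize:

    var cameraMask = this.app.root.findByName('CameraMask');
    var maskBuffer = cameraMask.camera.renderTarget.colorBuffer;

    var effect = new RefractionPostEffect(this.app.graphicsDevice,
        this.vs.resource, this.fs.resource, maskBuffer);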

Notice that I pass this mask texture as an argument to the post effect constructor. We need to create a reference to it in our constructor, so it looks like:
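A sketch:

    var RefractionPostEffect = function (graphicsDevice, vs, fs, buffer) {
        // ...shader creation as before...
        this.buffer = buffer; // the mask camera's colour buffer
    };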

Finally, in the render function, pass the buffer to our shader with:
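For example:

    device.scope.resolve("uMaskBuffer").setValue(this.buffer);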

Now to verify that this is all working, I’ll leave that as a challenge.

Challenge #2: Render the uMaskBuffer to the screen to confirm it is the output of the second camera.

One thing to be aware of is that the render target is set up in the initialize of CameraMask.js, and that needs to be ready by the time Refraction.js is called. If the scripts run the other way around, you’ll get an error. To make sure they run in the right order, drag the CameraMask to the top of the entity list in the editor, as shown below.

PlayCanvas editor with CameraMask at top of entity list

The second camera should always be looking at the same view as the original one, so let’s make it always follow its position and rotation in the update of CameraMask.js:
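A sketch of that update:

    CameraMask.prototype.update = function (dt) {
        // Mirror the main camera every frame so both render the same view.
        this.entity.setPosition(this.CameraToFollow.getPosition());
        this.entity.setRotation(this.CameraToFollow.getRotation());
    };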

And define CameraToFollow in the initialize:
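For example ('Camera' is assumed to be the name of the original camera entity):

    this.CameraToFollow = this.app.root.findByName('Camera');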

Culling Masks

Both cameras are currently rendering the same thing. We want the mask camera to render everything except the real water, and we want the real camera to render everything except the mask water.

To do this, we can use the camera’s culling bit mask. This works similarly to collision masks if you’ve ever used those. An object will be culled (not rendered) if the result of a bitwise AND between its mask and the camera’s culling mask is zero.

Let’s say the Water will have bit 2 set, and WaterMask will have bit 3. Then the real camera needs to have all bits set except for 3, and the mask camera needs to have all bits set except for 2. An easy way to say “all bits except N” is to do:
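In JavaScript, that's a bitwise NOT of a shifted bit (the unsigned shift keeps it a 32-bit value):

    // all bits set except bit N
    var mask = ~(1 << N) >>> 0;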

You can read more about bitwise operators here.

To set up the camera culling masks, we can put this inside CameraMask.js’s initialize at the bottom:
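A sketch; exactly where the culling mask is exposed can differ between engine versions, so treat the property path as an assumption:

    // mask camera: render everything except the real water (bit 2)
    this.entity.camera.camera.cullingMask &= ~(1 << 2) >>> 0;

    // original camera: render everything except the mask water (bit 3)
    this.CameraToFollow.camera.camera.cullingMask &= ~(1 << 3) >>> 0;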

Now, in Water.js, set the Water mesh’s mask on bit 2, and the mask version of it on bit 3:
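Something like this, wherever the water's mesh instance is available:

    // meshInstance is the water surface's pc.MeshInstance
    meshInstance.mask = 0;
    meshInstance.mask |= 1 << (this.isMask ? 3 : 2);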

Now, one view will have the normal water, and the other will have the solid white water. The left half of the image below is the view from the original camera, and the right half is from the mask camera.

Split view of mask camera and original camera

Applying the Mask

One final step now! We know the areas underwater are marked with white pixels. We just need to check if we’re not at a white pixel, and if so, turn off the distortion in Refraction.frag:
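A sketch, reusing the names from the distortion shader above:

    uniform sampler2D uMaskBuffer;

    // inside main(), after computing the distorted position 'pos':
    vec4 maskColor = texture2D(uMaskBuffer, pos);
    if (maskColor != vec4(1.0)) {
        pos = vUv0; // not underwater, so sample the undistorted position
    }
    gl_FragColor = texture2D(uColorBuffer, pos);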

And that should do it!

One thing to note is that since the texture for the mask is initialized on launch, if you resize the window at runtime, it will no longer match the size of the screen.

Anti-Aliasing

One optional clean-up step: you might have noticed that edges in the scene now look a little sharp. This is because when we applied our post effect, we lost anti-aliasing.

We can apply an additional anti-alias on top of our effect as another post effect. Luckily, there’s one available in the PlayCanvas store we can just use. Go to the script asset page, click the big green download button, and choose your project from the list that appears. The script will appear in the root of your asset window as posteffect-fxaa.js. Just attach this to the Camera entity, and your scene should look a little nicer! 

Final Thoughts

If you've made it this far, give yourself a pat on the back! We covered a lot of techniques in this series. You should now be comfortable with vertex shaders, rendering to textures, applying post-processing effects, selectively culling objects, using the depth buffer, and working with blending and transparency. Even though we were implementing this in PlayCanvas, these are all general graphics concepts you'll find in some form on whatever platform you end up using.

All these techniques are also applicable to a variety of other effects. One particularly interesting application I’ve found of vertex shaders is in this talk on the art of Abzu, where they explain how they used vertex shaders to efficiently animate tens of thousands of fish on screen.

You should now also have a nice water effect you can apply to your games! You could easily customize it now that you’ve put together every detail yourself. There’s still a lot more you can do with water (I haven’t even mentioned any sort of reflection at all). Below are a couple of ideas.

Noise-Based Waves

Instead of simply animating the waves with a combination of sines and cosines, you can sample a noise texture to make the waves look a bit more natural and unpredictable.

Dynamic Foam Trails

Instead of completely static water lines on the surface, you could draw onto that texture when objects move, to create a dynamic foam trail. There are a lot of ways to go about doing this, so this could be its own project.

Source Code

You can find the finished hosted PlayCanvas project here. A Three.js port is also available in this repository.

Posted on Leave a comment

Creating Toon Water for the Web: Part 2

Welcome back to this three-part series on creating stylized toon water in PlayCanvas using vertex shaders. In Part 1, we covered setting up our environment and water surface. This part will cover applying buoyancy to objects, adding water lines to the surface, and using the depth buffer to create foam lines around the edges of objects that intersect the surface.

I made some small changes to my scene to make it look a little nicer. You can customize your scene however you like, but what I did was:

  • Added the lighthouse and the octopus models.
  • Added a ground plane with color #FFA457.
  • Added a clear color for the camera of #6CC8FF.
  • Added an ambient color to the scene of #FFC480 (you can find this in the scene settings).

Below is what my starting point now looks like.

The scene now includes an octopus and a lighthouse

Buoyancy 

The most straightforward way to create buoyancy is just to create a script that will push objects up and down. Create a new script called Buoyancy.js and set its initialize to:
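A sketch of the setup, remembering the starting transform so we can oscillate around it:

    var Buoyancy = pc.createScript('buoyancy');

    Buoyancy.prototype.initialize = function () {
        this.startPosition = this.entity.getPosition().clone();
        this.startRotation = this.entity.getEulerAngles().clone();
        this.time = 0;
    };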

Now, in the update, we increment time and rotate the object:
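The amplitudes and frequencies below are just starting values to tune:

    Buoyancy.prototype.update = function (dt) {
        this.time += dt;

        // bob up and down around the starting height
        var pos = this.entity.getPosition().clone();
        pos.y = this.startPosition.y + Math.cos(this.time) * 0.07;
        this.entity.setPosition(pos);

        // rock gently from side to side
        this.entity.setEulerAngles(
            Math.cos(this.time * 0.25) * 2.0,
            this.startRotation.y,
            this.startRotation.z
        );
    };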

Apply this script to your boat and watch it bob up and down in the water! You can apply it to several objects, including the camera (try it!).

Texturing the Surface

Right now, the only way you can see the waves is by looking at the edges of the water surface. Adding a texture helps make motion on the surface more visible and is a cheap way to simulate reflections and caustics.

You can try to find a caustics texture or make your own. Here's one I drew in Gimp that you can freely use. Any texture will work as long as it can be tiled seamlessly.

Once you’ve found a texture you like, drag it into your project’s asset window. We need to reference this texture in our Water.js script, so create an attribute for it:
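An asset attribute works for this:

    Water.attributes.add('surfaceTexture', {
        type: 'asset',
        assetType: 'texture',
        title: 'Surface Texture'
    });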

And then assign it in the editor:

The water texture is added to the water script

Now we need to pass it to our shader. Go to Water.js and set a new parameter in the CreateWaterMaterial function:
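Assuming the attribute above, that's:

    material.setParameter('uSurfaceTexture', this.surfaceTexture.resource);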

Now go into Water.frag and declare our new uniform:
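That's just:

    uniform sampler2D uSurfaceTexture;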

We're almost there. To render the texture onto the plane, we need to know where each pixel is along the mesh, which means we need to pass some data from the vertex shader to the fragment shader.

Varying Variables

A varying variable allows you to pass data from the vertex shader to the fragment shader. This is the third type of special variable you can have in a shader (the other two being uniform and attribute). It is defined for each vertex and is accessible by each pixel. Since there are a lot more pixels than vertices, the value is interpolated between vertices (this is where the name "varying" comes from: it varies between the values you give it).

To try this out, declare a new variable in Water.vert as a varying:
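For example:

    varying vec4 ScreenPosition;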

And then set it to gl_Position after it’s been computed:
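Inside main(), right after the position is written:

    ScreenPosition = gl_Position;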

Now go back to Water.frag and declare the same variable. There’s no way to get some debug output from within a shader, but we can use color to visually debug. Here’s one way to do this:
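A sketch of the debug version of Water.frag:

    precision mediump float;

    varying vec4 ScreenPosition;

    void main(void) {
        vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
        if (ScreenPosition.x > 0.0) {
            color = vec4(1.0, 1.0, 1.0, 1.0);
        }
        gl_FragColor = color;
    }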

The plane should now look black and white, where the line separating them is where ScreenPosition.x is 0. Color values only go from 0 to 1, but the values in ScreenPosition can be outside this range. They get automatically clamped, so if you’re seeing black, that could be 0, or negative.

What we’ve just done is passed the screen position of every vertex to every pixel. You can see that the line separating the black and white sides is always going to be in the center of the screen, regardless of where the surface actually is in the world.

Challenge #1: Create a new varying variable to pass the world position instead of the screen position. Visualize it in the same way as we did above. If the color doesn’t change with the camera, then you’ve done this correctly.

Using UVs 

The UVs are the 2D coordinates for each vertex along the mesh, normalized from 0 to 1. This is exactly what we need to sample the texture onto the plane correctly, and it should already be set up from the previous part.

Declare a new attribute in Water.vert (this name comes from the shader definition in Water.js):
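That declaration is just:

    attribute vec2 aUv0;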

And all we need to do is pass it to the fragment shader, so just create a varying and set it to the attribute:
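For example:

    varying vec2 vUv0;

    // inside main():
    vUv0 = aUv0;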

Now we declare the same varying in the fragment shader. To verify it works, we can visualize it as before, so that Water.frag now looks like:
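A sketch of that debug version:

    precision mediump float;

    varying vec2 vUv0;

    void main(void) {
        // visualize the horizontal UV coordinate as a gradient
        gl_FragColor = vec4(vec3(vUv0.x), 1.0);
    }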

And you should see a gradient, confirming that we have a value of 0 at one end and 1 at the other. Now, to actually sample our texture, all we have to do is:
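Namely:

    gl_FragColor = texture2D(uSurfaceTexture, vUv0);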

And you should see the texture on the surface:

Caustics texture is applied to the water surface

Stylizing the Texture

Instead of just setting the texture as our new color, let’s combine it with the blue we had:
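Assuming the blue you picked earlier was something like this:

    vec4 color = vec4(0.0, 0.7, 1.0, 0.5); // your base water color
    color += texture2D(uSurfaceTexture, vUv0);
    gl_FragColor = color;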

This works because the color of the texture is black (0) everywhere except for the water lines. By adding it, we don’t change the original blue color except for the places where there are lines, where it becomes brighter. 

This isn’t the only way to combine the colors, though.

Challenge #2: Can you combine the colors in a way to get the subtler effect shown below?

Water lines applied to the surface with a more subtle color

Moving the Texture

As a final effect, we want the lines to move along the surface so it doesn't look so static. To do this, we use the fact that any value given to the texture2D function outside the 0 to 1 range will wrap around (so that 1.5 and 2.5 both become 0.5). We can increment our position by the time uniform we already set up, and multiply the position to increase or decrease the density of the lines on our surface, making our final frag shader look like this:
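A sketch; the tiling factor and scroll speed are just values to play with:

    precision mediump float;

    uniform sampler2D uSurfaceTexture;
    uniform float uTime;
    varying vec2 vUv0;

    void main(void) {
        vec4 color = vec4(0.0, 0.7, 1.0, 0.5); // your base water color

        // tile the texture, then scroll it over time
        vec2 pos = vUv0 * 2.0;
        pos.y += uTime * 0.02;
        color += texture2D(uSurfaceTexture, pos);

        gl_FragColor = color;
    }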

Foam Lines & the Depth Buffer

Rendering foam lines around objects in water makes it far easier to see how objects are immersed and where they cut the surface. It also makes our water look a lot more believable. To do this, we need to somehow figure out where the edges are on each object, and do this efficiently.

The Trick

What we want is to be able to tell, given a pixel on the surface of the water, whether it’s close to an object. If so, we can color it as foam. There’s no straightforward way to do this (that I know of). So to figure this out, we’re going to use a helpful problem-solving technique: come up with an example we know the answer to, and see if we can generalize it. 

Consider the view below.

Lighthouse in water

Which pixels should be part of the foam? We know it should look something like this:

Lighthouse in water with foam

So let’s think about two specific pixels. I’ve marked two with stars below. The black one is in the foam. The red one is not. How can we tell them apart inside a shader?

Lighthouse in water with two marked pixels

What we know is that even though those two pixels are close together in screen space (both are rendered right on top of the lighthouse body), they’re actually far apart in world space. We can verify this by looking at the same scene from a different angle, as shown below.

Viewing the lighthouse from above

Notice that the red star isn't on top of the lighthouse body as it appeared, but the black star actually is. We can tell them apart using the distance to the camera, commonly referred to as "depth", where a depth of 1 means it's very close to the camera and a depth of 0 means it's very far. But it's not just a matter of the absolute world distance, or depth, to the camera. It's the depth compared to that of the pixel behind it.

Look back to the first view. Let’s say the lighthouse body has a depth value of 0.5. The black star’s depth would be very close to 0.5. So it and the pixel behind it have similar depth values. The red star, on the other hand, would have a much larger depth, because it would be closer to the camera, say 0.7. And yet the pixel behind it, still on the lighthouse, has a depth value of 0.5, so there’s a bigger difference there.

This is the trick. When the depth of the pixel on the water surface is close enough to the depth of the pixel it’s drawn on top of, we’re pretty close to the edge of something, and we can render it as foam. 

So we need more information than is available in any given pixel. We somehow need to know the depth of the pixel that it’s about to be drawn on top of. This is where the depth buffer comes in.

The Depth Buffer

You can think of a buffer, or a framebuffer, as just an off-screen render target, or a texture. You would want to render off-screen when you’re trying to read data back, a technique that this smoke effect employs.

The depth buffer is a special render target that holds information about the depth values at each pixel. Remember that the value in gl_Position computed in the vertex shader was a screen space value, but it also had a third coordinate, a Z value. This Z value is used to compute the depth which is written to the depth buffer. 

The purpose of the depth buffer is to draw our scene correctly, without the need to sort objects back to front. Every pixel that is about to be drawn first consults the depth buffer. If its depth value is greater than the value in the buffer, it is drawn, and its own value overwrites the one in the buffer. Otherwise, it is discarded (because it means another object is in front of it).

You can actually turn off depth testing to see how things would look without it. Try this in Water.js:
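The material flag is:

    material.depthTest = false;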

You’ll see how the water will always be rendered on top, even if it is behind opaque objects.

Visualizing the Depth Buffer

Let’s add a way to visualize the depth buffer for debugging purposes. Create a new script called DepthVisualize.js. Attach this to your camera. 

All we have to do to get access to the depth buffer in PlayCanvas is to say:
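The exact call has moved around between engine versions, so treat this as an assumption and check your version's docs; at the time, it looked roughly like:

    // in DepthVisualize.js
    DepthVisualize.prototype.initialize = function () {
        this.entity.camera.camera.requestDepthMap();
    };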

This will then automatically inject a uniform into all of our shaders that we can use by declaring it as:
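The uniform name assumed here is the one the engine used at the time:

    uniform sampler2D uDepthMap;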

Below is a sample script that requests the depth map and renders it on top of our scene. It’s set up for hot-reloading. 

Try copying that in, and comment/uncomment the line this.app.scene.drawCalls.push(this.command); to toggle the depth rendering. It should look something like the image below.

Boat and lighthouse scene rendered as a depth map

Challenge #3: The water surface is not drawn into the depth buffer. The PlayCanvas engine does this intentionally. Can you figure out why? What’s special about the water material? To put it another way, based on our depth checking rules, what would happen if the water pixels did write to the depth buffer?

Hint: There is one line you can change in Water.js that will cause the water to be written to the depth buffer.

Another thing to notice is that I multiply the depth value by 30 in the embedded shader in the initialize function. This is just to be able to see it clearly, because otherwise the range of values is too small to see as shades of color.

Implementing the Trick

The PlayCanvas engine includes a bunch of helper functions to work with depth values, but at the time of writing they hadn't yet been released to production, so we're just going to set these up ourselves.

Add the following uniforms to Water.frag:
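The first is the depth map itself; the others are camera values we'll pass in ourselves, so the names are our own:

    uniform sampler2D uDepthMap;   // scene depth rendered by the engine
    uniform vec3 uCameraPosition;  // our own: the camera's world position
    uniform float uCameraFar;      // our own: the camera's far clip distance
    uniform vec4 uScreenSize;      // engine-provided: width, height, 1/width, 1/height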

Define these helper functions above the main function:
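A sketch of the helpers. How the engine packs depth varies, so the assumption here is that the depth texture stores a normalized camera distance (0 at the camera, 1 at the far plane):

    // depth of this water fragment, computed from its world position
    float getLinearDepth(vec3 worldPos) {
        return length(worldPos - uCameraPosition) / uCameraFar;
    }

    // depth of whatever is already drawn behind the current pixel
    float getLinearScreenDepth(vec2 screenUv) {
        return texture2D(uDepthMap, screenUv).r;
    }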

Pass some information about the camera to the shader in Water.js. Put this where you pass other uniforms like uTime:
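Assuming the uniform names above and a camera entity named 'Camera':

    var camera = this.app.root.findByName('Camera');
    var p = camera.getPosition();
    material.setParameter('uCameraPosition', [p.x, p.y, p.z]);
    material.setParameter('uCameraFar', camera.camera.farClip);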

Finally, we need the world position for each pixel in our frag shader. We need to get this from the vertex shader. So define a varying in Water.frag:
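In Water.frag, that's:

    varying vec3 WorldPosition;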

Define the same varying in Water.vert. Then set it to the distorted position in the vertex shader, so the full code would look like:
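A sketch of the whole vertex shader; the displacement line stands in for whatever wave function you built in Part 1:

    attribute vec3 aPosition;
    attribute vec2 aUv0;

    uniform mat4 matrix_model;
    uniform mat4 matrix_viewProjection;
    uniform float uTime;

    varying vec2 vUv0;
    varying vec3 WorldPosition;

    void main(void) {
        vUv0 = aUv0;

        // displace the vertex (your wave motion from Part 1 goes here)
        vec3 pos = aPosition;
        pos.y += cos(uTime + pos.x * 5.0) * sin(uTime + pos.z * 5.0) * 0.05;

        vec4 worldPos = matrix_model * vec4(pos, 1.0);
        WorldPosition = worldPos.xyz;
        gl_Position = matrix_viewProjection * worldPos;
    }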

Actually Implementing the Trick

Now we’re finally ready to implement the technique described at the beginning of this section. We want to compare the depth of the pixel we’re at to the depth of the pixel behind it. The pixel we’re at comes from the world position, and the pixel behind comes from the screen position. So grab these two depths:
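Using the helpers and uniforms above:

    // inside main() in Water.frag
    vec2 screenUv = gl_FragCoord.xy * uScreenSize.zw;
    float worldDepth = getLinearDepth(WorldPosition);
    float screenDepth = getLinearScreenDepth(screenUv);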

Challenge #4: One of these values will never be greater than the other (assuming depthTest = true). Can you deduce which?

We know the foam is going to be where the distance between these two values is small. So let’s render that difference at each pixel. Put this at the bottom of your shader (and make sure the depth visualization script from the previous section is turned off):
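The scale factor below is only there to make the gradient visible:

    float diff = screenDepth - worldDepth;
    gl_FragColor = vec4(vec3(diff * 30.0), 1.0);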

Which should look something like this:

A rendering of the depth difference at each pixel

Which correctly picks out the edges of any object immersed in water in real time! You can of course scale this difference we’re rendering to make the foam look thicker/thinner.

There are now a lot of ways in which you can combine this output with the water surface color to get nice-looking foam lines. You could keep it as a gradient, use it to sample from another texture, or set it to a specific color if the difference is less than or equal to some threshold.

My favorite look is setting it to a color similar to that of the static water lines, so my final main function looks like this:
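Here's a sketch of one way to put it together, with the foam fading in as the depth difference shrinks (the colors and factors are a matter of taste):

    void main(void) {
        // base water color with the moving surface texture, as before
        vec4 color = vec4(0.0, 0.7, 1.0, 0.5);
        vec2 pos = vUv0 * 2.0;
        pos.y += uTime * 0.02;
        color += texture2D(uSurfaceTexture, pos);

        // foam: brighten where the depth difference is small
        vec2 screenUv = gl_FragCoord.xy * uScreenSize.zw;
        float diff = getLinearScreenDepth(screenUv) - getLinearDepth(WorldPosition);
        float foam = clamp(1.0 - diff * 30.0, 0.0, 1.0);
        color.rgb += vec3(0.8, 1.0, 1.0) * foam * 0.5;

        gl_FragColor = color;
    }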

Summary

We created buoyancy on objects floating in the water, we gave our surface a moving texture to simulate caustics, and we saw how we could use the depth buffer to create dynamic foam lines.

To finish this up, the next and final part will introduce post-process effects and how to use them to create the underwater distortion effect.

Source Code

You can find the finished hosted PlayCanvas project here. A Three.js port is also available in this repository.

Posted on Leave a comment

Interactive Storytelling: Linear Storytelling

In the last article we saw where the need for storytelling comes from, which is something intrinsic to humankind, and we said that telling a story means basically to convey a message in order to obtain a response in our listener. 

We also started to examine the tools that we, as game designers, have available to learn how to tell stories. Finally, we mentioned the birth of interactive stories, typical of videogames. 

However, in order to thoroughly address this issue, we have to take a step back and start analysing the classic narrative (or passive narrative).

Passive Narrative

In the past, storytelling has traditionally been considered as a one-way relationship: the author of a story chooses a medium (book, theatrical play, movie, etc.) and uses it to tell a story that will be passively received by the audience.

But is it really like that?

Leaving aside the fact that in ancient times attempts were made to directly engage the public during theatrical performances (such as in experimental Greek theatre), passive narrative must actually be considered, more correctly, a two-stage narrative.

Because, if it is true that the author tells us a story to convey a message and to generate a response in us, then two different stages must be taken into account: reception and elaboration.

The Greek theatre was often experimental.

Whenever we watch a story being told, we are passive, it's true. For example, while watching a movie in the theater, we're usually sitting in the dark, in silence, ready to just "live" the experience that the director and the authors have prepared for us. This first stage, reception, is one-way: the author tells, we listen. We become receivers of the author's message.

However, it’s not unusual to go out of the theater and talk about what we just watched, maybe with our friends or our partner. We comment on the movie, discuss our personal opinions (“I liked it”, “I got bored”, etc.), and often elaborate on the scenes, underlining the details we were most impressed by.

Therefore, we analyse the parts of the author's message that were etched into our brains, the ones that generated the strongest response in us.

It doesn't matter what kind of movie we just watched; this kind of after-reception interaction happens anyway: whether it's a comedy, drama, documentary or action movie, the second stage, elaboration, always happens. Even if we went to the movie by ourselves, we would think about particular scenes and elaborate on them.

The length and intensity of this stage, clearly, can vary depending on how much we liked the movie (that is to say, depending on how much the message from the author managed to create a response in us).

The most famous franchises in the world are the ones that push their fans to wonder and speculate, for example, in between movies, about a character’s origins that haven’t been revealed yet. Thousands of Twitter messages, Facebook groups, YouTube videos, and Reddit threads, for example, have been created by fans after watching Star Wars Episode VII, proposing theories dealing with the mystery of Rey’s parents.

For two years, Star Wars fans talked every day about Rey’s character for Episode VII

When we develop a passion for a story that excites us, it usually happens that we dedicate ten or a hundred times as much time to the second stage compared to the first one.

Let's ask ourselves: why does this second stage even exist? Why does reading a book, closing it, putting it on our nightstand and forgetting about it not seem to be enough? Why do we want, instead, to be directly involved, letting the suspension of disbelief make us live the responses that the author wants to create in us? And why do we then keep trying to interact with that story, reliving and analysing specific parts?

The keyword here is precisely interaction: it is one of the needs of humankind. Without going into too much detail about a complex field such as the human psyche (a field that, however, is increasingly being studied by game designers and authors of movies and books because it is obviously extremely useful in order to calibrate our messages and get exactly the desired responses), one of the fundamental parts of human personality is the ego. And it’s precisely our ego that makes us want to be in the center of the story, or pushes us to discover some contact points between the characters in a story and ourselves. It’s our ego that lets us relate to the characters and make our reactions to the story we’re being told so powerful that they become able to actually affect our reality.

Without the ego, we wouldn’t be moved by reading a dramatic book.

At the same time, the ego leads us to not want to play just a minor role in the story—that is to say, to be just a passive audience.

Without the ego, we wouldn’t be moved by reading a dramatic book

We want, by instinct, to be at the center of the scene (and let’s say that we are also living in a time in which society and technology push us in this direction). Thus, if we can’t edit the story while it’s being told to us, we wish to interact with it anyway, at a later stage.

One of the first authors who understood this mechanism was David Lynch. Perhaps one of the most important authors of the modern era, he is certainly the father of TV series as we know them today. In 1990, when David Lynch began telling the story of an unknown (and fictional) town in the northern United States, Twin Peaks, he was following a hunch: he created a mystery that engaged viewers all over the world and led them to look for a solution. 

The dreamlike puzzle created by Lynch and Frost (the other author of Twin Peaks) kept viewers glued to that story for two and a half years (and later for 30 solid years, because the fans never gave up on that unsolved mystery until the release of a very long-awaited third season just last year). The story brought viewers to interact among themselves: they shared theories and possible scenarios. For the first time in the history of television, the second stage became actually important and, clearly, contributed to the success of Lynch’s work.

Then how can we call this experience passive if, sometimes, the second stage lasts longer and is more intense than the first one?

Twin Peaks changed forever the way to tell stories on TV

You'll agree with me that the definition is inadequate, to say the least. However, it's true that during the narration the audience is passive: throughout the transmission of the author's message, whoever is receiving it can only passively listen. The audience is not able to intervene in the events or shift their focus to minor details that look interesting to them. Furthermore, in the case of media such as cinema and theater, the audience doesn't even have the chance to choose the narrative rhythm: the author's message is delivered in an unstoppable way, like a river in flood that overwhelms the viewers.

From this point of view, videogames are deeply different, and their interactive narration opens up countless possibilities that, before videogames became an established medium, used to be unthinkable.

The Evolution of Storytelling

It’s interesting to note how, looking at the videogame world, older media have always felt a little bit of envy. The authors of a movie or a TV series are clearly aware of how charming interaction can be for the audience, and they know that, generation after generation, classic storytelling is getting less and less appealing.

In the last 30 years, many attempts have been made to hybridize the classic nature of certain media, and some have been more successful than others.

One of the most famous attempts of this kind is the book series Choose Your Own Adventure: books where the story is made of forks in the road in which the reader/player can make choices and often fight against enemies or use a style of interaction very similar to that of tabletop role-playing games.

In the eighties all nerds (like me) read dozens of books like that

Another example is the eighties TV series Captain Power and the Soldiers of the Future, which allowed players, using infra-red devices, to fight against the enemies on the screen and score points; the player's action figure reacted based on the results.

A legendary tagline

A recent example is the interactive episode of Puss in Boots, published on Netflix and designed for tablet: it’s a cartoon for children with choices to make and forks in the road in the story.

The diagram of the Puss in Boots’ branches on Netflix

I’m really curious about what will happen in the future in this respect.

What about you?

Interactive Storytelling

Now that we have looked at traditional (to a certain extent, passive) storytelling, it’s time to go into the very subject of these articles: interactive storytelling.

First of all, let’s try to set things straight: are all games narrative?

To answer, let’s look at a few examples.

Chess is one of the oldest and most popular games in the world. It represents a conflict on a battlefield between two armies and, as many of you will know, chess and go are considered the most strategic games in the world.

However, is "let's simulate a battle" enough to define chess as a narrative game?

No.

Because all the elements we highlighted as fundamental to narration are missing: the narrator is missing, and so is the message.

The same goes for videogames.

There are completely abstract games (like Tetris) and games in which storytelling is a simple expedient for the setting of the game. Consider Super Mario Bros, in its first version. There was a basic story (Bowser has kidnapped Princess Peach and Mario must save her). But there’s no actual storytelling, no narrator, no message.

The reason for the success of Super Mario Bros was certainly not its narrative structure

There are responses, but they are directly provoked by gameplay. In fact, taking away the story from Super Mario Bros doesn’t affect the user experience at all.

The lack of any actual storytelling, however, doesn’t invalidate the quality of the game. On the other hand, adding narration to the structure of the game as it is would probably burden the experience and ruin the perfect balance of the design.

Tellingly, even though texts and cut-scenes have appeared in more modern Super Mario games, the story keeps working as a mere expedient, as a corollary to the gameplay.

When we as designers, therefore, start approaching the design of a new game, we have to ask ourselves a couple of questions:

  1. Does my story (my message) need interactive storytelling?
  2. How can interactive storytelling improve my story?

Answering these questions first will help us understand whether and how to include interactive storytelling in our game.

We may realise that a simple story used as an expedient is enough, or that the game doesn’t need a story at all! The assumption that any modern game should have interactive storytelling is a mistake we have to avoid.

If, instead, the answers are positive, then it’s time to learn how to master the art of interactive storytelling.

Linear Interactive Storytelling

The first kind of interactive storytelling that we are going to consider is the linear one. This definition might, at first sight, appear to be counterintuitive, but it’s actually the most common kind of interactive storytelling.

Videogames using this kind of storytelling allow the player to interact with the events, choosing the narrative rhythm (in the case, for example, of a quest that won’t proceed without the player’s intervention), choosing the order in which to go through the events (for example, when there are two parallel quests active at the same time and the player can decide which one to complete first), or setting the desired level of accuracy (for example, when reading documents and clues in a game is not mandatory but increases the player’s knowledge about the story or the game’s setting).

However, as free as the player may feel, the story eventually goes precisely the way the author intended.

It’s as if the game designer had taken his message and split it into many different pieces to be put together by the player.

Developing this kind of interaction is clearly more complicated than classic storytelling: certain tricks of the trade commonly used in book-writing, for example, cannot be used here. 

Consider one of the most famous games with linear interactive storytelling: The Secret of Monkey Island. It allows players, on a number of occasions, to explore the story and interact with it in the order and rhythm they prefer. There are at least two large open sections where players have multiple tasks to do, following their own hunches and preferences.

Probably the first game thanks to which I approached interactive storytelling

A more recent example is The Legend of Zelda: Breath of the Wild, in which the story is told through flashbacks but it is up to the player to decide which parts of the game will be handled first and thus which pieces of the puzzle will be put together first.

Each part of the story, however, was written so that the pieces could coexist without contradicting or hindering one another.

There’s no need to deal with this kind of problem when writing a book.

To be sure of creating correct interaction, therefore, a game designer has to use certain tools.

When writing a book, one often takes notes and sketches diagrams. Not all authors, I know, take this approach. Some of them are way more spontaneous: they sit in front of the keyboard and start writing.

But when you’re dealing with interactive storytelling, the spontaneous approach is simply not feasible: outlining the story, using flow charts, creating tables and summaries about every character of the story is the necessary starting point.

All these documents, in fact, will be part of the Game Design Document (GDD), which contains all the elements of the game.

Writing this kind of story, without losing track or making mistakes, is definitely complicated. The more diagrams and notes you’ve got, the more you’ll limit the risk of mistakes.

But it won’t be enough.

When writers finish their work, they will usually hand it to a proofreader, who will thoroughly read it and point out mistakes and inconsistencies in the text. Likewise, designers will have to entrust their work to a QA department, made up of different people who will check the story and systematically test every case of interaction, looking for every possible loophole.

Conclusion

And yet… what if we want more? What if we want to give the players the freedom to affect the events and make their experience even more intimate and personal, providing each player with a different response?

In this case we would have to resort to non-linear interactive storytelling that, along with the direct method and the indirect method, will be the subject of the third and last article of this series.

Posted on Leave a comment

Unity Solution for Hitting Moving Targets

Final product image
What You’ll Be Creating

While developing games which involve an action element, we often need to figure out a way to collide with a moving target. Such scenarios are typically called 'hitting a moving target' problems. This is particularly prominent in tower defense games or Missile Command-like games. We may need to create an AI or algorithm which can figure out the enemy's motion and fire at it.

Let’s see how we can solve this particular problem, this time in Unity.

1. The Missile Command Game

For this particular tutorial, we will consider a missile command game. In the game we have a turret on the ground which fires missiles at an incoming asteroid. We should not allow the asteroid to hit the ground. 

The gameplay is tap-based, where we need to tap to aim the turret. With human assistance, the game mechanics are pretty straightforward, as the turret just needs to aim and fire. But imagine if the turret needs to automatically fire at incoming asteroids.

The Challenges for Auto-Firing AI

The turret needs to find out how many asteroids are approaching the ground. Once it has a set of all approaching asteroids, it would then need to do a threat analysis to determine which one to target. A slow-moving asteroid is a lesser threat than a fast-moving one. Also, an asteroid which is closer to the ground is an imminent threat as well. 

These problems can be solved by comparing the speed and position of the incoming asteroids. Once we have determined which one to target, we reach the most complicated problem. When should the turret fire? At which angle should it fire? When should the missile be set to explode after firing? The third question becomes relevant because the missile explosion can also destroy the asteroid and has a bigger radius of effect as well.

To simplify the problem, the turret can decide to fire right away. Then we need to only figure out the angle of firing and distance of detonation. Also, there may be the case where the asteroid has already passed the area where it could be hit, meaning there is no solution!

You should download the Unity source provided along with this tutorial to see the solution in action. We will see how we derive that solution.

2. The Solution

We are going to do a little refresher of our high school mathematics in order to find the solution. It is very straightforward and involves solving a quadratic equation. A quadratic equation looks like ax^2 + bx + c = 0, where x is the variable to be found and it occurs with the highest power of 2.

Analysing the Problem

Let us try to represent our problem diagrammatically. 

diagram of the incoming asteroid and the predicted path of missile

The green line shows the predicted path to be followed by the asteroid. As we are dealing with uniform motion, the asteroid moves with constant velocity. Our turret will need to rotate and fire the missile along the blue path for it to collide with the asteroid at a future time.

For uniform motion, the distance travelled by an object is the product of time and the object’s speed, i.e. D = T x S, where D stands for the distance, T is the time taken to travel D, and S is the speed of travel. Assuming that our asteroid and the missiles would definitely collide, we can find the distance of the blue line followed by the missile in terms of time t. In the same time t, our asteroid will also reach the same position. 

Essentially, in the same time t, the asteroid will reach the collision position from its current position, and the missile will also reach the same collision position in the same time t. So at time t, both the asteroid and the missile would be at the same distance from the turret as they would be colliding with each other.

Enter Math

We can equate the distance from the turret to the asteroid and missile at this future time t in order to derive our quadratic equation with the variable t. Consider two points on a two-dimensional plane with coordinates (x1,y1) and (x2,y2). The distance D between them can be calculated using the equation below.
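    D = sqrt((x2 - x1)^2 + (y2 - y1)^2)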

If we denote the turret position as (Tx,Ty), the missile speed as s and the unknown collision position as (X,Y), then the above equation can be rewritten as:
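    D = sqrt((X - Tx)^2 + (Y - Ty)^2)
    D = s * t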

where t is the time taken for the missile to travel the distance D. Equating both, we get our first equation for unknowns X and Y with another unknown t.
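    (X - Tx)^2 + (Y - Ty)^2 = s^2 * t^2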

We know that the asteroid also reaches the same collision spot (X,Y) in the same time t, and we have the following equations using the horizontal and vertical components of the asteroid’s velocity vector. If the velocity of the asteroid can be denoted by (Vx,Vy) and the current position as (Ax,Ay), then the unknown X and Y can be found as below.
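    X = Ax + Vx * t
    Y = Ay + Vy * t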

Substituting these in the earlier equation gives us a quadratic equation with the single unknown t:
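    (Ax + Vx * t - Tx)^2 + (Ay + Vy * t - Ty)^2 = s^2 * t^2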

Expanding and combining similar terms:
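    (Vx^2 + Vy^2 - s^2) * t^2 + 2 * (Vx * (Ax - Tx) + Vy * (Ay - Ty)) * t + (Ax - Tx)^2 + (Ay - Ty)^2 = 0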

Representing the power of two as ^2 and the multiplication symbol as * may have made the above look like hieroglyphics, but it essentially boils down to the final quadratic equation ax^2 + bx + c = 0, where x is the variable t, a is Vx^2 + Vy^2 - s^2, b is 2 * (Vx * (Ax - Tx) + Vy * (Ay - Ty)), and c is (Ay - Ty)^2 + (Ax - Tx)^2. We used the equations below in the derivation.
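    (a + b)^2 = a^2 + 2*a*b + b^2
    (a - b)^2 = a^2 - 2*a*b + b^2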

Solving the Quadratic Equation

To solve a quadratic equation, we need to calculate the discriminant D using the formula:
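    D = b^2 - 4*a*c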

If the discriminant is less than 0 then there is no solution, if it is 0 then there is a single solution, and if it is a positive number then there are two solutions. Solutions are calculated using the formulas given below.
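    t1 = (-b + sqrt(D)) / (2*a)
    t2 = (-b - sqrt(D)) / (2*a)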

Using these formulas, we can find values for the future time t when the collision will happen. A negative value for t means we have missed the opportunity to fire. The unknowns X and Y can be found by substituting the value of t in their respective equations.

Once we know the collision point, we can rotate our turret to fire the missile, which will hit the moving asteroid after t seconds.

3. Implementing in Unity

For the sample Unity project, I have used the sprite creation feature of the latest Unity version to create the necessary placeholder assets. This can be accessed via Create > Sprites, as shown below.

Implementing in Unity

We have a game script named MissileCmdAI which is attached to the scene camera. It holds the reference to the turret sprite, missile prefab, and asteroid prefab. I am using SimplePool by quill18 to maintain the object pools for missiles and asteroids. It can be found on GitHub. There are component scripts for missile and asteroid which are attached to their prefabs and handle their motion once released.

The Asteroids

Asteroids are randomly spawned at a fixed height but a random horizontal position, and are hurled at a random horizontal position on the ground with a random speed. The frequency of asteroid spawning is controlled using an AnimationCurve. The SpawnAsteroid method in the MissileCmdAI script looks as below:

The Launch method in the Asteroid class is shown below.

As seen in the Update method, once the asteroid has travelled its predetermined distance to the ground, deployDistance, it returns to its object pool. Essentially, this means it has collided with the ground. It does the same in the event of a collision with a missile.

The Targeting

In order for the auto-targeting to work, we need to call the corresponding method frequently to find and target the incoming asteroid. This is done in the MissileCmdAI script in its Start method.
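The original script isn't reproduced here, but the polling setup might look like this, with aiPollTime exposed in the inspector:

    void Start () {
        // look for a target at a fixed interval
        InvokeRepeating ("FindTarget", 0.5f, aiPollTime);
    }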

The FindTarget method loops through all the asteroids present in the scene to find the closest and fastest asteroid. Once found, it calls the AcquireTargetLock method to apply our calculations.

AcquireTargetLock is where the magic happens as we apply our quadratic equation solving skills to find the time of collision t.
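Here's a sketch of the core of that method; turret, missileSpeed and FireMissileAt stand in for the fields and helpers in the actual project:

    void AcquireTargetLock (Asteroid asteroid) {
        Vector2 turretPos = turret.transform.position;
        Vector2 asteroidPos = asteroid.transform.position;
        Vector2 v = asteroid.velocity; // (Vx, Vy)

        // coefficients of a*t^2 + b*t + c = 0, as derived above
        float a = v.x * v.x + v.y * v.y - missileSpeed * missileSpeed;
        float b = 2f * (v.x * (asteroidPos.x - turretPos.x) + v.y * (asteroidPos.y - turretPos.y));
        float c = (asteroidPos.x - turretPos.x) * (asteroidPos.x - turretPos.x)
                + (asteroidPos.y - turretPos.y) * (asteroidPos.y - turretPos.y);

        float disc = b * b - 4f * a * c;
        if (Mathf.Approximately (a, 0f) || disc < 0f) {
            return; // no solution: this asteroid can't be hit
        }

        // take the earliest non-negative time of collision
        float t1 = (-b + Mathf.Sqrt (disc)) / (2f * a);
        float t2 = (-b - Mathf.Sqrt (disc)) / (2f * a);
        float t = Mathf.Min (t1, t2);
        if (t < 0f) t = Mathf.Max (t1, t2);
        if (t < 0f) return; // both solutions are in the past

        // predicted point of impact, and how far the missile will travel
        Vector2 impact = asteroidPos + v * t;
        float deployDist = (impact - turretPos).magnitude;
        FireMissileAt (impact, deployDist);
    }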

Once we find the point of impact, we can easily calculate the distance the missile must travel to hit the asteroid, which is passed through the deployDist variable to the LockOn method of the missile. The missile uses this value to return to its object pool once it has travelled that distance, just like the asteroid. Before this happens, it will have hit the asteroid, and the collision events will have been triggered.

Conclusion

Once we implement it, the result looks almost magical. By reducing the aiPollTime value, we can make it an invincible AI turret which would shoot down any asteroid unless the asteroid speed becomes close to or higher than our missile speed. The derivation we followed can be used to solve a variety of similar problems which could be represented in the form of a quadratic equation. 

I would like you to experiment further by adding the effect of gravity to the motion of the asteroid and missile. This would change the motion to projectile motion, and the corresponding equations would change. Good luck.

Note also that Unity has an active marketplace. There are many other products that help you build out your project. The nature of the platform also makes it a great place to build up your skills. Whatever the case, you can see what we have available in the Envato Marketplace.

Posted on Leave a comment

Class Design in Games: Beyond RPGs

Classes are everywhere. Once the domain of RPGs, class systems have now been pushed into every type of game imaginable. We're all familiar with the tropes of Warriors and Wizards in high fantasy, but what can we learn about class design from other games?

The first question we need to ask ourselves is, "What exactly is a class?" The term is pretty loosely defined in gaming, and there are several correct answers. In an RPG like Dungeons & Dragons, classes are defined by the rulebook and present a list of abilities your character can have access to.

If you want to be a stealthy assassin or a shapeshifter, you need to choose an appropriate class. The thing is, there are other choices you can make as well: choosing your race (elf or dwarf) and background (criminal or noble), which also affect your gameplay options. What exactly is the difference between race and class? If your character can breathe fire because they're a half-dragon, is that any different from being able to shoot magic flame from your hands? We really have to look at these things as variations on the class concept.

So when we're discussing classes, we'll be talking about not just standard RPG classes and races, but Starcraft armies, Street Fighter characters, and even Mario Kart vehicles. It might seem odd to lump all of these in the same box, but they all share something simple: a choice you make outside of the game which determines your gameplay options within the game.

Age of mythology image
Age of Mythology divides its classes into races, and then further into individual gods.

Why Use Classes?

So why even bother with classes? What do they add to a game? There are a lot of reasons, but one of the simplest is adding content. More classes = more ways to play the game = more ways to have fun. When you look at World of Warcraft, it's not uncommon to see players with several high-level characters.

Tails was so popular as an additional character in Sonic that they later added Knuckles, Shadow, Cream, and countless others. Dungeons & Dragons has a multitude of classes available for players, spread throughout optional rulebooks. At an extreme level, some games exist solely because of their variety of classes: imagine Smash Bros with Mario as the only character. Fighting games are fun largely because of the way different characters interact, meaning that every matchup has different strategies.

Another reason classes are useful is that they promote diversity. This is especially important in competitive multiplayer games, where (generally speaking) everyone wants to be the best. If you wanted to make an MMO where players can assign points to their skills, you might think that the playerbase would create a range of different character types. What inevitably happens, though, as shown over and over by MMOs like Ultima Online, is that players gravitate towards "best builds".

Generally, a small selection of players who are experienced at the game will do the math and post optimal builds, and everyone else will just copy that. This "copy others" attitude isn't unique to MMOs (Magic: The Gathering players have debated the pros and cons of "netdecking" for some time), and any game where you can choose your skills will have at least some discussion of best builds.

Of course, creating classes doesn't stop the issue (World of Warcraft, despite having multiple classes, has plenty of build discussion), but it at least creates a little bit of variety. Instead of having a single "generic tank build", you might have a choice of playing a warrior tank, paladin tank, or druid tank.

And lastly, classes reduce the gap between skilled and unskilled players. Being a new player to a game can already be frustrating when everyone is better than you, but if everyone is also using better characters, then it can feel doubly frustrating. New players might feel as if they are being punished for their lack of knowledge, whereas pro players might spend their time trying to find abusive build combinations.

New players also run the risk of “doing it wrong” by spending points on useless skills—the idea of “noob traps” is something we’ve discussed before. By forcing players into predesigned classes, you refocus the game back onto the gameplay, and away from character building.

So are there any problems with classes? Well, obviously a class system can be a massive time investment. But from a design perspective, there's really just one issue: class systems limit a player's ability to experiment with fun builds or create specific ideas. Players love to be creative, and limiting that creativity can limit the amount of fun to be had.

For highly competitive games, it can be argued that “design your own” systems are an extremely dangerous idea, as all it takes is one overpowered combination to ruin the whole thing. But for some games, character creation is what makes the game fun in the first place.

Impossible Creatures image
Impossible Creatures, an RTS where players can fuse creatures together to create their own armies and engage in Mad Scientist combat. 

So, assuming we do want to add classes, how do we go about designing them? Well, it's such an expansive concept that even if we limited ourselves to a particular genre, we could write a novel and still only scratch the surface. So let's focus instead on some general common issues that apply across the board.
Strict vs. Loose Class Design

The word "class" means many things, so let's introduce a new concept: the idea of strict and loose classes.

  • A strict class is one that defines a player's available skillset.
  • A loose class gives more limited powers or bonuses to certain playstyles.

Generally speaking, the more complex a system is, the more likely it is to be strict.

In Diablo 3, players can choose from classes like Barbarian, Monk, and Wizard. These classes have special abilities, and those abilities define what the character can do. Only the Monks have Cyclone Strike, and only Wizards have Hydra. The classes gain specific skills at specific levels, and can never learn skills from other classes. Diablo 3 is very firmly a strict system.

Compare to a game like Desktop Dungeons, which is a loose system. When a player chooses a class, it simply gives that player a minor advantage: Berserkers have 50% magic resistance. Priests deal double damage to undead. A Berserker can still do all the things a Priest does, but is better (or worse) in certain situations.

Obviously, there is no clear distinction between “strict” and “loose”, and there will be games which can be argued to be in either camp. Vampire: The Masquerade allows players to choose a clan, and although each clan has unique powers, these powers do not define the character and the game otherwise operates like a standard point-buy system. 

But what of other genres? Well, Hearthstone allows players to choose a class, and this gives them a class ability they can use in game, such as producing minions or drawing extra cards. Since this ability only gives a minor advantage in game, it counts as a "loose" class advantage.

However, Hearthstone also has class cards which can only be used by certain classes. Cards like Backstab or Sap are Rogue-only cards, but are theoretically useful for every class. This limiting of cards means Hearthstone is "strict" class design, as every class will have a variety of options unavailable to other players.

So why does this all matter? Well, the stricter a game is, the more pronounced the benefits of a class system are (as discussed above in "why use classes"): more variety between classes, fewer "noob traps", more fun for players. Additionally, strict design allows you to create incredibly flavourful classes. In Hearthstone, playing a priest feels like playing a priest (or at least, as close as you can get in a card game). Each of the classes feels distinct, and this distinctness allows the player to play the game in a variety of different ways (hopefully finding one suitable to their playstyle).

The downside is, of course, the same downside mentioned above: the player is limited to the playstyles defined by the developers. It doesn't really allow for exploration beyond that. And because each class has a certain playstyle, there are times when you're going to know how the game will play out before the first move is made or card is drawn.

This can be pleasant (if you're winning) or frustrating (if not). If you struggle to beat rogues and continually get matched against them, the game can become unfun very quickly. Depending on which playstyles or metas are popular at the time, it might mean playing a string of games against not just the same class, but the same deck or character build, which can be pretty underwhelming.

Mechanical design is just one aspect of character creation, however. We need to ask what players want from their games, and there are several answers. Most new players aren't thinking about the mechanics behind each class; more often, they want to play the cool soul-stealing ninja, or the alien that eats face. This side of character design, which includes things like backstory and visual design, is often referred to as "fluff" or "flavour". It's an important part of the design process, but it's enough of a topic by itself that we'll have to leave it for another time.

The other question players most often ask is, “Well, what does it do?” Sometimes the answer is obvious, sometimes less so—but generally, the player will be trying to find a class which allows them to play the game in the way they want.

South Park Stick of Truth image
South Park’s “Jew” Class is a non-standard class with powerful lategame abilities.

Fulfilling a Role

Generally speaking, the purpose of a class is to allow the player to play the game in a way that they enjoy. Not everyone enjoys playing magic classes, so it’s important not to force players into roles they don’t enjoy. Of course, for multiplayer games, some players will be pressured into playing certain roles, but generally speaking players will play whatever is the most fun.

In certain games (like MMOs), the ability to fill a role becomes doubly important. If your party is planning to fight the Dragon Emperor, then you probably need to have a strategy. Typically, tank/damage/healer roles are primary, with other roles such as controller, leader, tracker and so forth dependent on the game. 

Because available party slots are generally limited, it’s important that your team is able to get the most out of its available party slots—all healer parties tend to do poorly. Players will want to choose roles that complement each other to maximise their chances of success, and this means giving the players the option to choose classes that they enjoy and feel are useful to the team.

Regardless of the game style, you want to create classes that allow for an enjoyable gameplay experience. The classes you design will determine how the game is played. If all your characters are swordsmen, then gameplay is going to be focused on close quarters fighting. If you add a single sniper to the game, then suddenly the whole dynamic changes—environment and cover suddenly become more important, and rushing around in the open is no longer a viable tactic. 

You need to understand what you want from your game, and the roles and abilities you have should promote that gameplay style. If you don’t want a role being fulfilled, then simply don’t add it to your game. Don’t like the idea of healers slowing down gameplay? Remove them. It’s your game, so there’s no reason you have to stick to “traditional” design roles.

Despite many games using the traditional tank/dealer/healer design, there are plenty of reasons to avoid it. The most obvious is that if you design your game around those classes as a central idea, anything which does not fit into those criteria is bad. Imagine a Warrior, Rogue and Cleric being joined by a Banker or Farmer. There’s no reason that players shouldn’t be allowed to play those alternative classes, but the chances are they have no place within the “holy trinity” framework. Classes not only have to be balanced with each other, but within the game itself.

Balancing the Classes

Sometimes, however, we can get obsessed with concepts like balance—making sure every class is fair to use. While for some games this is necessary, it’s not necessary for every single game. Bad classes can provide extra challenge, or a balancing factor for experienced players. The Binding of Isaac’s “The Lost” can fly, but dies in one hit. Street Fighter’s “Dan Hibiki” is a popular joke character. These “bad classes” are simply more options for players who choose to challenge themselves. Additionally, if every class is perfectly balanced, then what does it matter which one you choose? 

We should also ask what we’re balancing for. Do we balance based on win rates? Or how they compare for 1 on 1 combat? Some games, MMOs in particular, struggle to keep characters balanced between the PVE and PVP elements. In the Binding of Isaac, damage is often considered a “god stat” for characters—not only is it incredibly handy to be able to one-shot everything in sight, but the game rewards fast play with secret bosses and going unhurt with “devil items”, powerful items that serve to snowball a good character even further. The slower, tankier characters like Magdalene look fine on paper, but simply can’t compete with the bonuses that high-damage characters get. Whereas The Lost is an interesting character because of intentional difficulty, Magdalene is simply a boring character.

Binding of Isaac image
The Lost, one of the many characters from The Binding of Isaac.

League of Legends embraces this and uses an idea called "perfect imbalance" to keep gameplay fresh. The game is incredibly complex, and trying to balance over 130 characters is basically an impossible task. Not only do the designers have to contend with how the characters interact, but every time a small change is made it could theoretically throw everything out of balance again.

They try to ensure that no single character is overpowered, but there are plenty of "bad characters", and as the game evolves, characters which were seen as bad sometimes become viable. The complexity and ever-changing nature of the game mean that players are constantly forced to re-evaluate the best strategies, ensuring that gameplay is never "solved".

“Solving” is a problem for many games. When you look at classes, sometimes you can put down all the abilities on paper and work out what exactly each class is capable of. What this means is that in team games, classes are often judged by a single metric: how much damage you can output, how quickly you can heal, or how quickly you can race to the end. Your character has one job, and the best character for that job is whoever has the highest numbers. This raises an interesting question: is it better to have a class that is exceptional at just one task, or to have a class that can do everything satisfactorily? 

Specialisation vs. Flexibility

When we create a class, we should generally have a rough idea of what we want from it. In an MMO, the perfect tank is basically a granite boulder: something that will just sit there and soak up damage while the rest of the team throws flaming death. This creates a sort of “arms race”, which means the most specialised characters are (almost always) the best ones for their jobs.

The problem with this is that if one character is the best at the job, every other character is (by default) not the best, and why would you intentionally play a bad character? This is a problem for MMOs that are trying to juggle the balance of dozens of character classes. Why play a rogue if mages have better DPS?

Imagine making a game, similar to Civilization, in which you try to take over the world. You can achieve victory through political, military, or cultural might. You also choose a race, and each race has a benefit: elves are better at politics, orcs are good at military, and so forth. Why would a military fan ever choose anything other than orcs? And if you’re playing against orcs, why would you invest in political defence? The specialisation of the races restricts your playstyle and forces you into certain options.
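As a rough sketch of why this happens, here’s what flat, specialised race bonuses look like as data (all races and numbers here are invented for illustration). The “best” pick for any single strategy falls straight out of a one-line calculation.

```python
# Invented race bonuses: each race multiplies exactly one victory strategy.
RACE_BONUSES = {
    "elf":   {"political": 1.25, "military": 1.0,  "cultural": 1.0},
    "orc":   {"political": 1.0,  "military": 1.25, "cultural": 1.0},
    "human": {"political": 1.0,  "military": 1.0,  "cultural": 1.25},
}

def best_race(strategy):
    """The 'solved' pick: whichever race multiplies the chosen strategy most."""
    return max(RACE_BONUSES, key=lambda race: RACE_BONUSES[race][strategy])

print(best_race("military"))  # always 'orc': a military fan never picks anything else
```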

Civ 6 image
The civilizations in Civilization VI encourage players to use certain tactics without forcing them down any particular avenue.

This is the biggest problem with specialised classes. If specialisation means doing one thing well, it also means not doing anything else. If choice is a core component of gameplay, then doing the same thing over and over again is bad design. This is a problem many games face, especially with regard to healing classes.

So what’s the solution? As we discussed in the healing article, you need to make sure the player has a range of options available during the game. It’s one of the most fundamental aspects of game design: keep the player engaged. If the player doesn’t have to make any choices for their actions, then they’re not engaged, and that’s when things become boring.

So when you design a class, make sure it’s able to participate in the game at all times. If you’re designing an RPG, make sure the classes all have skills for both inside and outside of combat, rather than leaving everything outside combat to a single “skill-monkey” character. If you’re designing a game with multiple paths to victory, try to make sure each race has the option of winning in different ways.
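One way to keep yourself honest is a design-time checklist. The sketch below (class names and skills are invented) simply flags any class that has nothing to do in part of the game.

```python
# Invented classes with their in-combat and out-of-combat skills.
CLASSES = {
    "warrior": {"combat": ["cleave", "shield wall"], "non_combat": ["intimidate"]},
    "rogue":   {"combat": ["backstab"],              "non_combat": ["lockpick", "scout"]},
    "cleric":  {"combat": ["smite", "heal"],         "non_combat": []},  # sits idle out of combat
}

# Flag any class that will have nothing to do during part of the game.
for name, skills in CLASSES.items():
    for phase in ("combat", "non_combat"):
        if not skills[phase]:
            print(f"{name} has no {phase.replace('_', '-')} skills: rethink its kit")
```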

Allow players to adapt to the flow of the game, and if they realise they need to change tactics, let them do so. The more specialised a class or race is, the more likely it is to be able to do only one thing, and the more likely it is to get stuck doing that thing over and over again. Choice is important.

Soft and Hard Counters

Players like to win. In a competitive, class-based game, players will generally choose the best class. Best is often subjective—it depends on the player’s skill, playstyle, the map, and even recent gameplay changes. For most players, “what is best” is really just “whatever beats your opponent”. 

For some games, this means trying to anticipate what your opponent is going to play. For CCGs like Magic and Hearthstone, players talk about “the meta”: the most popular decks and the cards your opponents are likely to be running. A player might choose to play a deck specifically to beat the meta, running cards that shut down certain decks. In Magic, some deck archetypes can be entirely shut down by a single card, meaning that playing the meta can be an effective way to win.

In other games, players take turns “drafting” their characters. Knowing what your opponent has chosen means that the ability to choose a counter becomes especially important. The tactic of trying to pick a character or class specifically to beat your opponents is known as counterpicking.

Having counters in games is generally a positive mechanic. It allows a certain amount of self-balancing from the players themselves, as any player who uses an overpowered class can expect to run into a higher share of counter-classes. The existence of a meta-game allows players to discuss the best tactics, the best counters to those tactics, and the best way to play in the current environment.

The question, then, is to what extent counters should be effective. Generally, counters fall into two categories: “soft counters” and “hard counters”.

Soft counters are classes that have a slight bonus against certain character types. High-mobility characters are generally a soft counter to snipers: although the sniper can win, they need to be skilful or lucky to stand a chance.

Team Fortress 2 image
Team Fortress 2’s “Meet the Spy”. Some would debate whether the Spy is a soft or hard counter to the Sniper, although it largely depends on the player’s skill and general awareness.

Hard counters are classes which completely obliterate another class with little to no effort. Spearmen are often given as a hard counter to cavalry charges: although the cavalry could win, it’s more than likely not going to happen. The cavalry’s best answer here is to call in some archers.
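One common way to express the difference in code is a matchup table of damage multipliers. In this hypothetical sketch (all names and numbers are made up), a soft counter gets a modest bonus while a hard counter gets an overwhelming one.

```python
# 1.0 = neutral matchup; ~1.25 = soft counter; 3.0+ = hard counter.
COUNTER_MULTIPLIERS = {
    ("scout",    "sniper"):  1.25,  # soft: the sniper can still win this fight
    ("spearman", "cavalry"): 3.0,   # hard: the cavalry barely stands a chance
}

def damage_dealt(attacker, defender, base_damage):
    """Scale base damage by the attacker's matchup bonus, if any."""
    return base_damage * COUNTER_MULTIPLIERS.get((attacker, defender), 1.0)

print(damage_dealt("scout", "sniper", 40))      # 50.0: a harder, but winnable, fight
print(damage_dealt("spearman", "cavalry", 40))  # 120.0: effectively a shutdown
```

Laying the numbers out like this makes it easy to see which matchups still leave the losing player something to do.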

So are soft counters or hard counters better for your game? Well, obviously it depends on what you’re aiming for, but for nearly every game out there the answer is simple: soft counters are better.

The reason for this is simple: hard counters deny counterplay. Having a more difficult game because of a counterpick is fine; being unable to do anything at all is not. Soft counters can generally be worked around, but hard counters leave no room for creativity or tactical moves.

So can a hard counter ever be acceptable design? Yes, under two scenarios:

  1. The player is able to change class midgame, allowing them to counter the counter.
  2. The player is part of a larger team and is able to “offload” the problem onto someone else.

That’s not to say that hard counters are desirable even in these situations, but the problem is less pronounced: the player still has some sort of choice available, and may be able to “avoid” the issue.

Boiling It All Down

So what can we take away from all this? Really, class design isn’t all that complicated. It comes down to a single idea:

Let the player play the game in a way that they enjoy.

That’s it: the great secret to class design. It doesn’t matter what sort of game you’re making; all that matters is that the players are having fun.

The very essence of class design is, as we’ve said so many times, about choice. It’s about the player choosing to play something they enjoy, about being given meaningful choices throughout the game, and about how those choices interact with the challenges they face, be it enemy AI or other players.

And because new games often throw a world of information at the player, classes make those early choices more manageable. A new player might be overwhelmed looking at 100 different statistics, but give them just a handful of options and ask which class they want to play, and they can answer easily. They don’t need to worry about the correct number of points to spend on vitality; they simply pick a class and get stuck in.

Your classes give players additional ways of playing your game, and in a way, designing each class is like designing an entirely new game. As long as a class doesn’t stop other people from having fun, it’s probably fine.

And remember, at the end of the day, each game is different. There is no “correct” in game design, and there are no doubt many successful games that break some (or all) of these rules. Just try to consider them when designing your game, and don’t be afraid to break the mould and try something different. All of this is aimed at one simple idea: make your game fun.