Category Archives: Tutorial

Deferred 2D Lights continued

The lighting engine is available on GitHub here. The code also includes an example project that recreates the brick wall from the previous article.

How does this work? I use normalmaps. The normalmap stores, for each pixel, the direction that pixel faces. This way it ‘tells’ the engine whether a pixel should be lit when light comes from a specific direction. Normalmaps are quite common in 3D graphics but can be used in a 2D context as well.
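The core idea can be sketched in a few lines of plain C# (class and method names here are my own, not the engine's): decode a normal-map texel into a direction vector, then use the dot product with the light direction to decide how strongly that pixel is lit.

```csharp
using System;
using System.Numerics;

static class NormalMapLighting
{
    // Decode an RGB normal-map texel (0..255 per channel) into a unit vector.
    // The common encoding maps each channel from [0, 255] to [-1, 1].
    public static Vector3 DecodeNormal(byte r, byte g, byte b)
    {
        var n = new Vector3(r / 255f * 2f - 1f,
                            g / 255f * 2f - 1f,
                            b / 255f * 2f - 1f);
        return Vector3.Normalize(n);
    }

    // Lambertian term: how strongly a pixel with this normal is lit by light
    // arriving from 'lightDir' (pointing from the surface toward the light).
    public static float LightAmount(Vector3 normal, Vector3 lightDir)
    {
        return Math.Max(0f, Vector3.Dot(normal, Vector3.Normalize(lightDir)));
    }
}
```

A texel of (128, 128, 255) — the typical flat ‘blue’ of a normalmap — faces straight out of the screen, so it is fully lit by a light shining straight at it and not lit at all from behind.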

Now the game should render the gameworld twice: once with all the color information (the regular graphics we’re used to) and once with the normalmap information. This means you need every game asset twice.

For example; this is a test dungeon in Tiled:

  1. Note the 2 layers: one is the “diffuse” or colormap of the dungeon, the second (‘hidden’ in the screenshot) holds the normalmap information.
  2. Note the 2 tilesets. These have identical tile layouts. You can optimize this by packing both the diffuse and normal data into one texture.

You can create these normalmaps using tools such as Sprite Lamp, Sprite Illuminator, Sprite DLight or this plugin for GIMP.

Draw the Tiled layers in your game to the diffuse/colormap and normalmaps and you’re good to go:

The screenshot isn’t the best example of what’s possible, but there are several good examples out there that use this technique to great effect, such as Legend of Dungeon (see the devlog by RobotLovesKitty) or Full Bore (also a devlog worth checking out!).

Take a look at the code on GitHub and be sure to have a look at the example code!

Gamecamera implementation

After reading the excellent article on Gamasutra, The Theory and Practice of Cameras in 2D Sidescrollers by Itay Keren, I thought I’d show the code I use for my 2D game camera.

The camera I use is of the type ‘camera window’ with ‘LERP smoothing’. It means the camera only follows the player when the player pushes against the boundaries of a window inside the camera viewport:

The blue area (1) is the gameworld. This area is bigger than the part shown on the player’s screen. The yellowish area (2) is the actual viewport that is rendered to the player’s screen. The reddish part (3) is the ‘camera window’: the player-controlled character will always stay inside that part. Only when the player pushes against the boundaries of that inner section does the view move. Imagine the character moves to the right edge: the camera moves until the player is back inside the reddish area, and it will not move again until the player reaches an edge again.

LERP smoothing means the camera ‘lags’ a bit and moves smoothly instead of snapping into place immediately.
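As a rough standalone sketch (one axis only, my own names, not the engine’s actual implementation), the two ideas combine like this:

```csharp
using System;

// Camera-window plus LERP smoothing, reduced to one axis.
class WindowCamera
{
    public float X;                  // current camera position
    float targetX;                   // where the window logic wants the camera
    readonly float windowHalfWidth;  // half the width of the 'camera window'
    readonly float smoothing;        // 0..1, fraction of the gap closed per update

    public WindowCamera(float windowHalfWidth, float smoothing)
    {
        this.windowHalfWidth = windowHalfWidth;
        this.smoothing = smoothing;
    }

    public void Update(float playerX)
    {
        // Camera window: only move the target when the player pushes an edge.
        if (playerX > targetX + windowHalfWidth)
            targetX = playerX - windowHalfWidth;
        else if (playerX < targetX - windowHalfWidth)
            targetX = playerX + windowHalfWidth;

        // LERP smoothing: close part of the gap instead of snapping.
        X += (targetX - X) * smoothing;
    }
}
```

With `smoothing` at 1 the camera snaps; smaller values give the pleasant lag described above.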

On to some code…

Continue reading

Handling Gamestates

Every game has a few gamestates; the simplest version might be ‘Title’, ‘Gameplay’ and ‘Game Over’. In this simple version one might declare an enum like this:
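For instance, a minimal sketch of such an enum, using the three states named above:

```csharp
// The three basic gamestates as a plain enum.
enum GameState
{
    Title,
    Gameplay,
    GameOver
}
```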

This leads to a switch statement in the Update() and Draw() sections of your code. However, what if the player can access an ‘options’ screen from both the title and gameplay states? Or switch between inventory and map modes? Soon you’ll be adding substates and trying to remember what the previous state was…
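As a rough illustration (hypothetical names, not the article’s actual code), the dispatch might look like this, with every new screen adding another case to every switch:

```csharp
using System;

enum GameState { Title, Gameplay, GameOver }

class StateMachine
{
    public GameState Current = GameState.Title;

    // Called from Update()/Draw(): every state gets its own branch.
    public string Describe()
    {
        switch (Current)
        {
            case GameState.Title:    return "showing title screen";
            case GameState.Gameplay: return "running gameplay";
            case GameState.GameOver: return "showing game over";
            default: throw new ArgumentOutOfRangeException();
        }
    }
}
```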

Let’s start with a clean slate and rid ourselves of the Gamestate enum… how can we handle this in a more elegant way? Read on…
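One common alternative (a sketch under my own assumptions; the full article may take a different approach) is a stack of state objects: pushing an ‘Options’ state on top of ‘Gameplay’ pauses it, and popping returns to it, so ‘the previous state’ is remembered by the stack itself.

```csharp
using System;
using System.Collections.Generic;

// Each screen/mode implements this interface.
interface IGameState
{
    void Update();   // in a real game these would take GameTime etc.
    void Draw();
}

class GameStateStack
{
    readonly Stack<IGameState> states = new Stack<IGameState>();

    public void Push(IGameState state) => states.Push(state);
    public IGameState Pop() => states.Pop();
    public int Count => states.Count;

    // Only the topmost state receives Update/Draw calls.
    public void Update() { if (states.Count > 0) states.Peek().Update(); }
    public void Draw()   { if (states.Count > 0) states.Peek().Draw(); }
}

// Tiny test double for the sketch: counts how often it was updated.
class CountingState : IGameState
{
    public int Updates;
    public void Update() => Updates++;
    public void Draw() { }
}
```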

Continue reading

Addendum Content processors

In the previous article I wrote about the ContentReader in content projects. I stated that the content reader can happily live inside the content project. This means that the “filetype” project also holds all the classes needed to process the content. It seems I have to expand on that…

If the project is fairly simple, such as the flatshaded model, that is acceptable. However, I was working on some more complex projects.
For example, the Darkfunction Editor uses a few files to store animations: the .png file that holds the images, the .sprite file that defines the sprites inside the image, and finally the .anim file that constructs the animations out of those components. In the final game project, I wanted to simply load one animated content file that combines these things for me automatically.

So I made an AnimatedSprite class that held all the relevant information and an AnimationController that could play animations from the AnimatedSprite data. So far, so good.
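Roughly, the split can be sketched like this (hypothetical fields; the real classes hold more data, such as the texture and per-frame source rectangles):

```csharp
using System;
using System.Collections.Generic;

// AnimatedSprite is pure data; AnimationController plays it.
class AnimatedSprite
{
    // Animation name -> list of frame indices.
    public Dictionary<string, int[]> Animations = new Dictionary<string, int[]>();
    public float SecondsPerFrame = 0.1f;
}

class AnimationController
{
    readonly AnimatedSprite data;
    int[] current = Array.Empty<int>();
    float time;

    public AnimationController(AnimatedSprite data) { this.data = data; }

    public void Play(string name) { current = data.Animations[name]; time = 0f; }

    public void Update(float elapsedSeconds) => time += elapsedSeconds;

    // Which frame of the current animation should be drawn now (looping).
    public int CurrentFrame =>
        current.Length == 0 ? 0
        : current[(int)(time / data.SecondsPerFrame) % current.Length];
}
```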

When I started creating the ContentProcessor, I found that the structure to write the .xnb file was quite different from the structure I needed to actually use the data. For example, the textures that I needed in the AnimatedSprite data are handled differently in a content project (more on that later). In fact, the content processor was much simpler than the actual object.

So by separating the content project from the AnimatedSprite project, things became much simpler.

Keep in mind that in the ContentWriter function you have to point to the correct ContentReader:
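A sketch of what that looks like in an XNA/MonoGame ContentTypeWriter; the type and assembly names here are made up, but GetRuntimeReader is the real override that does the pointing:

```csharp
// The writer lives in the content pipeline project, but must point at
// the reader type in the game library (hypothetical names).
[ContentTypeWriter]
public class AnimatedSpriteWriter : ContentTypeWriter<AnimatedSpriteContent>
{
    protected override void Write(ContentWriter output, AnimatedSpriteContent value)
    {
        // ... write the animation data ...
    }

    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        // Fully qualified name of the reader type, plus its assembly.
        return "MyGameLibrary.AnimatedSpriteReader, MyGameLibrary";
    }
}
```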

Textures in Content Processors

Sometimes you might want to include textures in a content project. In the example above, the .anim file is added to the Content. Inside the .anim file a .sprite file is referenced, and the .sprite file in turn references a texture. It would be awesome if the game project could simply load the anim content and have the rest magically loaded as well.

This is the trick (in the ContentProcessor code):
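Based on the XNA/MonoGame ContentProcessorContext.BuildAsset API, the relevant call looks roughly like this (variable names match the description below):

```csharp
// Inside the ContentProcessor: build the referenced texture as its own
// asset so the pipeline compiles it alongside the .anim content.
ExternalReference<TextureContent> builtTexture =
    context.BuildAsset<TextureContent, TextureContent>(
        new ExternalReference<TextureContent>(texturefilename),
        "TextureProcessor",     // processor to run on the texture
        null,                   // processor parameters
        null,                   // importer (null = pick by file extension)
        texturetargetname);     // asset name the game will load it under
```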

In the above code a texture texturefilename is built and saved as texturetargetname. By storing the target name in your content file, you can simply read the texture in your ContentReader:
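A sketch of the matching reader side (hypothetical type names; ContentReader.ContentManager is the real hook for loading the built asset):

```csharp
protected override AnimatedSprite Read(ContentReader input, AnimatedSprite existingInstance)
{
    // The target name stored by the ContentWriter.
    string textureName = input.ReadString();
    Texture2D texture = input.ContentManager.Load<Texture2D>(textureName);
    // ... read the rest of the animation data ...
    return new AnimatedSprite(texture /* , ... */);
}
```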

Content processors in Monogame

The flatshaded models are the result of modelling the object in Wings3D and then exporting it to XML (vertices and faces). I made my own tool that reads the XML and converts the model to my own VertexPositionNormalColor format. In the tool I can tweak and color the model. The tool then exports the data to an XML file with the extension VPNC.

The file is quite simple really:
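A rough impression of such a file (element and attribute names are illustrative, not the exact schema):

```xml
<!-- Hypothetical layout: one black triangle, normal pointing up. -->
<VPNCModel>
  <Vertices>
    <Vertex Position="0 0 0" Normal="0 1 0" Color="0 0 0" />
    <Vertex Position="1 0 0" Normal="0 1 0" Color="0 0 0" />
    <Vertex Position="0 0 1" Normal="0 1 0" Color="0 0 0" />
  </Vertices>
  <Faces>
    <Face Indices="0 1 2" />
  </Faces>
</VPNCModel>
```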

It defines 1 black triangle with the normal pointing up.

To handle the meshes in my game I made a simple content processor; this makes using my own models as easy as:
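Something along these lines (the type and asset names are placeholders):

```csharp
// With the content processor in place, loading a model is a single call.
VPNCModel model = Content.Load<VPNCModel>("models/triangle");
```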

I may make more content processors later for the tracks or any other objects that I use external tools for.

Disclaimer: there may be easier ways to handle flatshaded models (via shaders, for example), but that’s another topic. This article shows how to create a content processor; I use my flatshaded model as the example.

Continue reading if you want to know how to make a content processor yourself.

Continue reading