Author Archives: Felsir

Deferred 2D Lights continued

The lighting engine is available on GitHub here. The code also includes an example project that recreates the brick wall from the previous article.

How does this work? I use normalmaps. A normalmap stores, for each pixel, the direction that pixel faces. This way it ‘tells’ the engine whether a pixel should be lit when light comes from a specific direction. Normalmaps are quite common in 3D graphics, but they can be used in a 2D context as well.
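To make that concrete, here is a minimal sketch in plain C# (not the library's actual shader code) of how a normalmap texel is decoded and how a light direction uses it:

```csharp
using System;

public class NormalMapShading
{
    // Decode an RGB normalmap texel (0..255 per channel) into a unit vector.
    // The usual convention maps each channel from [0,255] to [-1,1].
    public static (float x, float y, float z) DecodeNormal(byte r, byte g, byte b)
    {
        float x = r / 255f * 2f - 1f;
        float y = g / 255f * 2f - 1f;
        float z = b / 255f * 2f - 1f;
        float len = MathF.Sqrt(x * x + y * y + z * z);
        return (x / len, y / len, z / len);
    }

    // Lambertian term: how strongly a texel facing 'n' is lit by a light
    // shining along 'l' (pointing from the surface toward the light).
    public static float Diffuse((float x, float y, float z) n, (float x, float y, float z) l)
    {
        return MathF.Max(0f, n.x * l.x + n.y * l.y + n.z * l.z);
    }

    public static void Main()
    {
        // The classic "flat" normalmap color (128,128,255) faces the viewer,
        // so a light pointing straight at the screen lights it almost fully.
        var flat = DecodeNormal(128, 128, 255);
        Console.WriteLine(Diffuse(flat, (0f, 0f, 1f)));
    }
}
```

A light grazing in from the side would give a dot product near zero for that same texel, which is exactly the ‘should this pixel be lit from this direction’ answer the engine needs.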

Now the game has to render the gameworld twice: once with all the color information (the regular graphics we’re used to) and once more with the normalmap information. This means you need two versions of your game assets.
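In MonoGame terms, the two passes boil down to drawing into two render targets. This is only a sketch of the idea; the variable and asset names are mine, not the library's actual API:

```csharp
// Two offscreen targets, one per pass (illustrative names).
RenderTarget2D diffuseTarget = new RenderTarget2D(GraphicsDevice, width, height);
RenderTarget2D normalTarget  = new RenderTarget2D(GraphicsDevice, width, height);

// Pass 1: the regular color graphics.
GraphicsDevice.SetRenderTarget(diffuseTarget);
spriteBatch.Begin();
spriteBatch.Draw(wallDiffuseTexture, position, Color.White);
spriteBatch.End();

// Pass 2: the same sprites, drawn with their normalmap textures instead.
GraphicsDevice.SetRenderTarget(normalTarget);
spriteBatch.Begin();
spriteBatch.Draw(wallNormalTexture, position, Color.White);
spriteBatch.End();

// The light pass then combines both targets into the final frame.
GraphicsDevice.SetRenderTarget(null);
```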

For example, this is a test dungeon in Tiled:

  1. Note the 2 layers: one is the “diffuse” or colormap of the dungeon, the second (‘hidden’ in the screenshot) holds the normalmap information.
  2. Note the 2 tilesets. These have identical tiles; you can optimize this by putting both the diffuse and normal data in one texture.

You can create these normalmaps using tools such as Sprite Lamp, Sprite Illuminator, Sprite DLight or this plugin for GIMP.

Draw the Tiled layers in your game to the diffuse/colormap and normalmaps and you’re good to go:

The screenshot isn’t the best example of what’s possible, but there are several good examples out there that use this to great effect, such as Legend of Dungeon (see the devlog by RobotLovesKitty) or Full Bore (also a devlog worth checking out!).

Take a look at the code on GitHub, and be sure to check out the example project!

Deferred 2D Lights

The lighting used in the game ‘Full Bore’ is fascinating. The devs wrote a nice blog post about it. Reason enough for me to replicate the effect. In 2014 I did a successful test (see the result on YouTube).

However, that code wouldn’t run in MonoGame, and the shader files wouldn’t compile in the native MonoGame content tool. The 2014 code was also quite messy; beyond a proof of concept, it was hard to implement in a game.

So I went back to the drawing board and rewrote everything so it can be used as a library in a game project. The result of my test project can be seen in the gif below:

I’m cleaning up the code a bit and plan to upload it to GitHub if there is interest. Update: the code is now available on GitHub!

Gamecamera implementation

After reading the excellent article on Gamasutra, The Theory and Practice of Cameras in 2D Sidescrollers by Itay Keren, I thought I’d show the code I use in my 2D game camera.

The camera I use is of the type ‘camera window’ with LERP smoothing. This means the camera follows the player only when the player pushes the boundaries of a window inside the camera viewport:

The blue area (1) is the gameworld. This area is bigger than the part shown on the player’s screen. The yellowish area (2) is the actual viewport that is rendered to the player’s screen. The reddish part (3) is the ‘camera window’: the player-controlled character will always be inside that part. Only when the player actually pushes the boundaries of that inner section does the view move. Imagine the character moves to the right edge: the camera moves until the player is back inside the reddish area, and it will not move again until the player reaches the edge again.

LERP smoothing means the camera ‘lags’ a bit and moves smoothly instead of snapping into place immediately.
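The two ideas combine into something like the sketch below (one axis for brevity, the vertical axis works the same way; names and constants are mine, not the actual game code):

```csharp
using System;

public class CameraWindow
{
    // Left edge of the viewport in world space.
    float cameraX;

    const float ViewportWidth = 800f;
    const float WindowMargin  = 300f; // distance from viewport edge to the inner window
    const float LerpFactor    = 0.1f; // 0..1, smaller = more lag

    public float Update(float playerX)
    {
        // Where the camera *wants* to be: only pushed when the player
        // crosses the inner window's edges.
        float targetX = cameraX;
        if (playerX < cameraX + WindowMargin)
            targetX = playerX - WindowMargin;
        else if (playerX > cameraX + ViewportWidth - WindowMargin)
            targetX = playerX - (ViewportWidth - WindowMargin);

        // LERP smoothing: cover a fraction of the remaining distance each frame.
        cameraX += (targetX - cameraX) * LerpFactor;
        return cameraX;
    }
}
```

While the player stays inside the window, `targetX` equals `cameraX` and the camera sits perfectly still; only edge pushes create a gap for the LERP to close.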

On to some code…

Continue reading

Handling Gamestates

Every game has a few gamestates; the simplest version might be ‘Title’, ‘Gameplay’ and ‘Game Over’. In this simple version one might declare an enum like this:
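A sketch of that enum, using the three states named above:

```csharp
// One value per top-level state of the game.
enum GameState
{
    Title,
    Gameplay,
    GameOver
}
```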

This leads to a switch statement in the Update() and Draw() sections of your code. However, what if the player can access an ‘options’ screen from the title and gameplay states? Or switch between inventory or map modes? Soon you’ll be adding substates and trying to remember what the previous state was…

Let’s start with a clean slate, rid ourselves of the Gamestate enum… how can we handle this in a more elegant way? Read on …

Continue reading

It’s been a while

So it has been a while since I last posted something. Seems I have too many hobbies and too little spare time. As for my Virtua Formula game, I ran into a blocking issue that frustrated me a lot. The game engine I wrote had a flaw that caused a huge number of draw calls, so I had to go back to the drawing board with that one. I didn’t feel like completely rewriting everything I did, so I tried a few other concepts and techniques to improve my skills.

Some of that is worth sharing, so I plan to write a few blogposts about things I’ve learned and experimental games I’ve created.

Addendum Content processors

In the previous article I wrote about the ContentReader in content projects. I stated that the content reader can happily live inside the content project. This means that the “filetype” project also holds all the classes to process the content. It seems I have to expand on that…

If the project is fairly simple, such as the flatshaded model, that is acceptable. However, I was making some more complex projects.
For example, the Darkfunction Editor uses a few files to store animations: the .png file that holds the images, the .sprite file that holds the definition of sprites inside the image and finally the .anim file that constructs the animations out of those components. In the final game project, I wanted to simply load an animated content file that combines these things automatically for me.

So I made an AnimatedSprite that holds all the relevant information and an AnimationController that can play animations from the AnimatedSprite data. So far, so good.

When I started creating the ContentProcessor, I found that the structure needed to write the .xnb file was quite different from the structure I needed to actually use the data. For example, the textures that I needed in the AnimatedSprite data are handled differently in a content project (more on that later). In fact, the content processor was much simpler than the actual object.

So by separating the content project from the AnimatedSprite project, things became much simpler.

Keep in mind that in the ContentWriter function you have to point to the correct ContentReader:
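A sketch of what that looks like, assuming a ContentTypeWriter for the AnimatedSprite data (my type and assembly names are illustrative):

```csharp
// Inside the ContentTypeWriter subclass in the content project:
public override string GetRuntimeReader(TargetPlatform targetPlatform)
{
    // Fully qualified reader type, followed by the assembly it lives in.
    // If this string doesn't match the reader shipped with the game,
    // Content.Load<T>() will fail at runtime.
    return "AnimatedSpriteLibrary.AnimatedSpriteReader, AnimatedSpriteLibrary";
}
```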

Textures in Content Processors

Sometimes you might want to include textures in a content project. In the example above, the .anim file is added to the content. Inside the .anim file a .sprite file is referenced, and the .sprite file in turn references a texture. It would be awesome if the game project could simply load the anim content and have the rest load automatically as well.

This is the trick (in the ContentProcessor code):
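A sketch of the idea, using the pipeline's BuildAsset to have the referenced texture built alongside the main asset (file and asset names are placeholders):

```csharp
// Inside the ContentProcessor: ask the pipeline to build the texture
// referenced by the .sprite file as a separate asset.
var textureReference = new ExternalReference<TextureContent>(texturefilename);
context.BuildAsset<TextureContent, TextureContent>(
    textureReference,
    "TextureProcessor",  // let the standard texture processor handle it
    null,                // no extra processor parameters
    null,                // let the pipeline pick the importer
    texturetargetname);  // the asset name the game will load it under
```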

In the code above, a texture texturefilename is built and saved as texturetargetname. By storing the target name in your content file, you can simply read the texture in your ContentReader:
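Something along these lines (a sketch; the field layout is up to your writer):

```csharp
// In the ContentReader: the writer stored the texture's asset name,
// so the reader can pull the texture in through the ContentManager.
string textureName = input.ReadString();
Texture2D texture = input.ContentManager.Load<Texture2D>(textureName);
```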

Content processors in Monogame

The flatshaded models are the result of modelling the object in Wings3D and then exporting it to XML (vertices and faces). I made my own tool that reads the XML and converts the model to my own VertexPositionNormalColor format. In the tool I can do various tweaks and color the model. The tool then exports the data to an XML file with the extension VPNC.

The file is quite simple really:
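Something along these lines; the element names are my guess, since the actual file isn't shown here:

```xml
<?xml version="1.0" encoding="utf-8"?>
<VPNCModel>
  <!-- One triangle: three vertices, each with a position,
       an up-pointing normal (0,1,0) and a black color. -->
  <Vertex Position="0 0 0" Normal="0 1 0" Color="0 0 0 255" />
  <Vertex Position="1 0 0" Normal="0 1 0" Color="0 0 0 255" />
  <Vertex Position="0 0 1" Normal="0 1 0" Color="0 0 0 255" />
</VPNCModel>
```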

It defines 1 black triangle with the normal pointing up.

To handle the meshes in my game I made a simple content processor, which makes using my own models as easy as:
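Roughly like this (the runtime type and asset name are mine, for illustration):

```csharp
// One line in the game project: the pipeline already converted the
// .vpnc XML into an .xnb at build time.
VPNCModel carModel = Content.Load<VPNCModel>("Models/f1car");
```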

I may make more content processors later for the tracks or any other objects that I use external tools for.

Disclaimer: there may be easier ways to handle flatshaded models (via shaders, for example), but that’s another topic. This post shows how to create a content processor; I use my flatshaded model as an example.

Continue reading if you want to know how to make a content processor yourself.

Continue reading

The F1 car is complete

So today I completed the car. The wheels are spinning and the steering mechanism is working. I’ve added some simple physics to the car (springs in the pitch and roll, so it leans in corners and tilts when accelerating or braking). Still, a lot of tuning needs to be done for the car to ‘feel’ right, but the basics are there. For more info about these ‘springs’: this article gave me the idea.
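The spring idea can be sketched as a damped spring per axis, here for the pitch angle (the constants and names are mine, not the game's actual tuning):

```csharp
using System;

public class BodySpring
{
    // A damped spring pulling the car body's pitch angle toward a target
    // angle set by acceleration or braking.
    float angle, velocity;
    const float Stiffness = 40f; // how hard the body is pulled back
    const float Damping   = 8f;  // how quickly the wobble dies out

    public float Update(float targetAngle, float dt)
    {
        // Hooke's law plus a damping term, integrated per frame.
        float accel = (targetAngle - angle) * Stiffness - velocity * Damping;
        velocity += accel * dt;
        angle += velocity * dt;
        return angle;
    }
}
```

Braking sets a nose-down target angle, releasing the brake sets it back to zero, and the spring gives the lean and rebound for free.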

If you look carefully you’ll notice the lighting on the car changing when steering. It already moves in 3D space and the BasicEffect lighting is working.

In all a productive weekend!

F1 Modelling part 2

The result of the 3D mouse picking and modelling:

The black square is the current active color. The short clip shows what the model looks like in flat shaded mode. I’m getting more comfortable with the 3D stuff, still with the BasicEffect though, so shaders will be something I dive into later.

3D mouse picking

I have converted the 3D model of the Formula One car to my own VertexPositionNormalColor format. I want to create a few variants of the car with different colors so I needed a way to select vertices and give them a color. A simple editor would suit my needs so I started writing a tool for it. This (old) question on StackExchange helped me a lot: Get 3D mouse position.

This snippet reads the mouse and creates a Ray:
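A sketch of the standard approach with Viewport.Unproject, unprojecting the mouse position at the near and far planes (matrix variable names are mine):

```csharp
// Build a world-space picking ray from the current mouse position.
MouseState mouse = Mouse.GetState();

Vector3 nearPoint = GraphicsDevice.Viewport.Unproject(
    new Vector3(mouse.X, mouse.Y, 0f), projection, view, Matrix.Identity);
Vector3 farPoint = GraphicsDevice.Viewport.Unproject(
    new Vector3(mouse.X, mouse.Y, 1f), projection, view, Matrix.Identity);

Vector3 direction = Vector3.Normalize(farPoint - nearPoint);
Ray pickRay = new Ray(nearPoint, direction);
```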

The ray is still in world space, so it needs to be transformed so it matches the space of the 3D object I want to manipulate. This snippet transforms the ray so it matches the 3D object:
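One way to do it, assuming a world-space `mouseRay` and the object's `world` matrix (variable names are mine):

```csharp
// Bring the world-space mouse ray into the model's own space by applying
// the inverse of the model's world matrix.
Matrix inverseWorld = Matrix.Invert(world);
Vector3 rayOrigin = Vector3.Transform(mouseRay.Position, inverseWorld);
// Directions must ignore translation, hence TransformNormal.
Vector3 rayDir = Vector3.Normalize(
    Vector3.TransformNormal(mouseRay.Direction, inverseWorld));
Ray objectSpaceRay = new Ray(rayOrigin, rayDir);
```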

Now the ray matches the 3D object, so a triangle intersection on the faces of the mesh does the trick!

Note: the triangle-hitlocation function is not my own; I had it stored on my hard drive (I knew it would come in handy some day!). Unfortunately I don’t know the author.
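For reference, a self-contained version of such a test is the well-known Möller–Trumbore algorithm; this is my own sketch on plain tuples, not the unattributed function from the tool:

```csharp
using System;

public static class RayTriangle
{
    // Möller–Trumbore ray/triangle intersection.
    // Returns the distance along the ray to the hit, or null for a miss.
    public static float? Intersect(
        (float x, float y, float z) orig, (float x, float y, float z) dir,
        (float x, float y, float z) v0, (float x, float y, float z) v1,
        (float x, float y, float z) v2)
    {
        var e1 = Sub(v1, v0);
        var e2 = Sub(v2, v0);
        var p = Cross(dir, e2);
        float det = Dot(e1, p);
        if (MathF.Abs(det) < 1e-8f) return null;   // ray parallel to triangle

        float invDet = 1f / det;
        var t = Sub(orig, v0);
        float u = Dot(t, p) * invDet;               // first barycentric coordinate
        if (u < 0f || u > 1f) return null;

        var q = Cross(t, e1);
        float v = Dot(dir, q) * invDet;             // second barycentric coordinate
        if (v < 0f || u + v > 1f) return null;

        float dist = Dot(e2, q) * invDet;           // distance along the ray
        return dist >= 0f ? dist : (float?)null;
    }

    static (float x, float y, float z) Sub((float x, float y, float z) a, (float x, float y, float z) b)
        => (a.x - b.x, a.y - b.y, a.z - b.z);
    static float Dot((float x, float y, float z) a, (float x, float y, float z) b)
        => a.x * b.x + a.y * b.y + a.z * b.z;
    static (float x, float y, float z) Cross((float x, float y, float z) a, (float x, float y, float z) b)
        => (a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}
```

Run this against every face of the mesh with the object-space ray and keep the closest hit; that vertex (or face) is the one under the mouse.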