iL engine

iL engine (2008) was the 3D engine I made for my final university project. It was a really fun project, and I learned a lot from it.

Let me show you a video of it before describing its main features:

iLengine 2 from llorens.marti on Vimeo.

Forward Rendering vs Deferred Rendering

Initially the engine was built with a Forward Rendering Pipeline (more info here), but after some consideration, I decided to go one step further and rewrite the whole pipeline as a Deferred Rendering Pipeline (take a look here).

This change allowed me to have more lights in the scene, and it made things easier because materials were "detached" from the lights in the scene. On the other hand, it made transparent materials more complicated to handle.

Shadows

The shadows rendered by iL Engine were generated with a standard Shadow Map algorithm (more info here). However, while I was researching shadows, I came across something very interesting.

There was a video from Unreal Engine that used a technique based on projecting cube maps with the shadows baked in. Let me explain the scenario:

Let's imagine that we have an omni light at some point in space, plus a mesh representing the object that emits the light, for example a candle with some filigree to cast shadows.

Now, we use a 3D tool like 3ds Max and project the light from that candle into a cube map texture. As a result, we get 6 grayscale textures (one for every face of the cube). And here comes the trick: we make a second cube map, identical to the first one, but with blur applied.

As a final step, while processing that light in the scene, we only need to (a shader sketch follows this list):

  • Get the distance and direction from the omni light to the pixel.
  • Use the direction to sample both cube maps (the first one gives a crisp shadow value, the second a soft/blurred one).
  • Interpolate these two values depending on the distance between the omni light and the pixel.
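
Expressed as shader code, the per-pixel work looks roughly like this. It is a minimal GLSL sketch; the uniform names and the linear blend range are my assumptions, not the original shader:

// Cube map shadow trick: blend a crisp and a blurred baked shadow
// cube map by distance (all names below are illustrative).
uniform samplerCube uShadowCrisp;   // baked shadow, sharp
uniform samplerCube uShadowBlurred; // same shadow with blur applied
uniform vec3  uLightPos;            // omni light position (world space)
uniform float uBlendStart;          // distance where softening begins
uniform float uBlendEnd;            // distance where the shadow is fully soft

float cubeMapShadow(vec3 worldPos)
{
    vec3  toPixel = worldPos - uLightPos;
    float dist    = length(toPixel);
    vec3  dir     = normalize(toPixel);

    float crisp = textureCube(uShadowCrisp, dir).r;
    float soft  = textureCube(uShadowBlurred, dir).r;

    // The farther the pixel is from the light, the softer the shadow.
    float t = clamp((dist - uBlendStart) / (uBlendEnd - uBlendStart), 0.0, 1.0);
    return mix(crisp, soft, t);
}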

Of course, this is not a "real" shadow representation, but it is far cheaper than rendering 6 shadow maps (one per cube map face) and can trick your eye in many scenarios.

Lighting

The first iteration of the lighting algorithm was a standard Phong BRDF model (look here). However, while changing the rendering pipeline to a deferred one, I wanted to try something.

With a Phong model, all the computation ends with pixel colors clamped between 0.0f (black) and 1.0f (white). With some modifications to the equation, I stopped clamping any value to the 0.0f - 1.0f range. This way, I ended up with values ranging from zero to something very big. Then, I split these raw float values into an LDR range and an HDR range.

To do that I used something like this:

vec3 my_color;                                    // raw, unclamped lighting value
vec3 ldr = clamp(my_color, 0.0, 1.0);             // everything up to 1.0
vec3 hdr = clamp(my_color - vec3(1.0), 0.0, 1.0); // everything above 1.0

After that, I had the pure float color separated into two ranges, LDR and HDR.

The LDR range was used as the diffuse value inside the rendering equation.

The HDR range was used as the glow value inside the rendering equation.

These values allowed the pipeline to make pixels glow when the color values of a light were high enough (greater than 1.0f).
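
In the deferred pipeline, a natural way to carry both ranges forward is to write them to two render targets. This is only a sketch of that idea; the buffer layout and the computeUnclampedPhong() helper are assumptions, not the engine's actual code:

// Sketch: routing the split ranges to two render targets so a later
// pass can blur the glow buffer. computeUnclampedPhong() is a
// hypothetical helper standing in for the modified Phong equation.
vec3 light = computeUnclampedPhong();
vec3 ldr = clamp(light, 0.0, 1.0);
vec3 hdr = clamp(light - vec3(1.0), 0.0, 1.0);
gl_FragData[0] = vec4(ldr, 1.0); // lit color, used as the diffuse value
gl_FragData[1] = vec4(hdr, 1.0); // glow value, blurred in a later pass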

Materials

The material system used in iL engine was my first attempt at building a flexible solution for representing a wide range of different surfaces.

It let me create complex and beautiful materials. One aspect that amazed me was the ability to have multiple materials controlled by a mask: the engine could render materials that distorted over time, blended with other complex materials through a hand-crafted mask texture.
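
In spirit, that mask-driven blend boiled down to something like the following GLSL sketch; the sampler names, uTime, and the distortion formula are illustrative assumptions, not the engine's actual material code:

// Mask-driven blend of two materials, with a time-based UV
// distortion on the first one (all names are illustrative).
uniform sampler2D uMaterialA; // first material's diffuse
uniform sampler2D uMaterialB; // second material's diffuse
uniform sampler2D uMask;      // hand-crafted blend mask
uniform float uTime;

vec3 blendMaterials(vec2 uv)
{
    // Distort the first material's coordinates over time.
    vec2 distorted = uv + 0.02 * vec2(sin(uTime + uv.y * 10.0),
                                      cos(uTime + uv.x * 10.0));
    vec3 a = texture2D(uMaterialA, distorted).rgb;
    vec3 b = texture2D(uMaterialB, uv).rgb;
    float m = texture2D(uMask, uv).r;
    return mix(a, b, m); // the mask controls which material shows
}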

One problem to note was that the system itself was very rigid and static. Almost all materials were composed of 4 layers, with many slots filled with default textures, so unnecessary instructions ended up being executed.

Another problem with the material system was that diffuse values were not linearized, which is very important for rendering correct lighting. You can read more here.
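
The usual fix is to convert the stored gamma-space diffuse values to linear space before lighting. A minimal sketch, assuming a plain 2.2 gamma and an illustrative uDiffuse sampler:

// Approximate sRGB-to-linear conversion before lighting
// (the plain 2.2 gamma and the names are illustrative).
vec3 linearizeDiffuse(sampler2D uDiffuse, vec2 uv)
{
    return pow(texture2D(uDiffuse, uv).rgb, vec3(2.2));
}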

Post process

There were several post processes applied to the final image. Two of them were Glow and SSAO.

SSAO

The implementation I used to generate ambient occlusion in screen space was a very simple one. Instead of using a cloud of points inside a sphere (take a look here for Crytek's SSAO), I used a purely depth-based algorithm. The basic idea was (a shader sketch follows the list):

  • Sample the neighbor pixels and obtain their depths.
  • Sample the current pixel for its depth.
  • Define min and max depth values.
  • Compute the average of all the depth values.
  • Return that average as a 0.0f - 1.0f occlusion value.
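
One plausible reading of those steps as a GLSL sketch (the kernel size, the depth-difference test, and the min/max range are assumptions, not the engine's exact shader):

// Depth-only SSAO sketch: average clamped depth differences
// against the neighborhood (all names are illustrative).
uniform sampler2D uDepth;
uniform vec2  uTexelSize; // 1.0 / screen resolution
uniform float uMinDepth;  // differences below this are ignored
uniform float uMaxDepth;  // differences above this are clamped

float ssao(vec2 uv)
{
    float center = texture2D(uDepth, uv).r;
    float occlusion = 0.0;
    const int RADIUS = 2;
    for (int x = -RADIUS; x <= RADIUS; ++x)
    {
        for (int y = -RADIUS; y <= RADIUS; ++y)
        {
            float d = texture2D(uDepth, uv + vec2(x, y) * uTexelSize).r;
            float diff = center - d; // positive when the neighbor is closer
            // Ignore tiny differences, clamp huge ones.
            if (diff > uMinDepth)
                occlusion += min(diff, uMaxDepth);
        }
    }
    float samples = float((2 * RADIUS + 1) * (2 * RADIUS + 1));
    // Average, normalized to a 0.0 - 1.0 occlusion value.
    return clamp(occlusion / (samples * uMaxDepth), 0.0, 1.0);
}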

It was fast and simple, but unfortunately I never solved the problem of near pixels casting occlusion onto far-away pixels.

Glow

The implementation of glow came from an AMD paper (that I can no longer find) where there was an interesting trick. Let's begin with a bit of context: a basic glow is constructed by first blurring along a single axis (for example X) and then, from that result, blurring along the other axis (take a look here).

The trick was that instead of blurring the X axis first and then Y, the algorithm took 4 samples arranged in a square, making sure each sampling coordinate fell exactly between 4 pixels. This way the algorithm got the sampler's bilinear interpolation for free.
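
A minimal sketch of one such pass; the one-texel offsets are a reconstruction of the idea, not the paper's exact numbers:

// Square 4-tap blur: each tap lands on a 2x2 pixel corner, so the
// bilinear filter averages 16 pixels in only 4 fetches.
uniform sampler2D uSource;
uniform vec2 uTexelSize; // 1.0 / source resolution

vec3 blurTap(vec2 uv)
{
    vec3 c = vec3(0.0);
    c += texture2D(uSource, uv + vec2(-1.0, -1.0) * uTexelSize).rgb;
    c += texture2D(uSource, uv + vec2( 1.0, -1.0) * uTexelSize).rgb;
    c += texture2D(uSource, uv + vec2(-1.0,  1.0) * uTexelSize).rgb;
    c += texture2D(uSource, uv + vec2( 1.0,  1.0) * uTexelSize).rgb;
    return c * 0.25;
}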

The algorithm was fast, but the final glow had a "blocky" look, which was a compromise between image quality and speed.

UI

The UI code implemented a feature set similar to what DXUT provided at that time. There was nothing new in terms of features, but the code was entirely original and did not rely on DXUT controls.

From a learning perspective it was a great success, because I got to see how the different controls were implemented internally and how they interacted with each other.
