
In my not very objective opinion, this is probably one of the most beautiful equations ever written in the field of CGI. The rendering equation is the key to a lot of questions when it comes to understanding how lighting works in a 3D rendering… This article gives a general idea of how the mathematical formulation works, breaking apart the concepts and notation to make the equation friendlier to understand.

The equation:

I will start by showing the equation before dissecting it, as follows.

L_{o}(\omega_{o}) = \int_{\Omega} L_{i}(\omega_{i})\cos\theta_{i}\, f_{r}(\omega_{i}, \omega_{o})\, d\omega_{i}

If you have spent any time around CGI, you have probably seen it a few times, with different flavors or formulations; I will stick to this one for this article.

To understand how it all works, we will need to go back a bit to the Lambertian model (which I previously covered in the context of ray marching), but the math stays the same. If you are curious to dig deeper into how it works, I would suggest checking my lighting series, where we start with the Lambertian model and expand to include specular lighting…

From Lambertian to more control

Let’s start all black

I started with only a simple sky and blacked out every color in my scene. Sad, sad, sad…

The goal of Lambertian lighting is quite simple; let’s throw out the equation first…

I love to break apart every part of an equation; it makes it easier to understand and implement. What we want to compute is “what is the color of the pixel?”, and for now, we have a hit point in our scene. Well, for now we do not see it, because I forced every hit point to be black. I will take a sphere as an example, because we love spheres 🙂

I made the sphere white; now we need to shade it. Let’s walk through every term of the equation:

L dot N

Is the first part we need to look at.

L: The direction from the point we want to shade toward the light, built from a light position normally defined as a vec3 in GLSL or a float3 in HLSL.

N: The normal in world space at the point we want to shade.

The dot part:

Most shading languages have a dot operator; if not, the idea is simply to measure how two vectors are oriented towards each other. In 2D, we can express the dot product between vectors a and b as follows:

a \cdot b = a_{x}b_{x} + a_{y}b_{y} = \|a\|\|b\|\cos\theta

The dot product tells you how the surface normal N and the light direction L are oriented relative to each other.

Therefore we will use the power of GLSL’s built-in dot to deal with the dot product. Let’s code this simple idea before moving on to color and intensity.

vec3 lightPos = vec3(1.0, 1.0, -12.);
vec3 L = normalize(lightPos - pos);
vec3 N = computeNormal(pos);
col = vec3(dot(N, L));

I plug the dot product result into a vec3 constructor, as the result is a float and we want to express it as an RGB output color. Syntactic sugar, but very useful 🙂

Giving us:

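If you want to sanity-check the same math off the GPU, here is a minimal Python sketch of the N·L term (the `normalize` and `dot` helpers are mine; GLSL has these built in):

```python
import math

def normalize(v):
    # Scale a 3D vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    # Sum of component-wise products: measures how aligned two vectors are.
    return sum(x * y for x, y in zip(a, b))

# A point whose normal faces straight up, lit from three directions.
N = (0.0, 1.0, 0.0)
L_above = normalize((0.0, 1.0, 0.0))   # light directly overhead
L_side  = normalize((1.0, 1.0, 0.0))   # light at 45 degrees
L_graze = normalize((1.0, 0.0, 0.0))   # light at the horizon

print(dot(N, L_above))  # 1.0 -> fully lit
print(round(dot(N, L_side), 3))  # 0.707 -> partially lit
print(dot(N, L_graze))  # 0.0 -> no light at all
```

The three values are exactly the grayscale shades the GLSL above would write into `col`.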

So at this point, we are almost done. The hardest part is behind us, and all that remains to wrap our heads around is simply: what is the color of the light, and what is its intensity?

C and I

C: the color of the light

I: the intensity

So we simply need to define a color and an intensity. Let’s say the light will be green.

vec3 lightColor = vec3(0.01, 1., 0.01);

With an intensity of 1.5. Intensity is usually defined as a float, but to smash everything into GLSL, simply use the vec3(float a) constructor to assign a float to every member of the vec3.

float intensity = 1.5;
vec3 Li = vec3(intensity);

vec3 lightPos = vec3(1.0, 1.0, -12.);
vec3 L = normalize(lightPos - pos);
vec3 N = computeNormal(pos);
vec3 C = vec3(0.01, 1.0, 0.05);
col = vec3(dot(N, L)) * C;

Giving us:

For intensity, let’s add it this way:

vec3 lightPos = vec3(1.0, 1.0, -12.);
vec3 L = normalize(lightPos - pos);
vec3 N = computeNormal(pos);
vec3 C = vec3(0.01, 1.0, 0.05);
float scalarI = 1.5;
vec3 I = vec3(scalarI);
col = vec3(dot(N, L)) * C * I;

Giving us:
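For readers without a shader handy, here is the same final snippet as a throwaway CPU-side Python sketch (the `lambert` helper is a name I made up; note I also clamp the dot product to 0 so back faces stay black, which the GLSL above does not do):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(pos, normal, light_pos, light_color, intensity):
    # col = dot(N, L) * C * I, mirroring the GLSL snippet above.
    L = normalize(tuple(lp - p for lp, p in zip(light_pos, pos)))
    d = max(dot(normal, L), 0.0)  # clamp: back-facing points get no light
    return tuple(d * c * intensity for c in light_color)

pos = (0.0, 0.0, 0.0)
N = (0.0, 1.0, 0.0)
col = lambert(pos, N, (0.0, 2.0, 0.0), (0.01, 1.0, 0.05), 1.5)
print(col)  # (0.015, 1.5, 0.075): the green light scaled by 1.5
```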

Here is the final Pixel shader code 🙂 Everything we implemented is at line 94!

Blinn-Phong models

Let’s review this lighting model a bit:

I mean, pardon my super bad drawing skills on Paint… But here is a bit of naming and description

For a single point, we are working with these:

  • I: The light source
  • w: The direction from the point toward the light
  • n: The normal
  • h: The halfway vector
  • r: The reflection of w about the normal
  • v: The direction toward the viewer

If you are looking for a deeper explanation of the Blinn-Phong model, there are a lot of resources online.

What I want to focus on here is not the Blinn-Phong model itself, but how the material and the geometry work together, and how they will later be used in the equation.

So, this model allows us to compute the color of this tiny point using the following formula, which combines two types of properties: geometry properties and material properties. Here C is the output color, Kd is the diffuse property of the material, and Ks is its specular property.

C = I(\cos\theta\, K_{d} + K_{s}(\cos\varphi)^{\alpha})
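Here is a hedged Python sketch of this exact formula, using the usual Blinn half-vector h for cos φ (the function name and the test vectors are mine, purely illustrative):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(n, w, v, I, kd, ks, alpha):
    # C = I * (cos(theta) * Kd + Ks * cos(phi)^alpha)
    # cos(theta): angle between normal n and light direction w
    # cos(phi):   angle between normal n and halfway vector h
    h = normalize(tuple(a + b for a, b in zip(w, v)))
    cos_t = max(dot(n, w), 0.0)
    cos_p = max(dot(n, h), 0.0)
    return I * (cos_t * kd + ks * cos_p ** alpha)

n = (0.0, 1.0, 0.0)
w = normalize((0.0, 1.0, 1.0))   # light at 45 degrees
v = normalize((0.0, 1.0, -1.0))  # viewer mirroring the light
c = blinn_phong(n, w, v, I=1.0, kd=0.8, ks=0.5, alpha=32)
```

With the viewer mirroring the light, h lines up perfectly with n, so the specular lobe fires at full strength on top of the diffuse term.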

Before diving deeper, let’s review a little what these properties mean, starting from the material’s perspective:

Material Properties:

In the previous equation, K stands for the material, with Kd its diffuse property and Ks its specular property.

The material we apply to the surface we want to shade carries some properties that we define as material properties. Assume you have a table in your kitchen and you apply a very mirror-like material to it: you would expect the material to fit the table and affect its final look. Move that table into a different environment, with different lighting, and the table’s geometry does not change, but the material reacts differently. This is where material properties become handy! They define how the object will look depending on how it is placed, rotated, scaled and so on. But to work on a material, one needs geometry.

So let’s take a little look at geometry properties.

Geometry Properties:

Our scene is composed of geometry and materials; once a material is applied to a geometry, our shaders make sure to properly handle the computations for accurate lighting. But the geometry itself still has a big role to play: assume we have a simple plane in 3D… and remove all the material properties. We still want to look at how the surface would handle a very basic light hitting it.

Let’s start with a simple geometry term using w. This geometry term tells us how much light is coming from the light source and interacting with the plane!

Let’s assume we have a light source and a plane, where the plane is directly underneath the light source.

If we take this surface and rotate it, we can clearly see that the surface will receive less light! The amount of light per unit area drops as we rotate, eventually reaching 0 when the surface is parallel to the light.

Put more simply: as we rotate this plane away from facing the light directly, the portion of the surface covered by the same beam of light gets bigger and bigger.

As we tilt the surface normal towards perpendicular to the light source, the light energy is spread out over a bigger area, resulting in a darker plane.

A simple experiment to understand how a light’s energy illuminates a surface depending on its orientation: place yourself in a very dark room and point a simple flashlight at a flat sheet of paper. The more you rotate the sheet, the more the energy is spread out, and therefore the less light you see.

So in our simple equation, cos θ tells us exactly how much the light is spread over the surface.

So in short: how much light do we get on the surface, and how does it get spread around? All the material stuff is then thrown in on top of what we have built.
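The flashlight experiment is easy to tabulate: the lit patch grows as 1/cos θ while the brightness per unit area falls as cos θ. A tiny Python sketch, purely illustrative:

```python
import math

# A unit-area light beam hits a plane tilted by theta away from the light.
# The lit patch grows as 1/cos(theta), so the energy per unit area
# (and hence the perceived brightness) falls off as cos(theta).
for deg in (0, 30, 60, 85):
    theta = math.radians(deg)
    lit_area = 1.0 / math.cos(theta)  # the flashlight spot spreads out
    brightness = math.cos(theta)      # energy per unit area drops
    print(f"{deg:>2} deg: spot area x{lit_area:.2f}, brightness {brightness:.2f}")
```

At 60 degrees the spot already covers twice the area, so each point receives only half the energy; at grazing angles the brightness collapses toward 0.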


We have seen how simple materials are applied to surfaces. We have seen how a surface’s orientation alone can change the way light is handled by a surface material. Let’s now take a look at a simple material model.

BRDF material model

Let’s make it a bit simpler, now that we have dived in a bit. We have three main components we want to work with: the viewer, the light, and the normal of the surface. From those, we can compute the angle between the normal and the light direction, which gives us a very important variable to have in our toolbox!

The whole point of dealing with a material model is that we do not really give a fuck about how all those little details are computed! We need 3 key things, as I mentioned before: the light, the normal of the surface, and the viewer. If you have the angle between the normal and the light, you are pretty much good to go with a basic BRDF.

So in a very simple graphic, we have something like this!


Our equations and mathematics are quite interesting, but what about making it even simpler? Let’s rephrase: here are a few ways to describe how the rendering equation works.

  • From the camera position in 3D space, how does the material model change the way light is used to show things to the eye?
  • With the light and the normals, and V being our view, what computation is required to get that color?
  • Is there a way to encapsulate the way light works in CG?

I wanted to reformulate those questions as they all wrap around this single idea: given a point, what is its color, based on lighting, geometry, material and so on? We unpack a lot, but at its core, this is how the rendering equation gets useful for us! We have a single point; what is the color of that point? We use the rendering equation to dive a bit deeper and understand how it all works.

From this point on, I will condense this question into a single line: from an eye, how is a single surface rendered?

We will need to make our equations a bit simpler…

C = Icos(\theta) f_{r}(w,v)

From the previous image, you can easily see that the formula we are dealing with is way simpler: θ is simply the angle between our surface normal n and the incoming light direction w from our source I.

The fucker we need to dissect is now hiding a bit more…


I mean, how magical does it get, to simply pass a light and viewer to a function that will compute everything to make this point somehow a bit more PBR-ish… This is where it becomes magical: from now on, we won’t really focus on a specific material model, but mostly have some hands-on examples of how it works with our good friend, the rendering equation.
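To make the “BRDF as a black box” idea concrete, here is a Python sketch where f_r is just a function we hand w and v to. I use a constant Lambertian albedo/π as a stand-in material; that choice is my assumption, not the article’s model:

```python
import math

def lambertian_brdf(w, v, albedo=0.8):
    # The simplest possible BRDF: reflects equally in all directions,
    # ignoring w and v entirely. Divided by pi so energy is conserved.
    return albedo / math.pi

def shade(I, cos_theta, brdf, w, v):
    # C = I * cos(theta) * f_r(w, v): the BRDF is a black box we hand
    # w and v to. Swapping it changes the material, and nothing else.
    return I * max(cos_theta, 0.0) * brdf(w, v)

w = (0.0, 1.0, 0.0)  # toward the light
v = (0.0, 1.0, 0.0)  # toward the viewer
c = shade(I=3.0, cos_theta=0.5, brdf=lambertian_brdf, w=w, v=v)
```

Replacing `lambertian_brdf` with any fancier function (Blinn-Phong, microfacet, whatever) leaves `shade` untouched; that separation is the whole point.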

Going all over that hemisphere

Well, how good was it to render a single light source on a surface… The sad truth is that we need more to achieve a PBR result, either in the path-tracing or the rasterization world (they differ deeply, but for the sake of this article I will stick to the overview ideas).

So we have computed a light source, and we then passed all the computing to this secret friend. Have you ever wondered how this fucker works? Why use a single light when you can have them all…


In very raw terms, I would describe this part of the equation as what it means for a 25-year-old dude like myself to install Grindr! For sure, it brings a few more rules, but why deal with detailed rules when you can just have them all? I mean, we have a single point to shade, and we need to delegate all the computing to this function, which only requires two things we already have: w and v.

But what about making sure we gather the incoming light not from a single source, but from every direction we can throw from the point to shade? It would make things way more dynamic (well, not that dynamic), but still, we would have much better control over how light reaches the surface we want to show to the viewer’s eyes.

This way, we can easily write a single formula that represents this idea: what about making the point to shade the summation of all possible incoming lights?

In short, this is how we deal with it:

L_{o}(\omega_{o}) = \sum_{i} L_{i}(\omega_{i}) \cos\theta_{i}\, f_{r}(\omega_{i}, \omega_{o})

So we sum all lights from the point we want to shade.
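The summation version is straightforward to sketch in Python. The light list, the constant stand-in BRDF, and the string direction labels are all hypothetical, just to show the shape of the loop:

```python
import math

def brdf(w_i, w_o):
    # Constant Lambertian stand-in: ignores both directions.
    return 0.8 / math.pi

def shade_sum(lights, brdf, w_o):
    # L_o(w_o) = sum_i L_i(w_i) * cos(theta_i) * f_r(w_i, w_o)
    total = 0.0
    for L_i, cos_theta_i, w_i in lights:
        total += L_i * max(cos_theta_i, 0.0) * brdf(w_i, w_o)
    return total

# (radiance, cos(theta), direction) per light; directions are just
# labels here since our stand-in BRDF never looks at them.
lights = [
    (2.0, 1.0, "overhead"),
    (1.0, 0.5, "side"),
    (4.0, 0.0, "behind"),  # contributes nothing: grazing/back-facing
]
Lo = shade_sum(lights, brdf, "to_eye")
```

Note how the bright light "behind" adds exactly zero: the cos θ geometry term kills it before the BRDF ever matters.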

Sounds good, we have the summation of all lights at the point… But what about making it a bit less discrete? In a case where we do not deal with fixed lights, but handle all the energy of the scene to shade that point, like a sky for example. This is where we switch to the integral notation, while the idea stays the same.

L_{o}(\omega_{o}) = \int_{\Omega} L_{i}(\omega_{i}) \cos\theta_{i}\, f_{r}(\omega_{i}, \omega_{o})\, d\omega_{i}

Dealing with all lights…
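As a teaser for breaking this integral apart, here is a hedged Monte Carlo sketch in Python: sample directions uniformly over the hemisphere (pdf = 1/2π), weight each sample by cos θ and a Lambertian stand-in BRDF, and average. For a constant sky of radiance 1 and albedo 0.8, the exact answer is 0.8:

```python
import math
import random

def monte_carlo_Lo(Li, albedo, samples=100_000, seed=1):
    # Estimate L_o = integral over the hemisphere of
    # L_i(w_i) * cos(theta_i) * f_r  d(omega_i),
    # with uniform hemisphere sampling: pdf = 1 / (2*pi).
    rng = random.Random(seed)
    fr = albedo / math.pi          # Lambertian BRDF stand-in
    weight = 2.0 * math.pi         # 1 / pdf for each uniform sample
    total = 0.0
    for _ in range(samples):
        # For uniform solid-angle sampling on the hemisphere,
        # cos(theta) itself is uniformly distributed on [0, 1].
        cos_theta = rng.random()
        total += Li * cos_theta * fr * weight
    return total / samples

est = monte_carlo_Lo(Li=1.0, albedo=0.8)
```

With a constant sky the integral has a closed form (the cosine integrates to π over the hemisphere, cancelling the π in the BRDF), so the estimator should hover very close to 0.8; that cancellation is exactly the kind of thing we will break apart next time.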

So, end of part 1…

I am feeling a bit tired to be honest (feeling a bit pizza). But in the next post we will cover how to break this integral apart…

Until then,

Big love my friend!
