Welcome back 🙂 In the last article on the Blink Node, we saw how a simple piece of source code can create a single color, expose parameters and so on… In this article, I want to get a bit deeper into the code side of things! It will be a bit more technical and we will dive into some of the mathematics, but remember that you can use Blink without any coding experience.

So let’s rewind a little bit:

In the previous tutorial, I covered how to use a simple texture to generate those images using signed distance functions and Blink code. I have to be honest, that code was written before going to bed and it was not the most verbose one… At this point, I would like to rewind a little bit and recall that this is a “per-pixel” kernel (ePixelWise in Nuke terms), meaning that our code runs once for every pixel.

So let’s go back to basic again:

The process function assigns a color to every pixel of the image we want to manipulate. (Think of it a bit like a fragment shader running on a single full-screen quad.)

void process(int2 pos) {
    dst() = float4(1.0f, 0.0f, 0.0f, 1.0f);
}

As you could easily expect… it simply produces a red image. Very boring, but still very useful for understanding how it works. Have fun manipulating those float4 arguments to produce new colors 🙂

But what is float4…

In Blink, we deal with different types of variables, just like we do in C or C++! If you have ever written C code, it won’t be a huge surprise to see words like int, float and bool all over the place. They simply describe the type of value we want to store…

For example:

int is used to store integers -> your age, your apartment unit and so on…

float is used to describe decimal numbers.

float4 is simply a type that holds 4 floating-point values (decimal numbers) in a single variable. In languages like C or C++ you do not get this type out of the box, but in GPU languages such as GLSL or HLSL it comes built in.

Because we are dealing with colors, and colors have RGB components, it is much easier to structure them under a single variable name holding the whole value of the color. For example, when using red, you do not really want to deal with each component separately, but with the combination of the three together.

Good thing: Nuke Blink allows us to deal with colors this way, just as we would in GLSL or HLSL (which are the shader languages for the GPU).

Note: But why float4 and not float3? -> Because we also deal with alpha, therefore RGBA.

To see it in action:

Take a look at each variable: red, green and blue…

What will be the color of our textures?

Code is here:

    float3 red = float3(1.0f, 0.0f, 0.0f);
    float3 green = float3(0.0f, 1.0f, 0.0f);
    float3 blue = float3(0.0f, 0.0f, 1.0f);

    dst() = float4(green, 1.0f);

To understand these types, have fun changing the x, y, z values (think of them as r, g, b) of each variable.

Let’s go a bit deeper!

At this point, you should be able to generate a single color… All of that for this, how boring, you might think. But here is the deal: this code runs for every pixel, just like a fragment shader. Nuke made a nice wrapper over all of this so you don’t have to worry about how the GPU deals with it, and to be honest, it is quite awesome!

So wait: does that mean I could go on and write Blink code as SIMD-ish code?

Yes sir, and this is where the real power happens! This code runs for every pixel of the texture, without caring at all about what the other pixels are doing. The code runs per pixel, the output is the image, and that’s it.

Blink deals extremely well with this model: send me a pipe, I will turn pixel A into pixel B.

You write a function that processes a single pixel, and it is applied to all pixels.
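As a mental model (plain C++, not the actual Blink API), the framework behaves roughly like this hypothetical driver loop calling your kernel body once per pixel:

```cpp
#include <array>

// A tiny 4x4 RGBA image: one pixel = four floats.
struct Float4 { float x, y, z, w; };

constexpr int W = 4, H = 4;
std::array<Float4, W * H> image{};

// The "kernel body": decides a pixel's color using only its own coordinates,
// never looking at any other pixel.
Float4 processPixel(int px, int py) {
    return Float4{1.0f, 0.0f, 0.0f, 1.0f}; // every pixel goes red
}

// What the framework conceptually does: run the body for all pixels.
// Each call is independent, which is why it parallelizes so well on a GPU.
void runKernel() {
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            image[y * W + x] = processPixel(x, y);
}
```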

Nice, we now know that Blink is awesome for generating pixels. You might ask yourself: but how do I draw a simple line? Or how do you rasterize a triangle? (Spoiler: we won’t go into triangle rasterization, but we will deal with a single line.)

Now that we understand a bit more how the Blink node works, let’s use it to draw a single line, review the equation of a line, and see how to output it in Nuke.

Single line in Blink

We will start with a single color, red, and make a Blink node that simply outputs the same color for every pixel. Let’s do it very direct and raw! The question I will ask myself at this point is:

How can I change the color of a certain pixel to the output texture?

So far, we have only been using dst() = float4(1.0f, 0.0f, 0.0f, 1.0f) to output a red color… But how could we access only a certain pixel of the output? Let’s do that:

Let’s make a Blink node that changes a specific pixel at an x and y location of the image. (x runs along the width and y along the height.)

So the basics here are to access the x and y coordinates of the texture whose color we want to change:

Every pixel has an X and Y coordinate value, as we see in this very horrible drawing!

Good thing: Blink Script allows us to access a specific pixel location by indexing dst() with coordinates.

In practice, we now have a red pixel at x = 5 and y = 5…

Have fun and load the whole code into your own Blink node 🙂

kernel PixelColorAT : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom> format;
  Image<eWrite, eAccessRandom> dst;

  void process(int2 pos) {
    // Pixel coordinates are integers, so no float literals here.
    int xRed = 5;
    int yRed = 5;

    dst(xRed, yRed) = float4(1.0f, 0.0f, 0.0f, 1.0f);
  }
};

To deal with triangles we need lines, so from this point let’s draw a single line. We need our origin, which is the x position (left to right), and then we need to spawn pixels from this origin along the y axis (bottom to top).

Note: Could you make a simple line using math and UVs? -> Yes, but we are not there yet!

So we add these few lines to our Blink node, which produces a line from the x origin up to ysteps pixels high. Meaning we pick an x position (left to right) and then draw those pixels in white going up the y axis:

This code is quite simple: we make a for loop so that dst(12, y) always starts from x = 12 and lights the pixel at each y. It makes every pixel quite easy to access. Also note that we keep the same red pixel we made previously, meaning we can easily layer pixel assignments. That could be useful for stuff like a 3D GBuffer (but more on that later). For now, let’s just output a white line!

Here is the code

kernel PixelColorAT : ImageComputationKernel<ePixelWise> 
{
  Image<eRead, eAccessRandom> format; 
  Image<eWrite, eAccessRandom> dst;  
  void process(int2 pos) {

    int ysteps = 50;
    for (int y = 0; y < ysteps; y++) { 
      dst(12,y) = float4(1.0f); 
    }
   
    int xRed = 5;
    int yRed = 5;

    dst(xRed, yRed) = float4(1.0f, 0.0f, 0.0f, 1.0f);

  }
};

Here is the result:

Triangles are my friends, but let’s dig deeper into the single line

Cool, we now have a line. But what do we need to draw a triangle? More lines. Before diving into triangles and rasterization, let’s go over what makes a line a line.

Let’s change those pixels to make a line pointing from the bottom-left corner to the top-right corner! Starting from the previous code, let’s change the x position of the pixel to be the y from the loop:


    int ysteps = 50;
    for (int y = 0; y < ysteps; y++) { 
      dst(y,y) = float4(0.0f, 1.0f, 0.0f, 1.0f); 
    }

And there we have a line, not pointing straight up, but with different x and y. At this point we only deal with a single loop, mapping the pixel’s x and y positions from the same loop iteration variable y!

dst(y,y)

At last we have a single line!
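The same dst(y, y) idea can be sketched in plain C++, with a tiny 2D grid standing in for the output image (names hypothetical):

```cpp
#include <array>

constexpr int SIZE = 50;
// 0 = black, 1 = lit; a single-channel stand-in for the output image,
// indexed as grid[y][x].
std::array<std::array<int, SIZE>, SIZE> grid{};

// Light one pixel per loop step at (y, y): a diagonal of slope 1
// from the bottom-left corner toward the top-right.
void drawDiagonal() {
    for (int y = 0; y < SIZE; ++y)
        grid[y][y] = 1;
}
```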

Let’s tweak a bit more:

What about changing the x coordinate to be y * 2? Easy to do:

  void process(int2 pos) {

    int ysteps = 50;
    for (int y = 0; y < ysteps; y++) { 
      dst(y * 2,y) = float4(0.0f, 1.0f, 0.0f, 1.0f); 
    }

From this point, it is quite easy to see that we can manipulate every pixel using this loop. I change the slope of the line by passing y * 2 as the x coordinate while keeping the vanilla y. We can see the line change as we derive x from a single operation on y.
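In line-equation terms, dst(y * 2, y) draws the set of pixels satisfying x = 2y, which is just the familiar slope-intercept form with a slope of 1/2:

```latex
x = 2y
\quad\Longleftrightarrow\quad
y = \tfrac{1}{2}\,x + 0
\qquad\text{(i.e. } y = mx + b \text{ with slope } m = \tfrac{1}{2},\ \text{intercept } b = 0\text{)}
```

So doubling the x step makes the line twice as shallow, which is exactly what shows up in the viewer.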

I do not want this article to turn into a line-drawing one, but now that the idea is expressed and we understand how Blink works… let’s move to 3D.

Let’s move to 3D!

Once again, we are dealing with a single 2D texture! What about building a 3D world out of it using only one node? This tutorial is only about one node… the Blink one!

Blink is quite incredible: it allows us to write fragment-like code and merge code with creativity. We have been dealing with simple lines and operations in the previous part; what about pushing our Blink knowledge to the level of generating a sphere, without using PointRender…

In this part, I will dive a bit more into math and shaders!

What are signed distance functions?

If you made it to this point: SDFs are mathematical functions we use to query 3D space (in short). We define the 3D space and query it; if a surface is very close to the point we want to render, we compute the color of that point. We use mathematics to create those shapes.
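For a sphere centred at the origin, the query is a single line of math. Here is a plain C++ sketch mirroring the sphere() function used in the kernel below (function name is just for illustration):

```cpp
#include <cmath>

// Signed distance from point p = (x, y, z) to a sphere of radius r
// centred at the origin: negative inside, zero on the surface,
// positive outside.
float sphereSDF(float x, float y, float z, float r) {
    return std::sqrt(x * x + y * y + z * z) - r;
}
```

A point at (3, 0, 0) queried against a unit sphere sits 2 units away from the surface, and the origin itself sits 1 unit inside it.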

This is not an article about ray-marching… so I won’t dive into the details!

(If you are curious to understand how it works, I wrote a long series on SDFs and ray-marching.)

But in short, let’s include a single sphere:

I start by using this script:

kernel UVmap : ImageComputationKernel<ePixelWise>
{
  Image<eRead,eAccessRandom, eEdgeClamped> src;    //I
  Image<eWrite> dst;                               //O  
  local:
    float width;                                   
    float height;                                  
  
  void init() {
    width = src.bounds.x2;                         //function to find right edge
    height = src.bounds.y2;                        //function to find top edge
  }
  float sphere(float3 pos, float r)
  {
    return length(pos) - r;
  }
  float map(float3 pos)
  {
    float3 q = pos;

    /*
    q.x = fmod(q.x + 4.0f, 8.0f) - 4.0f;
    q.z = fmod(q.z + 4.0f, 8.0f) - 4.0f;
    q.y = fmod(q.y + 4.0f, 8.0f) - 4.0f;
    */

    float d = sphere(q, 1.5f);

    return d;
  }
  
  float3 computeNormal(float3 pos)
  { 
    return normalize(float3(
        map(pos + float3(0.1f, 0.0f, 0.0f)) - map(pos - float3(0.1f, 0.0f, 0.0f)),
        map(pos + float3(0.0f, 0.1f, 0.0f)) - map(pos - float3(0.0f, 0.1f, 0.0f)),
        map(pos + float3(0.0f, 0.0f, 0.1f)) - map(pos - float3(0.0f, 0.0f, 0.1f))
    ));
  }
  void process(int2 pos) {

    float2 fg = float2(pos.x, pos.y);
    float2 ires = float2(width, height);
    // (fragCoord - .5 * iResolution.xy) / iResolution.y;
    float2 uv = (fg - .5f * ires) / ires.y;

    float3 rd = normalize(float3(uv, 1.0f));
    float3 pos2 = float3(25.0f, 5.0f, -15.0f);
    float3 color = float3(uv.x, uv.y, uv.x * uv.y);

    for (int i = 0; i < 512; i++)
    {
      float d = map(pos2);
      if (d < 0.01f)
      {
        color = float3(1.0f);
        float3 nws = computeNormal(pos2);
        float3 sunPos = float3(-2.f, 2.f, -4.f);
        float sunFactor = dot(normalize(nws), normalize(sunPos));

        float3 basicColor = float3(.2f, .3f, .88f);
        color = basicColor * sunFactor * normalize(pos2);
      }
      pos2 += d * rd;
    }

    dst(0) = color.x;
    dst(1) = color.y;
    dst(2) = color.z;
    dst(3) = 1.0f;
  }
};

SDFs are queried from the map() function.

Now we have a single sphere! We can see that the sphere color is not very credible… let’s add a bit of shading to what we have built so far! At this point we have a single sphere, rendered in white… To achieve this image I use this code:

  color = float3(uv.x, uv.y, uv.x * uv.y);


    for(int i=0; i < 512; i++)
    {
      float d = map(pos2);
      if(d < 0.01)
      {
        color = float3(1.0f); 
      }
       pos2 += d * rd;    
    }

Let’s shade that sphere! To shade it, we need the normal at the hit point. Let’s compute it using some simple math: we query the distance field with a small offset along each axis, and the differences give us the normal of the surface we want to work with.

  float3 computeNormal(float3 pos)
  { 
    return normalize(float3(
        map(pos + float3(0.1f, 0.0f, 0.0f)) - map(pos - float3(0.1f, 0.0f, 0.0f)),
        map(pos + float3(0.0f, 0.1f, 0.0f)) - map(pos - float3(0.0f, 0.1f, 0.0f)),
        map(pos + float3(0.0f, 0.0f, 0.1f)) - map(pos - float3(0.0f, 0.0f, 0.1f))
    ));
  }
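What this code computes is a central-difference approximation of the distance field’s gradient, which points along the surface normal (here d is the map() function above):

```latex
N \approx \operatorname{normalize}\!\left(
\begin{pmatrix}
d(p + \epsilon_x) - d(p - \epsilon_x)\\
d(p + \epsilon_y) - d(p - \epsilon_y)\\
d(p + \epsilon_z) - d(p - \epsilon_z)
\end{pmatrix}\right),
\qquad
\epsilon_x = (0.1, 0, 0),\
\epsilon_y = (0, 0.1, 0),\
\epsilon_z = (0, 0, 0.1)
```

A smaller offset gives a sharper normal but is more sensitive to precision; 0.1 is a pragmatic choice for this scene scale.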

So from this point we are able to implement a simple Lambertian lighting model (no fancy BRDF). But for now, let’s simply generate a single sphere with its normals in worldspace.

Awesome we now have a sphere!

Lambertian in short

We will now add lighting to this sphere SDF.

The goal of Lambertian lighting is quite simple; let’s throw the equation out first…
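The diffuse (Lambertian) term can be written as:

```latex
\text{color} = C \cdot I \cdot \max(0,\ N \cdot L)
```

where C is the light color, I its intensity, N the surface normal and L the direction toward the light. The max clamp avoids negative light on back-facing points; note that the short snippets in this section omit it for brevity.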

I love to break down every part of an equation; it makes it easier to understand and implement. What we want to compute is “what is the color of the pixel”, and at this point we have a hit point in our scene. Well, for now we barely see the shape, because I forced every hit point to a single flat color. I will take a sphere as an example, because we love spheres 🙂

I made the sphere white; now we need to shade it. Let’s walk through every term of the equation:

L dot N

Is the first part we need to look at.

L: the direction from the shaded point toward the light, computed from the light position in worldspace. Normally a vec3 in GLSL or a float3 in HLSL.

N: the normal in worldspace at the point we want to shade.

The dot part:

Most shading languages have a dot operator; if not, the idea is simply to measure how two vectors are oriented towards each other. In 2D we could express the dot product between vectors a and b as follows.
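In 2D the dot product expands to:

```latex
a \cdot b \;=\; a_x b_x + a_y b_y \;=\; \lVert a \rVert\,\lVert b \rVert \cos\theta
```

For normalized vectors this is just cos θ: 1 when they point the same way, 0 when they are perpendicular, and negative when they point away from each other, which is exactly the behaviour we want for lighting.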

The dot product tells you how the surface normal N and the light direction L are oriented relative to one another.

Therefore we will use the built-in dot of GLSL to deal with the dot product. Let’s code this simple idea before moving to color and intensity.

vec3 lightPos = vec3(1.0, 1.0, -12.);
vec3 L = normalize(lightPos - pos);
vec3 N = computeNormal(pos);
col = vec3(dot(N, L));

I plug the dot product result into a vec3 constructor, as the result is a float and we want to express it as an RGB output color. Syntactic sugar, but very useful 🙂

Giving us:

C and I

C: the color of the light

I: the intensity

So we simply need to define a color and an intensity. Let’s say the light will be green.

vec3 lightColor = vec3(0.01, 1., 0.01);

With an intensity of 1.5. Intensity is usually defined as a float, but to smash everything into GLSL, simply use vec3(float a) to assign one float to every member of the vec3.

float intensity = 1.5; vec3 Li = vec3(intensity);

vec3 lightPos = vec3(1.0, 1.0, -12.);
vec3 L = normalize(lightPos - pos);
vec3 N = computeNormal(pos);
vec3 C = vec3(0.01, 1.0, 0.05);
col = vec3(dot(N, L)) * C;

Giving us:
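The same computation can be sketched in plain C++ for a single point with a known normal (all names and values here are hypothetical, not from the article’s scene):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return Vec3{v.x / len, v.y / len, v.z / len};
}

// Lambert term: lightColor * intensity * (N . L), with N and L normalized.
// The clamp keeps back-facing points black instead of negative.
Vec3 lambert(Vec3 N, Vec3 lightDir, Vec3 lightColor, float intensity) {
    float nDotL = dot(normalize(N), normalize(lightDir));
    if (nDotL < 0.0f) nDotL = 0.0f;
    return Vec3{lightColor.x * intensity * nDotL,
                lightColor.y * intensity * nDotL,
                lightColor.z * intensity * nDotL};
}
```

With a normal facing straight at a green light of intensity 1.5, the dot product is 1 and the shaded color is simply the light color scaled by the intensity.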

Overview

At this point, you are able to generate SDFs in Nuke! Using those SDFs, the Blink node becomes a tool for any Nuke pipeline! I simply wanted to explore how to use the Blink node to create stuff!

The full code is here:

kernel UVmap : ImageComputationKernel<ePixelWise>
{
  Image<eRead,eAccessRandom, eEdgeClamped> src;    //I
  Image<eWrite> dst;                               //O  
  local:
    float width;                                   
    float height;                                  
  
  void init() {
    width = src.bounds.x2;                         //function to find right edge
    height = src.bounds.y2;                        //function to find top edge
  }
  float sphere(float3 pos, float r)
  {
    return length(pos) - r;
  }
  float map(float3 pos)
  {
    float3 q = pos;
    q.x = fmod(q.x + 4.0f, 8.0f) - 4.0f;
    q.z = fmod(q.z + 4.0f, 8.0f) - 4.0f;
    q.y = fmod(q.y + 4.0f, 8.0f) - 4.0f;

    float d = sphere(q, 1.5f);

    return d;
  }
  
  float3 computeNormal(float3 pos)
  { 
    return normalize(float3(
        map(pos + float3(0.1f, 0.0f, 0.0f)) - map(pos - float3(0.1f, 0.0f, 0.0f)),
        map(pos + float3(0.0f, 0.1f, 0.0f)) - map(pos - float3(0.0f, 0.1f, 0.0f)),
        map(pos + float3(0.0f, 0.0f, 0.1f)) - map(pos - float3(0.0f, 0.0f, 0.1f))
    ));
  }
  void process(int2 pos) {

    float2 fg = float2(pos.x, pos.y);
    float2 ires = float2(width, height);
    // (fragCoord - .5 * iResolution.xy) / iResolution.y;
    float2 uv = (fg - .5f * ires) / ires.y;

    float3 rd = normalize(float3(uv, 1.0f));
    float3 pos2 = float3(25.0f, 5.0f, -15.0f);
    float3 color = float3(uv.x, uv.y, uv.x * uv.y);

    for (int i = 0; i < 512; i++)
    {
      float d = map(pos2);
      if (d < 0.01f)
      {
        color = float3(1.0f);
        float3 nws = computeNormal(pos2);
        float3 sunPos = float3(-2.f, 2.f, -4.f);
        float sunFactor = dot(normalize(nws), normalize(sunPos));

        float3 basicColor = float3(.2f, .3f, .88f);
        color = basicColor * sunFactor * normalize(pos2);
      }
      pos2 += d * rd;
    }

    dst(0) = color.x;
    dst(1) = color.y;
    dst(2) = color.z;
    dst(3) = 1.0f;
  }
};
