If you come from a shader or 3D GPU programming background, chances are you use time in your fragment shaders. Every engine and every language deals with timing differently (it is still a large subject of exploration in computer science)… Anyhow, let’s explore timing in Nuke and Blink! As we saw in the last post, Blink has a C++ syntax feel while also being HLSL-friendly. When writing shaders, you will often want to deal with time, or at least have a way to reach time in your GPU program… Nuke does not provide this, so in this tutorial we will find a way to access time using a texture-sampling hack.

#### Time

Why use time?

In the camera world, timing is what drives the sensor of our cameras: stack all of those captured images together and we have a “video”. In the field of computer graphics, we do not have this luxury… We deal with boring static numbers, but by changing these numbers over time, we get a sense of “video”.

Time is a precious value in CG programming: it allows us to write code that not only produces an image, but makes a sequence of images change, and therefore produces a “video”.

I was surprised that Nuke does not have a “Time” node that I could simply use to drive my Blink code… So let’s build “somewhat” of a timer in Blink.

Nuke deals with frames, so from here on “time” really means “frame number”.

So the question is: How do we actually deal with time in Blink?

We have videos, and we have a way to access every frame of a video from Blink… So let’s get our hands dirty and produce a video that we will use to “represent” time in our Blink node.

The key here is very simple: I will render different red, green and blue values into a simple video, then query this video in my Blink node. In short, it looks like this:

  void process() {
    float time = src().x; // The red value from the current frame
    dst() = src();
  }

And I rendered a simple 0-to-1 ramp on the red channel in Blender:

The video looks like this:

I won’t go into FPS differences from one software to another… But at this point you should get the idea: we generated a video where the red channel goes from 0 to 1, and we sample this texture in our Blink node. Hit play, and as the seconds tick up on the timeline, the texture gets more red!

So the red channel of the sampled texture changes over time, and we can now read it in our Blink node!
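The ramp itself is trivial to describe: the red value is just the current frame divided by the total frame count. Here is a minimal C++ sketch of that mapping (the name `frameToRed` and its parameters are illustrative, not from Nuke or Blender):

```cpp
// Map a frame number to the 0..1 red ramp that the rendered video encodes.
// (Illustrative helper — the actual ramp is baked into the video file.)
float frameToRed(int frame, int totalFrames) {
    return static_cast<float>(frame) / static_cast<float>(totalFrames);
}
```

At frame 0 the video is black on the red channel, and at the last frame it is fully red, which is exactly what the Blink node will read back as “time”.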

The setup looks like this:

#### Fractional part of a float

At this point we need to deal with the fractional part of a decimal number. In GLSL we would use the fract function. In Nuke we do not have this sexiness, so we need to write it ourselves. In short, here is how the fract function works:

Given 1.5 -> it returns .5;

Given 12983448.55 -> It returns .55;

So, in short: we simply feed a float to the function and it returns what comes after the decimal point. If you are curious about how the algorithm works, I could make a post on it, but it is very simple!

For every non-integer number, we simply subtract its floor (the closest integer below it) from the number itself. I won’t go into details as it is quite simple to understand, but Nuke does not provide this type of function in Blink! So I simply wrote my own 🙂

  float BlinkFract(float x)
  {
    return x - floor(x);
  }
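Since Blink shares C++ syntax, the same function compiles as plain C++ with `std::floor`, which makes it easy to sanity-check outside of Nuke (this standalone version is my own sketch, not part of the Blink API):

```cpp
#include <cmath>

// Same logic as the Blink version: subtract the floor from the input,
// leaving only the fractional part.
float BlinkFract(float x) {
    return x - std::floor(x);
}
```

One subtlety worth knowing: because it uses floor (not truncation), negative inputs still map into [0, 1) — for example BlinkFract(-0.25) gives 0.75, matching GLSL’s fract.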

Pass any number in x and it returns the fractional part! To see it in action: we can now pass any number above 1 and still get a nice red value between 0 and 1!

In this example, I use the value 1234.09 to feed the RGB channels. As you can see in the output, the red channel ends up at .09!

Let’s make it more red by cranking it to .9!

As we set dst() to a float4() and pass the red channel through the BlinkFract function, it produces exactly what we wanted 🙂 Pass any decimal number, and it simply returns the decimal part.

#### In action

Time for particles!

kernel Custom_ColorMult : ImageComputationKernel&lt;ePixelWise&gt;
{
  Image&lt;eRead,eAccessPoint,eEdgeClamped&gt; src;   //input image with edges clamped
  Image&lt;eWrite&gt; dst;                            //output image

  param:
    float4 color;                               //user parameter (not used below yet)

  local:
    float width;
    float height;

  void define() {
    defineParam(color, "Custom color", float4(0.0f,1.0f,0.0f,0.0f)); //default value
  }

  void init() {
    width = src.bounds.x2;                      //right edge of the input bounds
    height = src.bounds.y2;                     //top edge of the input bounds
  }

  float BlinkFract(float x)
  {
    return x - floor(x);
  }

  float2 Hash12(float t)
  {
    //Sine-based hash: scramble t into two pseudo-random values in [0, 1)
    float x = BlinkFract(sin(t * 674.3f) * 453.2f);
    float y = BlinkFract(sin(t * 2674.3f) * 453.2f);
    return float2(x, y);
  }

  void process(int2 pos) {

    float time = src().x;                       //time read from the red channel

    float2 fg = float2(pos.x, pos.y);
    float2 ires = float2(width, height);
    float2 uv = (fg - 0.5f * ires) / ires.y;    //centred, aspect-corrected UVs
    float4 col = float4(0.0f, 0.0f, 0.0f, 1.0f);

    for(float i = 0.0f; i < 120.0f; i++)
    {
      float2 p = Hash12(i);                     //pseudo-random particle position
      float t = 1.0f;
      float d = length(uv - p * t);             //distance from pixel to particle

      float brightness = 0.002f;

      col.x += brightness / d;
      col.y += brightness / d;
      col.z += brightness / d;
    }
    dst() = col - 0.5f * float4(1.0f, 1.0f, 0.0f, 0.0f); //tone down red and green
  }
};
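The Hash12 helper above is the classic fract(sin(t * k) * K) trick: the sine output is multiplied by a large constant so that taking the fractional part scrambles it into something that looks random, and two different multipliers decorrelate the x and y components. A standalone C++ sketch of the same idea (Vec2 and fract are my own stand-ins for Blink’s float2 and the BlinkFract helper):

```cpp
#include <cmath>

// Stand-in for BlinkFract / GLSL fract.
float fract(float x) { return x - std::floor(x); }

// Stand-in for Blink's float2.
struct Vec2 { float x, y; };

// Sine-based hash: turns a particle index t into a pseudo-random
// 2D position, with each component in [0, 1).
Vec2 Hash12(float t) {
    return { fract(std::sin(t * 674.3f) * 453.2f),
             fract(std::sin(t * 2674.3f) * 453.2f) };
}
```

Because fract always lands in [0, 1), every one of the 120 particle positions in the loop stays inside the unit square — which is why the kernel can use them directly as positions in UV space.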