
I have always been a music head; it is one of the reasons I write code, and music is what keeps me alive. There is no such thing as a bad song, only the result of playing the wrong song at the wrong moment. I am actually writing a new article on that, but for now, let's get back to basics. How does music work, what is music, and what is sound? How did the perception of the human eardrum define the rules we use to write music, one could ask? How did A440 come to define the root of the Western approach to the anatomy of a sound, while populations all around the world would describe that root pitch as out of tune?

Notes on a scale are what we abstract as a combination of frequencies designed to fit our need to make music: to "sound good". This article is not about music, but about sound, and about how we made computers part of our dreams by emulating the nature of sound into tools for creating music.

Music is everything to me, and it will always be my first, second and fifth passion. While this article is technical, it is not about music itself, but about the sound behind it.

Etiquette of audio

I recently reloaded a bunch of old Ableton projects and realized that my production workflow has changed a lot since the early college days. I started messing around with Ableton when I was around 14 years old, mostly recording old vinyls from my father's collection. I was deeply inspired by the quality of the software and its flexibility, both for producing complex songs and for acting as a playground for experimentation. Most of the sound production I do nowadays is purely experiment-driven and does not aim for a clear, official result. In recent years, I have spent a lot of time reading about and studying the behaviour of frequencies, and therefore creating my own drum patches, purely through synthesis, to be a bit more flexible.

Let's start with an audio-production approach, using Ableton. At this point, if you have ever messed around with any DAW, the MIDI protocol or synth hardware, this should just be a wrapper around how sound works in the digital domain. 😀

Audio and MIDI differences

Let's start by using Operator and setting up a signal

One of the main differences between audio and MIDI for drums is a trade-off to consider. I prefer to work with my own drum patches, as they allow complete control over what is actually generated, instead of strictly manipulating a sound file by adding layers of processing onto it. It is indeed possible to layer a lot of effects and modify the core sound of a drum sample, but doing so will somehow degrade the original signal.

For drum sounds, I prefer to control how the frequencies are actually populated by the different DSP buffers. There are a lot of ways to tweak a tone until it sounds like a kick, but in Ableton we have an awesome built-in tool for this: Operator. I won't go over all the steps, as I assume the reader already has some knowledge of how to create a MIDI signal and tweak it in real time.

Usually, one of the best ways to get a driving kick for a song is to select the root scale and pick the middle note of the chord, be it major or minor. So let's select the basic C major scale and write a simple major chord to pick our root MIDI signal.

I chose C2 as the root, but you can easily trade off by pitching the signal up or down by 12*n semitones, for example with n = -3.

This would be the same as having our MIDI file processed here.

As we want the middle note, let's pick E. From now on, we will use the frequency notation instead of the musical notation: E2 is roughly 82 Hz. You can apply the same logic up and down the octaves, where doubling the frequency in hertz raises the note by one octave.

Therefore, let's define a simple formula to better handle a kick pitched down or up from that 82 Hz. We can easily compute the change for every note of the core chord using this trick.

First we populate the scale spacing by adding up the frequencies of the degrees of the scale (make sure you start from the root of your scale, in our case C2 = 65.41 Hz). Let

A = f(1) + f(2) + ... + f(12)

where f(n) is the note-to-hertz conversion of step n, n running from 1 to 12, and we then divide by 12.

Once computed, we have the step from each of our notes to play around the core of our 82 Hz kick, noted E. We simply take r0 = E - (A/12) * 2, or to compute the next root signals, r1 = E + (A/12), and so on.
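To make the note-to-hertz arithmetic concrete, here is a minimal C++ sketch using the standard 12-tone equal-temperament mapping with A4 = 440 Hz. The helper name noteFrequency and the MIDI note numbers are mine, purely for illustration; this is not something Ableton exposes.

    #include <cmath>
    #include <cstdio>

    // Equal-temperament frequency of a MIDI note, relative to A4 = 440 Hz (MIDI 69).
    float noteFrequency(int midiNote) {
        return 440.0f * std::pow(2.0f, (midiNote - 69) / 12.0f);
    }

    int main() {
        const int c2 = 36;      // C2 in the common MIDI numbering (~65.41 Hz)
        const int e2 = c2 + 4;  // the "middle" note of a C major chord (~82.41 Hz)

        std::printf("C2 root : %.2f Hz\n", noteFrequency(c2));
        std::printf("E2 mid  : %.2f Hz\n", noteFrequency(e2));

        // Pitching the whole signal up or down by 12*n semitones (n = -3 above)
        // simply multiplies the frequency by 2^n.
        int n = -3;
        std::printf("E2 shifted by 12*%d semitones: %.2f Hz\n",
                    n, noteFrequency(e2) * std::pow(2.0f, (float)n));
        return 0;
    }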

Setting the kick driving clock

The kick driving clock is, in simple musical terms, the repetition of the signal. You could easily translate that repetition into a hertz-driven clock, but I prefer to do it using the musical tempo, as it follows the core of my current sessions. One way to handle this kind of task would be to hand-write every point where the kick should pump, but I prefer using automation and controls.

To handle the kick-driving clock we want to use a MIDI-driven signal alteration; this way we actually drive how a single clock reaches the frequency generator, i.e. Operator. I set it to 1/8, which is quite interesting for building the signal, and I also leave the gate at 50% (more on this later).
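As a side note, here is a minimal sketch, assuming a session at 120 BPM in 4/4, of how a 1/8 rate and a 50% gate translate into time. The constants are illustrative, not anything read from Ableton.

    #include <cstdio>

    int main() {
        const double bpm        = 120.0;   // assumed session tempo
        const double sampleRate = 44100.0;

        const double quarterSec = 60.0 / bpm;        // one beat (1/4 note) in seconds
        const double eighthSec  = quarterSec / 2.0;  // the 1/8 clock period
        const double gateSec    = eighthSec * 0.5;   // 50% gate: note held for half the period

        std::printf("1/8 period : %.3f s (%.0f samples)\n", eighthSec, eighthSec * sampleRate);
        std::printf("gate length: %.3f s (%.0f samples)\n", gateSec, gateSec * sampleRate);
        return 0;
    }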

At this point we could go on adding custom velocity from a random seed, but I will focus on the kick generation. To model a low frequency, let's dissect Operator a bit more.

Operator part 1

By default, Operator is set up as a chain of 4 frequency-processing layers, where A feeds B, B feeds C, C feeds D... We can change the way those signals are routed by clicking this menu.

As you can see, they are additive from the root A, and no other signal is driven if B is off. All 4 combinations can be defined here as the core signal processing.

To change how a single signal behaves, you can click on this tab.

You define A through the menu in blue, where you tweak the ADSR parameters of a single frequency. For a kick, we want a quick attack, low sustain, and A being double the sum of the next core note. This allows us to handle frequency attenuation on the end rumble of the kick. As the kick goes low, you really want to make the rumble present, but without drifting away from your core scale. By setting a release time of *2 you make it slowly fade away, while other frequencies can still come in.
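For intuition, here is a tiny C++ sketch of the kind of amplitude envelope described above: a near-instant attack, short decay, low sustain and a longer release. This is a generic linear ADSR, not Operator's actual implementation, and all the times are made-up values.

    #include <algorithm>

    // A minimal linear ADSR envelope, evaluated at time t (seconds since note-on).
    struct Adsr {
        float attack;   // seconds to reach full level
        float decay;    // seconds to fall to the sustain level
        float sustain;  // sustain level, 0..1
        float release;  // seconds to fade out after the gate closes

        // gate: how long (in seconds) the note is held.
        float level(float t, float gate) const {
            if (t < attack)          return attack > 0.0f ? t / attack : 1.0f;   // ramp up
            if (t < attack + decay)  return 1.0f - (1.0f - sustain) * (t - attack) / decay;
            if (t < gate)            return sustain;                             // hold
            float r = (t - gate) / release;                                      // fade out
            return std::max(0.0f, sustain * (1.0f - r));
        }
    };

    // A kick-flavoured setting: near-instant attack, short decay, low sustain,
    // and a release roughly twice as long so the rumble fades out slowly.
    // Adsr kickEnv{0.001f, 0.05f, 0.2f, 0.4f};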

Note the fixed button:

Once it is clicked, you can describe the core frequency of the root signal. I prefer to handle it with Coarse, where Coarse simply multiplies the root frequency of the note by N, N being the coarse value.

At this point, we are able to define a basic kick-driving signal by shaping the kick as follows.

Operator part 2 — MIDI Parallelization

Once we have our kick signal, we could easily be tempted to drive the core of our kick based on this channel. Using that technique would glue our kick signal to the final output. I prefer to use this method instead.

We duplicate the Operator in a rack, mute the clone and send the clone as a ghost driver for the modulation processing (where sounds really crunch).

The best way to do this would be to chain the clone, but I will use the raw method to illustrate the idea. As you can see, the second Operator is muted, so we can bounce its signal to another rack.

And select the driver post-process. You could just as easily use it pre-process, but I tend to use this technique based on the post-process of the root MIDI signal I will layer the sound on.

We now have a MIDI clone of the kick to process its sounds. For simplicity I will simply send the MIDI signals to a clone of the output sound modules, from our A channel to B. As seen in the last image, one could easily think that the audio signal of A drives the MIDI signal of B, but do not trust the visual: we have audio driven as a signal in track A, while B simply copies the post-processed signal of track A to be a "brand new" copy of track A running in parallel.

Operator part 3 — Adding MIDI effects on a parallelized buffer

Let's say you have the driving MIDI signal on A and want to process a sublayer of MIDI effects on a defined frequency. You do not want to mess with the core A signal to process such an effect; this is why cloning the MIDI output of A becomes handy: you can layer how the MIDI layers are processed.

Let's say A drives at 1/8 and you want a 1/32 pulse on B for every 1/4 of A. You can now do this: by adding a sub-MIDI signal on track B you define your own logic from the core A. I added a simple velocity randomness after deriving the 1/32 pulses on B from the 1/4 pulse of A, as the sketch below illustrates.
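Purely to illustrate the idea of deriving a denser clock with randomized velocity from a slower one, here is a small C++ sketch. The subdivision factor and the velocity range are arbitrary assumptions; this is not how Ableton's MIDI effects are implemented.

    #include <cstdio>
    #include <random>

    int main() {
        const double bpm         = 120.0;
        const double quarterSec  = 60.0 / bpm;  // the 1/4 pulse coming from track A
        const int    subdivision = 8;           // 8 x 1/32 notes per 1/4 note on track B

        std::mt19937 rng(42);
        std::uniform_int_distribution<int> velocity(60, 127); // randomized MIDI velocity

        for (int quarter = 0; quarter < 2; ++quarter) {       // two beats of output
            for (int step = 0; step < subdivision; ++step) {
                double t = quarter * quarterSec + step * quarterSec / subdivision;
                std::printf("t = %.3f s  note-on, velocity %d\n", t, velocity(rng));
            }
        }
        return 0;
    }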

You can now add the same Operator from A to B, or simply map different tools for the A->B frequency behaviour.

Sound pipeline on MIDI signals.

This covers only the signal-processing part of this tutorial; you would normally apply this logic to signal processing and then apply sounds. There are many ways one could apply the logic of MIDI processing to allow better handling of sound. As most of the MIDI signals simply output to sweet nada, we can fuse our rack and apply the output of each signal to a single audio layer, where one can process A and B independently or at the same time.

Let's define two layers, the first one being the MIDI clock and the second being the audio processing of this MIDI layer.

You can simply pipe the MIDI clock to a MIDI layer, where the output of the clocking system will be sent to the input of the MidiSoundLayer, and then apply your favorite VST 🙂

I normally use this logic: my MIDI clock drives the MIDI logic, I send this to my MidiSoundLayer, where I process the signal through a VST, and then route the AUDIO output of the MidiSoundLayer to a purely audio track where I apply sound effects.

So at this point, we have layered some MIDI processing, but have not properly created audio. The audio pipeline is easily abstractable when using Ableton, so let's use this idea of turning "per MIDI-thread operations" into audio.

Sidechaining buffers resulting from the audio pipeline

Sidechaining is one of the first things you learn in audio production classes to make a mix sound better. While the idea behind sidechaining is quite simple and straight to the point, the theory of its implementation at a more technical level can lead to creative ideas and ways to treat this tool as more than a simple volume reduction from one channel to another.

The good old compressor!

OK, I will confess before starting: I use this technique most of the time, as I usually need to get my audio output decent in a matter of minutes, such as at a live show or during an on-the-fly recording. I wanted to note this because the good old raw way is still honestly one of the most practical and technical approaches for showing the whole idea of sidechaining.

As we will go a lot deeper in this article, let's put in place the dirtiest and quickest way of implementing a sidechain. I will use Ableton, but the concept can be applied to any software or hardware.

The most basic use case.

I will not spend a lot of time on this idea, as it is quite basic, but here is a little refresher on what I call the "old and dirty way". Let's define the pink track as the kick and the purple one as a simple wave playing over the kick. You would normally put a compressor on the bass track, pipe the kick channel into the compressor's sidechain input, and then start to deal with whatever you want the compressor to do...

As shown, the blue outline marks the region representing the audio input of the channel driving the compressor unit.

The anatomy of a compressor.

Let's take a closer look at the Ableton compressor and see how it behaves, without having the sidechain input plugged in for now. The most basic analysis of how the compressor works is quite straight to the point.

As we see in the graph below, the compressor receives a signal and outputs a transformed version of the input. You could make the compressor work in parallel, and we will indeed take a look at that later on, but for now, let's assume the compressor works on a single input signal and outputs a transformed version of it.

There are two main parts of the compressor we need to look at. Let's first take a look at the input settings.

Input settings

In this part of the chain, we simply define how our input signal is going to be treated by the next modification layer. This layer does not really change the signal itself, but puts a layer of abstraction in place for better control over how the signal will behave in the processing unit.

Ratio: defines the amount of reduction the signal gets in the processing node. Simply a ratio between the input and the output: once a signal exceeds the threshold of the processing unit, the ratio defines by how much the signal is compressed.

Attack and release: simply define the attack and release times of the processing unit.

Processing unit

This is where the magic happens. In this part of the signal stack, the compressor takes the input signal and transforms it to produce the output signal. It is in this unit that our signal actually gets transformed.

Threshold: the intensity at which the signal starts to get compressed.

Gain reduction: the gain reduction applied to the input signal.

Output: the intensity of the processed output signal.
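To tie those parameters together, here is a minimal, textbook-style C++ sketch of a hard-knee compressor gain computer with attack/release smoothing. It is not Ableton's implementation, and the parameter values are placeholders.

    #include <cmath>
    #include <algorithm>

    // Per-sample gain computer of a simple hard-knee compressor.
    struct Compressor {
        float thresholdDb = -20.0f; // level above which compression starts
        float ratio       = 4.0f;   // input/output ratio above the threshold
        float attackMs    = 5.0f;   // how fast gain reduction kicks in
        float releaseMs   = 100.0f; // how fast it lets go
        float sampleRate  = 44100.0f;
        float envDb       = 0.0f;   // smoothed gain reduction, in dB

        float process(float input) {
            float levelDb  = 20.0f * std::log10(std::max(std::fabs(input), 1e-6f));
            float overDb   = std::max(0.0f, levelDb - thresholdDb);
            float targetDb = overDb - overDb / ratio;            // desired gain reduction

            float coefMs = (targetDb > envDb) ? attackMs : releaseMs;
            float coef   = std::exp(-1.0f / (coefMs * 0.001f * sampleRate));
            envDb = coef * envDb + (1.0f - coef) * targetDb;     // smooth toward the target

            return input * std::pow(10.0f, -envDb / 20.0f);      // apply gain reduction
        }
    };

For sidechaining, the only change is that levelDb would be measured on the kick channel while the resulting gain is applied to the bass channel.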

Time for real stuff!

First of all, let's take a look at how we can have more control over our sidechain compression unit. Let's define a kick based on a sine wave. The most boring kick ever, but this is just to represent the idea...

One of the quickest ways to start using the sidechain would be to pipe the audio output of this guy into the compressor's sidechain input... But that would make our sine-wave signal explicitly bound to the compressor unit, and we do not want this! We want more control and more abstraction!

As the graphic suggests, you might be tempted to see the kick as a module where you can control every layer of the signal generator, but in fact you simply bind the audio output of this signal to the input of the compressor, which is fine but far from being efficient.

Bind your sound to another channel

Let's use an 808 kick as an example... Here is how I propose a first abstraction of the signal.

You could simply replicate the kick occurrence on a different channel, where it would have its own wave to drive the sidechain. The model below shows the idea.

The kick drives a signal to the master output but also has a MIDI channel, where you have a simple representation of the kick's MIDI. You can pipe that MIDI signal to another module and therefore treat the kick as a simple MIDI clip to build your sidechain module.

Let's take a deeper look at the signal builder, which is where we will use this pipeline to drive better sidechaining on various items of our audio pipeline.

In blue, we have the MIDI signal of the kick as a copy of our 808 channel, so we can easily start to build on top of this. Here is the basic graph of our sidechain signal machine.

Based on our kick, we send the MIDI signal to a "black box" that we will design later. Our module takes care of parsing both the MIDI and the audio part of the signal; we simply bypass the audio for now... But inside the blue circular module lies our implementation. As we will feed a final audio signal, we need to make sure we have two nodes: one being the master output (the audio of the module node) and the SidechainOutput, in blue, so we can shape how our signal is treated before being piped into the audio output of our node to sidechain any other channel with it.

In the next article, we will take a deeper look at the module...

For now, we have simply designed a broad idea of how to take an 808 kick and abstract it to a simple sine wave for better sidechaining. Once we have the audio and MIDI signals of this abstraction, we can start building our signal to feed different types of sidechain audio output based on this node.

For now, see this as a template that you can pipe into your own audio effect rack to handle dynamic sidechaining 🙂

But this is not a real DSP audio module?

No, it is not... But as mentioned earlier, this part of the article is about exploring some core concepts inside a piece of software and then building upon them. This is far from low-level DSP, but I feel like we can have some fun with Ableton to explore signal chaining.

Let's make sound with C++, a DSP-ish approach.

OK, let's remove everything and just build a simple C++ program that creates sine waves and renders them into an audio file (.wav).

When writing DSP, we need to wrap our head around one simple thing: there is no such thing as a non-discrete signal when we use computers. We need to provide a framework to work in that allows the discretization of a signal. We work in the discrete domain, which is easier to handle for our simple example of sampling a single sine wave. We will use a 440-hertz frequency and we will sample it 44100 times every second. For this article, I will use C++, but note that GNU Octave has a very handy, much higher-level way of handling this type of computation.

Discrete signal

A broad overview would be to think of a continuous signal as a sine wave going smoothly from -1 to 1... Easy to imagine, but somehow a bit harder to deal with when working with speakers, as we need to send them "discrete" data.

In this example, we can see how a signal goes from the continuous approach to a "per sample" approach. We simply sample along the sine wave at a constant interval to create a digital signal to send to the speaker. We are not there yet, but you get the idea: for one second of audio, we sample (or create) a signal 44100 times, and this signal is then sent to your speakers to play that stuff.

I won't go over all the details of sampling an incoming signal, as that is out of this ballgame; this article is oriented towards generating a single sine wave from an offline C++ program and playing the audio file to hear a simple tone (we will dive into chord creation later...)!

So at this point we need a C++ program that will "punch in" 44100 samples per second (that's a lot), and we will arrange those samples to generate a single sine wave at 440 Hz.
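Written out, the n-th sample of such a wave is just the continuous sine evaluated on a fixed time grid, with $f_s$ the sample rate:

$$x[n] = A \sin\left(\frac{2\pi f n}{f_s}\right), \qquad f = 440\ \text{Hz},\quad f_s = 44100\ \text{Hz},\quad n = 0, 1, 2, \dots$$

This is exactly the quantity the oscillator class below accumulates, one phase step of $2\pi f / f_s$ at a time.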

While playing a single frequency is quite boring, let's imagine playing 2 of them at the same time: how exciting!

We combine those two into c, shown in green, by summing them: c[n] = a[n] + b[n], which outputs the two frequencies playing at the same time. Therefore, we will sample the resulting c.

C++ program

Making a simple sine

When working with raw audio data (in this example we use .wav), we want to sample in a discrete domain, i.e. time; here we use 44100 samples per second. I won't go into all the setup for outputting this data into a file, as it has been covered a lot online. So let's define a simple sine-wave class, so that we can start messing around with signals.

class SineOscillator
{
	float frequency, amplitude, angle = 0.0f, offset = 0.0f;
public: 
	SineOscillator(float freq, float amp) : frequency(freq), amplitude(amp) 
	{
		// Phase increment per sample: 2*pi*f / sampleRate
		// (PI and sampleRate are the constants defined in the full listing below).
		offset = 2 * PI * frequency / sampleRate;
	}
	// Returns the next sample and advances the phase.
	float process()
	{
		auto sample = amplitude * std::sin(angle);
		angle += offset;
		return sample;
	}
};

In the "audio loop" we can call process() to render a simple 440 Hz tone (A4); this is how we do it.

        SineOscillator sineOscillator1(440, .5);

        float duration = 2.0f;
        // sampleRate, maxAmplitude, audioFile and writeToFile are the ones
        // set up in the full listing further down.
	// Audio render loop
	for (int i = 0; i < sampleRate * duration; i++)
	{
		float sample = sineOscillator1.process();
		int intSample = static_cast<int>(sample * maxAmplitude);
		writeToFile(audioFile, intSample, 2);
	}

With this loop we generate a 2-second output with a simple 440-hertz tone. If you are interested in reading more about mapping tones to musical notes, I wrote an article on that too. But in short, every note on your piano has a root frequency associated with it. In the case of an A on the fourth octave, it is 440 Hz.

If you are curious, here are the formulas to map around those fuckers. Note that we use the A440 standard tuning.

Or in a simpler format:
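This is the standard 12-tone equal-temperament mapping with A4 = 440 Hz, where m is the MIDI note number (A4 = 69):

$$f(m) = 440 \cdot 2^{(m - 69)/12}\ \text{Hz}$$

In particular, going up 12 semitones (one octave) simply doubles the frequency, which is the same relation the sketch earlier in this article used.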

A4 is 440 Hz, so let's render the data and listen to that (very ugly) sound.

Let’s analyse the data

On an audio spectrum analyzer

We clearly see that the frequency is 440 Hz. I used Ableton's Spectrum for this snapshot.

Let's take a look at the file sampling itself.

As mentioned, for 1 second of audio we generate 44100 samples in code. One sine wave is very boring, but let's take a look at it. For a more decent understanding of the sine wave, let's use a visual approach of the function sin(x).

sin(0) = 0 and sin(2pi) = 0, which captures the core idea of a sine wave: the sine function is periodic, and a complete cycle is 2pi. We do not use x as continuous time but in the discrete time domain, sampled 44100 times per second.

In our code, we defined a sine wave sampled 44100 times per second, which leads to this amount of sampling data.

So if we double the frequency of the sine in code, we get twice as many cycles.

With our two oscillators in code:

	SineOscillator topImage(880, .5);
	SineOscillator bottomImage(440, .5);

We clearly see the idea of 440 * 2, i.e. playing the note one octave up on your piano. It is somewhat like hitting those two notes at the same time: in blue it's 440, and in red it's 440*2.

How to combine those two.

We are a bit closer to a chord, but we are not there yet. Let's add 2 sine waves for now. We already have two sine waves, one playing at 440 Hz and another one playing at 880 Hz, but how can we combine them?

Let’s start with a bit of code:

	// Define oscillator shit
	SineOscillator sinewave1(880, .5);
	SineOscillator sinewave2(440, .5);
	// Audiorender Loop
	for (int i = 0; i < sampleRate * duration; i++)
	{

		float sample = sinewave1.process();
		int intSample = static_cast<int>((sample) * maxAmplitude);
		writeToFile(audioFile, intSample, 2);
		
	}

One naive way to do it would be to alternate between the oscillators on every iteration; this way we could play them side by side, offset by only one sample, i.e. (1/44100) of a second. (Note that each wave then only gets half of the 44100 samples per second, which is why we will sum them properly later.)

By doing that :

	// Audiorender Loop
	for (int i = 0; i < sampleRate * duration; i++)
	{
		if (i % 2 == 0)
		{
			float sample = sinewave1.process();
			int intSample = static_cast<int>((sample)*maxAmplitude);
			writeToFile(audioFile, intSample, 2);
		}
		else
		{
			float sample = sinewave2.process();
			int intSample = static_cast<int>((sample)*maxAmplitude);
			writeToFile(audioFile, intSample, 2);
		
		}
	}

In our render loop, we have a function that writes 44100 samples per second. We can therefore generate a simple audio signal with a frequency of our choice, for now being a sine wave. Let's now add a few of them playing at the same time.

Adding three sine waves


    int duration = 12;
    SineOscillator sineOscillator(440, 0.5);
    SineOscillator sineOscillator2(880, 0.5);
    SineOscillator sineOscillator3(880 * 2, 0.5);

    /* Audio render loop */
    for (int i = 0; i < sampleRate * duration; i++) {
        auto sample = sineOscillator.process();
        auto sample2 = sineOscillator2.process();
        auto sample3 = sineOscillator3.process();
        int intSample = static_cast<int> (
            (sample * maxAmplitude)  + 
            (sample2 * maxAmplitude) + 
            (sample3 * maxAmplitude)) 
            * .05;

        writeToFile(audioFile, intSample, 2);
    }

The code is pretty straight to the point, and it generates this 12-second audio file.

The one part we need to take a closer look at is the intSample static cast.

Here are all three frequencies displayed in Desmos. Note the *2 and *4: for frequency 1 we simply use x, for frequency 2 we use 2x, and for frequency 3 we use 4x. This means that we start from whatever core frequency of a note, play it one octave higher by multiplying the frequency by 2, and two octaves higher by multiplying by 4.

So starting from A4 on your piano, you also get A5 and A6.

We want the red signal to be used as our sampled amplitude.

This makes the 3 sine oscillators "play" at the same time.

        // Sum the three oscillators, convert to the integer sample range, then
        // scale down by 0.05 so the summed peaks (up to 1.5 * maxAmplitude here)
        // stay well inside the 16-bit range and cannot clip.
        int intSample = static_cast<int> (
            (sample * maxAmplitude)  + 
            (sample2 * maxAmplitude) + 
            (sample3 * maxAmplitude)) 
            * .05;

In our super friend Spectrum, we get something like this:

The full code looks something like this:

https://gist.github.com/antoinefortin/e05ec2f1302209be0e1bd85ea0531eaa

#include <iostream>
#include <cmath>
#include <fstream>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
using namespace std;

const int sampleRate = 44100;
const int bitDepth = 16;

class SineOscillator {
    float frequency, amplitude, angle = 0.0f, offset = 0.0f;
public:
    SineOscillator(float freq, float amp) : frequency(freq), amplitude(amp) {
        offset = 2 * M_PI * frequency / sampleRate;
    }
    float process() {
        // sample = A * sin(angle); angle advances by 2*pi*f/sampleRate each call
        auto sample = amplitude * sin(angle);
        angle += offset;
        return sample;
    }
};

void writeToFile(ofstream& file, int value, int size) {
    file.write(reinterpret_cast<const char*> (&value), size);
}

int main() {
  
    ofstream audioFile;
    audioFile.open("waveform.wav", ios::binary);

    //Header chunk
    audioFile << "RIFF";
    audioFile << "----";
    audioFile << "WAVE";

    // Format chunk
    audioFile << "fmt ";
    writeToFile(audioFile, 16, 4); // Size
    writeToFile(audioFile, 1, 2); // Compression code
    writeToFile(audioFile, 1, 2); // Number of channels
    writeToFile(audioFile, sampleRate, 4); // Sample rate
    writeToFile(audioFile, sampleRate * bitDepth / 8, 4); // Byte rate
    writeToFile(audioFile, bitDepth / 8, 2); // Block align
    writeToFile(audioFile, bitDepth, 2); // Bit depth

    //Data chunk
    audioFile << "data";
    audioFile << "----";

    int preAudioPosition = audioFile.tellp();

    auto maxAmplitude = pow(2, bitDepth - 1) - 1;

    int duration = 12;
    SineOscillator sineOscillator(440, 0.5);
    SineOscillator sineOscillator2(880, 0.5);
    SineOscillator sineOscillator3(880 * 2, 0.5);

    /* Audio render loop */
    for (int i = 0; i < sampleRate * duration; i++) {
        auto sample = sineOscillator.process();
        auto sample2 = sineOscillator2.process();
        auto sample3 = sineOscillator3.process();
        int intSample = static_cast<int> (
            (sample * maxAmplitude)  + 
            (sample2 * maxAmplitude) + 
            (sample3 * maxAmplitude)) 
            * .05;

        writeToFile(audioFile, intSample, 2);
    }
    int postAudioPosition = audioFile.tellp();

    audioFile.seekp(preAudioPosition - 4);
    writeToFile(audioFile, postAudioPosition - preAudioPosition, 4);

    audioFile.seekp(4, ios::beg);
    writeToFile(audioFile, postAudioPosition - 8, 4);

    audioFile.close();
    return 0;
}
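To try the listing, assuming it is saved as, say, sine.cpp (the file name is mine), a plain compile is enough, e.g. g++ -std=c++17 sine.cpp -o sine && ./sine. The program writes waveform.wav next to the executable; you can open it in any audio player or drop it into Ableton's Spectrum as above.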

Note on wave dissection:

Virtually everything in the world can be described via a waveform — a function of time, space or some other variable. For instance, sound waves, electromagnetic fields, the elevation of a hill versus location, a plot of VSWR versus frequency, the price of your favorite stock versus time, etc. The Fourier Transform gives us a unique and powerful way of viewing these waveforms.

When it comes to fundamental mathematics, the idea of beauty can easily be traded for complex formulas that non-mathematicians cannot grasp. The idea of representing the world using math equations, and the levels of abstraction involved, can lead a single human being to get lost in all the terms and notation. The beauty of math is the ability to play with equations and abstract concepts so that almost any idea can fit in. At a very simple level, even the act of adding two numbers can be expressed as abstraction over practical ends: adding bananas together leads you to the sum of those two amounts of bananas. The conceptual approach behind complex mathematics can sometimes be described as simply as adding two sets of bananas together to get a final sum of bananas, and from there comes addition, a very basic concept.

Abstraction of concepts

Addition is dead simple, but the abstraction involved in adding bananas or cars is much the same. You take a set of items and add it to another set of items; they could be cars, bananas or numbers.

Representing this simple level of abstraction is simply a way of reducing the act of adding two sets together to a final set representing the finite sum: A1 + A2 = AFinal. When it comes to more advanced mathematical concepts, the idea of abstraction has been key since the first days of human beings. From tally marks on cave walls representing the number of mammoths killed in the first steps of human evolution, addition has been a key to human progress.

This simple concept of representing the world around us, and the way we interact with it, through equations is as old as the world itself. From the early days of the human species, the very concept of abstraction over concrete ends has fascinated us. As we evolved, concepts followed the spectrum of human cognitive analysis to fit our needs. The desire for abstraction in order to understand concrete ends is somehow the root of a lot of real progress in our world.

The repetition of things, and the abstraction of time

Everything around us could be represented as an infinite amount of repetition. The simple act of going to buy bread at the corner store could easily be represented as a repetition of the same act over your lifetime. The curvature of your apartment walls, due to a lack of renovations, could also be represented as an amount of waves through time. To some extent, anything that expresses one thing in terms of something else can be represented as a function, as long as we plot the right quantity against the right axis of analysis.

The practical view of a signal is almost always a quantity varying over time, but the representation of complex behaviour can be a lot richer, as we can abstract time away from being a fixed axis of our analysis. We tend to think about repetition over time, but signals of repetition can really be treated as an abstraction over any unit we choose.

If you look at a picture of mountains, you can easily see patterns that are not, at first glance, represented through time: peaks repeat, but time is not the natural variable for the shape of mountains. However, if you look at the mountains over any abstract variable N that describes their shape, it is possible to build a sum of all those oscillating differences into a quite faithful representation of what the mountains are.

This idea of expressing any representation as repetition over something is the beauty of Fourier's legacy... Even a single non-periodic signal can be repeated over its domain of analysis, extended to infinite repetition, and then dissected to find the root keys of that signal.

Anything that repeats over itself, even if its change is not perfectly infinitesimal, can be analysed and expressed as change over itself, even when that final change is not linear.

Joseph Fourier, and the dissection of signals

Fourier is someone a lot of humans living in the 21st century would like to have a beer with. Talking with this mathematician is a dream many of us have. However, we simply have his concepts and theory, studied in school and then forgotten. The work of this genius underlies a lot of things we take for granted.

Fourier represented this idea of change over something that changes over itself, where the finite sum of changes can be treated as an integration of all those changes over itself. The concept is very easy to represent, but the beauty of Fourier is about change over a repetition: anything that changes over a period X can be treated as an influence over the root representation of its own core influence over that period X.

By adding all the changes into one finite function, we can easily dissect it into its single parts, and conversely, the recomposition of those changes gives a final representation of the sum of change over the function.
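In standard notation, this is the Fourier series: any reasonably well-behaved T-periodic signal can be written as a sum of sines and cosines, and each coefficient is recovered by integrating the signal against the corresponding oscillation:

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos\left(\frac{2\pi n t}{T}\right) + b_n \sin\left(\frac{2\pi n t}{T}\right)\right]$$

$$a_n = \frac{2}{T}\int_{0}^{T} f(t)\cos\left(\frac{2\pi n t}{T}\right)dt, \qquad b_n = \frac{2}{T}\int_{0}^{T} f(t)\sin\left(\frac{2\pi n t}{T}\right)dt$$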

Dissection of signals

If we treat a signal as a repetition, the legacy of Joseph Fourier is inspiring for representing how something can change over itself, even non-linearly. Fourier is a king, and sometimes I dream about having a beer with him, because we would have so much to talk about!

Nowadays, Joseph Fourier's work is still, in my opinion, one of the beauties of mathematics. The concept behind his work is more than beautiful, and describes a lot of things we can simply sum up as 'life'.
