Coding, Sounds and Colors | A blog about algorithmic experiments in music and visual art. Sort of.

Overlook

posted by on 2018.07.21, under Processing

Combining some techniques from the previous posts on shaders, here’s the render of an audio-reactive application which I used for a video of “Overlook”, a track by my musical alter ego.

The code uses vertex and fragment shaders to create a glitchy environment which reacts to the audio in real time.
The track “Overlook” is available for listening here.

Dust From A G String

posted by on 2018.06.27, under Processing, Uncategorized

Here’s “Dust From A G String”, a piece about the corrosive power of passing time, and the beauty it leaves behind, just before the end.

The video was made in Processing, using a custom shader based on FBO techniques. The audio is a reworking of Bach’s “Air on the G String”.

Reaction-Diffusion algorithm and FBO techniques

posted by on 2018.06.08, under Processing

Reaction-diffusion algorithms are very fascinating, since they are capable of producing incredibly organic patterns. They can also be computationally expensive if the grid of choice is fine enough. In a nutshell, we regard every pixel of an image as a cell containing two types of chemicals in different proportions, whose combination produces a given color on the screen. The “diffusion equation” is such that, as time goes on, the proportion of the two chemicals in each cell changes according to that of the neighboring cells.

Since the algorithm is pixel* based, at its finest, we might think this is a job for a fragment shader. And that’s indeed the case! We have to be careful, though, about two aspects. First, the algorithm uses information about the adjacent pixels, while a fragment shader only treats information fragment by fragment: it does not allow sharing among fragments. This is solved by using a texture to store the information about the chemicals. Which brings us to the second point: we need to store the previous state of the chemical proportions in order to compute the next one, while a shader is not “persistent”, in the sense that all the information it has concerning fragments is lost on the next frame.

Enter FBOs and the ping-pong technique! Framebuffer objects allow what is called “off-screen rendering”: instead of rendering the pixels directly to the screen, they are rendered to a buffer, which is only later displayed on the screen. Hence, we can pass the FBO as a texture to the shader, use, say, the red and green values of the texture at the given fragment coordinate as our chemical percentages, and set the color of the fragment using the new values of the percentages. This is usually referred to as the “ping-pong technique”, because we go back and forth between the buffer and the screen. It is particularly useful for modelling particle systems directly on the GPU. In Processing, an FBO is an object of the class PGraphics, and the shader is applied to it via a method of the object.
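For the record, the update rule implemented in the fragment shader below is a Gray-Scott style reaction-diffusion step (the model is my identification, based on the constants used). Writing A and B for the two chemical concentrations stored in the red and green channels,

\[ A' = A + \left( D_A \nabla^2 A - A B^2 + f\,(1 - A) \right), \qquad B' = B + \left( D_B \nabla^2 B + A B^2 - (f + k)\, B \right) \]

where \(\nabla^2\) denotes the discrete Laplacian computed from the neighboring pixels, with diffusion rates \(D_A = 0.9\) and \(D_B = 0.18\), feed rate \(f = 0.0545\), and kill rate \(k = 0.062\).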
Here’s the code

PGraphics pong;
PShader diff;

void setup(){
  size(800, 800, P2D);
  pong = createGraphics(width, height, P2D);
  diff = loadShader("diffFrag.glsl");

  // Fill the buffer with chemical A (the red channel);
  pong.beginDraw();
  pong.background(255, 0, 0);
  pong.endDraw();

  // Pass the texel size to the shader, so it can reach the neighboring pixels;
  diff.set("u", 1.0/width);
  diff.set("v", 1.0/height);

  // Seed a small drop of chemical B (the green channel) in the middle;
  pong.beginDraw();
  pong.noStroke();
  pong.fill(0, 255, 0);
  pong.ellipse(width/2, height/2, 10, 10);
  pong.endDraw();
}

void draw(){
  // Ping-pong step: draw the buffer onto itself through the shader...
  pong.beginDraw();
  pong.shader(diff);
  pong.image(pong, 0, 0);
  pong.resetShader();
  pong.endDraw();

  // ...then display it on screen;
  image(pong, 0, 0);
}



//// diffFrag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;


uniform float u;
uniform float v;


uniform sampler2D texture;

// 3x3 Laplacian of chemical A (channel 0): weights 0.05 on the corners,
// 0.2 on the edges, -1.0 in the center;
float laplaceA(in vec2 p, in float u, in float v){
  float A = 0.05 * texture2D(texture, vertTexCoord.st + vec2(-u, -v))[0]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2(0.0, -v))[0]
          + 0.05 * texture2D(texture, vertTexCoord.st + vec2( u, -v))[0]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2(-u, 0.0))[0]
          - 1.0  * texture2D(texture, vertTexCoord.st + vec2(0.0, 0.0))[0]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2( u, 0.0))[0]
          + 0.05 * texture2D(texture, vertTexCoord.st + vec2(-u,  v))[0]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2(0.0,  v))[0]
          + 0.05 * texture2D(texture, vertTexCoord.st + vec2( u,  v))[0];
  return A;
}

// Same Laplacian, for chemical B (channel 1);
float laplaceB(in vec2 p, in float u, in float v){
  float B = 0.05 * texture2D(texture, vertTexCoord.st + vec2(-u, -v))[1]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2(0.0, -v))[1]
          + 0.05 * texture2D(texture, vertTexCoord.st + vec2( u, -v))[1]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2(-u, 0.0))[1]
          - 1.0  * texture2D(texture, vertTexCoord.st + vec2(0.0, 0.0))[1]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2( u, 0.0))[1]
          + 0.05 * texture2D(texture, vertTexCoord.st + vec2(-u,  v))[1]
          + 0.2  * texture2D(texture, vertTexCoord.st + vec2(0.0,  v))[1]
          + 0.05 * texture2D(texture, vertTexCoord.st + vec2( u,  v))[1];
  return B;
}



void main(){

  float A = texture2D(texture, vertTexCoord.st)[0];
  float B = texture2D(texture, vertTexCoord.st)[1];

  // Gray-Scott step: diffusion rates 0.9 and 0.18, feed rate 0.0545, kill rate 0.062;
  float A_1 = A + (0.9 * laplaceA(vertTexCoord.st, u, v) - A * B * B + 0.0545 * (1.0 - A));
  float B_1 = B + (0.18 * laplaceB(vertTexCoord.st, u, v) + A * B * B - (0.062 + 0.0545) * B);

  gl_FragColor = vec4(A_1, B_1, 1.0, 1.0);

}

And here is an example:


Tip: try to change the numerical values in the definition of A_1 and B_1 in the fragment shader code.

*: A fragment shader technically deals with fragments rather than pixels.

Voicings

posted by on 2018.04.21, under Supercollider

Here’s a little exercise concerning voicing management in SuperCollider. The idea is very simple: we have a collection of samples we would like to trigger randomly, but a retrigger is allowed only if the whole sample has finished playing. To do this we have to keep track of the active Synths (or “voices”), in order to avoid retriggering them. This role is played by the array ~voices: the index of the array identifies the buffer to be played, while a value of 0 or 1 denotes an available or unavailable voice, respectively. When instantiating a Synth on the server, SuperCollider allows us to assign a function to be executed when the given synth is freed, which in our case sets the ~voices entry corresponding to the given buffer back to 0. In the infinite loop we can then check the value of ~voices at a random position i: if this value is 0, we create a new Synth with the corresponding buffer, and set the corresponding voice entry to 1. Otherwise, we continue with the inf cycle. By changing the values in the rrand function you can decide how sparse the various instances will be.
You can use this technique with any type of SynthDef, in order to have a fixed voice system which does not allow retriggering or voice stealing. Also, the way I have done it is not the most elegant one: you can look up NodeWatcher (a way to monitor Nodes on the server) for an alternative approach, sketched right below.
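For instance, here is a rough, untested sketch of that alternative (the names are mine): register each Synth with NodeWatcher, and test isPlaying instead of keeping an array of 0/1 flags.

// Sketch of the NodeWatcher approach (an assumption, not the code used below):
~synths = Array.newClear(~buffers.size);

// When voice i comes up for (re)triggering:
if (~synths[i].isNil or: { ~synths[i].isPlaying.not }, {
    ~synths[i] = Synth(\voice, [\buff: ~buffers[i], \out: ~rev, \amp: rrand(0.8, 1.0)]);
    NodeWatcher.register(~synths[i], true); // the client now tracks isPlaying for this node;
});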
Here’s the code

s.boot;



(
SynthDef(\voice, {|buff, trig = 1, out = 0, amp = 1|

    var sig = PlayBuf.ar(2, buff, 1, trig, doneAction: 2);

    Out.ar(out, sig *  amp);

 }).add;

SynthDef(\reverb, {|in = 0|
    var sig = In.ar(in, 2);
    sig = CombC.ar(sig, 0.5, 0.5, 3);
    sig = FreeVerb.ar(sig, 0.5, 0.5, 0.7);
    Out.ar(0, sig);
}).add;
)



(
fork({

    var samplePath;
    var ind;

    // Setting up reverb line
    ~rev = Bus.audio(s, 2);
    y = Synth(\reverb, [\in: ~rev]);

    ~voices = [];
    ~buffers = [];

    // Loading buffers
    samplePath = thisProcess.nowExecutingPath.dirname ++ "/sounds/*";
    ~buffers = samplePath.pathMatch.collect {|file| Buffer.read(s, file, 0, 44100 * 9); };

    s.sync;

    ~buffers.do({
        ~voices = ~voices.add(0);
    });

    ind = Prand(Array.fill(~buffers.size, {|i| i}), inf).asStream;

    inf.do({
        // Use a local variable for the index: the onFree function below closes
        // over it, so each synth resets the voice it actually belongs to;
        var i = ind.next;

        ~voices.postln;

        if (~voices[i] == 0, {
            x = Synth(\voice, [\buff: ~buffers[i], \out: ~rev, \amp: rrand(0.8, 1.0)]);
            x.onFree({ ~voices[i] = 0 });
            ~voices[i] = 1;
        });

        rrand(0.1, 0.6).wait;
    });

});
)

s.quit;

All the samples have to be in a folder called “sounds” inside the same folder as your .scd file. I have used a few piano samples from Freesound.org, since I wanted to achieve a minimalist piano atmosphere. Here’s how it sounds.


Reactive applications, Shaders and all that

posted by on 2018.04.06, under Processing

We have already discussed the advantages of using shaders to create interesting visual effects. This time we will have to deal with fragment shaders *and* vertex shaders. In a nutshell, a vertex shader takes care of managing the vertices’ positions, colors, etc., which are then passed as “fragments” to the fragment shader for rasterization. “OMG, this is so abstract!!” Yeah, it is less abstract than it seems, but it nevertheless requires some know-how. As previously, I really suggest this: I find myself going back and forth to it regularly, always learning new things.
Good, so, what’s the plan? The main idea in the following code is to use a PShape object to encode all the vertices: we are basically making a star-shaped thing out of rectangles, which in 3D graphics parlance are referred to as “quads”. Once we have created such a PShape object, we will not have to deal with the positions of the vertices anymore: all the changes in the geometry will be handled by the GPU! Why is this exciting? Because the GPU is much, much faster at doing such things than the CPU. This allows in particular for real-time reactive fun. Indeed, the code gets input from the microphone and the webcam, separately. More precisely, each frame coming from the webcam is passed to the shader to be used as a texture for each quad. On the other hand, the microphone audio is monitored, and its amplitude controls the variable t, which in turn controls the rotation (in Processing) and, more importantly, the jittering in the vertex shader. Notice that the fragment shader doesn’t do anything out of the ordinary here: it just applies a texture.
Here’s what the code looks like

import processing.video.*;
import processing.sound.*;

Amplitude amp;
AudioIn in;



PImage  back;
PShape mesh;
PShader shad;

float t = 0;
float omega = 0;
float rot = 0;
int count = 0;

Capture cam;


void setup() {
  size(1000, 1000, P3D);
  background(0);
 
  //Set up audio

  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);

  //Set up webcam

  String[] cameras = Capture.list();

  cam = new Capture(this, cameras[0]);

  cam.start();

  textureMode(NORMAL);  

  mesh = createShape();
  shad = loadShader("Frag.glsl", "Vert.glsl");

  back = loadImage("back.jpg");


  //Generates the mesh;

  mesh.beginShape(QUADS);
  mesh.noStroke();

  for (int i = 0; i < 100; i++) {
    float phi = random(0, 2 * PI);
    float theta = random(0, PI);
    float radius = random(200, 400);
    PVector pos = new PVector( radius * sin(theta) * cos(phi), radius * sin(theta) * sin(phi), radius * cos(theta));
    float u = random(0.5, 1);

    //Set up the vertices of the quad with texture coordinates;

    mesh.vertex(pos.x, pos.y, pos.z, 0, 0);
    mesh.vertex(pos.x + 10, pos.y + 10, pos.z, 0, u);
    mesh.vertex(-pos.x, -pos.y, -pos.z, u, u);
    mesh.vertex(-pos.x - 10, -pos.y - 10, -pos.z, 0, u);
  }

  mesh.endShape();
}

void draw() {

    background(0);
    //Checks camera availability;

    if (cam.available() == true) {
      cam.read();
    }
 

    image(back, 0, 0); //Set a gradient background;

    pushMatrix();
    translate(width/2, height/2, 0);
    rotateX( rot * 10 * PI/2);
    rotateY( rot * 11 * PI/2);

    shad.set("time", exp(t) - 1); //Calls the shader, and passes the variable t;

    shader(shad);
    mesh.setTexture(cam); //Use the camera frame as a texture;
    shape(mesh);

    popMatrix();

    t += (amp.analyze() - t) * 0.05; //Smoothens the variable t;

    omega +=  (t  - omega) * 0.01; //Makes the rotation acceleration depend on t;

    rot += omega * 0.01;

    resetShader(); //Reset shader to display the background image;
   
}

// Frag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;


uniform float time;
uniform sampler2D texture;

void main(){

gl_FragColor = texture2D(texture, vertTexCoord.st ) * vertColor;

}

// Vert.glsl

uniform mat4 transform;
uniform mat4 modelview;
uniform mat4 texMatrix;


attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;


uniform float time;


void main() {
  gl_Position = transform * position;

  gl_Position.x += sin(time * 2 * 3.145 * gl_Position.x) * 10 ;
  gl_Position.y += cos(time * 2 * 3.145 * gl_Position.y) * 10 ;

  vertColor = color;

  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);


}

Notice the call to reset the shader, which allows us to show the gradient background, loaded as an image, without it being affected by the shader program.
Here’s a render of it, recorded while making some continuous noise, a.k.a. singing.

Try it while listening to some music, it’s really fun!

Worlds

posted by on 2018.03.18, under Processing

Yesterday I went to the beautiful exhibition by Pe Lang at the Museum of Digital Art here in Zurich. The exhibition consists of several kinetic systems producing complex behaviours. I was particularly fascinated by a piece called “polarization”, where different disks with polarizing filters provide very interesting visual patterns. Those who read this blog know that I am really into systems and their emergent features, so I was inspired to make the following piece, called “Worlds”. It is also an excuse to show how object-oriented programming allows one to replicate a little “cosmos” over and over very quickly.
The idea is the following. We have discussed systems of particles bouncing around the canvas more than once, but we never gave the canvas its own ontological properties, a fancy way of saying that we never considered the canvas to be an object itself. That’s precisely what is going on in the code below. Namely, there is a class World whose scope is to be the box in which the particles are bound to reside. It comes with a position vector for its center, a (half) length for the box itself, and a particle system. The bounce check is done internally to the class World, in the update() function, so as to make it behave like its own little universe. Once you have such a gadget, it’s immediate to replicate it over and over again! I arranged the boxes in a simple array, and I really like the visual effect that comes from it. I also did something else: inspired by statistical mechanics, each box has a “temperature”, which is influenced by how often the particles bounce on the walls of the box. The “hotter” the box, the redder it becomes. There is also a cooling factor: each box tends to cool down. So, after some time, the system reaches equilibrium, and each box stabilizes on a shade of red. This also shows something very nice, and at first counter-intuitive: there are boxes with a lot of particles which are nevertheless very “cold”, because their particles are slow.
Here is the code

// Worlds
// Kimri 2018

ArrayList<World> boxes;
int n = 10;



void setup(){
  size(1000, 1000);
  init();

  frameRate(30);
}


void draw(){
  background(255);

  for (int i = 0; i < boxes.size(); i++){
    World w = boxes.get(i);
    w.display();
    w.update();
  }
}

void init(){
  background(255);

  boxes = new ArrayList<World>();

  float step = width/n;

  //Generate the array of boxes;
  for (float x = step; x < width; x += step){
    for (float y = step; y < height; y += step){
      boxes.add(new World(x, y, step * 0.4));
    }
  }
}

void keyPressed(){
  init();
}

// World class


class World {
  PVector pos;
  int num;
  float len;
  float temp = 255;
  float coeff = 1.7;

  ArrayList<Particle> particles;

  World(float _x, float _y, float _len) {
    pos = new PVector(_x, _y);
    len = _len;
    num = int (random(10, 60));
    //Add particles to the box
    particles = new ArrayList<Particle>();

    for (int i = 0; i < num; i++) {
      float part_x = pos.x + random(-len, len);
      float part_y = pos.y + random(-len, len);
      particles.add(new Particle(new PVector(part_x, part_y)));
    }
  }

  World(float _x, float _y, float _len, int _num) {
    pos = new PVector(_x, _y);
    len = _len;
    num = _num;
    //Add particles to the box
    particles = new ArrayList<Particle>();

    for (int i = 0; i < num; i++) {
      float part_x = pos.x + random(-len, len);
      float part_y = pos.y + random(-len, len);
      particles.add(new Particle(new PVector(part_x, part_y)));
    }
  }

  void display() {
    fill(255, temp, temp, 90);

    stroke(0, 100);
    strokeWeight(1.2);
    rectMode(CENTER);
    rect(pos.x, pos.y, 2 * len, 2 * len);
  }

  void update() {
    for (int i = 0; i < num; i++) {
      Particle p = particles.get(i);
      p.move();

      if ( (p.pos.x - pos.x) >= len - p.rad) {
        p.pos.x = pos.x + len - p.rad;
        p.vel.x  = -p.vel.x;
        temp -= 1;
      }
      if ( (p.pos.x - pos.x) <= -(len - p.rad)) {
        p.pos.x = pos.x - (len - p.rad);
        p.vel.x  = -p.vel.x;
        temp -= 1;
      }
      if ( (p.pos.y - pos.y) >= len - p.rad) {
        p.pos.y = pos.y + len - p.rad;
        p.vel.y  = -p.vel.y;
        temp -= 1;
      }
      if ( (p.pos.y - pos.y) <= -(len - p.rad)) {
        p.pos.y = pos.y - (len - p.rad);
        p.vel.y  = -p.vel.y;
        temp -= 1;
      }
      p.display();
    }
    if (temp < 0) temp = 0;
    temp += coeff;
    if (temp > 255) temp = 255; //Clamp, so a long "cold" spell cannot overshoot the color range;
  }
}

//Particle class



class Particle {
  PVector pos;
  PVector vel;
  float rad = 2;

  Particle(PVector _pos) {
    pos = new PVector(_pos.x, _pos.y);
    vel = new PVector(random(-3, 3), random(-3, 3));
  }

  void move() {
    pos.add(vel);
  }

  void display() {
    noStroke();
    fill(0, 100);
    ellipse(pos.x, pos.y, 2 * rad, 2 *rad);
  }

}

And here is how it looks

Where have I been?

posted by on 2018.03.13, under Uncategorized

This blog has been dormant for over a year now. Probably nobody asked, and nobody cares, but: where have I been? Due to a major computer failure, which cost me months of work, I substantially reduced my coding activity for a long time. I guess I also needed time to see things from a distance.
I didn’t stop creative activities, though: I used the time in between to work on music, which had been buried under many lines of code during the last few years.
If you are interested, you can check some of the outcomes here

and here

I plan to come back to coding soon; maybe I’ll talk about a couple of ideas I have which involve poetry generation for an art installation.

Glitch Art and Shaders

posted by on 2017.02.11, under Processing

It’s been a while since the last post. I have been busy with (finally!) starting to set up a website to collect some of my works, and with more or less finishing a couple of interactive installations. For this reason, interactivity and real-time processing have captured my attention recently. It turns out that when you want to interact with a piece of code which produces graphics, as soon as what you are doing involves more than just a couple of pairs of colored circles, you quickly run into performance issues. So, unless you are one of those digital artists who draw a blinking white circle in the middle of the screen and call it art (it’s fine, don’t worry, go on with it), you need to find your way around these types of issues. In practice, this amounts to getting comfortable with words like Vertex Buffer Object, C++, and shaders, to which this post is dedicated.
The story goes like this: modern graphics cards (GPUs) have a language of their own, called GLSL. For instance, when you draw a line or a circle in Processing, what is actually happening behind the curtains is a communication between the CPU and the graphics card: Processing informs the GPU about the vertices of the line, the fact that it has to be a line, the color of the vertices, etc. There are several stages between when the vertices are communicated and the final result you see on your screen. Some of these stages are user programmable, and the little programs that take care of each of these stages are called “shaders”. Shaders are notoriously difficult to work with: you have to program them in C, basically, and they are quite unforgiving about errors in the code. On the other hand, they are really, really fast. If you want to know why that is, and how a (fragment) shader operates, give a look here.
So, why the hell would you want to learn such a tool? Well, if you, like me, are fond of glitch art, you must have realized that interactive real-time glitch art is almost impossible if you try to work pixel by pixel: even at a resolution of 800×600, the amount of computation needed for the CPU to reach a framerate of 30fps is impractical. Enter fragment shaders! If you delegate the work to the GPU, it becomes more than doable.
I can’t go into the details of the code I present in the following, but there are very good tutorials on the web that slowly teach you how to tame shaders. In particular, give a look here. Rest assured: you really need to be programming friendly, and to have a lot of patience, to work with shaders!

PImage img;
PShader glitch;


void setup(){
  size(800, 600, P2D);
  background(0);
  img = loadImage(insert_link_to_image);
  img.resize(800, 600);

  glitch = loadShader("glitchFrag.glsl");
  glitch.set("iResolution", new PVector(800., 600., 0.0));
}

void draw(){
 
  glitch.set("iGlobalTime", random(0, 60.0));
 
   if (random(0.0, 1.0) < 0.4){
  shader(glitch);
   }
 
  image(img, 0, 0);
 
  resetShader();
 
}

---------------

// glitchFrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif


#define PROCESSING_TEXTURE_SHADER

varying vec4 vertTexCoord;
uniform sampler2D texture;
uniform vec3      iResolution;          
uniform float     iGlobalTime;          



float rand(vec2 co){
    return fract(cos(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}


void main(){
   vec3 uv = vec3(0.0);
   vec2 uv2 = vec2(0.0);
   vec2 nuv = gl_FragCoord.xy / iResolution.xy;
   vec3 texColor = vec3(0.0);

   if (rand(vec2(iGlobalTime)) < 0.7){
    texColor = texture2D(texture, vertTexCoord.st).rgb;
}
 else{
   texColor = texture2D(texture, nuv * vec2(rand(vec2(iGlobalTime)), rand(vec2(iGlobalTime * 0.99)))).rgb;
}
       
    float r = rand(vec2(iGlobalTime * 0.001));
    float r2 = rand(vec2(iGlobalTime * 0.1));
    if (nuv.y > rand(vec2(r2)) && nuv.y < r2 + rand(vec2(0.05 * iGlobalTime))){
    if (r < rand(vec2(iGlobalTime * 0.01))){
       
   // Compare the average brightness of the fragment to a random threshold;
   if ((texColor.r + texColor.g + texColor.b)/3.0 < r * rand(vec2(0.4, 0.5)) * 2.0){

        uv.r -= sin(nuv.x * r * 0.1 * iGlobalTime) * r * 7000.;
        uv.g += sin(vertTexCoord.y * vertTexCoord.x/2.0 * 0.006 * iGlobalTime) * r * 10.0 * rand(vec2(iGlobalTime * 0.1));
        uv.b -= sin(nuv.y * nuv.x * 0.5 * iGlobalTime) * sin(nuv.y * nuv.x * 0.1) * r * 20.;
        uv2 += vec2(sin(nuv.x * r * 0.1 * iGlobalTime) * r);
   
}
       
    }
}

  texColor = texture2D(texture, vertTexCoord.st + uv2).rgb;
  texColor += uv;
   gl_FragColor = vec4(texColor, 1.0);  
   
}

In the following, you can see the result applied to a famous painting by Caravaggio (yes, I love Caravaggio): it runs at real-time framerate.
If you want to apply the shader to the webcam, you just need to set up a Capture object, called, say, cam, and substitute img with cam in the Processing code, as in the sketch below. Enjoy some glitching! :)
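Here is a minimal sketch of that variant (untested, assuming the same shader and resolution as above):

import processing.video.*;

PShader glitch;
Capture cam;

void setup(){
  size(800, 600, P2D);
  background(0);

  cam = new Capture(this, 800, 600);
  cam.start();

  glitch = loadShader("glitchFrag.glsl");
  glitch.set("iResolution", new PVector(800., 600., 0.0));
}

void draw(){
  if (cam.available()){
    cam.read(); //Grab the latest webcam frame;
  }

  glitch.set("iGlobalTime", random(0, 60.0));

  if (random(0.0, 1.0) < 0.4){
    shader(glitch); //Glitch only some of the frames;
  }

  image(cam, 0, 0);
  resetShader();
}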

Glitch Shader from kimri on Vimeo.

n-grams, Markov chains and text generators

posted by on 2016.11.03, under Processing, Supercollider

An n-gram is a contiguous sequence of n elements of a text, where each element can be a phoneme, a syllable, a character, a word, etc. Given a text, the collection of its n-grams allows us to infer some statistical correlations, and moreover to assemble the collected n-grams into a Markov chain. Right, what’s a Markov chain, then? A Markov chain describes a process in which the next step depends probabilistically only on the current step. For instance, a random walk is an example of a Markovian process. One way to assign a Markov chain to a text is to collect all its n-grams, and for each n-gram keep track of the n-grams that follow it. We then go through the collection of all the n-grams, and for each of them we choose randomly among the list of subsequent n-grams. We form the new n-gram, and proceed. Confusing? Let’s see an example. Suppose we have the text “this is a book and this is my pen”, and suppose we are interested in 2-grams, where a single gram is a word. We then have the pair (this, is), the pair (is, a), etc. Next, we keep track of all the 1-grams which follow a given 2-gram: for instance, after (this, is) we can have (a) or (my), and we assign an equal probability to each of them. Suppose we start from (this, is): if we happen to choose (a), we form the pair (is, a), which must be followed by (book), giving (a, book), and so on, until we reach the end of the text. In this way, we can generate a text which has a similar statistical distribution of n-grams, in this case pairs of words. The greater n is, the closer the generated text will be to the original one.
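In code, the 2-gram table for that sentence looks as follows; notice that “this is” is the only branch point, every other pair having a single continuation:

var ngrams = {
  "this is":  ["a", "my"],
  "is a":     ["book"],
  "a book":   ["and"],
  "book and": ["this"],
  "and this": ["is"],
  "is my":    ["pen"]
};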
Inspired by this, I have written some code in p5.js, a JavaScript library, that generates text starting from n-grams of words. Here “word” only means “group of characters separated by a whitespace”. Punctuation is not considered from the grammatical point of view, nor are the roles of articles, nouns, etc. analysed; nevertheless, the results are still quite interesting. The choice of JavaScript is dictated by the fact that Processing/Java is not very friendly with dictionaries, and in this case they are really useful.
Here’s the code

var words;
var ngrams = {}
var order = 2;
var txt = "";
var a = 0.1;
var target = 255;

function setup() {
  createCanvas(600, 600);

  words = source.split(" ");

  // Build the n-gram table: each n-gram maps to the list of words that follow it;
  for (var i = 0; i < words.length - order; i++){
    var gram_temp = [];
    for (var j = 0; j < order; j++){
      gram_temp.push(words[i + j]);
    }
    var gram = join(gram_temp, " ");
    if (!ngrams[gram]){
      ngrams[gram] = [];
    }
    ngrams[gram].push(words[i + order]);
  }

  markovIt(ngrams);
  txt = spText(txt);
}

function draw() {
  background(255);
  a += (target - a) * 0.1;
  textSize(12);
  fill(0, a);
  textDisplay(txt);
  if (a < 0.099){
    restart();
  }
}

function restart(){
  markovIt(ngrams);
  txt = spText(txt);
  a = 0;
  target = 255;
}


function textDisplay(ttext){
    textAlign(CENTER);
    text(ttext, 100, 60, width - 100, height - 60);
}

function spText(txt){
    return txt.split(".");
}

function mousePressed(){
   target = 0;
}

function markovIt(ngrams) {
  // Start from a random n-gram in the source text;
  var index = int(random(0, words.length - order + 1));
  var curr_temp = [];
  for (var j = 0; j < order; j++){
    curr_temp.push(words[index + j]);
  }
  var current = join(curr_temp, " ");
  var result = current;
  if (!ngrams[current]){
    return null;
  }
  var range = int(random(30, 500));
  for (var i = 0; i < range; i++){
    if (!ngrams[current]){
      break;
    }
    var possibilities = ngrams[current];
    if (possibilities.length == 0){
      break;
    }
    // Pick one of the possible continuations at random,
    // and slide the current n-gram one word forward;
    var next = possibilities[int(random(0, possibilities.length))];
    result = result + " " + next;
    var tokens = result.split(" ");
    curr_temp = [];
    for (var j = order; j > 0; j--){
      curr_temp.push(tokens[tokens.length - j]);
    }
    current = join(curr_temp, " ");
  }
  txt = result;
}

Notice that you need to declare a variable source, which should contain the input text.
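For instance, something as simple as

var source = "this is a book and this is my pen";

will do, although longer texts give much more interesting results.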
As a little homage to Noam Chomsky, a pioneer of grammar studies (and much more), here you can find a working version of the code above using 2-grams of words, based on this. Click on the canvas to generate some new text.

OSC messaging, Processing and SuperCollider

posted by on 2016.10.15, under Supercollider

OSC stands for Open Sound Control, and is a protocol for networking between computers, synths and various multimedia devices. For instance, it allows a piece of software, like Ableton Live, say, to communicate with a hardware synth, whenever the latter supports OSC. You might think that you already know how to do this via MIDI, and you’d be partially right. The differences between OSC and MIDI are many: accuracy, robustness, etc. One of the most important, or rather most useful, differences, though, is that OSC allows you to send *any* type of message at high resolution to any address. The MIDI protocol, by contrast, has its own specific messages, like note on, note off, pitch, etc., at low resolution (0-127). This means that if you use MIDI to communicate between devices, you are required to translate your original message, say for instance the position of a particle or the color of the pixel at the mouse position, into the standard MIDI messages. And this is often not enough. An example is usually better than many principled objections, so here comes a little Processing sketch communicating with SuperCollider. The idea is quite simple: in the Processing sketch, systems of particles are spawned randomly; the particles have a color, a halflife, and also a tag. The user can click on the screen to generate a circle. If a particle traverses one of the circles, it tells SuperCollider to generate a synth, passing along various pieces of information, like its position, velocity, color, etc. This data will affect the synth created in SuperCollider. Here’s the Processing code, which uses the library oscP5

//// Setup for OscP5;

import oscP5.*;
import netP5.*;
OscP5 oscP5;
NetAddress Supercollider;

//// Setting up the particle systems and the main counter;


ArrayList<Parsys> systems;
ArrayList<Part> circles;
int count = 0;

void setup(){
  size(800, 800);
  background(255);
  oscP5 = new OscP5(this,12000);
  Supercollider = new NetAddress("127.0.0.1", 57120);
 
  systems = new ArrayList<Parsys>();
  circles = new ArrayList<Part>();
 
}

void draw(){
  background(255);
  for (int i = systems.size() - 1; i>=0; i--){
    Parsys sys = systems.get(i);
    sys.update();
    sys.show();
    if (sys.isDead()){
      systems.remove(i);
    }
    for (int j = 0; j < circles.size(); j++){
      Part circ = circles.get(j);
      sys.interact(circ);
    }
  }
 
  for (int i = 0; i < circles.size(); i++){
    Part circ = circles.get(i);
    circ.show();
  }
  if (random(0, 1) < 0.04){
    Parsys p = new Parsys(random(0, width), random(0, height));
    systems.add(p);
  }
}

void mousePressed(){
  Part circ = new Part(mouseX, mouseY, 0, 0, 0, 30);
  circ.stuck = true;
  circ.halflife = 80;
  circles.add(circ);
}

/////Define the class Parsys

class Parsys {
  ArrayList<Part> particles;
 
  Parsys(float x, float y){
    particles = new ArrayList<Part>();
   
    int n = int(random(20, 80));
    for (int i = 0; i < n; i++){
      float theta = random(0, 2 * PI);
      float r = random(0.1, 1.2);
      Part p = new Part(x, y, r * cos(theta), r * sin(theta), int(random(0, 3)));
      p.tag = count;
      count++;
      particles.add(p);
    }
  }
 
  void update(){
    for (int i =  particles.size() - 1; i>=0; i--){
      Part p = particles.get(i);
      p.move();
      if (p.isDead()){
        particles.remove(i);
      }
    }
  }
 
  void show(){
    for (int i =  particles.size() - 1; i>=0; i--){
      Part p = particles.get(i);
      p.show();
    }
  }
 
  boolean isDead(){
    if (particles.size() <=0){
      return true;
    }
    else return false;
  }
 
  void interact(Part other){
    for (int i = 0; i < particles.size(); i++){
      Part p = particles.get(i);
      if (p.interacting(other)){
        float dist = (p.pos.x - other.pos.x)* (p.pos.x - other.pos.x) + (p.pos.y - other.pos.y)*(p.pos.y - other.pos.y);
        if (!p.active){
          float start = other.pos.x/width;
        p.makeSynth(start, dist/(other.rad * other.rad));
        p.active = true;
        }
        else {
          p.sendMessage(dist/(other.rad * other.rad));
        }
      }
      else
      {
        p.active = false;
      }
    }
  }
}

/////Define the class Part

class Part {
  PVector pos;
  PVector vel;
  int halflife = 200;
  int col;
  float rad = 2;
  boolean stuck = false;
  boolean active = false;
  int tag;

  Part(float x, float y, int _col) {
    pos = new PVector(x, y);
    vel = new PVector(0, 0); //Give stationary particles a zero velocity, so move() is safe;
    col = _col;
  }

  Part(float x, float y, float vx, float vy, int _col) {
    pos = new PVector(x, y);
    vel = new PVector(vx, vy);
    col = _col;
  }

  Part(float x, float y, float vx, float vy, int _col, float _rad) {
    pos = new PVector(x, y);
    vel = new PVector(vx, vy);
    col = _col; //This assignment was missing in the original constructor;
    rad = _rad;
  }

  void move() {
    if (!stuck) {
      pos.add(vel);
      halflife--;
    }
  }

  void show() {
    noStroke();
    fill(255 / 2 * col, 255, 100, halflife);
    ellipse(pos.x, pos.y, rad * 2, rad * 2);
  }

  boolean isDead() {
    if (halflife < 0) {
      active = false;
      return true;
    } else return false;
  }

  boolean interacting(Part other) {
    if (dist(pos.x, pos.y, other.pos.x, other.pos.y) < rad + other.rad) {
      return true;
    } else return false;
  }

  void makeSynth(float start, float dist) {
      OscMessage message = new OscMessage("/makesynth");
      message.add(col);
      message.add(vel.x);
      message.add(vel.y);
      message.add(start);
      message.add(dist);
      message.add(tag);
      oscP5.send(message, Supercollider);
    }
   
    void sendMessage(float dist){
    OscMessage control = new OscMessage("/control");
    control.add(tag);
    control.add(dist);
    oscP5.send(control, Supercollider);
  }
}

Notice that the messaging happens in the functions makeSynth() and sendMessage() in the class Part. The OSC message is formed in the following way: first there is its symbolic name, like “/makesynth”, and then we add the various pieces of information we want to send. These can be integers, floats, strings, etc. The symbolic name allows the “listening” device, in this case SuperCollider, to perform an action whenever a message with that specific name arrives. You need to specify an address to send the message to: in my case, I used the default incoming SuperCollider address. Here is the code on the SuperCollider side

s.boot;

(

fork{
SynthDef(\buff, {|freq = 10, dur = 10, gate = 1, buff, rate = 1, pos = 0, pan = 0, out = 0, spr|
    var trig = Impulse.ar(freq);
    var spread = TRand.ar((-1) * spr * 0.5, spr * 0.5, trig);
    var sig = GrainBuf.ar(2, trig, dur, buff, rate, pos + spread + spr, 0, pan);
        var env = EnvGen.kr(Env([0.0, 1, 1, 0], [Rand(0.05, 0.8), Rand(0.1, 0.4), Rand(0.01, 0.5)]), doneAction: 2);
    Out.ar(out, sig * env * 0.02 );
}).add;

SynthDef(\rev, {|out, in|
    var sig = In.ar(in, 2);
    sig = FreeVerb.ar(Limiter.ar(sig), 0.1, 0.5, 0.1);
    Out.ar(0, sig);
    }
).add;

~rev = Bus.audio(s, 2);
s.sync;
Synth(\rev, [\in: ~rev]);

~synths = Dictionary.new;


~buffers =  "pathFolder/*".pathMatch.collect({|file| Buffer.readChannel(s, file, channels:0)});

~total = 0;

OSCdef(\makesynth, {|msg|
    //msg.postln;
    if(~total < 60, {
        x = Synth(\buff, [\freq: rrand(0.1, 40), \dur: rrand(0.01, 0.5), \buff: ~buffers[msg[1]], \rate: msg[3] * rrand(-3, 3), \pan: msg[2], \pos: msg[4], \spr: msg[5], \out: ~rev]);
        ~synths.put(msg[6], x);
        x.onFree({~total = ~total - 1; ~synths.removeAt(msg[6]);});
        ~total = ~total + 1;
        //~total.postln;
    });
}, "/makesynth");

OSCdef(\control, {|msg|
    // The SynthDef above has no \freq2 control, so set \freq instead, and
    // guard against messages arriving after the synth has already been freed;
    ~synths[msg[1]] !? {|syn| syn.set(\freq, msg[2]) };
}, "/control");

}
)

The listening is performed by OSCdef(name, {function}, symbolic name), which passes the message we have sent via Processing as an argument to the function. Notice that the first entry of the array msg will always be the symbolic name of the message. Pretty simple, huh? Nevertheless, the SuperCollider code has some nuances that should be explained. First, the synths you create had better be erased when they finish their job, otherwise you’ll have an accumulation of them which will eventually freeze your computer. Also, I’ve put a cap on the total number of synths allowed at a given time, to avoid performance issues. We also want to be able to control various parameters of a given synth while the particle that generated it is still inside one of the circles. To do this, we have to keep track of the various synths which are active at a given moment. I’ve done this by “tagging” each newly created particle with an integer, and by using the Dictionary variable ~synths. Recall that a Dictionary is a collection of data of the form [key, value, key, value, …]: you can retrieve a given value, in this case a synth node, via the associated key, in this case the particle tag (see the small example below). When the given synth node is freed, via the method onFree() we decrease the total number of active synths, and remove the entry corresponding to the particle tag.
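In miniature, the bookkeeping looks like this (with a made-up tag, just to illustrate the Dictionary methods used above):

~synths = Dictionary.new;
~synths.put(42, x);           // store the synth node x under the particle tag 42;
~synths.at(42).set(\freq, 5); // look the node up by its tag later, and control it;
~synths.removeAt(42);         // forget the entry once the node is freed;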
I hope the example above shows how powerful OSC communication is, and how nuanced one can be in the resulting actions performed.
Here is an audio snippet.

