Data retrieving and asynchronicity

posted by on 2019.03.09, under Processing

I am very much fascinated by data, its retrieval, and its possible applications to art. We are constantly surrounded by data in textual, visual and auditory form, and it tells something about us all, as a society and as a species.
In this post I want to write about a simple scenario. Imagine we want to retrieve all the images appearing on a newspaper page, and do something with them. For this simple case, I have chosen The New York Times. We then have a couple of questions to answer. First of all, how do we get the URLs of all the images present in the given page? And second: how do we get these images without disrupting the animation that is running? To answer these questions, we start at the beginning, and we stop at the end, as the judge advises during the trial in Alice in Wonderland. 😉
Data contained in a webpage is usually represented via a markup language: HTML, for instance, is such a language. In a markup language, the different structural pieces of a webpage are “tagged”: an item might have a “title” tag, for instance, which tells us that its content will be a title of some sort. In the present case we will use XML, since The New York Times provides an .xml file for its various pages. In XML parlance, an XML file can be thought of as a collection of boxes called “children” that can contain objects which have “content”, or other boxes which have their own children, and so on. Now, each XML file is structured in a slightly different way, so one has to investigate case by case. For instance, you could have problems lifting the very same code that will appear later to, say, The Guardian, since its XML file can have a different arrangement.
Processing offers a class XML to deal with XML files and to search through their tags. Great! So, after spending some time investigating the RSS feed of the home page of The New York Times, we discover that the XML has a child called “channel”, which contains children tagged “item”, which themselves contain a child tagged “media:content”: finally, this child carries a url attribute, which is what we are interested in. Pheeew! Once we get the list of URLs, we can download the images with loadImage(), which also accepts URLs. Here the problem addressed in the second question above appears, and we have to talk about “asynchronicity”. Namely, both loadXML() and loadImage() are so-called “blocking functions”: in other words, until they complete their task, the code doesn’t go forward. This means that any animation would stutter. If we only need to load the images once, this is not a great problem: we do everything in the setup() function and forget about it. For the sake of fun, though, I have decided that I would like to randomly add a new image from some other page while the animation goes on. The way to circumvent the problem created by these blocking functions is to use a different “thread”. What does this mean? Processing (via Java) allows us to “thread” a function with thread(), which means that the function is executed in parallel with the main thread, which in our case is the so-called “animation” thread. By threading a function, the main thread is not slowed down by anything the threaded function does. In our case, the threaded function getData() loads another .xml file, grabs an image, and adds it to the list of images to display.
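To give an idea of what the code below is navigating, the relevant part of the feed has roughly the following shape. This is a hand-written sketch rather than a verbatim excerpt: the url is just a placeholder, and the real items carry many more tags and attributes.

<rss>
  <channel>
    <item>
      <title>Some headline</title>
      <media:content url="https://example.com/some-image.jpg"/>
    </item>
    <item>
      ...
    </item>
  </channel>
</rss>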
We can now look at the code

String[] urls ={ "http://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml",
  "http://rss.nytimes.com/services/xml/rss/nyt/Africa.xml", "http://rss.nytimes.com/services/xml/rss/nyt/ArtandDesign.xml",
  "http://rss.nytimes.com/services/xml/rss/nyt/Technology.xml", "http://rss.nytimes.com/services/xml/rss/nyt/Europe.xml"};

String url;

XML xml;
ArrayList<PImage> images;
int count;
PImage img;
boolean locked = false;

void setup() {
  size(1000, 1000);
  background(0);
  url = urls[int(random(0, urls.length))];
  images = new ArrayList<PImage>();

  xml = loadXML(url); //Loading the XML file;
  String[] names = {};

  XML[] children = xml.getChildren("channel"); //This is the first child of the XML file;

  for (int i = 0; i < children.length; i++) {
    XML[] items = children[i].getChildren("item");  //Images are contained in items;

    for (int j = 0; j < items.length; j++) {
      XML media = items[j].getChild("media:content"); //media:content is the tag that contains images;
      if (media != null) {
        names = append(names, media.getString("url")); //This provides the url which appears as an option in the tag media:content;
      }
    }
  }

  for (int i = 0; i < names.length; i++) {
    images.add(loadImage(names[i]));
    println("Loaded!");
  }
}

void draw() {
  PImage im = images.get(count % images.size());

  tint(255, int(random(30, 100)));

  image(im, random(0, width), random(0, height), im.width * 0.3, im.height * 0.3);


  count++;
  if ((random(0, 1) < 0.01) && !locked) {
    thread("getData");
  }
}

//Function to be threaded

void getData() {  
  locked = true;
  url = urls[int(random(0, urls.length))]; //Choose a random url among those available;
  xml = loadXML(url);
  String[] names = {};

  XML[] children = xml.getChildren("channel");

  for (int i = 0; i < children.length; i++) {
    XML[] items = children[i].getChildren("item");

    for (int j = 0; j < items.length; j++) {
      XML media = items[j].getChild("media:content");
      if (media != null) {
        names = append(names, media.getString("url"));
      }
    }
  }
  images.add(loadImage(names[int(random(0, names.length))])); //Add the new image to the main list;
  locked = false;
}

If you run the code, you should get something like

[screenshot]

As an exercise, try to do something similar with a different website, so as to get comfortable with the process of understanding how a given XML file is organized.

Overlook

posted by on 2018.07.21, under Processing

Combining some techniques from the previous posts on shaders, here’s the render of an audio-reactive application which I used for a video of “Overlook”, a track by my musical alter ego.

The code uses vertex and fragment shaders to create a glitchy environment which reacts to the audio in real time.
The track “Overlook” is available for listening here

Dust From A G String

posted by on 2018.06.27, under Processing, Uncategorized

Here’s “Dust From A G String”, a piece about the corrosive power of passing time, and the beauty it leaves behind, just before the end.

The video was made in Processing, using a custom shader based on FBO techniques. The audio is a reworking of Bach’s “Air on the G String”.

Reaction-Diffusion algorithm and FBO techniques

posted by on 2018.06.08, under Processing

Reaction-Diffusion algorithms are very fascinating, since they are capable of producing incredibly organic patterns. They can also be computationally expensive if the grid of choice is fine enough. In a nutshell, we regard every pixel of an image as a cell containing two types of chemicals in different proportions, whose combination produces a given color on the screen. The “diffusion equation” is such that, as time goes on, the proportion of the two chemicals in a cell changes according to that of the neighboring cells. Since the algorithm is pixel* based, at its finest, we might think this is a job for a fragment shader. And that’s indeed the case! We have to be careful, though, about two aspects. First, the algorithm uses information about the adjacent pixels, while a fragment shader only treats information fragment by fragment: it does not allow sharing among fragments. This is solved by using a texture to store the information about the chemicals. Which brings us to the second point: we need to store the previous state of the chemical proportions in order to compute the next one, but a shader is not “persistent”, in the sense that all the information it has about fragments is lost on the next frame. Enter FBOs and the ping-pong technique! Framebuffer objects allow what is called “off-screen rendering”: instead of rendering the pixels directly to the screen, they are rendered to a buffer, which is only later displayed on the screen. Hence, we can pass the FBO as a texture to the shader, use, say, the red and green values of the texture at the given fragment coordinate as our chemical percentages, and set the color of the fragment using the new values of the percentages. This technique is usually referred to as the “ping-pong technique”, because we go back and forth between the buffer and the screen. It is particularly useful for modelling particle systems directly on the GPU. In Processing, an FBO is an object described by the class PGraphics, and a shader can be applied to it via its shader() method.
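Before looking at the code, it helps to spell out the update rule that the fragment shader below implements: it has the form of a Gray-Scott reaction-diffusion step. Writing A and B for the two chemical concentrations (stored in the red and green channels of the texture), each frame computes

A' = A + ( dA * lap(A) - A*B*B + f*(1 - A) )
B' = B + ( dB * lap(B) + A*B*B - (k + f)*B )

where lap() is the 3x3 Laplacian approximation computed in the laplaceA/laplaceB functions, and the constants appearing in the code are dA = 0.9, dB = 0.18, f = 0.0545 and k = 0.062. Reading f as a “feed” rate and k as a “kill” rate is my interpretation of those two numbers.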
Here’s the code

PGraphics pong;
PShader diff;

void setup(){
  size(800, 800, P2D);
  pong = createGraphics(width, height, P2D);
  diff = loadShader("diffFrag.glsl");
 
  pong.beginDraw();
  pong.background(255, 0, 0);
  pong.endDraw();
 
  diff.set("u", 1.0/width);
  diff.set("v", 1.0/height);

  pong.beginDraw();
  pong.noStroke();
  pong.fill(0, 255, 0);
  pong.ellipse(width/2, height/2, 10, 10);
  pong.endDraw();
}

void draw(){
  pong.beginDraw();
  pong.shader(diff);
  pong.image(pong, 0, 0);
  pong.resetShader();
  pong.endDraw();

  image(pong, 0, 0);
}



//// diffFrag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform float u;
uniform float v;

uniform sampler2D texture;

//3x3 Laplacian of the first chemical (red channel);
float laplaceA(in vec2 p, in float u, in float v){
  float A = 0.05 * texture2D(texture, p + vec2(-u, -v)).r + 0.2 * texture2D(texture, p + vec2(0.0, -v)).r + 0.05 * texture2D(texture, p + vec2(u, -v)).r +
            0.2  * texture2D(texture, p + vec2(-u, 0.0)).r - 1.0 * texture2D(texture, p).r               + 0.2  * texture2D(texture, p + vec2(u, 0.0)).r +
            0.05 * texture2D(texture, p + vec2(-u, v)).r  + 0.2 * texture2D(texture, p + vec2(0.0, v)).r  + 0.05 * texture2D(texture, p + vec2(u, v)).r;
  return A;
}

//3x3 Laplacian of the second chemical (green channel);
float laplaceB(in vec2 p, in float u, in float v){
  float B = 0.05 * texture2D(texture, p + vec2(-u, -v)).g + 0.2 * texture2D(texture, p + vec2(0.0, -v)).g + 0.05 * texture2D(texture, p + vec2(u, -v)).g +
            0.2  * texture2D(texture, p + vec2(-u, 0.0)).g - 1.0 * texture2D(texture, p).g               + 0.2  * texture2D(texture, p + vec2(u, 0.0)).g +
            0.05 * texture2D(texture, p + vec2(-u, v)).g  + 0.2 * texture2D(texture, p + vec2(0.0, v)).g  + 0.05 * texture2D(texture, p + vec2(u, v)).g;
  return B;
}

void main(){
  float A = texture2D(texture, vertTexCoord.st).r;
  float B = texture2D(texture, vertTexCoord.st).g;

  //Gray-Scott style update: diffusion, reaction and feed/kill terms;
  float A_1 = A + (0.9  * laplaceA(vertTexCoord.st, u, v) - A * B * B + 0.0545 * (1.0 - A));
  float B_1 = B + (0.18 * laplaceB(vertTexCoord.st, u, v) + A * B * B - (0.062 + 0.0545) * B);

  gl_FragColor = vec4(A_1, B_1, 1.0, 1.0);
}

And here is an example:

[image]

Tip: try to change the numerical values in the definition of A_1 and B_1 in the fragment shader code.

*: A fragment shader technically deals with fragments rather than pixels.

Reactive applications, Shaders and all that

posted by on 2018.04.06, under Processing

We have already discussed the advantage of using shaders to create interesting visual effects. This time we will have to deal with fragment shaders *and* vertex shaders. In a nutshell, a vertex shader takes care of managing the vertices’ positions, colors, etc., which are then rasterized into “fragments” and handed to the fragment shader. “OMG, this is so abstract!!”. Yeah, it is less abstract than it seems, but it nevertheless requires some know-how. As previously, I really suggest this: I find myself going back and forth to it regularly, always learning new things.
Good, so, what’s the plan? The main idea in the following code is to use a PShape object to encode all the vertices: we are basically making a star-shaped thing out of rectangles, which in 3D graphics parlance are referred to as “quads”. Once we have created such a PShape object, we will not have to deal with the positions of the vertices anymore: all the changes in the geometry will be handled by the GPU! Why is this exciting? Because the GPU is much, much faster at doing such things than the CPU. This allows in particular for real-time reactive fun. Indeed, the code gets input from the microphone and the webcam, separately. More precisely, each frame coming from the webcam is passed to the shader to be used as a texture for each quad. On the other hand, the microphone audio is monitored, and its amplitude controls the variable t, which in turn controls the rotation (in Processing) and, more importantly, the jittering in the vertex shader. Notice that the fragment shader doesn’t do anything out of the ordinary here: it just applies the texture.
Here’s what the code looks like

import processing.video.*;
import processing.sound.*;

Amplitude amp;
AudioIn in;



PImage  back;
PShape mesh;
PShader shad;

float t = 0;
float omega = 0;
float rot = 0;
int count = 0;

Capture cam;


void setup() {
  size(1000, 1000, P3D);
  background(0);
 
  //Set up audio

  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);

  //Set up webcam

  String[] cameras = Capture.list();

  cam = new Capture(this, cameras[0]);

  cam.start();

  textureMode(NORMAL);  

  mesh = createShape();
  shad = loadShader("Frag.glsl", "Vert.glsl");

  back = loadImage("back.jpg");


  //Generates the mesh;

  mesh.beginShape(QUADS);
  mesh.noStroke();

  for (int i = 0; i < 100; i++) {
    float phi = random(0, 2 * PI);
    float theta = random(0, PI);
    float radius = random(200, 400);
    PVector pos = new PVector( radius * sin(theta) * cos(phi), radius * sin(theta) * sin(phi), radius * cos(theta));
    float u = random(0.5, 1);

    //Set up the vertices of the quad with texture coordinates;

    mesh.vertex(pos.x, pos.y, pos.z, 0, 0);
    mesh.vertex(pos.x + 10, pos.y + 10, pos.z, 0, u);
    mesh.vertex(-pos.x, -pos.y, -pos.z, u, u);
    mesh.vertex(-pos.x - 10, -pos.y - 10, -pos.z, 0, u);
  }

  mesh.endShape();
}

void draw() {

    background(0);
    //Checks camera availability;

    if (cam.available() == true) {
      cam.read();
    }
 

    image(back, 0, 0); //Set a gradient background;

    pushMatrix();
    translate(width/2, height/2, 0);
    rotateX( rot * 10 * PI/2);
    rotateY( rot * 11 * PI/2);

    shad.set("time", exp(t) - 1); //Pass the smoothed amplitude t (exponentiated) to the shader as the "time" uniform;

    shader(shad);
    mesh.setTexture(cam); //Use the camera frame as a texture;
    shape(mesh);

    popMatrix();

    t += (amp.analyze() - t) * 0.05; //Smoothens the variable t;

    omega +=  (t  - omega) * 0.01; //Makes the rotation acceleration depend on t;

    rot += omega * 0.01;

    resetShader(); //Reset shader to display the background image;
   
}

// Frag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;


uniform float time;
uniform sampler2D texture;

void main(){

gl_FragColor = texture2D(texture, vertTexCoord.st ) * vertColor;

}

// Vert.glsl

uniform mat4 transform;
uniform mat4 modelview;
uniform mat4 texMatrix;


attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;


uniform float time;


void main() {
  gl_Position = transform * position;

  gl_Position.x += sin(time * 2 * 3.145 * gl_Position.x) * 10 ;
  gl_Position.y += cos(time * 2 * 3.145 * gl_Position.y) * 10 ;

  vertColor = color;

  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);


}

Notice the call to resetShader(), which allows us to show a gradient background, loaded as an image, without it being affected by the shader program.
Here’s a render of it, recorded while making some continuous noise, a.k.a. singing.

Try it while listening to some music, it’s really fun!

Worlds

posted by on 2018.03.18, under Processing

Yesterday I went to the beautiful exhibition by Pe Lang at the Museum of Digital Art here in Zurich. The exhibition consists of several kinetic systems producing complex behaviours. I was particularly fascinated by a piece called “polarization”, where different disks with polarized filters produce very interesting visual patterns. Those who read this blog know that I am really into systems and their emergent features, so I was inspired to make the following piece, called “Worlds”. It is also an excuse to show how object-oriented programming allows one to replicate a little “cosmos” over and over very quickly.
The idea is the following. We have discussed more than once systems of particles which bounce around the canvas, but we never gave the canvas its own ontological properties, which is a fancy way to say that we never considered the canvas to be an object itself. That’s precisely what is going on in the code below. Namely, there is a class World whose scope is to be the box in which the particles are bound to reside. It comes with a position vector for its center, a (half) length for the box itself, and a particle system. The bounce check is done internally to the class World, in the update() function, so as to make it behave like its own little universe. Once you have such a gadget, it’s immediate to replicate it over and over again! I arranged the boxes in a simple grid, and I really like the visual effect that comes from it. I also did something else: inspired by statistical mechanics, each box has a “temperature”, which is influenced by how often the particles bounce on the walls of the box. The “hotter” the box, the redder it becomes. There is also a cooling factor: each box tends to cool down over time. So, after some time, the system goes to equilibrium, and each box stabilizes on a shade of red. This also shows something very nice, and at first counter-intuitive: there are boxes with a lot of particles which are nevertheless very slow, making the box very “cold”.
Here is the code

// Worlds
// Kimri 2018

ArrayList<World> boxes;
int n = 10;



void setup(){
  size(1000, 1000);
  init();
  frameRate(30);
}


void draw(){
  background(255);

  for (int i = 0; i < boxes.size(); i++){
    World w = boxes.get(i);
    w.display();
    w.update();
  }
}

void init(){
  background(255);

  boxes = new ArrayList<World>();

  float step = width/n;

  //Generate the array of boxes;
  for (float x = step; x < width; x += step){
    for (float y = step; y < height; y += step){
      boxes.add(new World(x, y, step * 0.4));
    }
  }
}

void keyPressed(){
  init();
}

// World class


class World {
  PVector pos;
  int num;
  float len;
  float temp = 255;
  float coeff = 1.7;

  ArrayList<Particle> particles;

  World(float _x, float _y, float _len) {
    pos = new PVector(_x, _y);
    len = _len;
    num = int (random(10, 60));
    //Add particles to the box
    particles = new ArrayList<Particle>();

    for (int i = 0; i < num; i++) {
      float part_x = pos.x + random(-len, len);
      float part_y = pos.y + random(-len, len);
      particles.add(new Particle(new PVector(part_x, part_y)));
    }
  }

  World(float _x, float _y, float _len, int _num) {
    pos = new PVector(_x, _y);
    len = _len;
    num = _num;
    //Add particles to the box
    particles = new ArrayList<Particle>();

    for (int i = 0; i < num; i++) {
      float part_x = pos.x + random(-len, len);
      float part_y = pos.y + random(-len, len);
      particles.add(new Particle(new PVector(part_x, part_y)));
    }
  }

  void display() {
    fill(255, temp, temp, 90);

    stroke(0, 100);
    strokeWeight(1.2);
    rectMode(CENTER);
    rect(pos.x, pos.y, 2 * len, 2 * len);
  }

  void update() {
    for (int i = 0; i < num; i++) {
      Particle p = particles.get(i);
      p.move();

      if ( (p.pos.x - pos.x) >= len - p.rad) {
        p.pos.x = pos.x + len - p.rad;
        p.vel.x  = -p.vel.x;
        temp -= 1;
      }
      if ( (p.pos.x - pos.x) <= -(len - p.rad)) {
        p.pos.x = pos.x - (len - p.rad);
        p.vel.x  = -p.vel.x;
        temp -= 1;
      }
      if ( (p.pos.y - pos.y) >= len - p.rad) {
        p.pos.y = pos.y + len - p.rad;
        p.vel.y  = -p.vel.y;
        temp -= 1;
      }
      if ( (p.pos.y - pos.y) <= -(len - p.rad)) {
        p.pos.y = pos.y - (len - p.rad);
        p.vel.y  = -p.vel.y;
        temp -= 1;
      }
      p.display();
    }
    temp += coeff;
    temp = constrain(temp, 0, 255); //Keep the temperature within the colour range, so the box can heat up again later;
  }
}

//Particle class



class Particle {
  PVector pos;
  PVector vel;
  float rad = 2;

  Particle(PVector _pos) {
    pos = new PVector(_pos.x, _pos.y);
    vel = new PVector(random(-3, 3), random(-3, 3));
  }

  void move() {
    pos.add(vel);
  }

  void display() {
    noStroke();
    fill(0, 100);
    ellipse(pos.x, pos.y, 2 * rad, 2 *rad);
  }

}

And here is how it looks

Glitch Art and Shaders

posted by on 2017.02.11, under Processing

It’s been a while since the last post. I have been busy with (finally!) starting to set up a website to collect some of my works, and with more or less finishing a couple of interactive installations. For this reason, interactivity and real-time processing have captured my attention recently. It turns out that when you want to interact with a piece of code which produces graphics, as soon as what you are doing involves more than just a couple of pairs of colored circles, you quickly run into performance issues. So, unless you are one of those digital artists who draw a blinking white circle in the middle of the screen and call it art (it’s fine, don’t worry, go on with it), you need to find your way around these types of issues. In practice, this amounts to getting comfortable with words like Vertex Buffer Object, C++, and shaders, to which this post is dedicated.
The story goes like this: modern graphics cards (GPUs) have a language of their own, called GLSL. For instance, when in Processing you draw a line or a circle, what actually happens behind the curtains is a communication between the CPU and the graphics card: Processing informs the GPU about the vertices of the line, the fact that it has to be a line, the color of the vertices, etc. There are several stages between the moment the vertices are communicated and the final result you see on your screen. Some of these stages are user-programmable, and the little programs that take care of each of these stages are called “shaders”. Shaders are notoriously difficult to work with: you have to program them in a C-like language, basically, and they are quite unforgiving with respect to errors in the code. On the other hand, they are really, really fast. If you want to know why this is so, and how a (fragment) shader operates, give a look here.
So, why the hell would you want to learn such a tool? Well, if you, like me, are fond of glitch art, you must have realized that interactive real-time glitch art is almost impossible if you try to work pixel by pixel: even at a resolution of 800×600, that is 480,000 pixels per frame, or more than 14 million pixel operations per second at 30fps, which is impractical for the CPU. Enter fragment shaders! If you delegate the work to the GPU, it becomes more than doable.
I can’t go into the details of the code I present in the following, but there are very good tutorials on the web that slowly teach you how to tame shaders. In particular, give a look here. Be warned, though: you really need to be programming-friendly, and to have a lot of patience, to work with shaders!

PImage img;
PShader glitch;


void setup(){
  size(800, 600, P2D);
  background(0);
  img = loadImage("insert_link_to_image"); //Replace with the path or URL of your image;
  img.resize(800, 600);
 
 
  glitch = loadShader("glitchFrag.glsl");
  glitch.set("iResolution", new PVector(800., 600., 0.0) );
 
}

void draw(){
 
  glitch.set("iGlobalTime", random(0, 60.0));
 
   if (random(0.0, 1.0) < 0.4){
  shader(glitch);
   }
 
  image(img, 0, 0);
 
  resetShader();
 
}

---------------

// glitchFrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif


#define PROCESSING_TEXTURE_SHADER

varying vec4 vertTexCoord;
uniform sampler2D texture;
uniform vec3      iResolution;          
uniform float     iGlobalTime;          



float rand(vec2 co){
    return fract(cos(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}


void main(){
   vec3 uv = vec3(0.0);
   vec2 uv2 = vec2(0.0);
   vec2 nuv = gl_FragCoord.xy / iResolution.xy;
   vec3 texColor = vec3(0.0);

   if (rand(vec2(iGlobalTime)) < 0.7){
    texColor = texture2D(texture, vertTexCoord.st).rgb;
}
 else{
   texColor = texture2D(texture, nuv * vec2(rand(vec2(iGlobalTime)), rand(vec2(iGlobalTime * 0.99)))).rgb;
}
       
    float r = rand(vec2(iGlobalTime * 0.001));
    float r2 = rand(vec2(iGlobalTime * 0.1));
    if (nuv.y > rand(vec2(r2)) && nuv.y < r2 + rand(vec2(0.05 * iGlobalTime))){
    if (r < rand(vec2(iGlobalTime * 0.01))){
       
   if ((texColor.b + texColor.g + texColor.b)/3.0 < r * rand(vec2(0.4, 0.5)) * 2.0){
       
        uv.r -= sin(nuv.x * r * 0.1 * iGlobalTime ) * r * 7000.;
        uv.g += sin(vertTexCoord.y * vertTexCoord.x/2 * 0.006 * iGlobalTime) * r * 10 *rand(vec2(iGlobalTime * 0.1)) ;
        uv.b -= sin(nuv.y * nuv.x * 0.5 * iGlobalTime) * sin(nuv.y * nuv.x * 0.1) * r *  20. ;
        uv2 += vec2(sin(nuv.x * r * 0.1 * iGlobalTime ) * r );
   
}
       
    }
}

  texColor = texture2D(texture, vertTexCoord.st + uv2).rgb;
  texColor += uv;
   gl_FragColor = vec4(texColor, 1.0);  
   
}

In the following, you can see the result applied to a famous painting by Caravaggio (yes, I love Caravaggio): it runs at a real-time framerate.
If you want to apply the shader to the webcam, you just need to set up a Capture object called, say, cam, and substitute img with cam in the Processing code; a minimal sketch of this variant appears below. Enjoy some glitching! :)

Glitch Shader from kimri on Vimeo.
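Here is a minimal sketch of the webcam variant just mentioned, assuming the default camera can deliver 800×600 frames (otherwise pick a device from Capture.list()); the shader file is the same glitchFrag.glsl as above.

import processing.video.*;

Capture cam;
PShader glitch;

void setup(){
  size(800, 600, P2D);
  background(0);

  cam = new Capture(this, 800, 600); //Assumes the default camera supports this resolution;
  cam.start();

  glitch = loadShader("glitchFrag.glsl");
  glitch.set("iResolution", new PVector(800., 600., 0.0));
}

void draw(){
  if (cam.available()){
    cam.read(); //Grab the latest webcam frame;
  }

  glitch.set("iGlobalTime", random(0, 60.0));

  if (random(0.0, 1.0) < 0.4){
    shader(glitch); //Apply the glitch shader only part of the time, as in the sketch above;
  }

  image(cam, 0, 0); //The webcam frame plays the role of img;

  resetShader();
}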

n-grams, Markov chains and text generators

posted by on 2016.11.03, under Processing, Supercollider

An n-gram is a contiguous sequence of n elements of a text, where each element can be a phoneme, a syllable, a character, a word, etc. Given a text, the collection of its n-grams allows us to infer some statistical correlations, and moreover to assemble the n-grams collected into a Markov chain. Right, what’s a Markov chain, then? A Markov chain describes a process in which the next step depends probabilistically only on the current step. For instance, a random walk is an example of a Markovian process. One way to assign a Markov chain to a text is to collect all its n-grams and, for each n-gram, to keep track of the n-grams that can follow it. To generate text, we start from one n-gram, choose randomly among the list of its possible successors, form the new n-gram, and proceed. Confusing? Let’s see an example. Suppose we have the text “this is a book and this is my pen”, and suppose we are interested in 2-grams, where a single gram is a word. We then have the pair (this, is), the pair (is, a), etc. Next, we keep track of all the 1-grams which can follow each 2-gram: for instance, after (this, is) we can have (a) or (my), and we assign to each of them an equal probability. Suppose we start from (this, is): if we happen to choose (a), we form the pair (is, a), which can only be followed by (book), giving (a, book), and so on, until we reach the end of the text. In this way, we can generate a text which has a similar statistical distribution of n-grams, in this case pairs of words. The greater n is, the closer the generated text will be to the original one.
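Concretely, for the toy sentence above the table of 2-grams and their possible successors is:

(this, is) -> a, my
(is, a) -> book
(a, book) -> and
(book, and) -> this
(and, this) -> is
(is, my) -> pen

Starting from (this, is) and repeatedly picking a random successor reproduces either the original sentence or a recombination such as “this is my pen”.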
Inspired by this, I have written some code in p5.js, a JavaScript library, that generates text starting from n-grams of words. Here “word” only means “group of characters separated by a whitespace”. Punctuation is not considered from the grammatical point of view, nor are the roles of articles, nouns, etc. analysed; nevertheless, the results are still quite interesting. The choice of JavaScript is dictated by the fact that Processing/Java is not very friendly with dictionaries, and in this case they are really useful.
Here’s the code

var words;
var ngrams = {}
var order = 2;
var txt = "";
var a = 0.1;
var target = 255;

function setup() {
  createCanvas(600, 600);

  words = source.split(" ");

  for (var i = 0; i < words.length - order; i++){
    gram_temp = [];
    for (var j = 0; j < order; j++){
      gram_temp.push(words[i + j]);
    }
    gram = join(gram_temp, " ");
    if (!ngrams[gram]){
      ngrams[gram] = [];
    }
    if (i < words.length - order){
      ngrams[gram].push(words[i + order]);
    }
  }
  markovIt(ngrams);
  txt = spText(txt);
}

function draw() {
    background(255);
    a += (target - a) * 0.1;
    textSize(12);
    fill(0, a);
    textDisplay(txt);
    if (a < 0.099){
      restart();
    }
}

function restart(){
    markovIt(ngrams);
    txt = spText(txt);
    a = 0;
    target = 255;
}


function textDisplay(ttext){
    textAlign(CENTER);
    text(ttext, 100, 60, width - 100, height - 60);
}

function spText(txt){
    return txt.split(".");
}

function mousePressed(){
   target = 0;
}

function markovIt(ngrams) {
    var index = int(random(0, words.length - order + 1));
    curr_temp = [];
    for (var j = 0; j < order; j++){
      curr_temp.push(words[index + j]);
    }
    current = join(curr_temp, " ");
    result = current;
    if (!ngrams[current]){
      return null;
    }
    var range = int(random(30, 500));
    for (var i = 0; i < range; i++){
      if (!ngrams[current]){
        break;
      }
      possibilities = ngrams[current];
      if (possibilities.length == 0){
        break;
      }
      next = possibilities[int(random(0, possibilities.length))];
      result = result + " " + next;
      tokens = result.split(" ");
      curr_temp = [];
      for (var j = order; j > 0; j--){
        curr_temp.push(tokens[tokens.length - j]);
      }
      current = join(curr_temp, " ");
    }
    txt = result;
}

Notice that you need to declare a variable source, which should contain the input text (for instance, var source = "this is a book and this is my pen"; at the top of the sketch).
As a little homage to Noam Chomsky, a pioneer of grammar studies (and much more), here you can find a working version of the code above using 2-grams in words, and based on this. Click on the canvas to generate some new text.

Non-Real Time Analysis and Rendering

posted by on 2016.08.01, under Processing

This is a “utility” post dealing with non-real-time analysis, which I hope can help someone who has struggled with these issues before.
Here’s the setting. You have a nice sketch in Processing which reacts to an external input, say music or the mouse position, and you want to render it to show the world how fun and joyful the time spent coding can be. You then realize that using the function saveFrame() creates problems: each single frame takes too long to save to disk, and everything goes horribly out of sync. In this case it is convenient to have a sketch that retrieves the data needed frame by frame, say the frequency spectrum of a piece of audio, and saves it to disk. One can later load this data in the rendering sketch and use saveFrame(), knowing that when the frames are reassembled at the prescribed framerate, everything will be in sync.
The following code does exactly that. It uses a class called Saver, which in this case keeps track of the frequency spectrum. Under the more than reasonable assumption that the Fast Fourier Transform done by the Minim library is computed in less than 1/30 of a second, the data you retrieve for each frame will be in sync with the audio. You can then go to the sketch which visualizes this data, load the values saved in the .txt file, and use them anywhere you would use the array of frequencies (a minimal sketch of the reading side appears after the code below). It should be immediate to adapt this piece of code to your needs. To save the data to disk, press any key.

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer song;
FFT fft;

String songPath = "InsertPathToSongHere";

Saver mySaver;

boolean saved = false;
boolean pressed = false;

void setup() {
  size(200, 200);
  background(0);

  minim = new Minim(this);
  song = minim.loadFile(songPath, 1024);

  frameRate(30);

  song.play();

  fft = new FFT(song.bufferSize(), song.sampleRate());
  mySaver = new Saver(fft.specSize(), "data/analysis.txt");
 
}

void draw() {
  background(0);

  fft.forward(song.mix);

  for (int i = 0; i < fft.specSize(); i++) {
   float a = fft.getBand(i);
   mySaver.setElement(a);
  }

  mySaver.update();

  if (pressed && !saved) {
    mySaver.write();
    saved = true;
  }
}

void keyPressed() {
  pressed = true;
}

//Define the Saver class

class Saver {

  int buffSize;
  String pathTosave;
  ArrayList<Float> data;
  int arraylength;

  Saver(int _buffSize, String _pathTosave) {

    buffSize = _buffSize;
    pathTosave = _pathTosave;
    arraylength = 0;  
    data = new ArrayList<Float>();
 
  }

  void setElement(float a) {
    data.add(a);
  }

  void update() {
    arraylength++;
  }

  void write() {
    String[] dataString = new String[arraylength];
    int index;
   
    for (int i = 0; i < dataString.length; i++) {
      String temp = "";
      for (int j = 0; j < buffSize; j++) {
        index = i * buffSize + j;
        if ( j == buffSize - 1){
          temp += (float) data.get(index);
        } else {
          temp += (float) data.get(index) + " ";
        }
      }

      dataString[i] = temp;
   
    }

    saveStrings(pathTosave, dataString);
    println("Data saved!");
  }
}
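On the reading side, here is a minimal sketch of how the saved analysis could be used, assuming the file produced above has been copied into the data folder of the rendering sketch as analysis.txt; the bar graph in draw() is just a placeholder for your actual visuals.

String[] lines;

void setup() {
  size(200, 200);
  frameRate(30); //Must match the framerate used during the analysis;
  lines = loadStrings("analysis.txt"); //One line of FFT bands per analysed frame;
}

void draw() {
  background(0);

  if (frameCount > lines.length) {
    noLoop(); //Stop once every analysed frame has been rendered;
    return;
  }

  float[] spectrum = float(split(lines[frameCount - 1], " ")); //The bands saved for this frame;

  stroke(255);
  for (int i = 0; i < spectrum.length; i++) {
    float x = map(i, 0, spectrum.length, 0, width);
    line(x, height, x, height - spectrum[i] * 4); //Placeholder visualization of the spectrum;
  }

  saveFrame("frames/frame-####.png"); //Reassemble the frames later at 30fps;
}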

This technique was inspired by something I’ve read somewhere; I will try to retrieve it and point to it here. 😉

Abstract expressionism: a generative approach

posted by on 2016.06.28, under Processing

I have always been fascinated by abstract expressionism, and in particular by the work of Jackson Pollock. The way paint, gravity and artistic vision play together has always been, for me, very representative of that tension between chaos and structural patterns one often finds in art.
So, here is a little homage to the drip-painting style of Pollock. The Processing code is not clean enough to be useful, and I don’t think I understand exactly what it does yet (yes, it happens more often than not that what I code is a surprise to me!). Let me say that it incorporates many of the topics discussed in this blog: object-oriented programming, noise fields, etc. I’ll update the post when I (hopefully) get it cleaned up.
Meanwhile, enjoy. 😉
