posted by on 2018.07.21, under Processing

Combining some techniques from the previous posts on shaders, here’s the render of an audio-reactive application I used for a video of “Overlook”, a track by my musical alter ego.

The code uses vertex and fragment shaders to create a glitchy environment which reacts to the audio in real time.
The track “Overlook” is available for listening here.

Dust From A G String

posted by on 2018.06.27, under Processing, Uncategorized

Here’s “Dust From A G String”, a piece about the corrosive power of passing time, and the beauty it leaves behind, just before the end.

The video was made in Processing, using a custom shader based on FBO techniques. The audio is a reworking of Bach’s “Air on the G String”.

Reaction-Diffusion algorithm and FBO techniques

posted by on 2018.06.08, under Processing

Reaction-diffusion algorithms are very fascinating, since they are capable of producing incredibly organic patterns. They can also be computationally expensive if the grid of choice is fine enough. In a nutshell, we regard every pixel of an image as a cell containing two types of chemicals in different proportions, whose combination produces a given color on the screen. The “diffusion equation” is such that, as time goes on, the proportion of the two chemicals in each cell changes according to that of the neighboring cells.

Since the algorithm is pixel* based at its finest, we might think this is a job for a fragment shader. And that’s indeed the case! We have to be careful about two aspects, though. First, the algorithm uses information about the adjacent pixels, and we know that a fragment shader only treats fragment-by-fragment information: it does not allow sharing among fragments. This is solved by using a texture to store the information about the chemicals. Which brings us to the second point: we need to store the previous state of the chemical proportions to compute the next one. A shader, on the other hand, is not “persistent”, in the sense that all the information it has concerning fragments is lost on the next frame.

Enter FBOs and the ping-pong technique! Framebuffer objects allow what is called “off-screen rendering”: instead of rendering the pixels directly to the screen, they are rendered to a buffer, and only later displayed. Hence, we can pass the FBO as a texture to the shader, use, say, the red and green values of the texture at the given fragment coordinate as our chemical percentages, and set the color of the fragment using the new values of those percentages. This is usually referred to as the “ping-pong technique”, because we go back and forth between the buffer and the screen. It is particularly useful for modelling particle systems directly on the GPU.
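For reference, the update rule implemented in the fragment shader below is a Gray–Scott style reaction-diffusion step:

```latex
A' = A + \left( D_A \, \nabla^2 A - A B^2 + f \, (1 - A) \right)
\qquad
B' = B + \left( D_B \, \nabla^2 B + A B^2 - (k + f) \, B \right)
```

where A and B are the two chemical concentrations (stored in the red and green channels), the Laplacian is computed from the 3×3 neighborhood, and the constants match those in the shader code: diffusion rates D_A = 0.9 and D_B = 0.18, feed rate f = 0.0545, kill rate k = 0.062.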
In Processing, an FBO is an object of the class PGraphics, and a shader can be applied to it via its shader() method.
Here’s the code:

PGraphics pong;
PShader diff;

void setup(){
  size(800, 800, P2D);
  pong = createGraphics(width, height, P2D);
  diff = loadShader("diffFrag.glsl");
  diff.set("u", 1.0/width);
  diff.set("v", 1.0/height);

  //Seed the buffer: chemical A (red) everywhere, a spot of B (green) in the middle;
  pong.beginDraw();
  pong.background(255, 0, 0);
  pong.fill(0, 255, 0);
  pong.ellipse(width/2, height/2, 10, 10);
  pong.endDraw();
}

void draw(){
  //Apply the diffusion shader to the buffer, drawing it onto itself;
  pong.beginDraw();
  pong.shader(diff);
  pong.image(pong, 0, 0);
  pong.endDraw();

  image(pong, 0, 0);
}

//// diffFrag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform float u;
uniform float v;

uniform sampler2D texture;

float laplaceA(in vec2 p, in float u, in float v){
  float A = 0.05 * texture2D(texture, p + vec2(-u, -v))[0] + 0.2 * texture2D(texture, p + vec2(0.0, -v))[0] + 0.05 * texture2D(texture, p + vec2(u, -v))[0] +
            0.2  * texture2D(texture, p + vec2(-u, 0.0))[0] - 1.0 * texture2D(texture, p)[0] + 0.2 * texture2D(texture, p + vec2(u, 0.0))[0] +
            0.05 * texture2D(texture, p + vec2(-u, v))[0] + 0.2 * texture2D(texture, p + vec2(0.0, v))[0] + 0.05 * texture2D(texture, p + vec2(u, v))[0];
  return A;
}

float laplaceB(in vec2 p, in float u, in float v){
  float B = 0.05 * texture2D(texture, p + vec2(-u, -v))[1] + 0.2 * texture2D(texture, p + vec2(0.0, -v))[1] + 0.05 * texture2D(texture, p + vec2(u, -v))[1] +
            0.2  * texture2D(texture, p + vec2(-u, 0.0))[1] - 1.0 * texture2D(texture, p)[1] + 0.2 * texture2D(texture, p + vec2(u, 0.0))[1] +
            0.05 * texture2D(texture, p + vec2(-u, v))[1] + 0.2 * texture2D(texture, p + vec2(0.0, v))[1] + 0.05 * texture2D(texture, p + vec2(u, v))[1];
  return B;
}

void main(){
  float A = texture2D(texture, vertTexCoord.st)[0];
  float B = texture2D(texture, vertTexCoord.st)[1];

  //Gray-Scott update: diffusion, reaction, feed and kill terms;
  float A_1 = A + (0.9 * laplaceA(vertTexCoord.st, u, v) - A * B * B + 0.0545 * (1.0 - A));
  float B_1 = B + (0.18 * laplaceB(vertTexCoord.st, u, v) + A * B * B - (0.062 + 0.0545) * B);

  gl_FragColor = vec4(A_1, B_1, 1.0, 1.0);
}


And here is an example:


Tip: try to change the numerical values in the definition of A_1 and B_1 in the fragment shader code.

*: A fragment shader technically deals with fragments rather than pixels.

Reactive applications, Shaders and all that

posted by on 2018.04.06, under Processing

We have already discussed the advantages of using shaders to create interesting visual effects. This time we will have to deal with fragment shaders *and* vertex shaders. In a nutshell, a vertex shader takes care of managing the vertices’ positions, colors, etc., which are then passed as “fragments” to the fragment shader for rasterization. “OMG, this is so abstract!!” Yeah, it is less abstract than it seems, but it nevertheless requires some know-how. As previously, I really suggest this: I find myself going back and forth to it regularly, always learning new things.
Good, so, what’s the plan? The main idea in the following code is to use a PShape object to encode all the vertices: we are basically making a star-shaped thing out of rectangles, which in 3D graphics parlance are referred to as “quads”. Once we have created such a PShape object, we will not have to deal with the positions of the vertices anymore: all the changes to the geometry will be handled by the GPU! Why is this exciting? Because the GPU is much, much faster at such things than the CPU. This allows in particular for real-time reactive fun. Indeed, the code gets input from the microphone and the webcam, separately. More precisely, each frame coming from the webcam is passed to the shader, to be used as a texture for each quad. On the other hand, the microphone audio is monitored, and its amplitude controls the variable t, which in turn controls the rotation (in Processing) and, more importantly, the jittering in the vertex shader. Notice that the fragment shader doesn’t do anything out of the ordinary here: it just applies a texture.
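The smoothing applied to t is just a one-pole low-pass filter: every frame, t moves a fixed fraction of the way toward the current amplitude. A quick sketch of the idea, in plain JavaScript for brevity (the smooth() helper is illustrative; the Processing code below uses a factor of 0.05):

```javascript
// One-pole smoothing: t chases the target value by a fixed fraction k per frame.
function smooth(t, target, k) {
  return t + (target - t) * k;
}

// With k = 0.5 and a constant input of 1.0, t halves its distance to 1 each frame:
var t = 0;
for (var i = 0; i < 3; i++) {
  t = smooth(t, 1.0, 0.5);
}
console.log(t); // 0.875
```

A small k gives slow, smooth reactions; k = 1 would follow the raw amplitude with no smoothing at all.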
Here’s how the code looks:

import processing.video.*;
import processing.sound.*;

Amplitude amp;
AudioIn in;

PImage back;
PShape mesh;
PShader shad;

float t = 0;
float omega = 0;
float rot = 0;

Capture cam;

void setup() {
  size(1000, 1000, P3D);

  //Set up audio
  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);

  //Set up webcam
  String[] cameras = Capture.list();
  cam = new Capture(this, cameras[0]);
  cam.start();

  mesh = createShape();
  shad = loadShader("Frag.glsl", "Vert.glsl");

  back = loadImage("back.jpg");

  //Generates the mesh;
  mesh.beginShape(QUADS);
  mesh.noStroke();
  mesh.textureMode(NORMAL); //Texture coordinates in [0, 1];

  for (int i = 0; i < 100; i++) {
    float phi = random(0, 2 * PI);
    float theta = random(0, PI);
    float radius = random(200, 400);
    PVector pos = new PVector(radius * sin(theta) * cos(phi), radius * sin(theta) * sin(phi), radius * cos(theta));
    float u = random(0.5, 1);

    //Set up the vertices of the quad with texture coordinates;
    mesh.vertex(pos.x, pos.y, pos.z, 0, 0);
    mesh.vertex(pos.x + 10, pos.y + 10, pos.z, 0, u);
    mesh.vertex(-pos.x, -pos.y, -pos.z, u, u);
    mesh.vertex(-pos.x - 10, -pos.y - 10, -pos.z, 0, u);
  }
  mesh.endShape();
}

void draw() {

  //Checks camera availability;
  if (cam.available() == true) {
    cam.read();
  }

  image(back, 0, 0); //Set a gradient background;

  translate(width/2, height/2, 0);
  rotateX(rot * 10 * PI/2);
  rotateY(rot * 11 * PI/2);

  shader(shad);
  shad.set("time", exp(t) - 1); //Calls the shader, and passes the variable t;

  mesh.setTexture(cam); //Use the camera frame as a texture;
  shape(mesh);

  t += (amp.analyze() - t) * 0.05; //Smoothens the variable t;

  omega += (t - omega) * 0.01; //Makes the rotation acceleration depend on t;

  rot += omega * 0.01;

  resetShader(); //Reset shader to display the background image;
}

// Frag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform float time;
uniform sampler2D texture;

void main(){
  gl_FragColor = texture2D(texture, vertTexCoord.st) * vertColor;
}

// Vert.glsl

uniform mat4 transform;
uniform mat4 modelview;
uniform mat4 texMatrix;

attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;

uniform float time;

void main() {
  gl_Position = transform * position;

  //Jitter the vertex position, driven by the audio amplitude via "time";
  gl_Position.x += sin(time * 2.0 * 3.145 * gl_Position.x) * 10.0;
  gl_Position.y += cos(time * 2.0 * 3.145 * gl_Position.y) * 10.0;

  vertColor = color;

  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
}

Notice the call to resetShader(), which allows the gradient background, loaded as an image, to be displayed without being affected by the shader program.
Here’s a render of it, recorded while making some continuous noise, a.k.a. singing.

Try it while listening to some music, it’s really fun!


Worlds

posted by on 2018.03.18, under Processing

Yesterday I went to the beautiful exhibition by Pe Lang at the Museum of Digital Art here in Zurich. The exhibition consists of several kinetic systems producing complex behaviours. I was particularly fascinated by a piece called “polarization”, where different disks with polarized filters provide very interesting visual patterns. Those who read this blog know that I am really into systems and their emergent features, so I was inspired to make the following piece, called “Worlds”. It is also an excuse to show how object-oriented programming lets you replicate a little “cosmos” over and over very quickly.
The idea is the following. We have discussed systems of particles which bounce around the canvas more than once, but we never gave the canvas its own ontological properties, a fancy way to say that we never considered the canvas to be an object itself. That’s precisely what is going on in the code below. Namely, there is a class World whose scope is to be the box in which the particles are bound to reside. It comes with a position vector for its center, a (half) length for the box itself, and a particle system. The bounce check is done internally to the class World, in the update() function, so as to make it behave like its own little universe. Once you have such a gadget, it’s immediate to replicate it over and over again! I arranged the boxes in a simple array, and I really like the visual effect that comes from it. I also did something else: inspired by statistical mechanics, each box has a “temperature”, which is influenced by how often the particles bounce on the walls of the box. The “hotter” the box, the more red it becomes. There is also a cooling factor: each box tends to cool down. So, after some time, the system reaches equilibrium, and each box stabilizes on a shade of red. This also shows something very nice, and at first counter-intuitive: there are boxes with a lot of particles which are nevertheless very slow, making the box very “cold”.
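The temperature dynamics can be simulated in isolation. Here is a toy version (plain JavaScript, with illustrative names; the clamp to 255 is my addition) showing how a box settles when the bounces outpace the cooling coefficient:

```javascript
// Toy model of a box "temperature": each wall bounce subtracts 1,
// each frame the cooling coefficient is added back; clamped to [0, 255].
// Lower temp means a redder box, since it is drawn with fill(255, temp, temp).
function stepTemp(temp, bounces, coeff) {
  temp -= bounces;
  if (temp < 0) temp = 0;
  temp += coeff;
  return Math.min(temp, 255);
}

var temp = 255;
for (var i = 0; i < 100; i++) {
  temp = stepTemp(temp, 3, 1.7); // 3 bounces per frame vs. cooling of 1.7
}
console.log(temp); // drifts down by 1.3 per frame: ends near 125
```

With fewer bounces per frame than the coefficient, temp instead stays pinned at 255 and the box remains white: that is the slow-but-crowded “cold” box described above.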
Here is the code

// Worlds
// Kimri 2018

ArrayList<World> boxes;
int n = 10;

void setup(){
  size(1000, 1000);
  init();
}

void draw(){
  background(255);

  for (int i = 0; i < boxes.size(); i++){
    World w = boxes.get(i);
    w.update();
    w.display();
  }
}

void init(){
  boxes = new ArrayList<World>();
  float step = width/n;

  //Generate the array of boxes;
  for (float x = step; x < width; x += step){
    for (float y = step; y < height; y += step){
      boxes.add(new World(x, y, step * 0.4));
    }
  }
}

void keyPressed(){
  init();
}

// World class

class World {
  PVector pos;
  int num;
  float len;
  float temp = 255;
  float coeff = 1.7;

  ArrayList<Particle> particles;

  World(float _x, float _y, float _len) {
    pos = new PVector(_x, _y);
    len = _len;
    num = int(random(10, 60));

    //Add particles to the box
    particles = new ArrayList<Particle>();
    for (int i = 0; i < num; i++) {
      float part_x = pos.x + random(-len, len);
      float part_y = pos.y + random(-len, len);
      particles.add(new Particle(new PVector(part_x, part_y)));
    }
  }

  World(float _x, float _y, float _len, int _num) {
    pos = new PVector(_x, _y);
    len = _len;
    num = _num;

    //Add particles to the box
    particles = new ArrayList<Particle>();
    for (int i = 0; i < num; i++) {
      float part_x = pos.x + random(-len, len);
      float part_y = pos.y + random(-len, len);
      particles.add(new Particle(new PVector(part_x, part_y)));
    }
  }

  void display() {
    fill(255, temp, temp, 90);
    stroke(0, 100);
    rectMode(CENTER); //pos is the center of the box;
    rect(pos.x, pos.y, 2 * len, 2 * len);

    for (int i = 0; i < num; i++) {
      particles.get(i).display();
    }
  }

  void update() {
    for (int i = 0; i < num; i++) {
      Particle p = particles.get(i);
      p.move();

      //Bounce off the walls; each bounce lowers temp, i.e. makes the box redder;
      if ((p.pos.x - pos.x) >= len - p.rad) {
        p.pos.x = pos.x + len - p.rad;
        p.vel.x = -p.vel.x;
        temp -= 1;
      }
      if ((p.pos.x - pos.x) <= -(len - p.rad)) {
        p.pos.x = pos.x - (len - p.rad);
        p.vel.x = -p.vel.x;
        temp -= 1;
      }
      if ((p.pos.y - pos.y) >= len - p.rad) {
        p.pos.y = pos.y + len - p.rad;
        p.vel.y = -p.vel.y;
        temp -= 1;
      }
      if ((p.pos.y - pos.y) <= -(len - p.rad)) {
        p.pos.y = pos.y - (len - p.rad);
        p.vel.y = -p.vel.y;
        temp -= 1;
      }
    }
    if (temp < 0) temp = 0;

    //Cooling factor;
    temp += coeff;
    if (temp > 255) temp = 255;
  }
}

//Particle class

class Particle {
  PVector pos;
  PVector vel;
  float rad = 2;

  Particle(PVector _pos) {
    pos = new PVector(_pos.x, _pos.y);
    vel = new PVector(random(-3, 3), random(-3, 3));
  }

  void move() {
    pos.add(vel);
  }

  void display() {
    fill(0, 100);
    ellipse(pos.x, pos.y, 2 * rad, 2 * rad);
  }
}
And here is how it looks

Glitch Art and Shaders

posted by on 2017.02.11, under Processing

It’s been a while since the last post. I have been busy (finally!) setting up a website to collect some of my works, and more or less finishing a couple of interactive installations. For this reason, interactivity and real-time processing have captured my attention recently. It turns out that when you want to interact with a piece of code that produces graphics, as soon as what you are doing involves more than just a couple of pairs of colored circles, you quickly run into performance issues. So, unless you are one of those digital artists who draw a blinking white circle in the middle of the screen and call it art (it’s fine, don’t worry, go on with it), you need to find your way around these issues. In practice, this amounts to getting comfortable with words like Vertex Buffer Object, C++, and shaders, to which this post is dedicated.
The story goes like this: modern graphics cards (GPUs) have a language they speak, called GLSL. For instance, when you draw a line or a circle in Processing, what actually happens behind the curtains is a communication between the CPU and the graphics card: Processing informs the GPU about the vertices of the line, the fact that it has to be a line, the color of the vertices, etc. There are several stages between when the vertices are communicated and the final result you see on your screen. Some of these stages are user-programmable, and the little programs that take care of each of these stages are called “shaders”. Shaders are notoriously difficult to work with: you have to program them in C, basically, and they are quite unforgiving with respect to errors in the code. On the other hand, they are really, really fast. If you want to know why that is, and how a (fragment) shader operates, give a look here.
So, why the hell would you want to learn such a tool? Well, if you, like me, are fond of glitch art, you must have realized that interactive real-time glitch art is almost impossible if you try to work pixel by pixel: even at a resolution of 800×600, the amount of computation the CPU needs to do to reach a framerate of 30fps is impractical. Enter fragment shaders! If you delegate the work to the GPU, it becomes more than doable.
I can’t go into the details of the code I present in the following, but there are very good tutorials on the web that slowly teach you how to tame shaders. In particular, give a look here. Rest assured: you really need to be programming-friendly, and to have a lot of patience, to work with shaders!

PImage img;
PShader glitch;

void setup(){
  size(800, 600, P2D);
  img = loadImage("insert_link_to_image");
  img.resize(800, 600);
  glitch = loadShader("glitchFrag.glsl");
  glitch.set("iResolution", new PVector(800.0, 600.0, 0.0));
}

void draw(){
  glitch.set("iGlobalTime", random(0, 60.0));

  //Apply the glitch shader only part of the time;
  if (random(0.0, 1.0) < 0.4){
    shader(glitch);
  } else {
    resetShader();
  }
  image(img, 0, 0);
}

// glitchFrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertTexCoord;
uniform sampler2D texture;
uniform vec3      iResolution;
uniform float     iGlobalTime;

float rand(vec2 co){
  return fract(cos(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main(){
  vec3 uv = vec3(0.0);
  vec2 uv2 = vec2(0.0);
  vec2 nuv = gl_FragCoord.xy / iResolution.xy;
  vec3 texColor = vec3(0.0);

  //Most of the time sample the image normally, otherwise sample it at a random scale;
  if (rand(vec2(iGlobalTime)) < 0.7){
    texColor = texture2D(texture, vertTexCoord.st).rgb;
  } else {
    texColor = texture2D(texture, nuv * vec2(rand(vec2(iGlobalTime)), rand(vec2(iGlobalTime * 0.99)))).rgb;
  }

  float r = rand(vec2(iGlobalTime * 0.001));
  float r2 = rand(vec2(iGlobalTime * 0.1));

  //Randomly displace the channels inside a horizontal band;
  if (nuv.y > rand(vec2(r2)) && nuv.y < r2 + rand(vec2(0.05 * iGlobalTime))){
    if (r < rand(vec2(iGlobalTime * 0.01))){
      if ((texColor.b + texColor.g + texColor.b)/3.0 < r * rand(vec2(0.4, 0.5)) * 2.0){
        uv.r -= sin(nuv.x * r * 0.1 * iGlobalTime) * r * 7000.0;
        uv.g += sin(vertTexCoord.y * vertTexCoord.x / 2.0 * 0.006 * iGlobalTime) * r * 10.0 * rand(vec2(iGlobalTime * 0.1));
        uv.b -= sin(nuv.y * nuv.x * 0.5 * iGlobalTime) * sin(nuv.y * nuv.x * 0.1) * r * 20.0;
        uv2 += vec2(sin(nuv.x * r * 0.1 * iGlobalTime) * r);
      }
    }
  }

  texColor = texture2D(texture, vertTexCoord.st + uv2).rgb;
  texColor += uv;
  gl_FragColor = vec4(texColor, 1.0);
}

In the following, you can see the result applied to a famous painting by Caravaggio (yes, I love Caravaggio): it runs at real-time framerate.
If you want to apply the shader to the webcam, you just need to set up a Capture object, called, say, cam, and substitute img with cam in the Processing code. Enjoy some glitching! :)

Glitch Shader from kimri on Vimeo.

n-grams, Markov chains and text generators

posted by on 2016.11.03, under Processing, Supercollider

An n-gram is a contiguous sequence of n elements of a text, where each element can be a phoneme, a syllable, a character, a word, etc. Given a text, the collection of its n-grams allows us to infer some statistical correlations, and moreover to assemble the n-grams collected into a Markov chain. Right, what’s a Markov chain, then? A Markov chain describes a process in which the next step depends probabilistically only on the current step. For instance, a random walk is an example of a Markovian process. One way to assign a Markov chain to a text is to collect all its n-grams, and for each n-gram keep track of the n-grams that follow it. We then walk through the collection: at each step we choose randomly among the continuations of the current n-gram, form the new n-gram, and proceed. Confusing? Let’s see an example. Suppose we have the text “this is a book and this is my pen”, and suppose we are interested in 2-grams, where a single gram is a word. We then have the pair (this, is), the pair (is, a), etc. For each 2-gram, we keep track of all the 1-grams which can follow it: for instance, after (this, is) we can have (a) or (my), and we assign to each of them an equal probability. Suppose we start from (this, is): if we happen to choose (a), we form the pair (is, a), after which must come (a, book), and so on, until we reach the end of the text. In this way, we can generate a text which has a similar statistical distribution of n-grams, in this case pairs of words. The greater n, the closer the generated text will be to the original one.
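To make the bookkeeping concrete, here is a minimal sketch in plain JavaScript (the function name is illustrative) that builds the table of 2-grams and their continuations for the example sentence above:

```javascript
// Build a table mapping each order-n word gram to the list of words
// that follow it in the text.
function buildNgrams(text, order) {
  var words = text.split(" ");
  var ngrams = {};
  for (var i = 0; i < words.length - order; i++) {
    var gram = words.slice(i, i + order).join(" ");
    if (!ngrams[gram]) ngrams[gram] = [];
    ngrams[gram].push(words[i + order]);
  }
  return ngrams;
}

var table = buildNgrams("this is a book and this is my pen", 2);
console.log(table["this is"]); // the two continuations seen in the text: "a" and "my"
```

Sampling uniformly from `table[current]` at each step is exactly the Markov-chain walk described above.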
Inspired by this, I have written some code in p5.js, a JavaScript library, that generates text starting from n-grams in words. Here “word” just means “group of characters separated by a whitespace”. Punctuation is not considered from the grammatical point of view, nor are the roles of articles, nouns, etc. analysed; nevertheless, the results are still quite interesting. The choice of JavaScript is dictated by the fact that Processing/Java is not very friendly with dictionaries, and in this case they are really useful.
Here’s the code

var words;
var ngrams = {};
var order = 2;
var txt = "";
var a = 0.1;
var target = 255;

function setup() {
  createCanvas(600, 600);

  words = source.split(" ");

  //Collect the n-grams and their continuations;
  for (var i = 0; i < words.length - order; i++){
    gram_temp = [];
    for (var j = 0; j < order; j++){
      gram_temp.push(words[i + j]);
    }
    gram = join(gram_temp, " ");
    if (!ngrams[gram]){
      ngrams[gram] = [];
    }
    if (i < words.length - order){
      ngrams[gram].push(words[i + order]);
    }
  }

  markovIt(ngrams);
  txt = spText(txt);
}

function draw() {
  background(255);
  a += (target - a) * 0.1;
  fill(0, a);
  textDisplay(txt);
  if (a < 0.099){
    restart();
  }
}

function restart(){
  markovIt(ngrams);
  txt = spText(txt);
  a = 0;
  target = 255;
}

function textDisplay(ttext){
  text(join(ttext, "."), 100, 60, width - 100, height - 60);
}

function spText(txt){
  return txt.split(".");
}

function mousePressed(){
  target = 0;
}

function markovIt(ngrams) {
  //Start from a random n-gram;
  var index = int(random(0, words.length - order + 1));
  curr_temp = [];
  for (var j = 0; j < order; j++){
    curr_temp.push(words[index + j]);
  }
  current = join(curr_temp, " ");
  result = current;
  if (!ngrams[current]){
    return null;
  }
  var range = int(random(30, 500));
  for (var i = 0; i < range; i++){
    if (!ngrams[current]){
      break;
    }
    possibilities = ngrams[current];
    if (possibilities.length == 0){
      break;
    }
    //Choose randomly among the continuations, and form the new n-gram;
    next = possibilities[int(random(0, possibilities.length))];
    result = result + " " + next;
    tokens = result.split(" ");
    curr_temp = [];
    for (var j = order; j > 0; j--){
      curr_temp.push(tokens[tokens.length - j]);
    }
    current = join(curr_temp, " ");
  }
  txt = result;
}

Notice that you need to declare a variable source, which should contain the input text.
As a little homage to Noam Chomsky, a pioneer of grammar studies (and much more), here you can find a working version of the code above using 2-grams in words, and based on this. Click on the canvas to generate some new text.

Non-Real Time Analysis and Rendering

posted by on 2016.08.01, under Processing

This is a “utility” post dealing with non-real-time analysis, which I hope can help someone who has struggled with these issues before.
Here’s the setting. You have a nice sketch in Processing which reacts to an external input, say music or the mouse position, and you want to render it to show the world how fun and joyful the time spent coding can be. You then realize that using the function saveFrame() creates problems: each single frame takes too long to save to disk, and everything goes horribly out of sync. In this case it is convenient to have a sketch that retrieves the data needed frame by frame, say the frequency spectrum of a piece of audio. One can later load this data and render it via saveFrame(), knowing that when the frames are reassembled at the prescribed framerate, everything will be in sync.
The following code does exactly that. It uses a class called Saver, which in this case keeps track of the frequency spectrum. Under the more than reasonable assumption that the Fast Fourier Transform done by the Minim library is computed in less than 1/30 of a second, the data you retrieve for each frame will be in sync with the audio. You can then go to the sketch which visualizes this data, load the values saved in the .txt file, and use them anywhere you would use the array of frequencies. It should be immediate to adapt this piece of code to your needs. To save to disk, press any key.
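The sync argument boils down to simple arithmetic: if the sketch stores exactly one spectrum per draw() call, then line k of the file corresponds to frame k of the render, i.e. to audio time k / fps seconds. A sketch of that bookkeeping (plain JavaScript, illustrative name):

```javascript
// One analysis row is written per frame, so recovering the row for a
// given audio time is just a floor division by the frame duration.
function rowForTime(seconds, fps) {
  return Math.floor(seconds * fps);
}

console.log(rowForTime(2.5, 30));    // audio at 2.5 s -> row 75
console.log(rowForTime(0.0333, 30)); // just under one frame -> still row 0
```

This is also why the analysis sketch below doesn't need to run at any particular framerate: only the one-row-per-frame invariant matters.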

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer song;
FFT fft;

String songPath = "InsertPathToSongHere";

Saver mySaver;

boolean saved = false;
boolean pressed = false;

void setup() {
  size(200, 200);

  minim = new Minim(this);
  song = minim.loadFile(songPath, 1024);
  song.play();

  fft = new FFT(song.bufferSize(), song.sampleRate());
  mySaver = new Saver(fft.specSize(), "data/analysis.txt");
}

void draw() {
  fft.forward(song.mix);

  //Store the whole spectrum for this frame;
  for (int i = 0; i < fft.specSize(); i++) {
    float a = fft.getBand(i);
    mySaver.setElement(a);
  }
  mySaver.update();

  if (pressed & !saved) {
    mySaver.write();
    saved = true;
  }
}

void keyPressed() {
  pressed = true;
}

//Define the Saver class

class Saver {

  int buffSize;
  String pathTosave;
  ArrayList data;
  int arraylength;

  Saver(int _buffSize, String _pathTosave) {
    buffSize = _buffSize;
    pathTosave = _pathTosave;
    arraylength = 0;
    data = new ArrayList();
  }

  void setElement(float a) {
    data.add(a);
  }

  //Called once per frame: counts how many frames have been stored;
  void update() {
    arraylength++;
  }

  //Writes one line per frame, with the spectrum values separated by spaces;
  void write() {
    String[] dataString = new String[arraylength];
    int index;
    for (int i = 0; i < dataString.length; i++) {
      String temp = "";
      for (int j = 0; j < buffSize; j++) {
        index = i * buffSize + j;
        if (j == buffSize - 1){
          temp += (float) data.get(index);
        } else {
          temp += (float) data.get(index) + " ";
        }
      }
      dataString[i] = temp;
    }
    saveStrings(pathTosave, dataString);
    println("Data saved!");
  }
}
This technique was inspired by something I’ve read somewhere; I will try to retrieve it and point to it here. 😉

Abstract expressionism: a generative approach

posted by on 2016.06.28, under Processing

I have always been fascinated by abstract expressionism, and in particular by the work of Jackson Pollock. The way paint, gravity and artistic vision play together has always been, for me, very representative of that tension between chaos and structural patterns one often finds in art.
So, here is a little homage to the drip-painting style of Pollock. The Processing code is not clean enough to be useful, and I don’t think I understand exactly what it does yet (yes, it happens more than often that what I code is a surprise to me!). Let me say that it incorporates many of the topics discussed in this blog: object-oriented programming, noise fields, etc. I’ll update the post when I (hopefully) get it cleaned up.
Meanwhile, enjoy. 😉

Textural Terrain Generation

posted by on 2016.06.22, under Processing

This post is inspired by some of the stuff Daniel Shiffman has on his YouTube channel.
The idea is based on the use of meshes and textures. So, what’s a mesh and what’s a texture?
A mesh is a collection of vertices, edges and faces that are used in 3D computer graphics to model surfaces or solid objects. If you are familiar with the mathematical notion of a triangulation, you are more or less in business: even though the faces are not necessarily triangles in general, the analogy works quite well. In Processing there is a nice and quick way to generate a triangular mesh, via beginShape()/endShape(), which is what I have used in the code below. One starts with a grid (check out some earlier posts for rants about grids), and from the collection of points of the grid Processing will then build a triangular mesh*. This is achieved via the TRIANGLE_STRIP mode: we only need to specify the vertices (though in a precise order), and they will be connected via triangular shapes. Very cool.

Ok, we have a ton of little triangles which assemble into a huge square: what do we do with it? Here comes the notion of a texture map. The idea is very simple: we have an image, and we want to “glue” it to a face of the mesh. Once it is glued to a face, the image will follow that face: for instance, if we rotate the face, the image stuck to it rotates as well! Now, you should know that mapping textures onto a complicated surface is something of an art, but in our case it is pretty easy, since the surface is just a flat square. To achieve this gluing, we have to define some anchor points in the image. In other words, we have to specify how points in the original image are associated to vertices in the mesh. The double loop in the code below does exactly this: the last two parameters of the vertex() function specify the gluing.
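The TRIANGLE_STRIP ordering can be puzzling at first: for each grid row we emit vertices alternating between row y and row y+1, and every vertex after the first two closes a triangle with the previous two. A little sketch of the ordering (plain JavaScript, illustrative name):

```javascript
// Emit the vertex order of one TRIANGLE_STRIP row of a grid, scaled by scl:
// (x, y), (x, y+1), (x+1, y), (x+1, y+1), ...
function stripRow(y, cols, scl) {
  var verts = [];
  for (var x = 0; x < cols + 1; x++) {
    verts.push([x * scl, y * scl]);
    verts.push([x * scl, (y + 1) * scl]);
  }
  return verts;
}

console.log(stripRow(0, 2, 30));
// 2 * (cols + 1) = 6 vertices, which the strip turns into 2 * cols = 4 triangles.
```

This is exactly why the inner loop in the sketch below pushes two vertices per column, one on the current row and one on the next.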
If we had halted our imagination here, we would end up with something very static: an image attached to a square. Meh. Here comes the simple but interesting idea: since we are in 3D, we can modify the z-coordinate of the vertex at point (x, y) with a function on the plane, in this case Perlin noise. If we rotate our system of coordinates about the x-axis, little mountains start to appear. Nice! Still, there is no movement yet. To achieve a movement effect, we can shift the noise offset slightly at each frame, so that the new z-coordinate at point (x, y) takes the value the z-coordinate had at a previous point of the grid, creating a scrolling effect. In the code, I have also decided to control the offset of the z-coordinate with a variable r, whose minimum value is 0 and which gets triggered randomly. Notice that I have allowed some “release” time for r, to achieve some smoothness: in this way you obtain a nice beating feeling. Instead of doing it randomly, what happens if you trigger r via a beat detector listening to some music (using the Sound or Minim library, say)? Yep, you get a nice music visualizer. :)
The last couple of things I added are to “move along the image”, and to use tint() instead of clearing the screen. The first is achieved via the variable speed2: at each new frame, we don’t glue the image to the mesh in exactly the same way, but translate it a bit in the y-direction.
Oh, I’ve also used more than one image, to get a change in the color palette.
Here’s the code

int w = 1900;
int h = 800;
int cols;
int rows;
int scl = 30;

PImage[] img = new PImage[3];
PImage buff;

float speed = 0.01;
int speed2 = 0;
float r = 0;

void setup() {
  size(1200, 720, P3D);

  //Load the images we are using as textures;
  img[0] = loadImage("path_to_image1");
  img[1] = loadImage("path_to_image2");
  img[2] = loadImage("path_to_image3");

  for (int i = 0; i < img.length; i++) {
    img[i].resize(1900, 2000);
    img[i].filter(BLUR, 0.6);
  }
  buff = img[0];

  cols = w / scl;
  rows = h / scl;
}

void draw() {

  //Triggers a "beat";
  if ((random(0.0, 1.0) < 0.05) & (r < 0.1)) {
    r = 8.0;
  }

  //This allows for some release time;
  if (r > 0.01) {
    r *= 0.95;
  }

  //From time to time, choose another texture image;
  if (random(0.0, 1.0) < 0.008) {
    int i = int(random(0, 3));
    buff = img[i];
  }

  float yoff = speed;
  speed -= 0.03;
  if (frameCount % 2 == 0) {
    speed2 += 1;
  }
  speed2 = speed2 % 60;

  translate(width/2, height/8);
  translate(-w/2 + sin(frameCount * 0.003) * 20, -h/2, -450); //The sin function allows for some left/right shift
  rotateX(PI/3); //Tilt the plane towards the viewer (the original angle isn't shown in the post; PI/3 is a guess);
  noStroke();

  //Building the mesh, one triangle strip per row;
  for (int y = 0; y < rows; y++) {
    tint(255, 14);
    beginShape(TRIANGLE_STRIP);
    texture(buff); //Using the chosen image as a texture;
    float xoff = 0;
    for (int x = 0; x < cols + 1; x++) {
      vertex(x * scl, y * scl, map(noise(xoff, yoff), 0, 1.0, -60, 60) * (r + 2.9), x * scl, ((y + speed2) % 2000) * scl);
      vertex(x * scl, (y + 1) * scl, map(noise(xoff, yoff + 0.1), 0, 1.0, -60, 60) * (r + 2.9), x * scl, ((y + 1 + speed2) % 2000) * scl);
      xoff += 0.1;
    }
    endShape();
    yoff += 0.1;
  }
}
Here’s how it looks

Very floaty and cloud-like, no? 😉

Exercise 1: instead of loading images, use the frames of a video as textures.
Exercise 2: instead of controlling the z-coordinate of a point (x,y) via a Perlin noise function, use the brightness at (x,y) of the texture image obtained in Exercise 1.
Exercise 3: Enjoy. :)

*One should really think of a mesh as an object which stores information about vertices, edges, etc., while we are here concerned only with displaying a deformed grid.