Non-Real Time Analysis and Rendering

posted by on 2016.08.01, under Processing

This is a “utility” post dealing with Non-Real Time analysis, which I hope can help someone who has struggled with these issues before.
Here’s the setting: you have a nice sketch in Processing which reacts to an external input, say music or the mouse position, and you want to render it to show the world how fun and joyful time spent coding can be. You then realize that using the function saveFrame() creates problems: each single frame takes too long to save to disk, and everything goes horribly out of sync. In this case it is convenient to have a sketch that retrieves the data needed frame by frame, say the frequency spectrum of a piece of audio. One can later load this data and render it via saveFrame(), knowing that when the frames are reassembled at the prescribed framerate, everything will be in sync.
The following code does exactly that. It uses a class called Saver, which in this case keeps track of the frequency spectrum. Under the more than reasonable assumption that the Fast Fourier Transform done by the Minim library is computed in less than 1/30 of a second, the data you retrieve for each frame will be in sync with the audio. You can then go to the sketch which visualizes this data, load the values saved in the .txt file, and use them anywhere you would use the array of frequencies, say. It should be straightforward to adapt this code to your needs. To save to disk, press any key.

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer song;
FFT fft;

String songPath = "InsertPathToSongHere";

Saver mySaver;

boolean saved = false;
boolean pressed = false;

void setup() {
  size(200, 200);
  background(0);

  minim = new Minim(this);
  song = minim.loadFile(songPath, 1024);

  frameRate(30);

  song.play();

  fft = new FFT(song.bufferSize(), song.sampleRate());
  mySaver = new Saver(fft.specSize(), "data/analysis.txt");
 
}

void draw() {
  background(0);

  fft.forward(song.mix);

  for (int i = 0; i < fft.specSize(); i++) {
   float a = fft.getBand(i);
   mySaver.setElement(a);
  }

  mySaver.update();

  if (pressed && !saved) {
    mySaver.write();
    saved = true;
  }
}

void keyPressed() {
  pressed = true;
}

//Define the Saver class

class Saver {

  int buffSize;
  String pathToSave;
  ArrayList<Float> data; //One long list: frame after frame of spectrum values
  int arrayLength;       //Number of frames stored so far

  Saver(int _buffSize, String _pathToSave) {
    buffSize = _buffSize;
    pathToSave = _pathToSave;
    arrayLength = 0;
    data = new ArrayList<Float>();
  }

  void setElement(float a) {
    data.add(a);
  }

  void update() {
    arrayLength++;
  }

  void write() {
    String[] dataString = new String[arrayLength];
    int index;

    //Each frame becomes one line of space-separated values
    for (int i = 0; i < dataString.length; i++) {
      String temp = "";
      for (int j = 0; j < buffSize; j++) {
        index = i * buffSize + j;
        if (j == buffSize - 1) {
          temp += data.get(index);
        } else {
          temp += data.get(index) + " ";
        }
      }
      dataString[i] = temp;
    }

    saveStrings(pathToSave, dataString);
    println("Data saved!");
  }
}
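On the rendering side, each line of the saved .txt file holds one frame of spectrum values separated by spaces, so loading it back amounts to splitting each line and parsing floats (in a Processing sketch you would typically use loadStrings() and splitTokens() for this). Here is the parsing step isolated in plain Java, with parseFrame being a name of my own choosing:

```java
public class ParseFrame {
    // Parse one line of space-separated spectrum values back into floats.
    static float[] parseFrame(String line) {
        String[] tokens = line.trim().split(" ");
        float[] bands = new float[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            bands[i] = Float.parseFloat(tokens[i]);
        }
        return bands;
    }

    public static void main(String[] args) {
        // One saved frame with three frequency bands.
        float[] bands = parseFrame("0.5 1.25 3.0");
        System.out.println(bands.length); // 3
        System.out.println(bands[1]);     // 1.25
    }
}
```

In the visualizing sketch, the array returned for frame i plays the role of the FFT band amplitudes at that frame.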

This technique was inspired by something I read somewhere; I’ll try to track down the source and link it here. 😉

Abstract expressionism: a generative approach

posted by on 2016.06.28, under Processing

I have always been fascinated by abstract expressionism, and in particular the work of Jackson Pollock. The way paint, gravity and artistic vision play together has always been, for me, very representative of that tension between chaos and structural patterns one often finds in art.
So, here is a little homage to the drip-painting style of Pollock. The Processing code is not clean enough to be useful, and I don’t think I understand exactly what it does yet (yes, it happens more often than not that what I code is a surprise to me!). Let me say that it incorporates many of the topics discussed on this blog: object-oriented programming, noise fields, etc. I’ll update the post when I (hopefully) get it cleaned up.
Meanwhile, enjoy. 😉

Textural Terrain Generation

posted by on 2016.06.22, under Processing

This post is inspired by some of the stuff Daniel Shiffman has on his YouTube channel.
The idea is based on the use of meshes and textures. So, what’s a mesh and what’s a texture?
A mesh is a collection of vertices, edges and faces that are used in 3D computer graphics to model surfaces or solid objects. If you are familiar with the mathematical notion of a triangulation, you are more or less in business: even though the faces are not necessarily triangles in general, the analogy works quite well. In Processing there is a nice and quick way to generate a triangular mesh, via beginShape()/endShape(), which is what I have used in the code below. One starts with a grid (check out some earlier posts for rants about grids), and from the collection of points of the grid Processing will then build a triangular mesh*. This is achieved via the TRIANGLE_STRIP mode: we only need to specify the vertices (though in a precise order), and they will be connected via triangular shapes. Very cool.
OK, we have a ton of little triangles assembling into a huge square: what do we do with this? Here comes the notion of a texture map. The idea is very simple: we have an image, and we want to “glue” it to a face of the mesh. Once it is glued, the image will follow the face: for instance, if we rotate the face, the image stuck to it rotates as well! Now, you should know that mapping textures onto a complicated surface is an art in itself, but in our case it is pretty easy, since the surface is just a flat square. To achieve this gluing, we have to define some anchor points in the image, i.e. we have to specify how points of the original image are associated to vertices of the mesh. The double loop in the code below does exactly this: the last two parameters of the vertex() function specify the gluing.
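The TRIANGLE_STRIP ordering can be made concrete with a few lines of plain Java (the names here are mine, not Processing's): for each strip we emit vertices alternating between row y and row y+1, and every consecutive triple of emitted vertices forms one triangle, so 2·(cols+1) vertices yield 2·cols triangles per row.

```java
import java.util.ArrayList;
import java.util.List;

public class StripOrder {
    // Emit the (col, row) index pairs for one TRIANGLE_STRIP row of a grid.
    static List<int[]> stripRow(int y, int cols) {
        List<int[]> verts = new ArrayList<>();
        for (int x = 0; x < cols + 1; x++) {
            verts.add(new int[] {x, y});     // vertex on the current row
            verts.add(new int[] {x, y + 1}); // vertex on the next row
        }
        return verts;
    }

    public static void main(String[] args) {
        List<int[]> verts = stripRow(0, 3);
        // A strip of n vertices produces n - 2 triangles.
        System.out.println(verts.size());     // 8 vertices
        System.out.println(verts.size() - 2); // 6 triangles
    }
}
```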
If we had halted our imagination here, we would end up with something very static: an image attached to a square. Meh. Here comes a simple but interesting idea. Since we are in 3D, we can modify the z-coordinate of the vertex at point (x, y) with a function of the plane; in this case, the function used is Perlin noise. If we rotate our system of coordinates about the x-axis, little mountains start to appear. Nice! Still, there is no movement yet. To achieve a movement effect, we can increment the y-coordinate slightly at each frame, so that the new z-coordinate at (x, y) is the value the z-coordinate had at a previous point of the grid, which produces the scrolling effect. In the code, I have furthermore decided to control the offset of the z-coordinate with a variable r whose minimum value is 0, and which gets triggered randomly. Notice that I have also allowed some “release” time for r, so as to achieve some smoothness: in this way you obtain a nice beating feeling. Instead of doing it randomly, what happens if you trigger r via a beat detector listening to some music (using the Sound or Minim library, say)? Yep, you get a nice music visualizer. :)
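The trigger-and-release behaviour of r can be isolated into a tiny envelope: a “beat” resets it to 8.0, and each frame it decays multiplicatively, which is what smooths things out. A plain-Java sketch of this logic (the step function is my own framing of it):

```java
public class Envelope {
    // One frame of the envelope: decay toward zero unless re-triggered.
    static float step(float r, boolean trigger) {
        if (trigger) return 8.0f;  // a "beat" resets the envelope
        if (r > 0.01f) r *= 0.95f; // exponential release
        return r;
    }

    public static void main(String[] args) {
        float r = step(0f, true);             // trigger a beat
        for (int i = 0; i < 30; i++) {
            r = step(r, false);               // thirty frames of release
        }
        // After 30 frames, r has decayed (8 * 0.95^30 ≈ 1.72) but is not zero,
        // which is exactly the smooth tail you see on screen.
        System.out.println(r > 1.0f && r < 8.0f); // true
    }
}
```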
Last couple of things I added is to “move along the image”, and using tint() instead of clearing the screen. The first one is achieved via the variable speed: basically, at each new frame, we don’t glue the image to the mesh in the exact same way, but we translate it a bit in the y-direction.
Oh, I’ve also used more than one image, to get a change in the color palette.
Here’s the code

int w = 1900;
int h = 800;
int cols;
int rows;
int scl = 30;

PImage[] img = new PImage[3];
PImage buff;

float speed = 0.01;
int speed2 = 0;
float r = 0;


void setup() {
  size(1200, 720, P3D);
  background(0);

  //Load the images we are using as textures;
  img[0] = loadImage("path_to_image1");
  img[1] = loadImage("path_to_image2");
  img[2] = loadImage("path_to_image3");

  for (int i = 0; i < img.length; i++) {
    img[i].resize(1900, 2000);
    img[i].filter(BLUR, 0.6);
  }
  buff = img[0];

  noStroke();

  cols = w / scl;
  rows = h / scl;
}


void draw() {

//Triggers a "beat";
  if ( (random(0.0, 1.0) < 0.05) && (r < 0.1) ) {
    r = 8.0;
  }

  //This allows for some release time;

  if (r > 0.01) {
    r *= 0.95;
  }


  //From time to time, choose another texture image;
  if (random(0.0, 1.0) < 0.008) {
    int i = int(random(0, 3));
    buff = img[i];
  }


  float yoff = speed;
  speed -= 0.03;
  if (frameCount%2 == 0) {
    speed2 += 1;
  }
  speed2 = speed2 % 60;

  translate(width/2, height/8);
  rotateX(PI/3);
  translate(-w/2 + sin(frameCount * 0.003)* 20, -h/2, -450); //The sin function allows for some left/right shift

  //Building the mesh

  for (int y = 0; y < rows; y++) {
    beginShape(TRIANGLE_STRIP);
    tint(255, 14);
    texture(buff); //Using the chosen image as a texture;
    float xoff = 0;
    for (int x = 0; x < cols + 1; x++) {
      vertex(x * scl, y * scl, map(noise(xoff, yoff), 0, 1.0, -60, 60) * (r + 2.9), x * scl, (y + speed2) % 2000 * scl);
      vertex(x * scl, (y + 1) * scl, map(noise(xoff, yoff + 0.1), 0, 1.0, -60, 60) * (r + 2.9), x * scl, ((y + 1 + speed2) % 2000) * scl);
      xoff += 0.1;
    }
    endShape();
    yoff += 0.1;
  }
}

Here’s how it looks

Very floaty and cloud-like, no? 😉

Exercise 1: instead of loading images, use the frames of a video as textures.
Exercise 2: instead of controlling the z-coordinate of a point (x,y) via a Perlin noise function, use the brightness at (x,y) of the texture image obtained in Exercise 1.
Exercise 3: Enjoy. :)

*One should really think of a mesh as an object which stores information about vertices, edges, etc.; here, though, we are only concerned with displaying a deformed grid.

Mondrian and recursion

posted by on 2016.05.03, under Processing

As I have mentioned more than once on this blog, one shouldn’t abuse object-oriented programming: a lot of beautiful art is still generated with just a few lines of code. I hope the following Processing sketch, inspired by the geometric minimalism of Piet Mondrian, falls into this category.

color[] cols;
int num = 5;

void setup() {
  size(600, 800);
  background(0);
  cols = new color[num];

  for (int i = 0; i < num; i++) {
    cols[i] = color(int(random(0, 255)), int(random(0, 255)), int(random(0, 255)));
  }

  strokeWeight(1.5);
  for (int i = 0; i < 10; i++) {
   // rotate(i * 0.1 * 2 * PI);
    Mondrian(0, width, height);
  }
}

void draw() {

}

void mousePressed() {
  fill(255, 255);
  rect(0, 0, width, height);

  for (int i = 0; i < 10; i++) {
    for (int j = 0; j < num; j++) {
      cols[j] = color(int(random(0, 255)), int(random(0, 255)), int(random(0, 255)));
    }
   
//    rotate(i * 0.1 * 2 * PI);
    Mondrian(0, width, height);
   
  }
}


void Mondrian(float x, float len1, float len2) {
  if (len1 > 20 && len2 > 20) {
    pushMatrix();
    translate(x, 0);
    color c1 = cols[int(random(0, cols.length))];

    if (random(0.0, 1.0) < 0.5) {
      float x_new = random(0, len1 * 0.5);
      fill(c1, 255);
      rect(0, 0, x_new, len2);
      Mondrian(x_new, len1 - x_new, len2);
    } else {
      float len_new = random(0, len2);
      fill(c1, 255);
      rect(0, 0, len1, len_new);
      Mondrian(0, len1, len_new);
    }
    popMatrix();
  }
}

It is essentially based on recursion, and it exploits again the idea of a grid, although in a conceptually different way from the previous post.
If you run the code, you should get something like this


I suggest you uncomment the line which contains the rotate call. :)
Also, here’s an exercise: modify the function Mondrian() so that the same color is never chosen in two successive iterations.
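One possible approach to this exercise, sketched in plain Java with names of my own choosing: remember the previously chosen index and redraw until you get a different one.

```java
import java.util.Random;

public class ColorPicker {
    // Pick a random index in [0, n) that differs from prev.
    static int pickDifferent(int prev, int n, Random rng) {
        int next;
        do {
            next = rng.nextInt(n);
        } while (next == prev);
        return next;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // seeded for reproducibility
        int prev = 0;
        for (int i = 0; i < 1000; i++) {
            int next = pickDifferent(prev, 5, rng);
            if (next == prev) throw new AssertionError("successive repeat!");
            prev = next;
        }
        System.out.println("no successive repeats in 1000 draws");
    }
}
```

In the sketch itself, prev would simply be a field remembering the index used by the last call to Mondrian().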
Remark: In his neoplastic paintings, Mondrian used mostly primary colors and white, with white being usually predominant. Moreover, he used thicker lines. Try to modify the code above to achieve the same effect.

Grids: an object oriented approach

posted by on 2016.04.07, under Processing

I have been working recently on a small audio-visual installation based essentially on grid manipulations, inspired by the art of the amazing Casey Reas.
Grids are some of the first interesting repetitive structures one learns to build. For instance, a one-parameter function which displays a regular grid of rectangles maximising the surface of the screen might look like this

void grid(int n){
  //Use float division: width/n with two ints would silently truncate
  float stepx = width / float(n);
  float stepy = height / float(n);

  rectMode(CENTER);
  noStroke();
  fill(255);

  for (float x = 0; x < width; x += stepx){
    for (float y = 0; y < height; y += stepy){
      rect(x + stepx/2.0, y + stepy/2.0, 20, 20);
    }
  }
}

It will produce a grid of equally spaced white rectangles (so better set the background to black beforehand in your code). This is all fun and cosy for a couple of milliseconds, but if you are like me, you will immediately look for possibilities of exploration, and honestly the bit of code above doesn’t offer much. Let’s be precise: it *can* offer a lot of ways of tweaking, but they become cumbersome very, very quickly. Also, I like to write code which is conceptually structural and modular, and which allows me to separate more clearly the “data holding” part from its representation. I find that this does a lot for the sense of surprise I get from playing with my newly created tool/toy. For instance, all that white is boring: how about we assign a different color to each rectangle? Not randomly though, which would be very cheap. Let’s use an image as a palette, and have each rectangle carry the color of the image pixel at the center of the rectangle. This can be attained by introducing the following line

fill(img.pixels[x + y * width]);

where img is the PImage variable holding your image (which I assume you have resized to the screen width and height, unless you have a fetish for ArrayIndexOutOfBoundsException messages :) ). Nice. But I guess you still can’t see my point, of course. Okay, let’s do the same but with a video. Now the image changes at each frame, so the function grid() must be called in the animation loop, i.e. inside draw(). Here you can see my point: by proceeding in this way, you are doing a certain amount of redundant computation. Indeed, the only thing you would like to modify in this case is how each node of the grid is represented, and *not* the grid structure itself.
It helps, then, to think of a grid as a data holder for the positions of its nodes, which, once we require some extra properties, are directly determined by a single integer n, the number of nodes per row and column. What we draw on the screen is a representation of this bunch of information. Any time you have something that behaves like a collection of data, object-oriented programming (OOP) is not far from sight. Building objects (or rather classes) is kind of an art: extracting the relevant properties we want in our objects, so that further manipulation becomes reactive and enjoyable, is not an easy task. This is to say that there are some basic rules in object-oriented programming, but for artistic reasons we can (and should) ignore many standard design patterns, and look at each case separately. Since I like thinking in terms of objects when programming, I feel the following warning is due: do *not* overuse OOP! There are fantastic pieces of algorithmic/generative art which do not use objects at all. The point is that you run the risk of decomposing the problem into its atomic parts, which has a certain intellectual appeal, no doubt, but then getting lost when it comes to exploration and tweaking.
So, case by case, see what works best and, most importantly, what puts you in that spot where you can still get surprised. This said, let’s move on.
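As a quick aside, the lookup img.pixels[x + y * width] used above is the standard row-major flattening of 2D pixel coordinates into the 1D pixels array. A tiny plain-Java illustration (the helper name is mine):

```java
public class PixelIndex {
    // Row-major flattening: index of pixel (x, y) in a w-pixel-wide image.
    static int index(int x, int y, int w) {
        return x + y * w;
    }

    public static void main(String[] args) {
        int w = 640;
        System.out.println(index(0, 0, w));   // 0: top-left pixel
        System.out.println(index(639, 0, w)); // 639: end of the first row
        System.out.println(index(0, 1, w));   // 640: start of the second row
    }
}
```

This is also why the image must match the sketch dimensions: for an image narrower than the screen, x + y * width runs past the end of pixels.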
So, we can model a grid with the following Grid class

class Grid {
  int n;
  float[] posx;
  float[] posy;
  float offsetx;
  float offsety;
  Shape[] shapes = {};
 
  Grid(int _n){
    n = _n;
    int stepx = width/n;
    int stepy = height/n;
    offsetx = stepx/2.0;
    offsety = stepy/2.0;
    posx = new float[n];
    posy = new float[n];
   
    for (int i = 0; i < n; i++){
      posx[i] = i * stepx;
      posy[i] = i * stepy;
    }

    for (int i = 0; i < n; i++){
        for (int j = 0; j < n; j++){
        shapes = (Shape[]) append(shapes,new Shape(posx[i], posy[j]));
        }
      }

    }
   
    Grid(int _n, float _offsetx, float _offsety){
    n = _n;
    int stepx = width/n;
    int stepy = height/n;
    offsetx = _offsetx;
    offsety = _offsety;
    posx = new float[n];
    posy = new float[n];
   
    for (int i = 0; i < n; i++){
      posx[i] = i * stepx;
      posy[i] = i * stepy;
    }

    for (int i = 0; i < n; i++){
        for (int j = 0; j < n; j++){
        shapes = (Shape[]) append(shapes,new Shape(posx[i], posy[j]));
        }
      }

    }
   
    Grid(int _n, float len, float _offsetx, float _offsety){
    n = _n;
    int stepx = int(len/n);
    int stepy = int(len/n);
    offsetx = _offsetx;
    offsety = _offsety;
    posx = new float[n];
    posy = new float[n];
   
    for (int i = 0; i < n; i++){
      posx[i] = i * stepx;
      posy[i] = i * stepy;
    }

    for (int i = 0; i < n; i++){
        for (int j = 0; j < n; j++){
        shapes = (Shape[]) append(shapes,new Shape(posx[i], posy[j]));
        }
      }

    }  
   
    void display(){
      pushMatrix();
      translate(offsetx, offsety);

      for (int i = 0; i < shapes.length; i++){
        shapes[i].display(20);
      }

      popMatrix();
    }
   
    void display(PImage _img){
      _img.loadPixels();
      pushMatrix();
      translate(offsetx, offsety);

      for (int i = 0; i < shapes.length; i++){
        shapes[i].setColor(_img.pixels[int(shapes[i].x) + int(shapes[i].y) * _img.width]);
        shapes[i].display(20);
      }

      popMatrix();
    }
   
  }

So, you can see that I have made heavy use of the fact that you can overload constructors and methods, so as to provide default behaviours when we don’t want to bother passing a lot of parameters. An instance of Grid also carries an array of Shape objects: this is the “representational” information of our grid. Here’s what the Shape class looks like:

class Shape {
  float x, y;
  color c;
 
  Shape(float _x, float _y){
    x = _x;
    y = _y;
    c = color(255, 255, 255);
  }
 
  Shape(float _x, float _y, color _c){
    x = _x;
    y = _y;
    c = _c;
  }
 
  void display(float w){
    rectMode(CENTER);
    noStroke();
    fill(c, 255);
    rect(x, y, w , w);
    rectMode(CORNER);
  }
 
  void setColor(color _c){
    c = _c;
  }
 
}

If you don’t specify any color, the rectangle will be white.
So, now the previous function grid() is subsumed in the following lines of code

Grid grid = new Grid(20);
grid.display(img);

Apart from its elegance, which as I mentioned before should not be the only criterion of judgement, the code is now amenable to vast explorations. For instance, by modifying the display() method of the Shape class we can completely change the appearance of our grid, even so much that it doesn’t look like a grid anymore! Moreover, we no longer have all those redundant computations.
One thing that comes to mind when you have a class is that you can produce many objects from it with slightly different properties. In this case I have decided to use this expedient to explore “fractalization”, which is another (made up?) word for “apply recursion carefully”. Add the following function to your code

//Assumes the globals: Grid grid; float x = 0; float y = 0;
void fractalize(int n){
  if (n > 1){
    grid = new Grid(n, x, y);
    grid.display();
    x += grid.offsetx;
    y += grid.offsety;
    fractalize(n - 1);
  }
}

To make the most of it, you might want to lower the alpha of the fill for the rectangles. Actually, as I mentioned before, we can now start exploring the code and the questions it naturally suggests, like: “why restrict ourselves to rectangles?”, “why even fill them?”, “can I add some stochastic noise here and there to make everything not so heartless and rigid?”, “what is the meaning of life?”
When you start raising these questions, a vast playground (or/and a deep existential pit) opens up.
Believe it or not, after very few adjustments to the Shape class here’s what I got


which I find has an interesting balance between structure and randomness. A nice surprise! :)

Noise, flows and generative art

posted by on 2016.03.26, under Processing

What is and what isn’t generative art is a long-standing debate, which I do not want to enter here. Just to put things in context, though, I’ll share some words about it. For some people, a piece of art is generative if it is the product of some sort of system which, once set in motion, is left to itself, with no further interaction. These systems are also called autonomous. Though generative art is usually associated with algorithmic art, i.e. art generated by some computer algorithm, autonomous systems can be found in biology, chemistry, mechanics, etc. Personally, I find the constraint on the autonomy of the system a bit too tight. While I do think that autonomous systems produce fascinating pieces of art, showing the beauty of complexity and emergence, I’m also very much interested in the dichotomy creator/tool, which in this case manifests itself as a shadow of the interaction between human and machine. I’m thinking about art which is computer assisted, more specifically which arises from the interaction of some sort of (algorithmic) system with the creator/spectator. This interaction poses interesting questions. We could indeed consider the total generative system to be the combination autonomous system + spectator. This would be an autonomous system itself, if not for one little detail: the spectator is aware (whatever that means) that he’s a tool of a machine which is producing a piece of art. A more concrete example would be the following. Consider a live art installation in which the movements of spectators are used to control some given parameters of a system, which is then used to draw on a big screen, or produce sounds. There is going to be a huge conceptual difference depending on whether the spectators are aware of the tracking of their movements or not. In the second case, we are in the presence of something which looks like an autonomous system, while in the first case the spectators could use their awareness to drive the artistic outcome.
The topic is incredibly fascinating and worth thinking about: these few words were only meant as a support to the fact that I would consider as generative art* the piece that you are going to see in the following.
Since discussions surrounding art are notoriously not controversial enough, I’ll move on to noise and randomness (yeeeih!). Let’s start by saying: you can’t generate truly random numbers with a computer. Behind any random number produced with a programming language there is an algorithm: in general a very complicated one, which takes as input physical parameters of your machine (say, the speed of the CPU at the moment you request a random number), but still an algorithm. It is just so complex that we (as humans) can’t see any logical pattern behind it. I already feel the objection coming: “Well, what is randomness anyway? Is there anything truly random?”. I’m going to skip this objection quickly, pointing here instead. Enjoy! 😉
Processing offers two functions to treat randomness and noise: one is random(), and the other is noise(). Actually, noise() is a function that reproduces Perlin noise, a type of gradient noise which is extremely useful to reproduce organic textures.
Though Perlin noise is great, no doubt about that, and since randomness for a machine is just a function which looks unpredictable, why not make one’s own noise? After all, one of the points of generative art is that it allows you to build tools which can be played with and explored. The function customNoise() in the code below does exactly that: it is a function taking values between -1 and 1 which behaves in an erratic enough way to be a good substitute for noise(). You have now got your very own noise function, well done! The question is: what are we going to do with it? That’s where the second noun in the title of this post enters the stage. Every time you have a nice function of two variables, you can build a vector field out of it. “What’s that?”, you might say. You can think of it as an assignment of a little arrow to each point of the screen, with the angle (in this case) with respect to the x-axis determined by our noise function. Once we have such a little arrow, we can use it to tell a particle at a given position on the screen where to go next. You can imagine the vector field as being associated with a fluid which at each point moves exactly with the velocity given by the value of the vector field. If you then drop tiny particles into the fluid, they will start moving along curves, called the flow curves of the vector field. Moreover, they will start to accumulate along specific flow curves: I leave you to investigate why that is. 😉
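The particle update in the draw() loop below is just an Euler step along the field: the noise value picks an angle, and the particle takes one small step in that direction. Here is that step isolated in plain Java, with a fixed constant standing in for the noise-and-mouse term of the sketch (Perlin noise and mouseX are not available outside Processing, so this is an assumption-laden sketch, not the sketch's exact field):

```java
public class FlowStep {
    // A hand-rolled "noise": erratic but deterministic, valued in [-1, 1].
    // The 2.0 factor stands in for the noise()*map(mouseX,...) term.
    static double customNoise(double x, double y) {
        return Math.pow(Math.sin(0.9 * x + 2.0 * y), 3);
    }

    // One Euler step: the field value picks an angle, the particle follows it.
    static double[] step(double px, double py, double s, double depth) {
        double alpha = customNoise(px * s, py * s) * 2 * Math.PI;
        return new double[] { px + depth * Math.cos(alpha),
                              py + depth * Math.sin(alpha) };
    }

    public static void main(String[] args) {
        double[] p = {100.0, 100.0};
        for (int i = 0; i < 500; i++) {
            p = step(p[0], p[1], 0.001, 0.5);
        }
        // Each step moves the particle by exactly `depth`, so after 500
        // steps it is at most 250 units from where it started.
        double dist = Math.hypot(p[0] - 100.0, p[1] - 100.0);
        System.out.println(dist < 251.0); // true
    }
}
```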
So, the following Processing code brings home all these ideas, plus a last one, which has to do with the beginning of this post. You will notice that the function customNoise() has a mouseX inside, and that mouseY controls the variable depth. This means that the function interacts with the mouse movement, and hence the output of the code can be driven by the user. In particular, the piece you get stays comfortably in that gray area between generative and non-generative art, one of those interesting arguments you can entertain your friends with at the next vernissage or pub quiz you go to. 😉
Here’s the code:

float[] x;
float[] y;
color[] col;
float s = 0.001;
float depth = 0.5;
PImage img;

void setup() {
  size(1000, 1000);
  background(0);
  int n = 1000;
  x = new float[n];
  y = new float[n];
  col = new color[n];
  img = loadImage("path_to_image"); //Insert the path to your image;
  img.resize(width, height);
  img.loadPixels();
  for (int i = 0; i < x.length; i++) {
    x[i]= random(0, width);
    y[i]= random(0, height);
    int loc = int(x[i]) + int(y[i])*width;
    col[i] = img.pixels[loc];
  }
}

void draw() {
  noStroke();
  depth = map(mouseY, 0, height, 0.5, 1.5);
  //fill(255, 4); //Uncomment if you don't want to use an image;
  for (int i = 0; i < x.length; i++) {
    float alpha = customNoise(x[i] * s, y[i] * s)*2*PI;
    x[i]+= depth * cos(alpha); // + random(-0.4, 0.4);
    y[i]+= depth * sin(alpha); // + random(-0.4, 0.4);
    if (y[i] > height) {
      y[i] = 0;
      x[i] = random(0, width);
    }
    x[i]= x[i]%width;
    fill(col[i], 4); //Comment if you don't want to use an image;
    ellipse(x[i], y[i], 2, 2);
  }
}


float customNoise(float x, float y) {
  return pow(sin(0.9*x + noise(x, y)*map(mouseX, 0, width, 0, 5)*y), 3);
}

You will get something like this


Notice that the first piece is obtained by commenting/uncommenting a few lines.
Finally, there is one last question you might ask: “How did you come up with those peculiar numbers for the parameters in the code?”. Well, the answer is: by a certain amount of trial and error. As I have mentioned more than once, making art with code allows for a great degree of exploration: tweaking a parameter here and changing a line there can give you very different and unexpected results. Whether what you get at the end is artistically appealing or not, well, nobody can tell but you. These highly subjective decisions are what transform a bunch of programming lines into something meaningful and beautiful. So, go on tweaking and looking for something you find interesting and worth sharing! :)

*If you are post-modernly thinking “Who cares if it’s called generative or not?”, you definitely have all my sympathy.

Digital poetry and text glitching

posted by on 2016.03.21, under Processing

Digital poetry is that part of literature concerned with poetic forms of expression which are mainly computer aided. I am using the term in a strong sense here, i.e. I am thinking about generative poetry, hypertext poetry, and, for this occasion in particular, digital visual poetry. In general, the relation between the (graphical) sign used to represent a word and its actual meaning in a poetic text is a very interesting (and crucial) one. Indeed, the way words are represented can be an integral part of the aesthetic value of a piece of literary text, poetry in this case. Just think of the beautiful art of Chinese calligraphy, for example. It is then not surprising that poetry, like many forms of digital art, can be glitched* too. I have written about glitch art already, and we can reuse a couple of ideas and methods from there. One way to glitch a piece of poetry would be to introduce orthographic anomalies/errors in the text, to get for instance something like**

“SnOww i% my s!hoooe
AbanNdo;^^ed
Sparr#w^s nset”

At this stage we are working mainly with the signifier, but in a way which doesn’t take into account the actual spatial representation of the text itself. (Yes, the text is actually represented already, I’m being a bit sloppy here.)
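A glitch of this first, purely orthographic kind can be sketched in plain Java (the debris glyphs, probabilities, and names are my own choices; a seeded Random makes the corruption reproducible):

```java
import java.util.Random;

public class TextGlitch {
    static final String DEBRIS = "#%!;^~";

    // Randomly corrupt a string: stutter a character, or swap in debris.
    static String glitch(String in, double p, Random rng) {
        StringBuilder out = new StringBuilder();
        for (char c : in.toCharArray()) {
            if (rng.nextDouble() < p) {
                if (rng.nextBoolean()) {
                    out.append(c).append(c); // stutter the character
                } else {
                    out.append(DEBRIS.charAt(rng.nextInt(DEBRIS.length())));
                }
            } else {
                out.append(c); // leave it intact
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Random rng = new Random(7); // seeded, so the glitch is reproducible
        System.out.println(glitch("Snow in my shoe", 0.3, rng));
    }
}
```

With p = 0 the text passes through untouched; cranking p up dissolves it into typographic noise, which is exactly the dial a piece like the haiku above plays with.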
More in the direction of digital visual poetry, we can work with properties of the visualized text: for instance, the positions of the actual characters involved. The graphical atoms will then be the characters forming the words of the text, and we introduce perturbations to their positions in space, plus some other visual artifacts. To achieve this, we can regard the various lines of text, which are of type String, as arrays of characters, and display them one character at a time. We then have to take care of the pixel width of each character, via the function textWidth(), in order to lay out the various lines of text.
Here’s what a simple Processing sketch inspired by these ideas might look like:

PFont font;
String[] text = {"Oh you can't help that,",  "said the Cat.", "We're all mad here.",  "I'm mad. You're mad." };
int size = 48;
int index = 0;

void setup(){
  size(800, 800);
  background(0);
  textAlign(CENTER);
  font = loadFont("TlwgTypewriter-48.vlw"); //You can create fonts with Tools/Create Font in Processing
  textFont(font, size);
  for (int i = 0; i < text.length; i++){
    float posx = 200;
    float posy = 200 + i * 50;
    for (int j = 0; j < text[i].length(); j++){
      textSize(size);
      text(text[i].charAt(j), posx, posy);
      posx = posx + textWidth(text[i].charAt(j)) + random(-10, 10);
      if (random(0.0, 1.0) < 0.3){
        size = size + int(random(-10, 10));
        glitch(posx, posy);
      }
    }
  }
}

void draw(){
}

void glitch(float x, float y){
  char c = char(int(random(130, 255)));
  text(c, x + random(-10, 10), y + random(-10, 10));
}

You would get something like this


I have been lazy and introduced the array of strings directly in the code: an easy (but instructive) variation would be to load a piece of text from a .txt file and parse it to obtain the individual lines.
Finally, we could work at a third layer of graphic “deepness”: we could consider the whole text as an image, and use the ideas in the previous post to glitch it. This is left to you as an interesting exercise.
Most importantly: never contradict the Cheshire Cat, mind you. 😉

*I avoid using the term “hacked”, since nowadays it is practically and culturally meaningless. Someone suggested “hijacked”, which I prefer.
** Thanks to Kerouac for the raw material to experiment with.

On the poetics of artistic suicide

posted by on 2016.03.14, under Uncategorized

I have not updated this blog in several months: various reasons, some more personal than others, have played a role in this. I felt, though, a strong need to write here again.
No, I won’t show any code this time. Simply because there is no algorithm for the kind of artistic pieces I want to point your attention to.
Rather, I want to talk about what I consider one of the greatest performance acts of recent years. The Italian street artist Blu has recently erased all the murals he had created over the last twenty years in his own city of Bologna. The manifesto and the words explaining the reasons behind this decision have been left to Wu Ming, a collective of writers active since the mid-’90s.
You can read everything here. (There is an english translation as well.)
In an impressive act of artistic suicide, at the same time creative and destructive, aesthetic and poetic, Blu has reminded us that art has still the power to challenge and raise questions concerning urbanism, commonalities, public spaces, recuperation, power.
By subtracting those walls to a since too long ongoing process of cultural and economic reappropriation by a repressive establishment, he has left big “grayboards” for all to take part in the everyday struggle for freedom. The brush is in our hands, and it’s leaking.
These phenomena of reappropriation are not particular to street art.
“Coding”, the topic of this blog, is nowadays a buzzword everyone wants to pronounce. A piece of cake, shot with low depth of field and an Instagram filter, that everyone wants to taste. The latest goose that lays golden eggs, or whatever. With various blogs and online magazines feeding you the latest technological wonders and the newest emergent trends in this or that artistic field, where has the sense of exploration gone? How can we bring it back, and make it challenging once again, freeing it from the many petty boxes the corporate machine wants to frame it in?
Can we bring digital art “into the streets”, or will it always be confined to our smart and self-reassuring conventions and gatherings?

A few hours after the repainting, the following words appeared on one of the gray walls left behind:

“La felicità che mi era sempre stata negata.
Avevo il diritto di viverla quella felicità.
Non me l’avete concesso.
Allora peggio per me, peggio per voi, peggio per tutti.
Rimpianti sì, ma in ogni caso nessun rimorso.”

(“The happiness that had always been denied to me. I had the right to live that happiness. You did not grant it to me. Then worse for me, worse for you, worse for everyone. Regrets, yes, but in any case no remorse.”)

For a little while, we had a glimpse of a Blu sky.

openFrameworks: a primer

posted by on 2014.09.02, under openFrameworks
02:

For the past couple of weeks I have been looking at openFrameworks, an amazing C++ toolkit which is used to do amazing things. Until now I have only shown code in Processing (basically Java) and SuperCollider: C++ is a beast of its own, though, and in future posts I will try to talk about pointers, memory allocation, the way classes are defined, etc. For now, I'll just briefly explain the main idea behind this simple project. Basically, the webcam grabs frames, which are converted into textures, mapped onto meshes, which are then deformed, etc., to get very interesting shapes and colors. You are not seeing any change in the grabbed frames simply because I am not in the video, and the camera is fixed. :)
Here is what you get

You can download the source code here. (You will need to put the texture .png file in a bin/data/texture folder).
In the following posts I will try to talk about more basic examples using the techniques in the project above.
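In the meantime, the frame-to-texture-to-mesh pipeline described above could be sketched roughly like this (this is only an illustration, not the actual project code: every name and parameter is a placeholder, and it assumes a standard openFrameworks project layout):

```cpp
// ofApp.h (sketch)
#pragma once
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();

    ofVideoGrabber grabber; // the webcam
    ofMesh mesh;            // the surface the frame is mapped onto
};

// ofApp.cpp (sketch)
void ofApp::setup() {
    grabber.setup(640, 480);                 // start grabbing frames
    mesh = ofMesh::plane(640, 480, 64, 48);  // a subdivided plane to deform
}

void ofApp::update() {
    grabber.update();
    // displace each vertex a little, so the textured plane wobbles over time
    for (auto &v : mesh.getVertices()) {
        v.z = ofSignedNoise(v.x * 0.01, v.y * 0.01, ofGetElapsedTimef()) * 50;
    }
}

void ofApp::draw() {
    grabber.getTexture().bind();   // the current frame becomes the texture
    mesh.draw();
    grabber.getTexture().unbind();
}
```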

The Molecular Music Box in SuperCollider

posted by on 2014.08.19, under Supercollider
19:

Via the Reaktor tutorials I came across this video. I have already talked about generative systems that create rich patterns (see here and here), and about how simple rules can give rise to emergent complexity.
Watch the video to learn what the simple rules are in this case (and to see why it is called “molecular” :) ), or look at the following SuperCollider code, which, I must say, took me a bit longer than I expected

MIDIClient.init;

~mOut = MIDIOut.new(3); // 3 is the index of the MIDI output device; adjust to your setup

(
var seed = 48;
var degrees, deg;
var length = Pseq([4, 3], inf).asStream;
var dur = length.next();
var bars = 25;
var quant = 16;
var notes = [];
var loop = [];
var pos = [];
var next = 0;
var patterns = []; // was missing in the original: collects [onset, Pbind] pairs

degrees = [];

// Building the MIDI values for the white keys

9.do({ |i|
  degrees = degrees ++ ([0, 2, 4, 5, 7, 9, 11] + (12 * i));
});

// Starting notes from the seed

deg = Pseq(degrees, inf, degrees.indexOf(seed)).asStream;

(bars * quant).do({ |i|

  // at the start of each bar, store the notes gathered so far as a new loop
  if ((i % quant == 0) && (notes != []), {
    loop = loop.add(notes);
    notes = [];
  });

  // if the current beat is still free, place a note and mark the beat as used
  if ((i % quant == next) && (pos.includes(next) == false), {
    notes = notes.add([deg.next(), dur / 4]);
    pos = pos.add(next);
    next = (next + dur) % quant;
  });

  // if the beat is already taken, switch to the next duration: the "molecular" rule
  if ((i % quant == next) && (pos.includes(next) == true), {
    dur = length.next();
    notes = notes.add([deg.next(), dur / 4]);
    next = (next + dur) % quant;
  });

});

loop.do({ |patt, i|
  patt.postln;
  patterns = patterns ++ ([i * 4, Pbind(*[
    \type, \midi,
    \chan, 0,
    \midiout, ~mOut,
    [\midinote, \dur], Pseq(patt, inf),
    \legato, 1,
    \amp, rrand(0.1, 0.5)
  ])]);
});

Ptpar(patterns, 1).trace.play;
)

Notice that you can very easily change any of the rules (duration lengths, the scale used, etc.) with a few keystrokes: the power of a text-based programming language! :)
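For example, assuming the variable names used above and the video's naming convention (where the two numbers are the two durations and the letter is the starting note), switching from 4C3 to 9C14½ should just be a matter of changing the duration stream; a sketch of the tweak:

```supercollider
// durations 9 and 14.5 instead of 4 and 3
var length = Pseq([9, 14.5], inf).asStream;

// a different scale is just a different set of degrees, e.g. harmonic minor:
// degrees = degrees ++ ([0, 2, 3, 5, 7, 8, 11] + (12 * i));
```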
I have sent the output of this to the Grand Piano instrument in Ableton Live 9.
Here is the result for 4C3

[audio clip]


and here is the one for 9C14½

[audio clip]
