Glitch Art and Shaders

posted by on 2017.02.11, under Processing

It’s been a while since the last post. I have been busy with (finally!) starting to set up a website to collect some of my works, and with more or less finishing a couple of interactive installations. For this reason, interactivity and real-time processing have captured my attention recently. It turns out that when you want to interact with a piece of code which produces graphics, as soon as what you are doing involves more than a couple of pairs of colored circles, you quickly run into performance issues. So, unless you are one of those digital artists who draw a blinking white circle in the middle of the screen and call it art (it’s fine, don’t worry, go on with it), you need to find your way around these issues. In practice, this amounts to getting comfortable with words like Vertex Buffer Object, C++, and shaders, to which this post is dedicated.
The story goes like this: modern graphics cards (GPUs) have a language of their own, called GLSL. For instance, when in Processing you draw a line or a circle, what actually happens behind the curtain is a communication between the CPU and the graphics card: Processing informs the GPU about the vertices of the line, the fact that it has to be a line, the color of the vertices, etc. There are several stages between the moment the vertices are communicated and the final result you see on your screen. Some of these stages are user programmable, and the little programs that take care of each of these stages are called “shaders”. Shaders are notoriously difficult to work with: you have to program them in a C-like language, basically, and they are quite unforgiving with respect to errors in the code. On the other hand, they are really, really fast. If you want to know why that is, and how a (fragment) shader operates, give a look here.
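Just to fix ideas, here is a minimal sketch of a “pass-through” fragment shader for Processing’s texture pipeline (the file name is my own choice): it samples the incoming texture and outputs its color unchanged. The glitch shader at the end of this post is a heavily distorted version of exactly this skeleton.

// passThrough.glsl -- a "do nothing" fragment shader

#ifdef GL_ES
precision mediump float;
#endif

#define PROCESSING_TEXTURE_SHADER

varying vec4 vertTexCoord;   // texture coordinates from the vertex stage
uniform sampler2D texture;   // the image Processing is drawing

void main() {
  // Output the texture color at this fragment, unchanged.
  gl_FragColor = texture2D(texture, vertTexCoord.st);
}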
So, why the hell would you want to learn such a tool? Well, if you, like me, are fond of glitch art, you must have realized that interactive real-time glitch art is almost impossible if you try to work pixel by pixel on the CPU: even at a resolution of 800×600, that is 480,000 pixels to update 30 times per second, about 14 million per-pixel computations every second in a serial loop, which is impractical. Enter fragment shaders! If you delegate the work to the GPU, which processes fragments in parallel, it becomes more than doable.
I can’t go into the details of the code I present in the following, but there are very good tutorials on the web that slowly teach you how to tame shaders. In particular, give a look here. Be warned: you really need to be comfortable with programming, and have a lot of patience, to work with shaders!

PImage img;
PShader glitch;

void setup() {
  size(800, 600, P2D);
  background(0);
  img = loadImage("insert_link_to_image"); // put the path or URL of your image here
  img.resize(800, 600);

  glitch = loadShader("glitchFrag.glsl");
  // Pass the resolution to the shader as a uniform.
  glitch.set("iResolution", new PVector(800.0, 600.0, 0.0));
}

void draw() {
  // Feed the shader a random "time", so the glitch pattern jumps around.
  glitch.set("iGlobalTime", random(0, 60.0));

  // Apply the glitch shader only on some frames; otherwise the image is drawn clean.
  if (random(0.0, 1.0) < 0.4) {
    shader(glitch);
  }

  image(img, 0, 0);

  resetShader();
}

---------------

// glitchFrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PROCESSING_TEXTURE_SHADER

varying vec4 vertTexCoord;         // texture coordinates from the vertex stage
uniform sampler2D texture;         // the image passed in by Processing
uniform vec3      iResolution;     // viewport resolution, set from the sketch
uniform float     iGlobalTime;     // random "time", set from the sketch

// Cheap pseudo-random number from a 2D seed.
float rand(vec2 co) {
  return fract(cos(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
  vec3 uv = vec3(0.0);             // per-channel color offsets
  vec2 uv2 = vec2(0.0);            // texture coordinate displacement
  vec2 nuv = gl_FragCoord.xy / iResolution.xy;  // normalized fragment coordinates
  vec3 texColor = vec3(0.0);

  // Most of the time sample the texture normally; sometimes scramble it.
  if (rand(vec2(iGlobalTime)) < 0.7) {
    texColor = texture2D(texture, vertTexCoord.st).rgb;
  } else {
    texColor = texture2D(texture, nuv * vec2(rand(vec2(iGlobalTime)), rand(vec2(iGlobalTime * 0.99)))).rgb;
  }

  // Pick a random horizontal band and distort it.
  float r = rand(vec2(iGlobalTime * 0.001));
  float r2 = rand(vec2(iGlobalTime * 0.1));
  if (nuv.y > rand(vec2(r2)) && nuv.y < r2 + rand(vec2(0.05 * iGlobalTime))) {
    if (r < rand(vec2(iGlobalTime * 0.01))) {
      // Only distort fragments that are dark enough.
      if ((texColor.r + texColor.g + texColor.b) / 3.0 < r * rand(vec2(0.4, 0.5)) * 2.0) {
        uv.r -= sin(nuv.x * r * 0.1 * iGlobalTime) * r * 7000.0;
        uv.g += sin(vertTexCoord.y * vertTexCoord.x / 2.0 * 0.006 * iGlobalTime) * r * 10.0 * rand(vec2(iGlobalTime * 0.1));
        uv.b -= sin(nuv.y * nuv.x * 0.5 * iGlobalTime) * sin(nuv.y * nuv.x * 0.1) * r * 20.0;
        uv2 += vec2(sin(nuv.x * r * 0.1 * iGlobalTime) * r);
      }
    }
  }

  // Resample at the displaced coordinates and add the channel offsets.
  texColor = texture2D(texture, vertTexCoord.st + uv2).rgb;
  texColor += uv;
  gl_FragColor = vec4(texColor, 1.0);
}

Below, you can see the result applied to a famous painting by Caravaggio (yes, I love Caravaggio): it runs at a real-time framerate.
If you want to apply the shader to the webcam, you just need to set up a Capture object, called, say, cam, and substitute img with cam in the Processing code, as sketched below. Enjoy some glitching! :)
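Here is a minimal sketch of that variation, assuming the default camera works at 800×600 (the video library ships with Processing, but camera setup varies per machine):

import processing.video.*;

Capture cam;
PShader glitch;

void setup() {
  size(800, 600, P2D);
  cam = new Capture(this, 800, 600);  // default camera at the sketch resolution
  cam.start();
  glitch = loadShader("glitchFrag.glsl");
  glitch.set("iResolution", new PVector(800.0, 600.0, 0.0));
}

void draw() {
  if (cam.available()) {
    cam.read();  // grab a new frame when one is ready
  }
  glitch.set("iGlobalTime", random(0, 60.0));
  if (random(0.0, 1.0) < 0.4) {
    shader(glitch);
  }
  image(cam, 0, 0);
  resetShader();
}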

Glitch Shader from kimri on Vimeo.

Digital poetry and text glitching

posted by on 2016.03.21, under Processing

Digital poetry is the part of literature concerned with poetic forms of expression that are mainly computer aided. I am using the term in a strong sense here, i.e. I am thinking about generative poetry, hypertext poetry, and, for this occasion in particular, digital visual poetry. In general, the relation between the (graphical) sign used to represent a word and its actual meaning in a poetic text is a very interesting (and crucial) one. Indeed, the way words are represented can be an integral part of the aesthetic value of a piece of literary text, poetry in this case. Just think about the beautiful art of Chinese calligraphy, for example. It is then not surprising that poetry, like many forms of digital art, can be glitched* too. I have written about glitch art already, and we can reuse a couple of ideas and methods from there. One way to glitch a piece of poetry is to introduce orthographic anomalies/errors in the text, to get, for instance, something like**

“SnOww i% my s!hoooe
AbanNdo;^^ed
Sparr#w^s nset”
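A minimal Processing sketch of this kind of orthographic glitching might look as follows (the probabilities and the character range are arbitrary choices of mine, not the ones used for the lines above):

// Randomly replaces, doubles, or drops characters in a string.
String glitchText(String s) {
  String out = "";
  for (int i = 0; i < s.length(); i++) {
    float r = random(1.0);
    if (r < 0.08) {
      out += char(int(random(33, 126)));  // substitute a random printable character
    } else if (r < 0.13) {
      out += s.charAt(i);
      out += s.charAt(i);                 // stutter: double the character
    } else if (r < 0.17) {
      // drop the character altogether
    } else {
      out += s.charAt(i);
    }
  }
  return out;
}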

At this stage we are working mainly with the signifier, but in a way which doesn’t take into account the actual spatial representation of the text itself. (Yes, the text is actually represented already, I’m being a bit sloppy here.)
Moving further in the direction of digital visual poetry, we can work with properties of the visualized text: for instance, the positions of the actual characters involved. The graphical atoms will then be the characters forming the words in the text, and we introduce perturbations to their positions in space, plus some other visual artifacts. To achieve this, we can regard the various lines of text, which are of type String, as arrays of characters, and display them character by character. We then have to take care of the length in pixels of each character with the function textWidth(), in order to display the various lines of text correctly.
Here’s what a simple Processing sketch inspired by these ideas looks like:

PFont font;
String[] text = {"Oh you can't help that,", "said the Cat.", "We're all mad here.", "I'm mad. You're mad."};
int size = 48;

void setup() {
  size(800, 800);
  background(0);
  textAlign(CENTER);
  font = loadFont("TlwgTypewriter-48.vlw"); // You can create fonts with Tools/Create Font in Processing
  textFont(font, size);
  for (int i = 0; i < text.length; i++) {
    float posx = 200;
    float posy = 200 + i * 50;
    // Display each line one character at a time, jittering the horizontal position.
    for (int j = 0; j < text[i].length(); j++) {
      textSize(size);
      text(text[i].charAt(j), posx, posy);
      posx = posx + textWidth(text[i].charAt(j)) + random(-10, 10);
      if (random(0.0, 1.0) < 0.3) {
        size = size + int(random(-10, 10));  // randomly perturb the font size
        glitch(posx, posy);
      }
    }
  }
}

void draw() {
}

// Draws a random extra character near the given position.
void glitch(float x, float y) {
  char c = char(int(random(130, 255)));
  text(c, x + random(-10, 10), y + random(-10, 10));
}

You would get something like this:


I have been lazy and introduced the array of strings directly in the code: an easy (but instructive) variation would be to load a piece of text from a .txt file, and parse it to obtain the individual lines, as sketched below.
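For instance, assuming a file called "poem.txt" sits in the sketch’s data folder (the file name is just an example), the hard-coded array can be replaced with:

String[] text = loadStrings("poem.txt"); // each element of the array is one line of the file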
Finally, we could work at a third layer of graphic “deepness”: we could consider the whole text as an image, and use the ideas in the previous post to glitch it. This is left to you as an interesting exercise.
Most importantly: never contradict the Cheshire Cat, mind you. 😉

*I avoid using the term “hacked”, since nowadays it is practically and culturally meaningless. Someone suggested “hijacked”, which I prefer.
** Thanks to Kerouac for the raw material to experiment with.

Databending in Processing

posted by on 2013.04.02, under Processing

Here’s a sketch in Processing that explores a simple databending technique for images. You can read about Glitch Art, and then start endlessly debating with your friends whether this is art or not. :)
Also, here you can find an interesting series of videos exploring the philosophy behind databending, glitching, and malfunctions as a form of awareness of the technology we are surrounded by. (A simple example would be how the spelling mistakes in a chat messaging system make you aware of the chat medium itself.)
Here’s the code:

PImage img;
int iter = 100;  // number of strip swaps to perform

void setup() {
  size(600, 600);

  img = loadImage("rain.jpg");
  image(img, 0, 0);

  for (int h = 0; h < iter; h++) {
    int sx = int(random(5, 30));    // strip width in pixels
    int sy = int(random(50, 130));  // strip height in pixels

    loadPixels();
    // Two random top-left corners, kept inside the canvas.
    int x1 = int(random(0, width - sx - 1));
    int y1 = int(random(0, height - sy - 1));
    int x2 = int(random(0, width - sx - 1));
    int y2 = int(random(0, height - sy - 1));

    // Swap the two strips pixel by pixel, jittering the source position.
    for (int i = 0; i < sx; i++) {
      for (int j = 0; j < sy; j++) {
        color temp = pixels[(y2 + j) * width + (x2 + i)];
        pixels[(y2 + j) * width + (x2 + i)] = pixels[(y1 + int(random(0, j))) * width + (x1 + int(random(0, i)))];
        pixels[(y1 + j) * width + (x1 + i)] = temp;
      }
    }
    updatePixels();
  }
}

The idea is very simple: we swap pixel strips of random width and height between two random points of the image, adding some jitter in the process.
We then iterate the procedure on the last modified image: the parameter iter controls the number of iterations.
Starting from this image (it’s a small piece of an image taken from the web)




I got this:


Of course, you can do a lot more interesting things by directly accessing the pixels of an image: you can rotate areas, alter colors, etc.
And in particular, you can make animations in Processing using the same principle, as in the sketch below.
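Here is a minimal sketch of that last idea, under the assumption that we move the strip swap into draw() and apply one swap per frame, so the glitch accumulates over time:

PImage img;

void setup() {
  size(600, 600);
  img = loadImage("rain.jpg");
  image(img, 0, 0);
}

void draw() {
  int sx = int(random(5, 30));
  int sy = int(random(50, 130));

  loadPixels();
  int x1 = int(random(0, width - sx - 1));
  int y1 = int(random(0, height - sy - 1));
  int x2 = int(random(0, width - sx - 1));
  int y2 = int(random(0, height - sy - 1));

  // One strip swap per frame: the canvas degrades as an animation.
  for (int i = 0; i < sx; i++) {
    for (int j = 0; j < sy; j++) {
      color temp = pixels[(y2 + j) * width + (x2 + i)];
      pixels[(y2 + j) * width + (x2 + i)] = pixels[(y1 + int(random(0, j))) * width + (x1 + int(random(0, i)))];
      pixels[(y1 + j) * width + (x1 + i)] = temp;
    }
  }
  updatePixels();
}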
I’ll try to explore this more in future posts, trying maybe also to clear my mind about deconstructivism in art and communication…
