Ribbons

posted on 2019.04.06, under openFrameworks

Who doesn’t like ribbons? I know that I do, and I have always found them fascinating. There are tons of ways one can make ribbons dynamic, i.e. have them interact with the user in some way. For this simple code I used openFrameworks, since I wanted to work with reactive meshes and 3D graphics. The idea is pretty simple: you add pairs of vertices to a mesh, where the first vertex of each pair follows the mouse position, and the second one is placed at the tip of a vector orthogonal to the position of the first. Wait, what?! It sounds complicated, but it really isn’t. The interesting thing in the code is the variable N, which bounds the total number of vertices: otherwise, after a while you would find yourself with a HUGE mesh, the framerate would drop considerably, and you’d think your life is miserable. So, let’s avoid that! :)
Here is the code.

ofApp.h:

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{

    public:
        void setup();
        void update();
        void draw();
        ofMesh mesh;
        float t;
        float theta;
        ofVec2f pos;
        ofLight light;
        ofMaterial material;

        void keyPressed(int key);
        void keyReleased(int key);
        void mouseMoved(int x, int y );
        void mouseDragged(int x, int y, int button);
        void mousePressed(int x, int y, int button);
        void mouseReleased(int x, int y, int button);
        void mouseEntered(int x, int y);
        void mouseExited(int x, int y);
        void windowResized(int w, int h);
        void dragEvent(ofDragInfo dragInfo);
        void gotMessage(ofMessage msg);
       
};

ofApp.cpp:

#include "ofApp.h"

#define N  1000
//--------------------------------------------------------------
void ofApp::setup(){
    ofBackground(ofColor::lightGray);
    ofToggleFullscreen();
    ofHideCursor();

    t = 0; //Initialize time and angle, which are otherwise left undefined;
    theta = 0;

    mesh.setMode(OF_PRIMITIVE_TRIANGLE_STRIP);
    pos = ofVec2f(ofRandom(-ofGetWidth() * 0.5, ofGetWidth() * 0.5), ofRandom(-ofGetHeight() * 0.5, ofGetHeight() * 0.5));

    ofEnableDepthTest();
    ofEnableLighting();

    material.setDiffuseColor(ofColor::white);
    material.setAmbientColor(ofColor::white);

    material.setShininess(128);
}

//--------------------------------------------------------------
void ofApp::update(){
    t += 10;
    theta += ofRandom(0.0, 0.01);

    //Move pos towards the mouse position (in coordinates centered on the screen);
    ofVec2f target = ofVec2f(ofGetMouseX() - ofGetWidth() * 0.5, ofGetMouseY() - ofGetHeight() * 0.5);
    ofVec2f dir = target - pos;
    float m = dir.length();
    dir.normalize();
    dir = dir.getScaled(20);
    pos = pos + dir;

    //Build a vector orthogonal to pos, whose length oscillates with theta;
    ofVec2f ort = ofVec2f(-pos.y, pos.x);
    ort.normalize();
    ort = ort.getScaled(20 + (m * (0.3 + 0.2 * sin(theta * 2 * PI))));

    //Add the new pair of vertices; once the mesh holds N of them,
    //drop the two oldest first, so the total number stays bounded;
    float c = 1; //Scales the spacing along the z axis;
    ofVec3f v0 = ofVec3f(pos.x, pos.y, t * c);
    ofVec3f v1 = ofVec3f(pos.x + ort.x, pos.y + ort.y, t * c);
    if (mesh.getNumVertices() >= N) {
        mesh.removeVertex(0);
        mesh.removeVertex(0);
    }
    mesh.addVertex(v0);
    mesh.addVertex(v1);
}

//--------------------------------------------------------------
void ofApp::draw(){
    ofBackground(ofColor::lightGray);
    light.enable();
    material.setDiffuseColor(ofColor(255 * sin(theta * 0.04 * 2 * PI), 102, 102));
    material.setAmbientColor(ofColor(255 * sin(theta * 0.04 * 2 * PI), 12, 102));
    ofPushMatrix();
    ofTranslate(ofGetWidth() * 0.5, ofGetHeight() * 0.5, -t);
    material.begin();
    mesh.draw();
    material.end();
    ofPopMatrix();
    light.disable();
}

It should look like this.

Exercise: implement normals for the mesh above.

Data retrieving and asynchronicity

posted on 2019.03.09, under Processing

I am very much fascinated by data, its retrieval, and its possible applications to art. We are constantly surrounded by data in textual, visual and auditory form, and it tells something about us all, as a society and as a species.
In this post I want to write about a simple scenario. Imagine we want to retrieve all the images appearing on a newspaper page, and do something with them. For this simple case, I have chosen The New York Times. We then have a couple of questions to answer. First of all, how do we get the urls of all the images present in a given page? And second: how do we get these images without compromising the animation happening in the meantime? To answer these questions, we start at the beginning, and we stop at the end, like the judge suggests to Alice during her trial. 😉
Data contained in a webpage is usually represented via a markup language: HTML, for instance, is such a language. In a markup language, the different structural pieces of a webpage are “tagged”: an item might have a “title” tag, for instance, which tells us that its content is a title of some sort. In the present case we will use XML, since The New York Times provides an .xml file for its various pages. In XML parlance, an XML file can be thought of as a collection of boxes called “children”, which can contain objects that have “content”, or other boxes with children of their own, and so on. Now, each XML file is structured in a slightly different way, so one has to investigate case by case. For instance, you might have problems applying the very same code below to, say, The Guardian, since its XML file can have a different arrangement.
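To fix ideas, here is a schematic (and purely illustrative, not verbatim) sketch of the nesting we will exploit in the feed of The New York Times; the real file contains many more tags and attributes:

<rss>
  <channel>
    <item>
      <title>Some headline</title>
      <media:content url="http://example.com/some_image.jpg" />
    </item>
    <item> ... </item>
  </channel>
</rss>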
Processing offers an XML class to deal with XML files, and to search through their tags. Great! So, after spending some time investigating the RSS feed of the home page of The New York Times, we discover that the XML has a child called “channel”, which contains children tagged “item”, which themselves contain a child tagged “media:content”: finally, this child contains a url, which is what we are interested in. Pheeew! Once we get the list of urls, we can download the images with loadImage(), which also accepts urls. Here the problem addressed in the second question above appears, and we have to talk about “asynchronicity”. Namely, both loadXML() and loadImage() are so-called “blocking functions”: until they complete their task, the code doesn’t go forward. This means that any animation would stutter. If we only need to load the images once, this is not a big problem: we do everything in the setup() function, and forget about it. For the sake of fun, though, I have decided that I would like to randomly add a new image from some other page while the animation goes on. The way to circumvent the problem created by these blocking functions is to use a different “thread”. What does this mean? Processing (via Java) allows us to “thread” functions, which means that the function is executed in parallel with the main thread, which in our case is the so-called “animation” thread. By threading a function, we make sure the main thread is not affected by any slowness of the threaded function. In our case, the function getData() loads up another .xml file, grabs an image, and adds it to the list of images to display.
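Before the full code, here is a minimal sketch isolating just the threading idea (the image url is only a placeholder): draw() keeps animating smoothly while the blocking load happens in parallel.

PImage img;

void setup() {
  size(400, 400);
  thread("loadData"); //Runs loadData() in parallel with the animation thread;
}

void draw() {
  background(0);
  //This circle keeps moving smoothly, even while loadData() is blocking;
  ellipse(width * 0.5 + 100 * sin(frameCount * 0.05), height * 0.5, 20, 20);
  if (img != null) {
    image(img, 0, 0);
  }
}

//All the blocking work lives here, without stuttering draw();
void loadData() {
  img = loadImage("http://example.com/some_image.jpg"); //Placeholder url;
}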
We can now look at the full code.

String[] urls ={ "http://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml",
  "http://rss.nytimes.com/services/xml/rss/nyt/Africa.xml", "http://rss.nytimes.com/services/xml/rss/nyt/ArtandDesign.xml",
  "http://rss.nytimes.com/services/xml/rss/nyt/Technology.xml", "http://rss.nytimes.com/services/xml/rss/nyt/Europe.xml"};

String url;

XML xml;
ArrayList<PImage> images;
int count;
PImage img;
boolean locked = false;

void setup() {
  size(1000, 1000);
  background(0);
  url = urls[int(random(0, urls.length))];
  images = new ArrayList<PImage>();

  xml = loadXML(url); //Loading the XML file;
  String[] names = {};

  XML[] children = xml.getChildren("channel"); //This is the first child of the XML file;

  for (int i = 0; i < children.length; i++) {
    XML[] items = children[i].getChildren("item");  //Images are contained in items;

    for (int j = 0; j < items.length; j++) {
      XML media = items[j].getChild("media:content"); //media:content is the tag that contains images;
      if (media != null) {
        names = append(names, media.getString("url")); //This provides the url which appears as an option in the tag media:content;
      }
    }
  }

  for (int i = 0; i < names.length; i++) {
    images.add(loadImage(names[i]));
    println("Loaded!");
  }
}

void draw() {
  if (images.size() == 0) return; //Safety check, in case no image urls were found;
  PImage im = images.get(count % images.size());

  tint(255, int(random(30, 100)));

  image(im, random(0, width), random(0, height), im.width * 0.3, im.height * 0.3);


  count++;
  if ((random(0, 1) < 0.01) && !locked) {
    thread("getData");
  }
}

//Function to be threaded

void getData() {  
  locked = true;
  url = urls[int(random(0, urls.length))]; //Choose a random url among those available;
  xml = loadXML(url);
  String[] names = {};

  XML[] children = xml.getChildren("channel");

  for (int i = 0; i < children.length; i++) {
    XML[] items = children[i].getChildren("item");

    for (int j = 0; j < items.length; j++) {
      XML media = items[j].getChild("media:content");
      if (media != null) {
        names = append(names, media.getString("url"));
      }
    }
  }
  if (names.length > 0) {
    images.add(loadImage(names[int(random(0, names.length))])); //Add the new image to the main list;
  }
  locked = false;
}

If you run the code, you should get something like this.


As an exercise, try to do something similar with a different website, so as to get comfortable with the process of understanding how a given XML file is organized.

Overlook

posted on 2018.07.21, under Processing

Combining some techniques from the previous posts on shaders, here’s the render of an audio-reactive application which I used for a video of “Overlook”, a track by my musical alter ego

The code uses vertex and fragment shaders to create a glitchy environment which reacts to the audio in real time.
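The full source isn’t reproduced here, but the core wiring (monitoring the microphone and passing its amplitude to the shader as a uniform) looks roughly like the following sketch, where the shader file "glitch.glsl" and the "level" uniform are placeholders:

import processing.sound.*;

PShader glitch;
Amplitude amp;
AudioIn in;

void setup() {
  size(800, 800, P2D);
  glitch = loadShader("glitch.glsl"); //Placeholder fragment shader;
  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);
}

void draw() {
  background(0);
  glitch.set("level", amp.analyze()); //Pass the current amplitude as a uniform;
  filter(glitch); //Apply the shader to the whole frame;
}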
The track “Overlook” is available for listening here.

Dust From A G String

posted on 2018.06.27, under Processing, Uncategorized

Here’s “Dust From A G String”, a piece about the corrosive power of passing time, and the beauty it leaves behind, just before the end.

The video was made in Processing, using a custom shader based on FBO techniques. The audio is a reworking of Bach’s “Air on the G String”.
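The actual shader isn’t published here, but the underlying FBO idea, namely drawing the previous frame back into an offscreen buffer, can be sketched with two PGraphics used in a ping-pong fashion. Everything below is illustrative, not the code of the video:

PGraphics prev, next;

void setup() {
  size(600, 600, P2D);
  prev = createGraphics(600, 600, P2D);
  next = createGraphics(600, 600, P2D);
}

void draw() {
  //Write a slightly faded, slightly shifted copy of the previous frame into next;
  next.beginDraw();
  next.background(0);
  next.tint(255, 245);
  next.image(prev, 0, 1);
  //Inject some new "dust" each frame;
  next.noStroke();
  next.fill(255);
  next.ellipse(random(width), random(height), 3, 3);
  next.endDraw();

  image(next, 0, 0);

  //Swap the buffers: next becomes the previous frame;
  PGraphics tmp = prev;
  prev = next;
  next = tmp;
}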

Reactive applications, Shaders and all that

posted on 2018.04.06, under Processing

We have already discussed the advantages of using shaders to create interesting visual effects. This time we will deal with fragment shaders *and* vertex shaders. In a nutshell, a vertex shader takes care of managing the vertices’ positions, colors, etc., which are then passed as “fragments” to the fragment shader for rasterization. “OMG, this is so abstract!!”. Yeah, it is less abstract than it seems, but it nevertheless requires some know-how. As previously, I really suggest this: I find myself going back and forth to it regularly, always learning new things.
Good, so, what’s the plan? The main idea in the following code is to use a PShape object to encode all the vertices: we are basically making a star-shaped thing out of rectangles, which in 3D graphics parlance are referred to as “quads”. Once we have created such a PShape object, we will not have to deal with the positions of the vertices anymore: all the changes in the geometry will be handled by the GPU! Why is this exciting? Because the GPU is much, much faster at doing such things than the CPU. This allows in particular for real-time reactive fun. Indeed, the code gets input from the microphone and the webcam, separately. More precisely, each frame coming from the webcam is passed to the shader to be used as a texture for each quad. On the other hand, the microphone audio is monitored, and its amplitude controls the variable t, which in turn controls the rotation (in Processing) and, more importantly, the jittering in the vertex shader. Notice that the fragment shader doesn’t do anything out of the ordinary here, it just applies a texture.
Here’s what the code looks like.

import processing.video.*;
import processing.sound.*;

Amplitude amp;
AudioIn in;



PImage  back;
PShape mesh;
PShader shad;

float t = 0;
float omega = 0;
float rot = 0;
int count = 0;

Capture cam;


void setup() {
  size(1000, 1000, P3D);
  background(0);
 
  //Set up audio

  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);

  //Set up webcam

  String[] cameras = Capture.list();

  cam = new Capture(this, cameras[0]);

  cam.start();

  textureMode(NORMAL);  

  mesh = createShape();
  shad = loadShader("Frag.glsl", "Vert.glsl");

  back = loadImage("back.jpg");


  //Generates the mesh;

  mesh.beginShape(QUADS);
  mesh.noStroke();

  for (int i = 0; i < 100; i++) {
    float phi = random(0, 2 * PI);
    float theta = random(0, PI);
    float radius = random(200, 400);
    PVector pos = new PVector( radius * sin(theta) * cos(phi), radius * sin(theta) * sin(phi), radius * cos(theta));
    float u = random(0.5, 1);

    //Set up the vertices of the quad with texture coordinates;

    mesh.vertex(pos.x, pos.y, pos.z, 0, 0);
    mesh.vertex(pos.x + 10, pos.y + 10, pos.z, 0, u);
    mesh.vertex(-pos.x, -pos.y, -pos.z, u, u);
    mesh.vertex(-pos.x - 10, -pos.y - 10, -pos.z, 0, u);
  }

  mesh.endShape();
}

void draw() {

    background(0);
    //Checks camera availability;

    if (cam.available() == true) {
      cam.read();
    }
 

    image(back, 0, 0); //Set a gradient background;

    pushMatrix();
    translate(width/2, height/2, 0);
    rotateX( rot * 10 * PI/2);
    rotateY( rot * 11 * PI/2);

    shad.set("time", exp(t) - 1); //Calls the shader, and passes the variable t;

    shader(shad);
    mesh.setTexture(cam); //Use the camera frame as a texture;
    shape(mesh);

    popMatrix();

    t += (amp.analyze() - t) * 0.05; //Smoothens the variable t;

    omega +=  (t  - omega) * 0.01; //Makes the rotation acceleration depend on t;

    rot += omega * 0.01;

    resetShader(); //Reset shader to display the background image;
   
}

// Frag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;


uniform float time;
uniform sampler2D texture;

void main(){

gl_FragColor = texture2D(texture, vertTexCoord.st ) * vertColor;

}

// Vert.glsl

uniform mat4 transform;
uniform mat4 modelview;
uniform mat4 texMatrix;


attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;
varying vec4 pos;


uniform float time;


void main() {
  gl_Position = transform * position;

  gl_Position.x += sin(time * 2 * 3.145 * gl_Position.x) * 10 ;
  gl_Position.y += cos(time * 2 * 3.145 * gl_Position.y) * 10 ;

  vertColor = color;

  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);


}

Notice the call to reset the shader, which allows us to show the gradient background, loaded as an image, without it being affected by the shader program.
Here’s a render of it, recorded while making some continuous noise, a.k.a. singing.

Try it while listening to some music, it’s really fun!

Abstract expressionism: a generative approach

posted on 2016.06.28, under Processing

I have always been fascinated by abstract expressionism, and in particular by the work of Jackson Pollock. The way paint, gravity and artistic vision play together has always been, for me, very representative of that tension between chaos and structural patterns one often finds in art.
So, here is a little homage to the drip-painting style of Pollock. The Processing code is not clean enough to be useful, and I don’t think I understand exactly what it does yet (yes, it happens more often than not that what I code is a surprise to me!). Let me just say that it incorporates many of the topics discussed in this blog: object-oriented programming, noise fields, etc. I’ll update the post when I (hopefully) get it cleaned up.
Meanwhile, enjoy. 😉
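In the meantime, here is a minimal sketch in that spirit, not the actual code of the piece: just object-oriented particles steered by a noise field, with a slight downward pull playing the role of gravity.

Particle[] drips = new Particle[300];

void setup() {
  size(900, 900);
  background(245);
  for (int i = 0; i < drips.length; i++) {
    drips[i] = new Particle(random(width), random(height));
  }
}

void draw() {
  for (Particle p : drips) {
    p.update();
  }
}

class Particle {
  float x, y;

  Particle(float x_, float y_) {
    x = x_;
    y = y_;
  }

  void update() {
    float a = noise(x * 0.005, y * 0.005) * 4 * PI; //Direction given by a noise field;
    float px = x;
    float py = y;
    x += cos(a);
    y += sin(a) + 0.5; //A slight downward pull, like gravity acting on paint;
    stroke(0, 40);
    strokeWeight(random(0.5, 2.5)); //Uneven stroke width, mimicking dripping paint;
    line(px, py, x, y);
  }
}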
