Pixels flow on the GPU

posted by on 2019.04.10, under openFrameworks

This post is about how to implement a vector field generated by Perlin noise directly on the GPU using GLSL. If you want, you can regard this as step 0 before implementing fluid simulations with shaders. I have written elsewhere here about vector fields and how to implement them on the CPU, but let's recall the idea, and in particular the concept of a "lookup table". In our case, a lookup table is a two-dimensional array which encodes the vector field: imagine a grid with coordinates (i,j), such that a 2-dimensional vector is attached to each of its cells. A particle sitting in the cell (i,j) moves by being displaced via the value of the lookup table at that cell. To represent this on the screen, we compute the new position of the particle and draw there. This seemingly redundant comment will be relevant later, just wait.
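To make this concrete before we move to the GPU, here is a minimal CPU sketch of the idea (the function names, the grid resolution and the noise scale are mine, chosen purely for illustration):

#include "ofMain.h"

// Build the lookup table: one unit vector per grid cell, whose angle
// comes from Perlin noise sampled at the cell coordinates.
std::vector<glm::vec2> makeField(int gridW, int gridH) {
    std::vector<glm::vec2> field(gridW * gridH);
    for (int j = 0; j < gridH; j++) {
        for (int i = 0; i < gridW; i++) {
            float angle = ofNoise(i * 0.01f, j * 0.01f) * TWO_PI;
            field[j * gridW + i] = glm::vec2(cos(angle), sin(angle));
        }
    }
    return field;
}

// One step for one particle: read the cell it sits in, displace it by the
// value of the lookup table there, and draw at the *new* position.
void moveAndDraw(glm::vec2& p, const std::vector<glm::vec2>& field, int gridW, int gridH) {
    int i = ofClamp((int)p.x, 0, gridW - 1);
    int j = ofClamp((int)p.y, 0, gridH - 1);
    p += field[j * gridW + i];
    ofDrawCircle(p.x, p.y, 1);
}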
Okay, the procedure above is easy-peasy: if we want to move pixels, we just define a grid as big as the whole screen, so that the cell (i,j) corresponds precisely to the pixel at position (i,j); we do the computations above, and go home. Boom! Ehm, no, not really. The point is that if you tried to do just that, iterating over all the pixels on the screen would take so long that you would barely get a frame or two every couple of seconds (probably less). Sorry about that.
Enter shaders! We can indeed use the GPU to perform all these calculations via a custom-made fragment shader. First, we need to sort out how to send the information contained in the lookup table to the shader. Since the table is nothing else than a two-dimensional array, we can write the values of the field directly into a texture. On modern graphics cards, textures are very quick to upload, and we can upload more than one. Wait a minute, aren't textures supposed to be used for… colors and stuff? Yes, but. A texture is nothing else than a carrier for data: more precisely, at each of its coordinates it contains the values of red, green, blue and alpha which will be combined to provide the color of the pixel at the given coordinate. We can then use two of the color channels to carry the x and y components of a vector. In this case, I have chosen a unit vector field, i.e. at each point the vector is specified just by an angle, given by the value of Perlin noise at the given coordinate. This datum is written to the blue channel of the ofImage field in the code, from which we will obtain a texture. Another texture will contain the pixels we want to displace: I will refer to it as the "ink" texture. Finally, to update the ink texture we will use a "ping-pong technique", about which I have written here.
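Just to fix ideas, here is the encoding in isolation (a sketch of what the setup code below does, with the scaling made explicit; remember that ofColor channels are 8-bit, so the angle is stored normalized and the shader will read it back as a value in [0, 1]):

// Pack a noise-driven angle into the blue channel of an ofImage.
void writeFieldToImage(ofImage& field) {
    ofPixels& pix = field.getPixels();
    for (int y = 0; y < field.getHeight(); y++) {
        for (int x = 0; x < field.getWidth(); x++) {
            float angle01 = ofNoise(x * 0.0001, y * 0.0001); // normalized angle in [0, 1]
            pix.setColor(x, y, ofColor(0, 0, angle01 * 255.0)); // 8-bit blue channel
        }
    }
    field.update(); // re-upload the modified pixels to the GPU texture
}
// The fragment shader recovers the unit vector by scaling the
// normalized blue value back to an angle in [0, 2*pi].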
Now that we have sorted out a slick way to send data to the GPU, we have to deal with the elephant in the room. As I commented earlier, the CPU algorithm is based on the fact that we calculate the new position of the particle at (x,y) by obtaining the value of the vector field at that very position, move the particle, and draw something (a circle, a pixel, etc.) "there". Unfortunately, the fragment shader does not allow us to "move" fragments, since all it knows about is the given fragment! This is encapsulated in my favourite motto concerning shaders: "It is a lonely world, no one talks to no one else here!" In other words, whatever we know about a vertex or a fragment can't be shared. :(
Luckily, there is a way out, and it comes courtesy of textures. A texture can indeed be "looked up" from any fragment: so, instead of moving the particle, we trace its motion back. In other words, if we are at the fragment at position p, instead of saying "go to the fragment p + field(p) and draw my color", we say "the color at p is the color of the ink texture at p - field(p)". There is an explanation of why this is a sensible idea, and it has to do with (partial) differential equations and their (local) flows.
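In CPU terms, the backward lookup would read like the sketch below (the names are mine; v stands for the displacement field(p) at the current position):

// Backward advection: instead of pushing ink from p to p + v,
// each output pixel pulls its color from where the ink came from.
ofColor advectedColor(const ofPixels& ink, glm::vec2 p, glm::vec2 v) {
    glm::vec2 src = p - v; // trace the motion one step back
    src.x = ofClamp(src.x, 0, ink.getWidth() - 1);
    src.y = ofClamp(src.y, 0, ink.getHeight() - 1);
    return ink.getColor((int)src.x, (int)src.y);
}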
We can now look at the code in openFrameworks, where I have added some mouse interactivity for fun. Note that you need to provide an image to start with.

ofMain.cpp

#include "ofMain.h"
#include "ofApp.h"

//========================================================================
int main( ){
    ofGLFWWindowSettings settings;
    settings.setGLVersion(3, 2); //we define the OpenGL version we want to use
    settings.setSize(1024, 680);
    ofCreateWindow(settings);
    // this kicks off the running of my app
    ofRunApp(new ofApp());

}

ofApp.h

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{

    public:
        void setup();
        void update();
        void draw();
        ofImage field;
        ofImage ink;
        ofImage photo;
        ofTexture inkTex;
        ofTexture fieldTex;
        ofShader shader;
        ofFbo fbo;
        ofFbo main;

       
        float t = 0;
        float mouse_x;
        float mouse_y;

        void keyPressed(int key);
        void keyReleased(int key);
        void mouseMoved(int x, int y );
        void mouseDragged(int x, int y, int button);
        void mousePressed(int x, int y, int button);
        void mouseReleased(int x, int y, int button);
        void mouseEntered(int x, int y);
        void mouseExited(int x, int y);
        void windowResized(int w, int h);
        void dragEvent(ofDragInfo dragInfo);
        void gotMessage(ofMessage msg);
       
};

ofApp.cpp

#include "ofApp.h"

//--------------------------------------------------------------
void ofApp::setup(){

    ofBackground(0);
    ofHideCursor();
    ofToggleFullscreen();

    //Allocating images and textures

    ink.allocate(ofGetWidth(), ofGetHeight(), OF_IMAGE_COLOR);
    field.allocate(ofGetWidth(), ofGetHeight(), OF_IMAGE_COLOR);
    fbo.allocate(ofGetWidth(), ofGetHeight(), GL_RGB);
    main.allocate(ofGetWidth(), ofGetHeight(), GL_RGB);

   
    //Setting up the lookup table
    ofPixels& pix = field.getPixels(); // a reference, so our writes end up in the image
   

    for (int x = 0; x < ofGetWidth(); x++) {
        for (int y = 0; y < ofGetHeight(); y++) {
            float st = ofNoise(x * 0.0001, y * 0.0001);
            // ofColor channels run 0-255: scale the noise so it fills the blue channel
            pix.setColor(x, y, ofColor(0, 0, st * 255.0));
        }
    }

    field.update();
    ink.update();

    fieldTex = field.getTexture();

    photo.load("path_to_image"); // put your image in bin/data and set its path here
    photo.resize(ofGetWidth(), ofGetHeight());

    inkTex = photo.getTexture();   

    main.begin();
    photo.draw(0, 0);
    main.end();

    shader.load("shader.vert", "shader.frag");
}

//--------------------------------------------------------------
void ofApp::update(){
    if (t < 2) {
        t += 0.001;
    }
    // ease the stored coordinates towards the current mouse position
    mouse_x += (ofGetMouseX() - mouse_x) * 0.1;
    mouse_y += (ofGetMouseY() - mouse_y) * 0.1;
}

//--------------------------------------------------------------
void ofApp::draw(){

    // draw the advected ink into fbo, sampling from main (ping-pong)
    fbo.begin();

    shader.begin();
    shader.setUniformTexture("tex0", main.getTexture(), 0);
    shader.setUniformTexture("tex1", fieldTex, 1);
    shader.setUniform1f("windowWidth", ofGetWidth());
    shader.setUniform1f("windowHeight", ofGetHeight());
    shader.setUniform1f("mx", mouse_x / ofGetWidth());
    shader.setUniform1f("my", mouse_y / ofGetHeight());
    shader.setUniform1f("t", t);

    main.draw(0, 0);

    shader.end();

    fbo.end();

    // swap the buffers: the freshly written fbo becomes the next source
    swap(fbo, main);

    fbo.draw(0, 0);
}

//--------------------------------------------------------------
void ofApp::mousePressed(int x, int y, int button){
    main.begin();
    ofSetColor(ofRandom(0.0, 255.0), ofRandom(0.0, 255.0), ofRandom(0.0, 255.0));
    ofDrawCircle(x, y, 30);
    main.end();
}

Here is the vertex shader

shader.vert

#version 150


uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 textureMatrix;
uniform mat4 modelViewProjectionMatrix;

in vec4 position;
in vec4 color;
in vec4 normal;
in vec2 texcoord;

out vec2 varyingtexcoord;
uniform sampler2DRect tex0;

void main()
{
    varyingtexcoord = texcoord.xy;
    gl_Position = modelViewProjectionMatrix * position;
}

and the fragment shader

shader.frag

#version 150

// this is how we obtain the textures
uniform sampler2DRect tex0;
uniform sampler2DRect tex1;

in vec2 varyingtexcoord;
uniform float mx;
uniform float my;
uniform float windowWidth;
uniform float windowHeight;
uniform float t;
out vec4 outputColor;

void main()
{
    // normalized coordinates of the current fragment
    float x = gl_FragCoord.x / windowWidth;
    float y = gl_FragCoord.y / windowHeight;

    // length of the displacement: distance from the (eased) mouse, growing with t
    float l = sqrt((x - mx) * (x - mx) + (y - my) * (y - my)) * t;

    // unit vector field: the blue channel of tex1 holds the normalized angle,
    // offset by the mouse and scaled to [0, 2*pi]
    float angle = texture(tex1, varyingtexcoord).z;
    vec2 xy = vec2(cos((angle + mx) * 2.0 * 3.14), sin((angle + my) * 2.0 * 3.14));

    // backward lookup: the color here is the ink color at p - l * field(p)
    outputColor = texture(tex0, varyingtexcoord - l * xy);
}

It looks like this

A better result can be obtained by interpolating the output color, but I was lazy. :)
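For the record, the interpolation I mean is something like the following bilinear blend (a CPU sketch with illustrative names; on the GPU you would blend the four neighbouring texels in the fragment shader, or let linear filtering do it for you):

// Sample the ink at a fractional position by blending the four
// surrounding texels (assumes x, y at least one texel inside the border).
ofColor bilinearSample(const ofPixels& ink, float x, float y) {
    int x0 = (int)x, y0 = (int)y;
    float fx = x - x0, fy = y - y0;
    ofColor top    = ink.getColor(x0, y0).getLerped(ink.getColor(x0 + 1, y0), fx);
    ofColor bottom = ink.getColor(x0, y0 + 1).getLerped(ink.getColor(x0 + 1, y0 + 1), fx);
    return top.getLerped(bottom, fy);
}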
Why did I mention that this is step 0 in understanding how to implement a fluid simulation (like smoke, dust, etc.) on the GPU? Because the vector field in this case is *fixed*, i.e. it is an external vector field. In a fluid, the velocity field is a solution to the Navier-Stokes equations, which can be solved exactly only in very few cases, and it is "advected", i.e. you have to imagine that it gets transported by the fluid itself. A nice article on these topics can be found here: if you get comfortable with the basic concepts in this post, you will be able to follow it, modulo probably some of the maths.
Nowadays there are tons of libraries to do fluid simulations, like for instance ofxFluids, ofxFlowTools, etc. Why bother, then? I could say because understanding things is always better, but the truth is I do not have the answer to this question: you will have to find your own. :)

openFrameworks: a primer

posted by on 2014.09.02, under openFrameworks

For the past couple of weeks I have been looking at openFrameworks, an amazing C++ toolkit which is used to do amazing things. Until now I have only shown code in Processing (basically Java) and SuperCollider: C++ is a beast of its own, though, and in future posts I will try to talk about pointers, memory allocation, the way classes are defined, etc. For now, I'll just briefly explain the main idea behind this simple project. Basically, the webcam grabs frames, which are converted into textures, mapped onto meshes, deformed, etc., to get very interesting shapes and colors. You are not seeing any change in the grabbed frames simply because I am not in the video, and the camera is fixed :).
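The actual project does more than this, but a stripped-down version of that pipeline could look like the following (my own minimal reconstruction, not the project's source):

#include "ofMain.h"

class WebcamMeshApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;
    ofMesh mesh;

    void setup() {
        ofDisableArbTex(); // normalized texture coordinates, matching the plane's 0..1 texcoords
        grabber.setup(640, 480);
        mesh = ofMesh::plane(640, 480, 64, 48); // a grid of vertices to deform
    }

    void update() {
        grabber.update(); // grab a new frame from the webcam
        // deform the mesh: push each vertex along z with animated noise
        for (std::size_t i = 0; i < mesh.getNumVertices(); i++) {
            glm::vec3 v = mesh.getVertex(i);
            v.z = ofNoise(v.x * 0.01, v.y * 0.01, ofGetElapsedTimef()) * 50.0;
            mesh.setVertex(i, v);
        }
    }

    void draw() {
        ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2); // the plane is centered at the origin
        grabber.getTexture().bind(); // map the camera frame onto the mesh
        mesh.draw();
        grabber.getTexture().unbind();
    }
};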
Here is what you get

You can download the source code here. (You will need to put the texture .png file in a bin/data/texture folder).
In the following posts I will try to talk about more basic examples using the techniques in the project above.
