Voicings

posted by on 2018.04.21, under SuperCollider

Here’s a little exercise concerning voice management in SuperCollider. The idea is very simple: we have a collection of samples we would like to trigger randomly, but a sample may be retriggered only once it has finished playing. To do this we have to keep track of the active Synths (or “voices”), in order to avoid retriggering them. This role is played by the array ~voices: the index into the array identifies the buffer to be played, while a value of 0 or 1 marks the voice as available or unavailable, respectively. When instantiating a Synth on the server, SuperCollider allows us to register a function to be executed when that synth is freed, which in our case sets the ~voices entry corresponding to the given buffer back to 0. In the infinite loop we can then check the value of ~voices at a random position i: if this value is 0, we create a new Synth with the corresponding buffer and set that voice’s entry to 1; otherwise, we continue with the inf cycle. By changing the values in the rrand function you can decide how sparse the various instances will be.
You can use this technique with any type of SynthDef, in order to have a fixed voice system which allows neither retriggering nor voice stealing. Also, the way I have done it is not the most elegant one: have a look at NodeWatcher (a way to monitor Nodes on the server) for an alternative approach, sketched below.
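For instance, here is a minimal (untested) sketch of the NodeWatcher idea, assuming the \voice SynthDef and the ~buffers array from the code below: registering a node keeps its isPlaying flag in sync with the server, so we can poll that flag instead of maintaining our own array of 0s and 1s.

// Minimal NodeWatcher sketch: poll isPlaying instead of a ~voices array.
x = Synth(\voice, [\buff, ~buffers[0]]);
NodeWatcher.register(x, true); // assumePlaying: true, until the server reports /n_end

// Later, e.g. inside a loop: retrigger only if the voice has finished.
if(x.isPlaying.not, {
    x = Synth(\voice, [\buff, ~buffers[0]]);
    NodeWatcher.register(x, true);
});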
Here’s the full code:

s.boot;



(
SynthDef(\voice, {|buff, trig = 1, out = 0, amp = 1|
    var sig = PlayBuf.ar(2, buff, 1, trig, doneAction: 2);
    Out.ar(out, sig * amp);
}).add;

SynthDef(\reverb, {|in = 0|
    var sig = In.ar(in, 2);
    sig = CombC.ar(sig, 0.5, 0.5, 3);
    sig = FreeVerb.ar(sig, 0.5, 0.5, 0.7);
    Out.ar(0, sig);
}).add;
)



(
fork({
    var samplePath, ind;

    // Setting up the reverb line
    ~rev = Bus.audio(s, 2);
    y = Synth(\reverb, [\in, ~rev]);

    ~voices = [];
    ~buffers = [];

    // Loading buffers
    samplePath = thisProcess.nowExecutingPath.dirname ++ "/sounds/*";
    ~buffers = samplePath.pathMatch.collect {|file| Buffer.read(s, file, 0, 44100 * 9) };

    s.sync;

    ~buffers.do({
        ~voices = ~voices.add(0);
    });

    ind = Prand(Array.fill(~buffers.size, {|i| i}), inf).asStream;

    inf.do({
        var i, z, x;
        ~voices.postln;
        i = ind.next;
        z = ~voices[i];

        if(z == 0, {
            x = Synth(\voice, [\buff, ~buffers[i], \out, ~rev, \amp, rrand(0.8, 1.0)]);
            x.onFree({ ~voices[i] = 0 }); // i is local here, so each callback resets the right slot
            ~voices[i] = 1;
        });

        rrand(0.1, 0.6).wait;
    });
}).play;
)

s.quit;

All the samples have to be in a folder called “sounds” inside the same folder as your .scd file. I have used a few piano samples from Freesound.org, since I wanted to achieve a minimalist piano atmosphere. Here’s how it sounds

[audio clip]

Reactive applications, Shaders and all that

posted by on 2018.04.06, under Processing

We have already discussed the advantages of using shaders to create interesting visual effects. This time we will have to deal with fragment shaders *and* vertex shaders. In a nutshell, a vertex shader takes care of managing the vertices’ positions, colors, etc., which are then passed as “fragments” to the fragment shader for rasterization. “OMG, this is so abstract!!” Yeah, it is less abstract than it seems, but it nevertheless requires some know-how. As previously, I really suggest this: I find myself going back and forth to it regularly, always learning new things.
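To make this concrete, here is roughly the bare minimum a vertex/fragment pair has to do in Processing (a sketch; transform, position and color are the names Processing binds automatically, as in the full shaders further below):

// Minimal pass-through vertex shader: place each vertex, forward its color.
uniform mat4 transform;

attribute vec4 position;
attribute vec4 color;

varying vec4 vertColor;

void main() {
  gl_Position = transform * position;
  vertColor = color;
}

// Minimal fragment shader: color each rasterized fragment.
varying vec4 vertColor;

void main() {
  gl_FragColor = vertColor;
}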
Good, so, what’s the plan? The main idea in the following code is to use a PShape object to encode all the vertices: we are basically making a star-shaped thing out of rectangles, which in 3D-graphics parlance are referred to as “quads”. Once we have created such a PShape object, we no longer have to deal with the positions of the vertices: all the changes in the geometry will be handled by the GPU! Why is this exciting? Because the GPU is much, much faster at such things than the CPU. This allows in particular for real-time reactive fun. Indeed, the code gets input from the microphone and the webcam, separately. More precisely, each frame coming from the webcam is passed to the shader to be used as a texture for each quad. On the other hand, the microphone audio is monitored, and its amplitude controls the variable t, which in turn controls the rotation (in Processing) and, more importantly, the jittering in the vertex shader. Notice that the fragment shader doesn’t do anything out of the ordinary here: it just applies a texture.
Here’s how the code looks:

import processing.video.*;
import processing.sound.*;

Amplitude amp;
AudioIn in;



PImage  back;
PShape mesh;
PShader shad;

float t = 0;
float omega = 0;
float rot = 0;
int count = 0;

Capture cam;


void setup() {
  size(1000, 1000, P3D);
  background(0);
 
  //Set up audio

  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);

  //Set up webcam

  String[] cameras = Capture.list();

  cam = new Capture(this, cameras[0]);

  cam.start();

  textureMode(NORMAL);  

  mesh = createShape();
  shad = loadShader("Frag.glsl", "Vert.glsl");

  back = loadImage("back.jpg");


  //Generates the mesh;

  mesh.beginShape(QUADS);
  mesh.noStroke();

  for (int i = 0; i < 100; i++) {
    float phi = random(0, 2 * PI);
    float theta = random(0, PI);
    float radius = random(200, 400);
    PVector pos = new PVector( radius * sin(theta) * cos(phi), radius * sin(theta) * sin(phi), radius * cos(theta));
    float u = random(0.5, 1);

    //Set up the vertices of the quad with texture coordinates;

    mesh.vertex(pos.x, pos.y, pos.z, 0, 0);
    mesh.vertex(pos.x + 10, pos.y + 10, pos.z, 0, u);
    mesh.vertex(-pos.x, -pos.y, -pos.z, u, u);
    mesh.vertex(-pos.x - 10, -pos.y - 10, -pos.z, 0, u);
  }

  mesh.endShape();
}

void draw() {

  background(0);

  // Check camera availability and grab a frame;
  if (cam.available()) {
    cam.read();
  }

  image(back, 0, 0); // Draw the gradient background;

  pushMatrix();
  translate(width/2, height/2, 0);
  rotateX( rot * 10 * PI/2);
  rotateY( rot * 11 * PI/2);

  shad.set("time", exp(t) - 1); // Pass the (warped) variable t to the shader as the uniform "time";

  shader(shad);
  mesh.setTexture(cam); // Use the camera frame as a texture;
  shape(mesh);

  popMatrix();

  t += (amp.analyze() - t) * 0.05; // Smooth the variable t;

  omega += (t - omega) * 0.01; // Make the rotation acceleration depend on t;

  rot += omega * 0.01;

  resetShader(); // Reset the shader to display the background image;
}

// Frag.glsl

varying vec4 vertColor;
varying vec4 vertTexCoord;


uniform float time;
uniform sampler2D texture;

void main() {
  gl_FragColor = texture2D(texture, vertTexCoord.st) * vertColor;
}

// Vert.glsl

uniform mat4 transform;
uniform mat4 modelview;
uniform mat4 texMatrix;


attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;


uniform float time;


void main() {
  gl_Position = transform * position;

  gl_Position.x += sin(time * 2.0 * 3.14159 * gl_Position.x) * 10.0;
  gl_Position.y += cos(time * 2.0 * 3.14159 * gl_Position.y) * 10.0;

  vertColor = color;

  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);


}

Notice the call to resetShader(), which allows the gradient background, loaded as an image, to be shown without being affected by the shader program.
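The general pattern, in a minimal (hypothetical) sketch with a shader pair like the one above and an image file img.jpg: everything drawn between shader() and resetShader() goes through the custom pipeline, everything else through the default one.

PImage img;
PShader myShader;

void setup() {
  size(400, 400, P3D);
  img = loadImage("img.jpg");
  myShader = loadShader("Frag.glsl", "Vert.glsl");
}

void draw() {
  background(0);
  image(img, 0, 0);  // default pipeline: unaffected by the custom shader
  shader(myShader);  // custom pipeline from here on
  translate(width/2, height/2);
  box(100);          // rendered through Vert.glsl / Frag.glsl
  resetShader();     // restore the default pipeline before the next frame
}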
Here’s a render of it, recorded while making some continuous noise, a.k.a. singing.

Try it while listening to some music, it’s really fun!
