Voicings

posted by on 2018.04.21, under Supercollider

Here’s a little exercise in voice management in SuperCollider. The idea is very simple: we have a collection of samples we would like to trigger randomly, but a retrigger is allowed only once the whole sample has finished playing. To do this we have to keep track of the active Synths (or “voices”), so that we avoid retriggering them. This role is played by the array ~voices: the index into the array identifies the buffer to be played, while a value of 0 or 1 marks the voice as available or unavailable, respectively. When a Synth is instantiated on the server, SuperCollider lets us register a function to be executed when that synth is freed, which in our case resets the ~voices entry for the given buffer to 0. In the infinite loop we can then check the value of ~voices at a random position i: if it is 0, we create a new Synth with the corresponding buffer and set that voice’s entry to 1; otherwise, we simply continue with the inf cycle. By changing the values in the rrand function you can decide how sparse the various instances will be.
You can use this technique with any type of SynthDef to get a fixed voice system which does not allow retriggering or voice stealing. Also, the way I have done it is not the most elegant one: have a look at NodeWatcher (a way to monitor Nodes on the server) for an alternative approach.
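As a rough sketch of that alternative (assuming the SynthDef and buffers from the code below are already set up): register each synth with NodeWatcher and poll its isPlaying flag, instead of doing the bookkeeping in ~voices yourself.

x = Synth(\voice, [\buff: ~buffers[0], \out: ~rev]);
NodeWatcher.register(x, true); // from now on, x.isPlaying tracks the node on the server

// later, in the scheduling loop:
if(x.isPlaying.not, { "voice is free, ok to retrigger".postln });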
Here’s the code

s.boot;



(
SynthDef(\voice, {|buff, trig = 1, out = 0, amp = 1|

    var sig = PlayBuf.ar(2, buff, 1, trig, doneAction: 2);

    Out.ar(out, sig *  amp);

 }).add;

SynthDef(\reverb, {|in = 0|
    var sig = In.ar(in, 2);
    sig = CombC.ar(sig, 0.5, 0.5, 3);
    sig = FreeVerb.ar(sig, 0.5, 0.5, 0.7);
    Out.ar(0, sig);
}).add;
)



(

fork({

var samplePath;
var ind;


    //Setting up reverb line

~rev = Bus.audio(s, 2);

y = Synth(\reverb, [\in: ~rev]);

~voices = [];

~buffers = [];

//Loading buffers
    samplePath = thisProcess.nowExecutingPath.dirname ++ "/sounds/*";
    ~buffers = samplePath.pathMatch.collect {|file| Buffer.read(s, file, 0, 44100 * 9);};

s.sync;


~buffers.do({
    ~voices = ~voices.add(0);
});

    ind = Prand(Array.fill(~buffers.size, {|i| i}), inf).asStream;

    inf.do({
        var i, z, x; // local, so each iteration's i is captured by its own onFree function
        ~voices.postln;
        i = ind.next;
        z = ~voices[i];

        if(z == 0, {
            x = Synth(\voice, [\buff: ~buffers[i], \out: ~rev, \amp: rrand(0.8, 1.0)]);
            x.onFree({ ~voices[i] = 0 });
            ~voices[i] = 1;
        });

        rrand(0.1, 0.6).wait;
    });

}).play;
)

s.quit;

All the samples have to be in a folder called “sounds” inside the same folder as your .scd file. I have used a few piano samples from Freesounds.org, since I wanted to achieve a minimalist piano atmosphere. Here’s how it sounds

[Audio clip]

n-grams, Markov chains and text generators

posted by on 2016.11.03, under Processing, Supercollider

An n-gram is a contiguous sequence of n elements of a text, where each element can be a phoneme, a syllable, a character, a word, etc. Given a text, the collection of its n-grams allows us to infer some statistical correlations, and moreover to assemble the n-grams into a Markov chain. Right, what’s a Markov chain, then? A Markov chain describes a process in which the next step depends probabilistically only on the current step. A random walk, for instance, is a Markovian process. One way to assign a Markov chain to a text is to collect all its n-grams, and for each n-gram keep track of the n-grams that can follow it. We then walk through the collection: at each step we choose randomly among the possible successors, form the new n-gram, and proceed. Confusing? Let’s see an example. Suppose we have the text “this is a book and this is my pen”, and suppose we are interested in 2-grams, where a single gram is a word. We then have the pair (this, is), the pair (is, a), and so on. For each 2-gram we keep track of all the 1-grams which can follow it: for instance, after (this, is) we can have (a) or (my), and we assign to each of them an equal probability. Suppose we start from (this, is): if we happen to choose (a), we form the pair (is, a), which must be followed by (a, book), and so on, until we reach the end of the text. In this way we can generate a text with a similar statistical distribution of n-grams, in this case pairs of words. The greater n, the closer the generated text will be to the original one.
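For concreteness, here is a minimal SuperCollider sketch (the post’s actual code below is in p5.js) that builds the 2-gram table for the example sentence:

(
var words = "this is a book and this is my pen".split($ );
var order = 2;
var ngrams = Dictionary.new;
(words.size - order).do { |i|
    var gram = words[i..i + order - 1].join(" ");
    ngrams[gram] = (ngrams[gram] ? []).add(words[i + order]);
};
ngrams.postln;
// -> "this is" -> [a, my], "is a" -> [book], "a book" -> [and], etc.
)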
Inspired by this, I have written some code in p5.js, a set of libraries for Javascript, that generates text starting from n-grams of words. Here “word” just means “group of characters separated by whitespace”. Punctuation is not treated grammatically, nor are the roles of articles, nouns, etc. analysed; nevertheless, the results are still quite interesting. The choice of Javascript is dictated by the fact that Processing/Java is not very friendly with dictionaries, and in this case they are really useful.
Here’s the code

var words;
var ngrams = {};
var order = 2;
var txt = "";
var a = 0.1;
var target = 255;

function setup() {
  createCanvas(600, 600);

  words = source.split(" ");

  // collect, for each n-gram, the list of words that can follow it
  for (var i = 0; i < words.length - order; i++) {
    var gram_temp = [];
    for (var j = 0; j < order; j++) {
      gram_temp.push(words[i + j]);
    }
    var gram = join(gram_temp, " ");
    if (!ngrams[gram]) {
      ngrams[gram] = [];
    }
    ngrams[gram].push(words[i + order]);
  }
  markovIt(ngrams);
  txt = spText(txt);
}

function draw() {
  background(255);
  a += (target - a) * 0.1;
  textSize(12);
  fill(0, a);
  textDisplay(txt);
  if (a < 0.099) {
    restart();
  }
}

function restart() {
  markovIt(ngrams);
  txt = spText(txt);
  a = 0;
  target = 255;
}

function textDisplay(ttext) {
  textAlign(CENTER);
  text(ttext, 100, 60, width - 100, height - 60);
}

function spText(txt) {
  return txt.split(".");
}

function mousePressed() {
  target = 0;
}

function markovIt(ngrams) {
  // start from a random n-gram of the source text
  var index = int(random(0, words.length - order + 1));
  var curr_temp = [];
  for (var j = 0; j < order; j++) {
    curr_temp.push(words[index + j]);
  }
  var current = join(curr_temp, " ");
  var result = current;
  if (!ngrams[current]) {
    return null;
  }
  var range = int(random(30, 500));
  for (var i = 0; i < range; i++) {
    if (!ngrams[current]) {
      break;
    }
    var possibilities = ngrams[current];
    if (possibilities.length == 0) {
      break;
    }
    var next = possibilities[int(random(0, possibilities.length))];
    result = result + " " + next;
    // the last `order` words of the result form the next n-gram
    var tokens = result.split(" ");
    curr_temp = [];
    for (var j = order; j > 0; j--) {
      curr_temp.push(tokens[tokens.length - j]);
    }
    current = join(curr_temp, " ");
  }
  txt = result;
}

Notice that you need to declare a variable source, which should contain the input text.
As a little homage to Noam Chomsky, a pioneer of grammar studies (and much more), here you can find a working version of the code above using 2-grams in words, and based on this. Click on the canvas to generate some new text.

OSC messaging, Processing and SuperCollider

posted by on 2016.10.15, under Supercollider

OSC stands for Open Sound Control, a protocol for networking between computers, synths and various multimedia devices. For instance, it allows a piece of software, say Ableton Live, to communicate with a hardware synth, provided the latter supports OSC. You might think that you already know how to do this via MIDI, and you’d be partially right. The differences between OSC and MIDI are many: accuracy, robustness, etc. One of the most useful differences, though, is that OSC allows you to send *any* type of message, at high resolution, to any address. The MIDI protocol, by contrast, has its own fixed messages, like note On, note Off, pitch, etc., with low resolution (0–127). This means that if you use MIDI to communicate between devices, you are required to translate your original message, say the position of a particle or the color of the pixel at the mouse point, into standard MIDI messages. And this is often not enough. An example is usually better than many principled objections, so here comes a little Processing sketch communicating with SuperCollider. The idea is quite simple: in the Processing sketch, systems of particles are spawned randomly, each particle having a color, a halflife, and also a tag. The user can click on the screen to generate a circle. If a particle traverses one of the circles, it tells SuperCollider to generate a synth, passing along various information, like its position, velocity, color, etc. This data affects the synth created in SuperCollider. Here’s the Processing code, which uses the library oscP5

//// Setup for OscP5;

import oscP5.*;
import netP5.*;
OscP5 oscP5;
NetAddress Supercollider;

//// Setting up the particle systems and the main counter;


ArrayList<Parsys> systems;
ArrayList<Part> circles;
int count = 0;

void setup(){
  size(800, 800);
  background(255);
  oscP5 = new OscP5(this,12000);
  Supercollider = new NetAddress("127.0.0.1", 57120);
 
  systems = new ArrayList<Parsys>();
  circles = new ArrayList<Part>();
 
}

void draw(){
  background(255);
  for (int i = systems.size() - 1; i>=0; i--){
    Parsys sys = systems.get(i);
    sys.update();
    sys.show();
    if (sys.isDead()){
      systems.remove(i);
    }
    for (int j = 0; j < circles.size(); j++){
      Part circ = circles.get(j);
      sys.interact(circ);
    }
  }
 
  for (int i = 0; i < circles.size(); i++){
    Part circ = circles.get(i);
    circ.show();
  }
  if (random(0, 1) < 0.04){
    Parsys p = new Parsys(random(0, width), random(0, height));
    systems.add(p);
  }
}

void mousePressed(){
  Part circ = new Part(mouseX, mouseY, 0, 0, 0, 30);
  circ.stuck = true;
  circ.halflife = 80;
  circles.add(circ);
}

/////Define the class Parsys

class Parsys {
  ArrayList<Part> particles;
 
  Parsys(float x, float y){
    particles = new ArrayList<Part>();
   
    int n = int(random(20, 80));
    for (int i = 0; i < n; i++){
      float theta = random(0, 2 * PI);
      float r = random(0.1, 1.2);
      Part p = new Part(x, y, r * cos(theta), r * sin(theta), int(random(0, 3)));
      p.tag = count;
      count++;
      particles.add(p);
    }
  }
 
  void update(){
    for (int i =  particles.size() - 1; i>=0; i--){
      Part p = particles.get(i);
      p.move();
      if (p.isDead()){
        particles.remove(i);
      }
    }
  }
 
  void show(){
    for (int i =  particles.size() - 1; i>=0; i--){
      Part p = particles.get(i);
      p.show();
    }
  }
 
  boolean isDead(){
    if (particles.size() <=0){
      return true;
    }
    else return false;
  }
 
  void interact(Part other){
    for (int i = 0; i < particles.size(); i++){
      Part p = particles.get(i);
      if (p.interacting(other)){
        float dist = (p.pos.x - other.pos.x)* (p.pos.x - other.pos.x) + (p.pos.y - other.pos.y)*(p.pos.y - other.pos.y);
        if (!p.active){
          float start = other.pos.x/width;
        p.makeSynth(start, dist/(other.rad * other.rad));
        p.active = true;
        }
        else {
          p.sendMessage(dist/(other.rad * other.rad));
        }
      }
      else
      {
        p.active = false;
      }
    }
  }
}

/////Define the class Part

class Part {
  PVector pos;
  PVector vel;
  int halflife = 200;
  int col;
  float rad = 2;
  boolean stuck = false;
  boolean active = false;
  int tag;

  Part(float x, float y, int _col) {
    pos = new PVector(x, y);
    vel = new PVector(0, 0); // avoid a null velocity if this constructor is used
    col = _col;
  }

  Part(float x, float y, float vx, float vy, int _col) {
    pos = new PVector(x, y);
    vel = new PVector(vx, vy);
    col = _col;
  }

  Part(float x, float y, float vx, float vy, int _col, float _rad) {
    pos = new PVector(x, y);
    vel = new PVector(vx, vy);
    col = _col;
    rad = _rad;
  }

  void move() {
    if (!stuck) {
      pos.add(vel);
      halflife--;
    }
  }

  void show() {
    noStroke();
    fill(255 / 2 * col, 255, 100, halflife);
    ellipse(pos.x, pos.y, rad * 2, rad * 2);
  }

  boolean isDead() {
    if (halflife < 0) {
      active = false;
      return true;
    } else return false;
  }

  boolean interacting(Part other) {
    if (dist(pos.x, pos.y, other.pos.x, other.pos.y) < rad + other.rad) {
      return true;
    } else return false;
  }

  void makeSynth(float start, float dist) {
      OscMessage message = new OscMessage("/makesynth");
      message.add(col);
      message.add(vel.x);
      message.add(vel.y);
      message.add(start);
      message.add(dist);
      message.add(tag);
      oscP5.send(message, Supercollider);
    }
   
    void sendMessage(float dist){
    OscMessage control = new OscMessage("/control");
    control.add(tag);
    control.add(dist);
    oscP5.send(control, Supercollider);
  }
}

Notice that the messaging happens in the functions makeSynth() and sendMessage() of the class Part. The OSC message is formed in the following way: first comes its symbolic name, like “/makesynth”, and then we add the various pieces of information we want to send. These can be integers, floats, strings, etc. The symbolic name allows the “listening” device, in this case SuperCollider, to perform an action whenever a message with that specific name arrives. You also need to specify an address to send the message to: in my case, I used SuperCollider’s default incoming address. Here is the code on the SuperCollider side

s.boot;

(

fork{
SynthDef(\buff, {|freq = 10, dur = 10, gate = 1, buff, rate = 1, pos = 0, pan = 0, out = 0, spr|
    var trig = Impulse.ar(freq);
    var spread = TRand.ar((-1) * spr * 0.5, spr * 0.5, trig);
    var sig = GrainBuf.ar(2, trig, dur, buff, rate, pos + spread + spr, 0, pan);
        var env = EnvGen.kr(Env([0.0, 1, 1, 0], [Rand(0.05, 0.8), Rand(0.1, 0.4), Rand(0.01, 0.5)]), doneAction: 2);
    Out.ar(out, sig * env * 0.02 );
}).add;

SynthDef(\rev, {|out, in|
    var sig = In.ar(in, 2);
    sig = FreeVerb.ar(Limiter.ar(sig), 0.1, 0.5, 0.1);
    Out.ar(0, sig);
    }
).add;

~rev = Bus.audio(s, 2);
s.sync;
Synth(\rev, [\in: ~rev]);

~synths = Dictionary.new;


~buffers = "pathFolder/*".pathMatch.collect({|file| Buffer.readChannel(s, file, channels: [0])}); // mono buffers, as GrainBuf requires

~total = 0;

OSCdef(\makesynth, {|msg|
    //msg.postln;
    if(~total < 60, {
        x = Synth(\buff, [\freq: rrand(0.1, 40), \dur: rrand(0.01, 0.5), \buff: ~buffers[msg[1]], \rate: msg[3] * rrand(-3, 3), \pan: msg[2], \pos: msg[4], \spr: msg[5], \out: ~rev]);
        ~synths.put(msg[6], x);
        x.onFree({~total = ~total - 1; ~synths.removeAt(msg[6]);});
        ~total = ~total + 1;
        //~total.postln;
    }, {});
}, "/makesynth");

OSCdef(\control, {|msg|
    // msg[1] is the particle tag, msg[2] the normalized squared distance;
    // the posted \freq2 is not an argument of \buff, so map to the grain rate \freq instead
    ~synths[msg[1]] !? {|synth| synth.set(\freq, msg[2].linlin(0, 1, 0.1, 40))};
}, "/control");

}
)

The listening is performed by OSCdef(name, {function}, symbolic name), which passes the message we sent from Processing as an argument to the function. Notice that the first entry of the array msg will always be the symbolic name of the message. Pretty simple, uh? Nevertheless, the SuperCollider code has some nuances that should be explained. First, a synth had better be freed when it finishes its job, otherwise you’ll get an accumulation of synths which will eventually freeze your computer. I have also put a cap on the total number of synths allowed at any given time, to avoid performance issues. We also want to be able to control various parameters of a given synth while the particle that generated it is still inside one of the circles. To do this, we have to keep track of the synths which are active at any given moment. I’ve done this by “tagging” each newly created particle with an integer, and by using the Dictionary variable ~synths. Recall that a Dictionary is a collection of data of the form [key, value, key, value, …]: you can retrieve a given value, in this case a synth node, via the associated key, in this case the particle tag. When a synth node is freed, via the method onFree() we decrease the total number of active synths and remove the key corresponding to the particle tag.
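By the way, you can test the receiving side without running the sketch at all, by sending a message to SuperCollider from SuperCollider itself (the values here are arbitrary, just shaped like what the sketch sends):

// col, vel.x, vel.y, start, dist, tag
n = NetAddr("127.0.0.1", 57120); // sclang's default incoming port
n.sendMsg("/makesynth", 0, 0.1, 0.2, 0.5, 0.3, 42);
n.sendMsg("/control", 42, 0.8);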
I hope the example above shows how powerful OSC communication is, and how nuanced the resulting actions can be.
Here is an audio snippet.

[Audio clip]

The Molecular Music Box in SuperCollider

posted by on 2014.08.19, under Supercollider

Via Reaktor tutorials I came across this video. I have already talked about generative systems that create rich patterns (see here, and here), and how simple rules can give rise to emergent complexity.
Watch the video to find out what the simple rules are in this case (and to see why it is called “molecular” :) ), or look at the following SuperCollider code, which, I must say, took me a bit more effort than I expected

MIDIClient.init;

~mOut = MIDIOut.new(3);

(
var seed = 48;
var degrees, deg;
var length = Pseq([4, 3], inf).asStream;
var dur = length.next();
var bars = 25;
var quant = 16;
var notes = [];
var loop = [];
var pos = [];
var next = 0;
var patterns = []; // flat [onset, Pbind, onset, Pbind, ...] list for Ptpar

degrees = [];

//Building the MIDI values for the white keys

9.do({|i|
degrees = degrees++([0, 2, 4, 5, 7, 9, 11] + (12*i));
});

//Starting notes from the seed

deg = Pseq(degrees, inf, degrees.indexOf(seed)).asStream;


(bars * quant).do({|i|
  var note;

    if((i%quant == 0) && (notes != []),
    {
     loop = loop.add(notes);
     notes = [];
    });

    if((i%quant == next) && (pos.includes(next) == false),{
      notes = notes.add([deg.next(), dur/4]);
      pos = pos.add(next);
      next = (next + dur)%quant;
    });

    if ( (i%quant == next) && (pos.includes(next) == true),{
     dur = length.next();
     notes = notes.add([deg.next(), dur/4]);
     next = (next + dur)%quant;
    });

   });

  loop.do({|patt, i|
      patt.postln;
      patterns = patterns ++ [i * 4, Pbind(
          \type, \midi,
          \midiout, ~mOut,
          \chan, 0,
          [\midinote, \dur], Pseq(patt, inf),
          \legato, 1,
          \amp, rrand(0.1, 0.5)
      )];
  });

  Ptpar(patterns, 1).trace.play;
)

Notice that you can very easily change any of the rules (duration lengths, scale used, etc.) with a few keystrokes: the power of a text-based programming language! :)
I have sent the output of this to the Grand Piano instrument in Ableton Live 9.
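If I read the naming convention right (first duration, key, second duration), 9C14½ only changes the duration rule; since the grid arithmetic above works in integer steps, one way to accommodate the half value (a sketch, untested) is to count everything in half-beats:

var length = Pseq([18, 29], inf).asStream;  // 9 and 14.5, counted in half-beats
var quant = 32;                             // the 16-step grid, also doubled
// ...and \dur is then built from dur/8 instead of dur/4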
Here is the result for 4C3

[Audio clip]


and here is the one for 9C14½

[Audio clip]

Objects manipulation in SuperCollider and Synths generation

posted by on 2014.08.18, under Supercollider

In the last post, I wrote about some object-oriented techniques in Processing which simplified our code and gave us the possibility to reason more abstractly. SuperCollider is also an object-oriented programming language, and in the little example below I want to explore a situation where we can really take advantage of this: it somehow belongs to the category of “things one (or I?) can’t really do in Max/Msp, Reaktor, etc.”. :)
Here’s the simple situation: suppose we want to create 100 different percussive sounds with FM synthesis, with random waveforms as carriers, random harmonics for the modulating signal, etc., and play them randomly. We don’t want to change the sounds later, just play them. The first strategy we could adopt is the cheap and dirty no-brainer approach, which works along the following lines: we create a single SynthDef, i.e. an audio graph on the server, which is “big” enough to encompass the various possible sounds, store the randomly created combinations of waveforms, frequencies, etc. in an array, and poll this data when needed. The SynthDef would look like

SynthDef(\perc, { arg out = 0, freq = 0, decay = 0.5, pan = 0, depth = 1, waveform = 0, harm = 0;
    var sig = Select.ar(waveform, [
        SinOsc.ar(freq + (SinOsc.ar(harm * freq) * depth)),
        Saw.ar(freq + (SinOsc.ar(harm * freq) * depth))
    ]);
    var env = EnvGen.kr(Env.perc(0.01, decay), doneAction: 2);
    Out.ar(out, Pan2.ar(LPF.ar(sig * env * 0.1, 3000), pan));
}).add;

Setting the parameter waveform chooses among the waveforms, freq determines the frequency, etc. A big array (or dictionary, if you prefer) of randomly generated values, and I’m done!
Well, yeah, this approach works, but it’s quite ugly and, more importantly, inefficient: for each Synth created, a copy of both waveforms is kept, only one of which is heard. So, for each percussion sound, we allocate twice what we need. Remember that there is a difference between not having a sound generator on the audio server at all and having one which is merely muted: in the second case, the sample-rate computations are performed anyway.
Let’s think about this situation from a different angle, which makes more use of the language capabilities of SuperCollider. What we really need are different percussion sounds: they are different enough to be considered not as different instances of the same SynthDef, but rather as instances of different SynthDefs. Instead of having SuperCollider create different Synth instances, we can tell it to create different SynthDefs! Indeed, consider the following code (assuming a given choice for the range of the parameters)

100.do({|i|
    var name = "perc" ++ i;
    SynthDef(name, { arg out = 0;
        var freq = [24, 57, 68, 29, 87, 39, 70].choose.midicps;
        var decay = rrand(0.1, 0.5);
        var pan = rrand(-1.0, 1.0);
        var sig = [SinOsc, Saw].choose.ar(freq + SinOsc.ar([0, 1, 2, 3, 4].choose * freq * rrand(1, 30))) * 0.1;
        var env = EnvGen.kr(Env.perc(0.01, decay), doneAction: 2);
        Out.ar(out, Pan2.ar(LPF.ar(sig * env, 3000), pan));
    }).add;
});

First, we define the names by which we can refer to each SynthDef: this is easily done, since the name of a SynthDef is a string (or a symbol), and we can obtain each of them by concatenating a string (in this case “perc”) with the counter index. Next, we define the different SynthDefs: the beautiful thing that makes this work nicely is contained in the part

[SinOsc, Saw].choose.ar()

Remember: SuperCollider is object oriented, and in most cases, even if we don’t think about it, we are dealing with objects. Indeed, when we write something like SinOsc.ar(440), we are actually creating an object of type SinOsc and sending it the value 440 via the method .ar (which stands for “audio rate”), which provides the computations needed to produce audio on scsynth, the audio server. Since SinOsc, Saw, etc. are “extensions” of the class UGen (more or less), we can make a list (or Collection) of objects of this type, and use the powerful collection-manipulation methods provided by SuperCollider (in this case the method .choose). Since both SinOsc and Saw react in the same way to the method .ar (an example of “polymorphism”), we can follow .choose with .ar and pass everything in the round brackets. This way each SynthDef contains only one copy of the chosen waveform.
This way of reasoning might look a bit abstract at first, but it’s based on a simple concept: anything that requires repetitive actions is best left to a programming language! Moreover, since SuperCollider is heavily object oriented, it offers really serious manipulation possibilities, as we have seen. We already used an incarnation of this concept when we built graphical interfaces.
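As another tiny illustration of the same idea (a sketch, not part of the patch below): any collection method can pick UGen classes before the audio graph is built, for the filter as well as the oscillator.

SynthDef(\randFilter, { arg out = 0;
    var sig = [SinOsc, Saw, Pulse].choose.ar(220) * 0.1;
    sig = [LPF, HPF].choose.ar(sig, exprand(400, 4000)); // filter type and cutoff fixed at build time
    Out.ar(out, sig ! 2);
}).add;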
Finally, the percussive sounds can be played via something like this

fork{
inf.do({
    i = rrand(0, 99);
    Synth("perc"++i);
    rrand(0.1, 0.4).wait;
})
}

Here’s the complete code

s.boot;

(

var reverb, delay;
reverb = Bus.audio(s, 2);
delay = Bus.audio(s, 2);


SynthDef(\reverb, {arg out = 0, in;
    var sig = In.ar(in, 2);
    sig = FreeVerb.ar(sig);
    Out.ar(out, sig);
}).add;

SynthDef(\delay, {arg out = 0, in;
    var sig = In.ar(in, 2) + LocalIn.ar(2);
    LocalOut.ar(DelayL.ar(sig, 0.5, 0.1)*0.8);
    Out.ar(out, sig);
}).add;

fork{
   
100.do({|i|
    var name;  
    name = "perc"++i;
    SynthDef(name, { arg out = 0;
                        var freq = [24, 57, 68, 29, 87, 39, 70].choose.midicps;
                        var decay =  rrand(0.1, 0.5);
                        var pan = rrand(-1.0, 1.0);
            var sig = [SinOsc, Saw].choose.ar(freq + (SinOsc.ar([0, 1, 2, 3, 4].choose*freq*rrand(1, 30))))*0.1;
        var env = EnvGen.kr(Env.perc(0.01, decay), doneAction: 2);
            Out.ar(out, Pan2.ar(LPF.ar(sig*env, 3000), pan));
    }).add;
    });
   

s.sync;

        Synth(\reverb, [\in: reverb]);
    Synth(\delay, [\in: delay]);
   
    inf.do({
    i = rrand(0, 99);
        Synth("perc"++i, [\out: [reverb, delay].choose]);
    rrand(0.1, 0.4).wait;
})
}
)

I have added some reverb and delay to taste.
Notice that this code illustrates only the tip of the iceberg of what you can achieve by structuring your code in an object-oriented way.
No audio this time, since it is not that interesting. :)

Drones, drones, drones

posted by on 2014.07.05, under Supercollider

Who doesn’t like ambient drones? Ok, I guess some people might not like them.
Here’s a little SuperCollider code to generate a kind of feedbacky ambient atmosphere. It looks involved, but it isn’t. The main ingredient is wavetable synthesis: the method .sine1Msg fills the buffer b with a sine wave (of 512-sample “period”) and its harmonics, whose number and amplitudes are specified in the array parameter of the method. We can then “play” the wavetable with Osc.ar, a wavetable oscillator (here in its chorusing variant COsc.ar). Moreover, for added noisiness and ambience, I have added a bitcrusher with randomly varying sample rate and bit depth, and also a sample player playing the buffer c, which is not too loud and just contributes texture. You could easily modify it to load multiple samples and choose between them. The last part involves a nice trick I learned here: basically, you stack a certain number of short time delays and use a feedback line to simulate a reverberation effect. In particular, if the feedback parameter is 1, you get an “infinite reverb”. Very cool. Be careful not to keep feeding input in this situation, otherwise you will get the usual feedback headache! 😀
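To isolate just the wavetable part (a minimal sketch, separate from the full patch below; sine1 is the direct-send version of the sine1Msg used inside Buffer.alloc):

b = Buffer.alloc(s, 512, 1);
b.sine1([1, 0.5, 0.25]); // fill as a wavetable: harmonic n gets amplitude amps[n-1]
x = { Osc.ar(b, 110, 0, 0.1) ! 2 }.play; // Osc reads the wavetable at 110 Hz
x.free; // when done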

s.boot;

(
b = Buffer.alloc(s,512,1,{|z|z.sine1Msg(1.0/[1,3,5,7,9,11,13,15,17])});
c = Buffer.read(s, "/pathtothesample");

fork{
    s.sync;
~sound = {
    var sig;
    var local;
    var f = [30,60,15]*Lag.kr(TChoose.kr(Impulse.kr(0.05),[0.75, 0.5, 1]), 8);
    sig = Mix(COsc.ar(b.bufnum,f + SinOsc.ar(f*25, 0, LFTri.kr(0.01).range(0, 10)), [0.1, 0.1001, 0.2], 0.2))*0.1;
sig = sig;
sig = LeakDC.ar(Ringz.ar(sig, TChoose.kr(Impulse.kr(0.1),[88, 97, 99, 100].midicps), LFTri.kr([0.05, 0.051]).range(0.2, 0.5)));
sig = sig + Decimator.ar(sig, 48000*LFNoise0.kr(1).range(0.25, 1), TChoose.kr(Impulse.kr(4), [8, 12, 16, 24]), 0.4);
    sig = LPF.ar(sig, 3000*LFTri.kr(0.01).range(0.1, 1));
    sig = sig + (Splay.ar(Array.fill(4, {PlayBuf.ar(2, c, rrand(-0.8, 0.8), loop: 2)*0.01}), 0.5));
    sig = CombC.ar(sig, 1.0, [0.1, 0.2], LFTri.kr(0.05).range(5, 9));
       
    local = sig + LocalIn.ar(2);
    15.do({
            local = AllpassN.ar(local, 0.06, Rand(0.001, 0.06), 3)
          });
    LocalOut.ar(local*0.4);
       
    Out.ar(0, Limiter.ar(LPF.ar(local, 4000), 0.8)*EnvGen.kr(Env([0, 1, 1, 0],[3, 100, 10])));
}.play;
}
)

s.quit;

You can listen to the result here

[Audio clip]

Tides on the Sea of Tranquillity

posted by on 2014.03.04, under Supercollider

A quick post, since it’s been a while since I last updated this.
Here’s a short track, “Tides on the Sea of Tranquillity”, built around one of my favourite UGens in SuperCollider: GrainBuf, a very flexible buffer granulizer.
The idea for the track is quite simple: there are four granulizers, which get activated at certain points, and whose parameters are controlled by the loops in the Task. In particular, the position parameter of each granulizer is “lagged”, so that at each change it will start from the middle of the buffer and reach the new value in a given time. Also, the spread, pitch and pan are jittered, to give a spacey feeling. The other important part is that the grain rate is controlled via the .midiratio method, which varies the pitch along a tempered scale, giving some harmonic consistency. To end, everything is passed to a feedback delay line with a simple compressor.
One technical point: make sure the buffers you load into GrainBuf are mono, otherwise it will fail silently! Some time ago it took me ages to figure this out (I know, the documentation says so…).
Enjoy. 😉
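For reference, before the full code: .midiratio maps a semitone offset to a playback-rate multiplier, which is what keeps the grains in tune:

7.midiratio;     // -> 1.4983..., a fifth up
(-12).midiratio; // -> 0.5, an octave down
([0, 4, 7].choose).midiratio; // a random rate from a major triad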

s.boot;

(
fork({

//Define SynthDefs for the granulizers and delay

SynthDef(\granular, {arg out = 0, buff, pos = 0, spread = 0.05, dur = 0.40, p = 0.5, t_trig = 0, rate= 1, t = 300, a = 0.1, l = 1, pa = 0, att = 6, f = 100;
    var trigger = Impulse.ar(t);
    var sp = TRand.ar((-1)*spread, spread, trigger);
    var pan = TRand.ar((-1)*p, p, trigger);
    var sig = GrainBuf.ar(1, trigger, dur, buff, rate + LFNoise0.ar(f).range(-0.01, 0.01), Lag.kr(pos, l) + sp, pan:pan);
    var env = EnvGen.kr(Env.linen(att, 4, 8), gate: t_trig, doneAction:0);
    Out.ar(out, Pan2.ar(sig*env*a, pa));
}).add;

SynthDef(\del, {arg out = 0, in, feed = 0.93;
    var sig = In.ar(in, 2) + LocalIn.ar(2);
    var delL = DelayL.ar(sig[0], 2.0, 0.2);
    var delR = DelayL.ar(sig[1], 2.0, 0.5);
    LocalOut.ar([delL, delR]*feed);
    Out.ar(out, Compander.ar(sig, sig, 0.5, 1, 0.5, 0.01, 0.3));
}).add;


// Load samples in mono buffers and create a audio bus for delay
~path = PathName(thisProcess.nowExecutingPath).pathOnly;

~del = Bus.audio(s, 2);

~buff = Buffer.readChannel(s, ~path ++ "sample/sample1.wav", channels: [0]);
~buff2 = Buffer.readChannel(s, ~path ++ "sample/sample2.wav", channels: [0]);
~buff3 = Buffer.readChannel(s, ~path ++ "sample/sample3.wav", channels: [0]);
~buff4 = Buffer.readChannel(s, ~path ++ "sample/sample4.wav", channels: [0]);
~buff5 = Buffer.readChannel(s, ~path ++ "sample/sample5.wav", channels: [0]);

s.sync; //Wait for the buffers to be loaded;

y = Synth(\del, [\in: ~del]);


// Make an array of synths to be used as voices

~synths = Array.fill(3, {Synth(\granular, [\buff: ~buff, \t_trig:0, \pos: 0.5, \rate: ([0, 4, 5, 7, 9, 12, 16, 17] - 12).choose.midiratio, \spread: 0.05, \out:~del ])});

Task({

    10.do({
        ~synths[0].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17] - 12).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.05, 0.1), \f, rrand(50, 150));

        rrand(4, 9).wait;
    });

        30.do({
        ~synths[0].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17] - 12).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.05, 0.1), \f, rrand(50, 150));

        ~synths[1].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17]).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.01, 0.05), \f, rrand(50, 150));

        rrand(0.1, 4).wait;

    });

            20.do({
            ~synths[0].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17] - 12).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.05, 0.1), \f, rrand(50, 150));

            ~synths[1].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17]).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.05, 0.1), \f, rrand(50, 150));

            ~synths[2].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17] + 12).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.05, 0.1), \f, rrand(50, 150));

        rrand(0.1, 3).wait;

    });
                10.do({
            ~synths[0].set(\buff, [~buff, ~buff2, ~buff3, ~buff4, ~buff5].wchoose([0.5, 0.3, 0.1, 0.08, 0.02]), \t_trig, 1, \t, rrand(10, 400), \dur, rrand(0.020, 0.40), \pos, rrand(0.0, 0.8), \l, rrand(0.4, 4), \rate, ([0, 4, 5, 7, 9, 12, 16, 17] - 12).choose.midiratio, \spread, rrand(0.005, 0.1), \p, rrand(-0.3, 0.3), \out, ~del, \pa, rrand(-0.5, 0.5), \att, rrand(6, 9), \a, rrand(0.05, 0.1), \f, rrand(50, 150));

        rrand(1, 3).wait;

});



}).play(quant: 4);
})
)

[Audio clip]

A sequenced fx machine

posted by on 2014.01.19, under Supercollider

Ever wondered how to make a sequenced fx machine, where the effects cut each other out (like dbGlitch, say)? Here’s a very simple example

SynthDef(\fx, {arg out = 0, in, trig = 0, lag = 0.1, grid = 1/2, bit = 7, sample = 5000;
    var sig, del;
    sig = In.ar(in,2)*0.5 + Select.ar(trig, [In.ar(in, 2)*0.5, LocalIn.ar(2), LPF.ar(Decimator.ar(In.ar(in,2)*0.5, sample, bit), 6000)]);
    del = DelayL.ar(sig, 1.0, Lag.kr(grid, lag));
    LocalOut.ar(HPF.ar(del*(trig.clip(0, 1)), 2000));
    Out.ar(out, Pan2.ar(sig[0], TRand.kr(-0.5, 0.5, trig)*trig));
}).add;

which uses the UGen Select.ar, functioning as a many-to-one audio switch*. The delay part works as a simple beat repeater, and the other effect is a bitcrusher. After instantiating an audio bus, you can then delegate the sequencing to a Pmono

~rep = Bus.audio(s, 2);

Pmono(\fx, *[\trig : Pwrand([0, 1, 2], [0.4, 0.3, 0.3], inf),
    \grid: Prand([1/2, 1/4, 1/8, 1/16], inf),
    \bit: Prand([7, 10, 8, 24], inf),
    \sample: Pwhite(1000, 10100, inf),
    \lag: Pwhite(0, 0.1, inf),
    \in: ~rep,
    \dur: 1/8]).play(quant: 4);

Whatever you send to ~rep will be processed by the fx SynthDef (be careful with the order of execution, or use a Group to be on the safe side). If you already have SynthDefs for the single effects you would like to use, it is more convenient to send the audio to separate busses, and then add a SynthDef at the tail of the synth chain containing a Select.ar UGen which chooses among the different effects. You will have to run additional Pmonos to control the various parameters, though.
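A minimal sketch of that layout (the bus and SynthDef names here are mine, and the effect synths reading ~dry and writing to ~fxA and ~fxB are assumed to be defined elsewhere):

~dry = Bus.audio(s, 2);
~fxA = Bus.audio(s, 2);
~fxB = Bus.audio(s, 2);

SynthDef(\fxSelect, { arg out = 0, which = 0, inA, inB, inC;
    var sig = Select.ar(which, [In.ar(inA, 2), In.ar(inB, 2), In.ar(inC, 2)]);
    Out.ar(out, sig);
}).add;

// at the tail of the chain, so it runs after the effect synths:
x = Synth(\fxSelect, [\inA: ~dry, \inB: ~fxA, \inC: ~fxB], addAction: \addToTail);

// sequence the switch from a routine (or a Pmono controlling \which):
fork {
    inf.do {
        x.set(\which, [0, 1, 2].wchoose([0.4, 0.3, 0.3]));
        (1/8).wait;
    };
};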
Either way, note that this approach to sequenced fx machines is quite expensive, since the various effects run constantly on the server.
Anyway, in this simple case it sounds like this

[Audio clip]

*You can use a similar approach in Max/Msp, Reaktor, etc.

First L!vE K@ding session!

posted by on 2014.01.13, under Supercollider

As promised, here’s my first live coding screencast! Unfortunately the video turned out not very good: ffmpeg on Linux failed me miserably. I thought it was worth sharing anyway, so here it is. 😉
Some comments: I have used some SynthDefs that I prepared beforehand and some samples that I loaded in advance. This is what the tabs “initialize” and “synths” that you see in the video are for. This is not a “blank page” approach to live coding, but it’s what I have realized works for me, since I am more interested in improvising patterns than in the actual sound synthesis (which I also did in this session, by the way). In particular, one of the SynthDefs I really like is \looper, a custom-made looper which allows me to capture audio from other synths and control its parameters to get nice glitchy patterns. I’m really liking it. :) On the other hand, \pad_fm is a very simple pad with FM modulation. Of course, if anybody is interested in these SynthDefs I will certainly share them.
Oh, also almost everything happens in ProxySpace.
Enjoy 😉

Finite fields and musical phrasings

posted by on 2013.12.20, under Supercollider

In this post I want to talk about musical phrasings in algorithmic composition and algebra over finite fields. Yep, algebra, so brace yourself :-).
I am pretty sure you have been exposed to clock arithmetic: basically, you consider integer numbers modulo a natural number p. In the case p is a prime, the commutative ring \mathbb{Z}_{p} is actually a field with finitely many elements. There is a name for that: a finite field, or Galois* field.
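(In SuperCollider, clock arithmetic is just the modulo operator:)

(5 + 4) % 7; // 9 mod 7 -> 2
(3 * 5) % 7; // 15 mod 7 -> 1, i.e. 3 and 5 are multiplicative inverses in Z_7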
Now consider a p \times p matrix A with entries in \mathbb{Z}_{p}: A gives us a map \varphi_{A} from \mathbb{Z}_{p} to itself via

\varphi_{A}(i) := \sum_{j=0}^{p-1} A_{ij}\, j \;{\rm mod}\; p

where A_{ij} are the entries of A. Now, consider a function f from \mathbb{Z}_{p} to a finite set S, and consider the subgroup P of the group of permutations of p objects given by the elements \pi satisfying

f(\pi(i)) = f(i)

If the function f is not injective, this subgroup is nontrivial. Notice that for any function f of the type above, the matrix A gives us another function A^{*}f defined as**

A^{*}f(i):=f(\varphi_{A}(i))

Now we can ask ourselves: given a function f, can we find a matrix A such that

(A^{*})^{k}f = f

for some natural number k? This means we want a matrix which, applied k times to the function f, returns the function itself. This problem is equivalent to finding a matrix A such that

A^{k} = \pi,\quad\pi\in{P}

In the case in which the function f is injective, this coincides with the problem of finding matrices of finite order over a finite field: the fantastic thing is that such matrices are known to exist***, and even better, their number is known in many situations!
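For a tiny concrete example, take p = 3 and let A be the cyclic-shift matrix

A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}

Then \varphi_{A} sends 0 to 1, 1 to 2 and 2 to 0, and since A^{3} = 1, any phrasing f repeats after at most three applications of A^{*}.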
“Ok, now, what all of this has to do with musical phrasing?! No, seriously, I’m getting annoyed, what really? ”
I hear your concern, but as with almost everything in life, it’s just a matter of perspective. So, consider the set S as a set of pairs (pitch, duration) for a note. A function f as above tells us in which order we play the notes and what the duration of each note is: in other words, it is a musical phrasing. By picking a matrix A, we can generate from f a new phrasing, given by the function A^{*}f, and we can reiterate the process. If A satisfies the condition above, the phrasing repeats after a finite number of steps. Hence the following SuperCollider code

s.boot;

SynthDef(\mall,{arg out=0,note, amp = 1;
    var sig=Array.fill(3,{|n| SinOsc.ar(note.midicps*(n+1),0,0.3)}).sum;
    var env=EnvGen.kr(Env.perc(0.01,1.2), doneAction:2);
    Out.ar(out, sig*env*amp!2);
}).add;

(
var matrix, index, ind, notes, times, n, a;

notes = [48, 53, 52, 57, 53, 59, 60] + 12;
times = [1/2, 1/2, 1, 1/2, 1/2, 1, 1]*0.5;
n = 7;
matrix = Array.fill(n,{Array.fill(n, {rrand(0, n-1);})});
index = (0..(n-1));

a = Prout({
     inf.do({
            ind = [];
           
            matrix.collect({|row|
               ind = ind ++ [(row * index).sum % n];
               ((row * index).sum % n).yield;
            });
           index = ind;
         
     });
   }); 
       
   

Pbind(*[\instrument: \mall, \index: a, \note: Pfunc({|ev| notes[ev[\index]];}), \dur:Pfunc({|ev|
 times[ev[\index]];})]).trace.play;
)

s.quit;

represents a “sonification” of the probability distribution of finding a matrix A with the properties above, for p = 7. We are assuming that the code above generates all matrices with equal probability.
So, how does this sound?
Like this

[Audio clip]

*By the way, you should check out the life of Galois, just to crush all your stereotypes about mathematicians. 😉
**Mathematicians love these “dual” definitions.
***Apart from the identity matrix and elements of P themselves, clearly.
