﻿ February « 2013 « Coding, Sounds and Colors | A blog about algorithmic experiments in music and visual art. Sort of.

posted by on 2013.02.19, under Processing

The “triadic” discrete-time dynamical system, i.e. the one obtained by iterating the map f(x) = 3*x mod 1, is a very interesting one: it exhibits chaotic behaviour, in the sense that nearby starting points diverge quickly under iteration. I have used a two-dimensional version of it in the following code, done with Processing: the starting point is (0.12, 2.13), and at each iteration a line connects the point at step n with the one at step n-1.
The number of points per frame is dictated by the x-position of the mouse, while the y-position controls the transparency of the lines. The x-position also controls their thickness.

float x;
float y;
float a = 0.12;
float b = 2.13;
float points = 500;
float s;
boolean paused = true;

void setup() {
  size(400, 400);
  background(255);
}

void draw() {
  if (!paused) {
    points = map(mouseX, 0, width, 0, 500);
    s = map(mouseY, 0, height, 0, 100);
    background(255);
    translate(width/2, height/2);
    stroke(0, int(s));
    strokeWeight(int(points/100));
    for (int i = 0; i < points; i++) {
      x = (3*a) % 1;
      y = (3*b) % 1;
      line(map(a, 0, 1, -width/2, width/2), map(b, 0, 1, -height/2, height/2),
           map(x, 0, 1, -width/2, width/2), map(y, 0, 1, -height/2, height/2));
      a = x;
      b = y;
    }
  }
}

void mousePressed() {
  // toggle the animation on mouse click
  paused = !paused;
}

Click on the white square to start/stop the animation. 😉
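As an aside, the chaotic behaviour mentioned above is easy to see numerically: since the map multiplies by 3 at each step, two starting points a tiny distance apart separate by roughly a factor of 3 per iteration. Here is a minimal sketch of that in plain Java (not Processing, but the math is the same):

```java
// Sensitivity to initial conditions in the map f(x) = 3*x mod 1:
// a perturbation of 1e-9 grows by roughly a factor of 3 per step,
// so after about 19 iterations the two orbits are order-1 apart.
public class TriadicMap {
    static double f(double x) {
        return (3.0 * x) % 1.0;
    }

    public static void main(String[] args) {
        double x = 0.12;         // the sketch's starting value
        double y = 0.12 + 1e-9;  // a tiny perturbation
        for (int i = 0; i < 20; i++) {
            x = f(x);
            y = f(y);
        }
        System.out.printf("after 20 steps: |x - y| = %.4f%n", Math.abs(x - y));
    }
}
```

After 20 iterations the two orbits, which started one billionth apart, are no longer close at all.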

Voices

posted by on 2013.02.17, under Supercollider

Here’s a little code inspired by a yoga session. During the meditation part, it is customary to sing some words which sound like one long note (a variation of the popular “oooom”, so to say): the different voices are not in phase, though, and at each iteration not all the participants sing. Nevertheless, you get a nice harmonizing effect, sort of a long drone.
The code below explores this very simple idea: I have also added some very slight inharmonicity to the VarSaws. This makes for an almost unnoticeable beating effect, which I think suits it well.
You may notice that I’m using Ndef and Tdef: I always like to use the JITLib extension when experimenting in Supercollider, since it makes the process much more flexible and fun.

s.boot;

SynthDef(\voice, { arg out=0, n=0, p=0, d=10, r=10;
  var sig = Array.fill(3, { |i| VarSaw.ar(n.midicps*(i+1.0001), mul: 0.05/(i+1)) }).sum;
  var sig2 = Ringz.ar(WhiteNoise.ar(0.0003), TRand.ar(n.midicps, (n+1).midicps, Impulse.ar(10)));
  var env = EnvGen.kr(Env.linen(d, 1, r), gate: 1, doneAction: 2);
  Out.ar(out, Pan2.ar((sig + sig2)*env*(0.8 + SinOsc.kr(0.1, 0, 0.2)), p));
}).add;

Ndef(\rev, {
  Out.ar(0, Limiter.ar(FreeVerb.ar(LPF.ar(In.ar([0, 1]), 10000), mix: 0.33), 0.7));
});

Tdef(\voices, {
  inf.do{
    10.do{
      if (0.8.coin, {
        Synth(\voice, [\n, [24,28,29,48,36,40,41,52,53,60,64,65].choose,
          \p, rrand(-0.5, 0.5), \d, rrand(5, 13), \r, rrand(8, 14)]);
      });
      rrand(0.1, 1).wait;
    };
    18.wait;
  };
});

Tdef(\voices).play;
Tdef(\voices).stop;

s.quit;
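The inharmonicity mentioned above comes from the factor (i + 1.0001) in the VarSaw bank: each partial sits 0.0001 * f0 away from the exact harmonic (i + 1) * f0, so it beats against the true harmonic series at a rate of f0 * 1e-4 Hz, the same for every partial. A quick check of that arithmetic in plain Java (the `midicps` helper just mirrors sclang's `n.midicps`; 48 is one of the notes in the Tdef's list):

```java
// Beat rate produced by detuning each partial by the factor (i + 1.0001)
// instead of the exact harmonic ratio (i + 1).
public class BeatRate {
    // MIDI note number to frequency, same formula as n.midicps in sclang
    static double midicps(double note) {
        return 440.0 * Math.pow(2.0, (note - 69.0) / 12.0);
    }

    public static void main(String[] args) {
        double f0 = midicps(48); // C3, roughly 130.81 Hz
        for (int i = 0; i < 3; i++) {
            double exact   = f0 * (i + 1);      // true harmonic
            double detuned = f0 * (i + 1.0001); // what the VarSaw plays
            // the difference is f0 * 1e-4, independent of i
            System.out.printf("partial %d: beat = %.4f Hz%n", i + 1, detuned - exact);
        }
    }
}
```

For C3 that beat is about 0.013 Hz, i.e. one cycle every 76 seconds or so, which is why the effect is almost unnoticeable.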

Piano, patterns and gestures

posted by on 2013.02.12, under Supercollider

I always loved the piano as a kid, but due to life circumstances I never got to study it. I ended up studying guitar instead. Here’s a little code in Supercollider, exploring piano improvisation and “gestural” phrasing.

MIDIClient.init;

~mOut = MIDIOut.new(3);

//Set the scale to be Cmajor
~scale=[0,2,4,5,7,9,11];

//Define pattern proxies which will be modified by the task t below

a=PatternProxy(Pxrand([3,3,3,1,3,3],inf));
b=PatternProxy(Pseq([1/2],inf));
r=PatternProxy(Pseq([12],inf));
n=Prand([4,8,16],inf).asStream;

Pdef(\x, Pbind(\type, \midi, \chan, 0,
  \midiout, ~mOut,
  \scale, ~scale,
  \root, -12,
  \degree, Pxrand([[0,3,5],[3,5,7],[4,6,8],[5,7,11]], inf),
  \legato, 1,
  \amp, [{rrand(0.6,0.8)}, {rrand(0.5,0.6)}, {rrand(0.5,0.6)}]*0.7,
  \dur, Prand([Pseq([1,1,1,1],1), Pseq([1,1,2],1), Pseq([1,2,1],1)], inf))).play(quant:1);

Pdef(\y,Pbind(\type,\midi,\chan,0,
\midiout,~mOut,
\scale,~scale,
\root,r,
\degree,a,
\legato,1,
\amp,{rrand(0.5,0.6)},
\dur,b)).play(quant:1);

t = Task({
  10.wait;
  inf.do({
    if (0.7.coin, {
      c = [[3,0,7,1,9,11,0,4], [[3,7],0,7,Rest(),9,[0,11],0,4]].choose.scramble;
      r.source = Pseq([[12,24].wchoose([0.7,0.3])], inf);
      d = n.next;
      a.source = Pseq([Pxrand(c, d), Pxrand([3,3,3,1,3,3], inf)]);
      b.source = Pseq([Pseq([1/8], d), Pseq([1/2], inf)]);
    });
    rrand(3, 4).wait;
  });
});

t.play(quant:1);

I’ve used a PatternProxy for the various note degrees, velocities and durations, so as to be able to modify them on the fly via the Task t, which controls the improvised part.
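As an aside, the Pxrand pattern used for the degrees picks items at random but never repeats the same list slot twice in a row. That behaviour is easy to sketch in plain Java (a hypothetical `Xrand` helper, just for illustration, not part of Supercollider):

```java
import java.util.Random;

// Minimal analogue of SC's Pxrand: a random choice from a list that
// never picks the same index twice in succession.
public class Xrand {
    private final int[] values;
    private final Random rng = new Random();
    private int lastIndex = -1;

    Xrand(int[] values) { this.values = values; }

    int next() {
        int i;
        do {
            i = rng.nextInt(values.length);
        } while (values.length > 1 && i == lastIndex);
        lastIndex = i;
        return values[i];
    }

    public static void main(String[] args) {
        // same degree list as the Pxrand feeding PatternProxy a
        Xrand degrees = new Xrand(new int[]{3, 3, 3, 1, 3, 3});
        for (int k = 0; k < 8; k++) {
            System.out.print(degrees.next() + " ");
        }
        System.out.println();
    }
}
```

Note that with a list like [3,3,3,1,3,3] you can still hear 3 twice in a row, since the constraint is on the list slot, not on the value.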

I came later to realize that it would probably be better to use a Pfindur, instead of a Pseq, to end the phrasing… I’ll try that soon. 😉

The MIDI has been routed to Ableton Live, and what you can hear in the following is its standard piano instrument.

For something amazing about coding, piano and improvisation, check Andrew Sorensen’s work with Impromptu. Superb.

Hello World!

posted by on 2013.02.11, under Processing, Supercollider

Here’s some cheers from the main programming environments I like to experiment with, namely Supercollider and Processing:

"Hello World! Let's make some noise".postln;

and

println("Hello World! Coloring pixels, anyone?");