I've been posting a fair few "cube" images over on Flickr and thought I'd do a quick write-up about what they're doing there.

They're a by-product of trying to piece together a toolchain that will allow me to create high quality renders with Sunflow, but using JavaScript as the interface for building and manipulating the scene before it gets sent off.

You can see this project running over here: CAT826-sunflow-filemaker (latest version of Chrome please) and find all the source code on GitHub here.

[image: step2]

There's a couple of tools that I've picked up recently that have allowed this to happen.

The first is THREE.js which does all the work of rendering 3D stuff using WebGL in the browser. To start with I thought it was just ok, until I found the switch to turn on native hardware-accelerated WebGL, and now it's amazing.
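For anyone hunting for the same thing: the hardware accelerated route in THREE.js is its WebGLRenderer, as opposed to the software CanvasRenderer. A minimal sketch, your setup options will probably differ...

var renderer = new THREE.WebGLRenderer({ antialias: true }); // hardware accelerated WebGL
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

...and once the GPU is doing the work everything gets a lot smoother.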

The second is Node.js, acting as glue between the browser and all the stuff JavaScript can't normally get access to, such as the file system, shelling out to other applications and so on. Sticking socket.io in there gives it all realtime awesomeness.

An example would be something like this...

Kinect/audio -> Node.js -> browser -> THREE.js -> 3D scene -> Node.js -> Sunflow.
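To give a rough feel for that glue layer, here's a sketch of the Node.js end, not the actual code from the repo (the file name, port and event names are all made up for illustration): a socket.io server that waits for a scene description from the browser, writes it to disk and shells out to Sunflow...

var io = require('socket.io').listen(8080);
var fs = require('fs');
var exec = require('child_process').exec;

io.sockets.on('connection', function (socket) {
  socket.on('renderScene', function (sceneText) {
    // write out the Sunflow scene description the browser just sent us
    fs.writeFile('scene.sc', sceneText, function (err) {
      if (err) return console.log(err);
      // shell out to Sunflow for the high quality render (exact flags may differ)
      exec('java -jar sunflow.jar -nogui -o render.png scene.sc', function (err) {
        // tell the browser where the finished image ended up
        socket.emit('renderDone', 'render.png');
      });
    });
  });
});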

[image: sketch1]

However, for this small cubes tangent I'm just using the middle bit...

Browser -> THREE.js -> 3D scene

...specifically using the webcam to receive an image.

Creating a scene

We're taking the stream from a webcam and copying it over to a tiny canvas element; in the code on GitHub it's 40 x 30 pixels, but here, where I don't care about it looking impressively fast, I run it at 80 x 60 pixels. A scene is created in THREE.js and then I read through the pixels in the canvas image, turning them into cubes.
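The webcam bit itself is only a handful of lines. A sketch of the idea (getUserMedia was still vendor prefixed at the time of writing, so the exact call may vary, and cubes.imageData is the object the loops below read from)...

var video = document.createElement('video');
var canvas = document.createElement('canvas');
canvas.width = 80;
canvas.height = 60;
var ctx = canvas.getContext('2d');

// ask for the webcam and pipe it into the (invisible) video element
navigator.webkitGetUserMedia({ video: true }, function (stream) {
  video.src = window.URL.createObjectURL(stream);
  video.play();
});

function grabFrame() {
  // squash the current video frame down to 80 x 60 and read the pixels back out
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  cubes.imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
}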

[image: sketch2]

The code looks pretty much like this...

for (var y = 0; y < this.baseHeight; y++) {
  for (var x = 0; x < this.baseWidth; x++) {
    // read the RGB values for this pixel (4 bytes per pixel: r, g, b, a)
    r = cubes.imageData.data[(y*this.baseWidth+x)*4]/255;
    g = cubes.imageData.data[(y*this.baseWidth+x)*4+1]/255;
    b = cubes.imageData.data[(y*this.baseWidth+x)*4+2]/255;
    color.setRGB(r, g, b);
    newCube = new THREE.Mesh( cube, new THREE.MeshLambertMaterial( { color: color, ambient: color, side: THREE.DoubleSide } ) );
    // position the cube on an x/y grid centred on the origin
    newCube.position.x = ((x-(this.baseWidth/2)+1)*cubeSize)-(cubeSize/2);
    newCube.position.y = (((this.baseHeight-y)-(this.baseHeight/2))*cubeSize)-(cubeSize/2) + 300;
    // brighter pixels sit closer to the camera...
    newCube.position.z = ((r+g+b)/3)*50;
    // ...and get smaller cubes
    newScale = ((1-((r+g+b)/3))*1.5)+0.8;
    newCube.scale = {x: newScale, y: newScale, z: newScale};
    control.scene.add(newCube);
  }
}
...it loops through each row and column of the image in turn, works out the RGB values of each pixel and creates a cube of that colour. The x,y position of the cube is the pixel position; the z position (how close it is to the camera) is based on the brightness: the brighter the pixel, the closer the cube is to us. The size is also based on the brightness: the lighter the pixel, the smaller the cube.

For all subsequent frames coming from the webcam we update the colour, size and position of the cubes instead. I'm doing something fairly bad here in that I'm directly accessing and modifying the properties of each cube. I should probably be using methods or some such, as future updates to THREE.js may break what I'm doing here...

for (var y = 0; y < this.baseHeight; y++) {
  for (var x = 0; x < this.baseWidth; x++) {
    r = cubes.imageData.data[(y*this.baseWidth+x)*4]/255;
    g = cubes.imageData.data[(y*this.baseWidth+x)*4+1]/255;
    b = cubes.imageData.data[(y*this.baseWidth+x)*4+2]/255;
    // grab the existing cube straight out of the scene's internal object list
    newCube = control.scene.__objects[(y*this.baseWidth)+x];
    newCube.material.color.r = r;
    newCube.material.color.g = g;
    newCube.material.color.b = b;
    newCube.material.ambient.r = r;
    newCube.material.ambient.g = g;
    newCube.material.ambient.b = b;
    newCube.position.z = ((r+g+b)/3)*50;
    newScale = ((1-((r+g+b)/3))*1.5)+0.8;
    newCube.scale = {x: newScale, y: newScale, z: newScale};
  }
}
...as you can see there's not much difference between the two chunks of code.

This is also the point where you can do all sorts of different stuff such as messing around with rotations instead of scaling and positioning, rather like this...

[image: example]

...and then of course you throw the camera controls in.
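As an aside, the rotation version really is just a case of swapping the scale and position lines in the update loop for something along these lines (a sketch, not the demo's actual code)...

// tilt each cube by an amount based on the brightness of its pixel
brightness = (r + g + b) / 3;
newCube.rotation.x = brightness * Math.PI;
newCube.rotation.y = brightness * Math.PI;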

Aren't the camera controls a bit weird?

As I mentioned, this is kind of a detour from what I'm actually trying to do. And what I'm generally trying to do is have an object created that responds to external influences, such as sound and movement. Imagine a 3D vase that changes its shape based on music and its colours based on Kinect input.

Now imagine that 3D vase on a pedestal, and a camera on a circular track around it. The camera can move around the vase as well as moving up and down. It can also be tilted to look further up and down the pedestal. Finally the circular track around the vase can be made larger or smaller to move the camera closer or further away. A bit like this diagram...

[image: sketch3]

...as you can see the focus is all about pointing the camera at the thing in the middle as opposed to flying round some flat object or scene.

And this is why the controls are a bit weird for the cube demo. But put a vase or a bust or a sofa in the middle and it all makes a bit more sense.
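For what it's worth, the pedestal and track setup boils down to polar coordinates plus a lookAt. A rough sketch, with variable names that are mine rather than the project's...

var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 10000);
var angle = 0, height = 200, radius = 500, tilt = 100;

function positionCamera() {
  // move the camera round the circular track, and up or down it
  camera.position.set(Math.cos(angle) * radius, height, Math.sin(angle) * radius);
  // tilting just means looking further up or down the pedestal in the middle
  camera.lookAt(new THREE.Vector3(0, tilt, 0));
}

...grow or shrink radius to move the track in and out, change angle to circle the object, and everything stays pointed at the middle.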

And now the rendering.

Now this is the bit I was really focusing on. THREE.js, as we can see, is pretty awesome at setting up a scene and allowing us to control it. Meanwhile Sunflow is pretty awesome at rendering silky-smooth images. I wanted a way of getting the view I was looking at in THREE.js into Sunflow.

This is where a magic function comes in. It basically loops through all the objects in THREE's scene collection, checking to see if they are a sphere or a mesh. Spheres are easy for Sunflow; they are basically a position, a radius and a colour. All other objects are meshes made up of points joined together into faces.

Turns out it was pretty easy to throw together the code that turned all of those into a scene description for Sunflow.
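I won't paste the whole thing here, but the shape of it is roughly this. Treat it as a simplified sketch rather than the real function: the actual Sunflow .sc format wants shaders, normals, uvs and so on, this ignores rotation and scale on meshes, and it assumes the flavour of THREE.js where geometry exposes vertices as Vector3s and faces as triangles...

// walk the scene and build up a Sunflow-ish scene description as text
function sceneToSunflow(scene) {
  var out = '';
  scene.__objects.forEach(function (obj) {
    if (!(obj instanceof THREE.Mesh)) return;
    if (obj.geometry instanceof THREE.SphereGeometry) {
      // spheres are just a centre and a radius (colour/shader left out here)
      out += 'object {\n  type sphere\n';
      out += '  c ' + obj.position.x + ' ' + obj.position.y + ' ' + obj.position.z + '\n';
      out += '  r ' + obj.scale.x + '\n}\n';
    } else {
      // everything else is a mesh: a list of points joined up into triangular faces
      out += 'object {\n  type generic-mesh\n  points ' + obj.geometry.vertices.length + '\n';
      obj.geometry.vertices.forEach(function (v) {
        out += '  ' + (v.x + obj.position.x) + ' ' + (v.y + obj.position.y) + ' ' + (v.z + obj.position.z) + '\n';
      });
      out += '  triangles ' + obj.geometry.faces.length + '\n';
      obj.geometry.faces.forEach(function (f) {
        out += '  ' + f.a + ' ' + f.b + ' ' + f.c + '\n';
      });
      out += '}\n';
    }
  });
  return out;
}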

[image: sketch4]

Which means, ignoring textures and lights, I can throw pretty much any THREE.js scene I want at the function and get a Sunflow description out the other end. For example, here are some Cobra Mk IIIs from Elite.

[image: cobra]

Extending, going back to the intended toolchain.

As I said, this is a small part of the larger plan. Imagine a canvas element that allows you to draw on it. Those drawings, lines, scrawls etc. get turned into a 3D shape, which in turn can then become a high-quality render.

JavaScript is becoming incredibly rich: there's digital signal processing for analysing audio. You can also create audio: dubstep [warning: audio] as well as text-to-speech. There's Physijs, a physics engine for THREE.js. Not forgetting the JS ports of Processing and Pure Data.

And, as previously mentioned, by bolting on Node.js with socket.io you can essentially get access to all sorts of sensors, including the Kinect and Raspberry Pi.

Then, using the familiarity of other JavaScript libraries like jQuery, you can interact with all of the above, setting up the scene, responding to events and so on, before firing off the scene to be rendered.
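To make that concrete, still sketching, with the invented event names and the hypothetical sceneToSunflow helper from the snippets above, the browser end can be as simple as a jQuery handler that fires the converted scene down the socket...

// on a button click, convert the current THREE.js scene and ask Node.js to render it
var socket = io.connect('http://localhost:8080');

$('#renderButton').click(function () {
  socket.emit('renderScene', sceneToSunflow(control.scene));
});

socket.on('renderDone', function (filename) {
  console.log('Sunflow has finished: ' + filename);
});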

And that is what I'm slowly plodding my way towards, while spitting out cubes and such like along the way.

[image: flats]