I’ve avoided weighing in on the New Aesthetic thing, being happy just to watch it unfurl from the edges, but since I’ve seen this come up a couple of times I just can’t resist. Tom Coates as always poses a good question, although I’ve seen other people also mention it in comments and so on.
The thing about the #newaesthetic I don’t completely get is why it’s so heavily influenced by the colors and 8 bit graphics of the 1980s.
— Tom Coates (@tomcoates) April 6, 2012
I mean, things being pixellated – digital in the real world – if anything, pixellation is on its way *out* with retina screens et al…
— Tom Coates (@tomcoates) April 6, 2012
Pixellation, tessellation, patterns and artefacting seem to me to be already retro. We’re going HD, algorithmically generated, retina screen
— Tom Coates (@tomcoates) April 6, 2012
The answer is this (and I’ll expand later): images, fashion and so on in the 80s were inspired by the computer graphics of the time, and by ideas around what computer graphics would look like. So lots of 8bit stuff, neon and Tron-like faff.
The New Aesthetic, or at least the aspect I’m looking at, is inspired by computer vision. And computer vision is now at the point computer graphics was at 30 years ago. The New Aesthetic isn’t concerned with the retro 8bit graphics of the past, but with the 8bit graphics designed for the machines of now.
Maybe this image will help, before I go and justify my above statement…
I’ll come back to Vision vs Graphics in a moment.
But if, for example, a fashion designer took inspiration from the motion-capture outfits above and made clothes featuring large 1bit “pixels”, I’d count that as New Aesthetic. But how are new pixels different from old pixels?
The Robot Readable World
A lot of the New Aesthetic is directly related to (or descended from) the Robot Readable World (RRW). You can read way more about that over at BERG: The Robot-Readable World. And even earlier in Matt’s slides from 2007: The Vernacular of the Spectacular – Playing with Bits of The City, in which you can see obvious precursors to what we now call NA (slides 25, 30, 34 & 45).
Because machine/computer vision isn’t very advanced, to exist with machines in the real world we need to mark up the world to help them see. The above image imagines how this could work, but below it’s occurring in the real world…
…QR codes (or something somewhat similar) are placed on a factory floor in a grid solely to aid robots; there is nothing here for us humans to use. And yet the passage of the robots creates a pattern: tracks from the wheels, and circles where the robots turn and turn and turn. This is computer vision spilling out into the real world.
It’s been there for a while now (bar codes have been around for years), but we can expect to see more of it. The testing and slow introduction of computer-driven cars will most likely see special markings on roads & signs giving the cars instructions. All throughout shops, malls, streets and cities, markings for machines are appearing.
The first part of NA I’ve been paying attention to has been the examples of this spilling out, the bits of the city not meant for us. The second part of NA I paid attention to was what happened when artists began to deconstruct and respond to that encroachment: how can designing for robots influence our own design?
8bit retro computer vision
So how do I know that computer vision is lagging behind computer graphics? Well firstly, I know I can easily buy two consumer-grade, specifically designed graphics cards with GPUs running in parallel to install in my desktop PC. I don’t seem to be able to buy specifically designed computer vision cards to stick in my machine. In the consumer space one is definitely leading the other.
Secondly, games can create all sorts of worlds, spaceships, fantasy lands, renditions of hell, cities from the 80s and so on with their fancy graphics, reflections, fog, shaders, realistic light sources and particle effects all adding to the effect.
Computer vision can recognise faces (badly), blocky Augmented Reality (AR) markers and very specifically registered AR surfaces.
Consumer desktops can produce pretty good environments in real-time (heck the new iPad can) and render photorealistic images given a bit longer. Meanwhile they can see bugger all, unless we’ve specifically designed it for them.
Here’s a quick list of the progression of computer game graphics, and a screenshot from Sentinel because I like it…
…actually now’s a good time to jump over to The Virtual Art of 80′s Game Worlds for a quick read, and then come back.
That list, in sort-of, kinda, easily disputable chronological order (fit Alone in the Dark in where you like)…
- 2D graphics
- Wireframe 3D
- Solid 3D
- Textured 3D/Gradient filled 3D
- Particle effects
- Level of Detail 3D models & shaders
- Dynamic lighting
- Procedurally generated environments
- Retina displays/3D displays
- Augmented Reality (AR)
- Photorealistic realtime AR
That seems like a not unreasonable list. And getting back to the tweets posted by Coates above, if the New Aesthetic were aping early computer graphics (the 2D, isometric, solid 3D, 8bit-colours stuff) then indeed it would be retro for no good reason, when it could take its cue from photorealistic AR & retina displays, moving beyond the pixel.
Now a list of computer vision, in similar not-quite but almost correct order…
- Motion detection
- Edge detection
- “Blob” detection
- Shape detection
- Face detection
- Depth & Joint detection (Xbox Kinect, motion capture)
- Colour detection
- Gait detection
- Gradient/Shade detection (i.e. understanding colour in different lighting conditions)
- Recognising humanoid forms against real world backgrounds
- Recognising individuals against real world backgrounds
- Recognising specific items against real world background
- Understanding all nearby items within a real world environment
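The first couple of rungs on that ladder are simple enough to sketch in a few lines. This is a toy illustration of my own (not anything from the projects mentioned here): motion detection by differencing two frames, and edge detection by looking for sharp horizontal jumps in brightness, both on plain nested lists standing in for greyscale images.

```python
# Toy sketches of the first rungs of the computer vision ladder:
# motion detection (frame differencing) and edge detection (intensity jumps).
# Illustrative only; real systems use libraries like OpenCV.

def detect_motion(prev_frame, next_frame, threshold=30):
    """Return a 1-bit mask marking pixels that changed between two frames."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(prev_frame, next_frame)
    ]

def detect_edges(frame, threshold=50):
    """Return a 1-bit mask marking strong horizontal intensity jumps."""
    return [
        [1 if x + 1 < len(row) and abs(row[x] - row[x + 1]) > threshold else 0
         for x in range(len(row))]
        for row in frame
    ]

# Two 4x4 greyscale "frames": a bright blob shifts one pixel to the right.
frame1 = [[0, 200, 200, 0],
          [0, 200, 200, 0],
          [0,   0,   0, 0],
          [0,   0,   0, 0]]
frame2 = [[0, 0, 200, 200],
          [0, 0, 200, 200],
          [0, 0,   0,   0],
          [0, 0,   0,   0]]

print(detect_motion(frame1, frame2))  # changed pixels at the blob's edges
print(detect_edges(frame2))           # left edge of the blob
```

Everything above "blob detection" on the list is, at heart, this kind of thresholded arithmetic on pixel values, which is why markings designed for machines end up so stark and high-contrast.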
3D wireframes were around 30 years ago, solid & textured 3D shortly after, and it was still all done in software. 20 years ago some of these calculations moved onto GPUs on dedicated 3D graphics cards.
In computer vision it’s all still done in software, and we’re roughly up to depth, joint, colour & shading detection. If its evolution were on par with graphics, we’d be seeing the first few dedicated vision cards appearing on the market for consumer use.
Or put another way, current computer vision can probably “see” computer graphics from around 20-30 years ago.
Which in turn means that to design for machine eyes we need to work at the level of computer graphics from the 8bit era, and so we have QR codes all over the place.
Chris Heathcote wrote about NA fashion: a new fashion aesthetic…
“Sometimes it’s just the right colours, or the cut. It’s more gradient fill than pixels. It’s things that couldn’t be made 5 years ago. Supersymmetry and asymmetry. It’s not about the ‘machine vision’ that the New Aesthetic references, but it’s hard to see how that will not be appropriated and re-emerge into fashion as something not necessarily technically correct but aesthetically interesting.”
I’m going to disagree with the “not about the ‘machine vision’” part, and tie it back to the markings for machines spilling out into the real world that I mentioned above. If the markings machines can understand are currently at the level of edge detection, outline/shape, depth, joints & colour shading/gradients, and that’s out there in the world, being all odd and not for us, but inspiring and guiding designs, art and designers, then we get the gradient fills, sharp cuts & polygons.
And I can see how that’s easy to confuse with retro 8bit 80s style, but as I mentioned before, NA is about what machines can see now, rather than what they could produce back then; it’s just that vision is 20-30 years behind creation, and so there are many similarities.
New Aesthetic is about the polygons and edges and pixels of now-vision, not the polygons and edges and pixels of back-then creation.
A few last words
I wanted to write this because I’ve seen a number of occasions where the question “But isn’t NA just retro?” has come up, and I can see why it comes up and where it comes from. I just wanted to explain that the NA pixels do come from some other place, the RRW. It isn’t all just glitch, but actually a response to new markings appearing in our world.
There are also other angles to the NA that do involve retro 8bit, such as this pixel Whale…
…which is a representation of pixels in the real world. And as Coates says, as we’re moving to retina displays, pixels as presentation are going away, making this image distinctly about nostalgia, and yet it still feels like NA.
And then on-top of that there’s art/design inspired by what’s been collected together so far, which gives us…
- Found art that’s actually stuff for machines
- Art based on found art that’s actually stuff for machines
- Art based on the art that’s based on found art that’s actually stuff for machines
Number 1 is the markings on the factory floor for robots; number 2 is a lot of the New Aesthetic that’s been gathered, which has taken its inspiration from the factory-floor markings.
I’ve partaken of number 3: I wrote some code to turn images into patterns, rather like this…
I probably wouldn’t have made these design decisions if I hadn’t recently been immersed in the thinking about exactly what NA is (*spoiler* I still don’t really know). The look and feel is supposed to be that of NA, but I’m not sure this makes what I’ve done NA.
But what I’ve done here is no longer about machine vision and looks far closer to a variation on pixels, reinforcing the retro confusion. It’s supposed to be modern, not retro.
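For what it’s worth, the essence of that kind of image-to-pattern code can be guessed at like this (a sketch in the same spirit, not my actual code): average a greyscale image over large blocks, then threshold each block to black or white, producing big chunky 1-bit “new pixels”.

```python
# A sketch of "images into patterns" (illustrative, not the original code):
# downsample a greyscale image into large blocks, threshold each block to
# 1-bit, and you get the big chunky "new pixels" discussed above.

def to_pattern(image, block=2, threshold=128):
    """Reduce a greyscale image (list of rows of 0-255) to a 1-bit block grid."""
    rows, cols = len(image), len(image[0])
    pattern = []
    for by in range(0, rows, block):
        out_row = []
        for bx in range(0, cols, block):
            # Average all the pixels falling inside this block...
            cells = [image[y][x]
                     for y in range(by, min(by + block, rows))
                     for x in range(bx, min(bx + block, cols))]
            avg = sum(cells) / len(cells)
            # ...then snap the whole block to black or white.
            out_row.append(1 if avg >= threshold else 0)
        pattern.append(out_row)
    return pattern

# A tiny 4x4 greyscale image: bright on the left, dark on the right.
img = [[250, 240, 10,  5],
       [245, 235, 20, 15],
       [250, 240, 10,  5],
       [245, 235, 20, 15]]

for row in to_pattern(img):
    print("".join("#" if bit else "." for bit in row))
```

Which is exactly the point: this produces something pixel-looking without any machine vision involved at all.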
New pixels, not old pixels.
I should probably mention drones, albeit quickly, and my understanding of drones.
When I think of a drone I think of an autonomous agent, not something that’s being controlled remotely by people in a bunker with laptops (or whatever). Something with GPS, altimeter, navigation, satellite images, vision.
Something that left to its own devices can operate itself, sensing, watching, reacting. It is machine vision actualised.