Making the World Invisible
March 16th, 2012
With the release of the iPhone 4S and iPad, we are seeing a world in which technology, though ever present, is making its constituent parts invisible. A fundamental shift in priority from screen size to pixel density has big ramifications both for the web industry and for consumers.
For web designers I feel that the way forward, where possible, is to use SVG for graphics instead of relying on ever bigger images for brands, logos, and icons (edit: having said that, this article is definitely worth a read). However, what I would like to talk about, the crux of this post, is current trends in technology and what the future might hold for us.
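To illustrate the point, here is a hypothetical icon (a simple record-style mark I've made up for the example) drawn as SVG. Because it is described by geometry rather than by a grid of pixels, the browser can render it crisply at any pixel density without you shipping ever larger image files:

```xml
<!-- Hypothetical icon: two circles defined by coordinates, not pixels.
     The viewBox establishes the drawing's own coordinate system;
     width/height just set the default display size, and the shapes
     stay sharp however far the display scales them up. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24">
  <circle cx="12" cy="12" r="10" fill="#c00"/>
  <circle cx="12" cy="12" r="4" fill="#fff"/>
</svg>
```

Drop that markup inline in a page, or save it as a .svg file and reference it like any other image, and it will look identical on a standard display and a Retina one.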
We're trying to hide the things that actually make a computer what it is, converging everything onto a single point: the screen. This has been happening for generations, but we've now approached the point at which a tablet is simply a screen, nothing but glass between your finger and a world of information and entertainment. From a usability and UX perspective this is brilliant (probably).
It's my concern that if this trend of hiding what things are continues, we'll end up with a society that, by and large, doesn't feel it needs to know how things work, simply because they do. One of the things I used to like doing was upgrading my computer and getting to grips with the hardware; that's an increasingly rare thing.
Annie Lowrey recently wrote a good article on learning Ruby as a first programming language:
Your 2-year-old can play with an iPad. But the technology behind such marvels is complex and invisible, abstracted away from the human controlling it. Nor do these technologies offer us many ready chances to do basic programming on them. For nearly all of us, code, the language that controls these objects and in a way controls our world, is mysterious and indecipherable.
Of course, you can open Terminal and off you go, but that requires the right know-how, and knowing that Terminal exists in the first place! It was an eye-opening article and made me think about programming from a slightly different angle.
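For anyone wondering what "off you go" might actually look like, here is a hypothetical first script in Ruby, the language Annie's article is about (the example itself is mine, not from the article). Save it as hello.rb and run `ruby hello.rb` in Terminal:

```ruby
# A beginner-friendly first program: a method that builds a greeting,
# using Ruby's string interpolation (#{...}) to insert the name.
def greet(name)
  "Hello, #{name}!"
end

# Print the result to the screen.
puts greet("world")
```

A few readable lines, no compiler, no project setup, which is exactly the sort of low barrier to entry the article argues for.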
So we have this current ecosystem where, to program, you pretty much have to be a programmer already. From my experience Annie's case was fairly unique; I've not met anyone from a different profession who wanted to code just for the fun of it.
On the other side of the spectrum we have projects like Raspberry Pi that aim to provide people with powerful but cheap, bare-bones systems to make their own. Again though, it's only once developers make things with it and repackage it that it would be a consideration for 'mass' consumers.
I'm undecided on this. On one hand I love the beauty and simplicity of having everything encased in a smooth, screen-only design. On the other, I feel it (necessarily) hides a lot of what makes computers great. Maybe I'm just getting nostalgic.
If you have any opinions on this I'd love to hear them.