The Future of Interaction?

Recently I’ve been having a number of conversations about the future of interfaces and how the way we interact with computers and information is likely to change. These conversations are mostly in reference to Kinect.

We’ve come a long way from the days of the DOS prompt, although I do miss that blinking underscore C:\_ – it was like the pulse of the machine, the computer’s heartbeat: listening, waiting for instructions, trying to entice them out of us.

As visual interfaces became richer, so did the methods we used to interact with them. Back in 1945, Vannevar Bush introduced the world to the idea of the Memex, a desk that could store information and catalogue it for us, linking images and papers so we could search and view relationships between the information we photographed or read. His Memex was a desk, and it is that metaphor that carried through to modern computers and gave us the notion of a desktop: the desktop computer. Find 15 minutes and read his 1945 paper, ‘As We May Think’. It is truly enlightening when you consider the age it was written in, and it helps frame our thinking about potential future interactions.

Kinect outside of gaming

There have been many ‘hacks’ of Kinect. A bounty was put up for the first person to hack the hardware when it was released, and I recall reports saying it took about three hours for the first person to decode a signal out of the Kinect box. That wasn’t really surprising: if you make a rule (as with all cases of DRM) then you’re essentially creating a game, and that game is to break the DRM – to break the rules so the rules can be re-written and the game can start all over again. The recent announcement that an SDK for Windows will be released has caused a lot of excitement in the developer community. Here are some of the hacks that have used Kinect to date:

These are hardware hacks, in the sense that they capture the imaging data that comes out of the Kinect unit; they don’t use the magic of the software inside the Xbox to decode that signal into a skeletal reconstruction of a person. That takes a great deal of effort to do properly, which is what really excites me about the release of the SDK. If Microsoft allow developers access to that software then developers can concentrate their efforts on building on top of that platform.
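To make the distinction concrete, here’s a minimal sketch of that ‘hardware hack’ level of access, using the open-source libfreenect driver that grew out of the original bounty effort, via its Python wrapper. (The module name, frame format and bit depth are my assumptions about that wrapper, not anything official from Microsoft.)

```python
# A minimal sketch: reading raw depth from a Kinect with the
# open-source libfreenect driver's Python wrapper. Assumes the
# `freenect` module and numpy are installed and a Kinect is attached.
import freenect
import numpy as np

# Grab one raw depth frame: a 480x640 array of 11-bit sensor values.
depth, timestamp = freenect.sync_get_depth()

print(depth.shape)       # (480, 640)
print(depth.dtype)       # uint16, raw values in the 0-2047 range
print(np.median(depth))  # a rough depth reading for the scene
```

And that is all a hardware hack gives you: per-pixel depth. Turning those pixels into a tracked human skeleton is the genuinely hard part that the Xbox software does, and that the SDK promises to hand to developers.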

Technologists are traditionally quite bad at coming up with new ideas of their own, preferring instead to concentrate on bettering things they’ve already seen. I’m not saying that Kinect is a uniquely new concept; the remarkable contribution Kinect makes is the affordability of the device – it is a very cheap set of eyes and ears for a robot, as seen in some of the hacks above. This is a stable and solid platform that can be built upon.

Everyone has their own focus in life. If you can release tools like this to a creative and passionate community of developers and researchers who have their own goals, their own focus, then the products of that passion and creativity are, in some cases, going to be unique and globally fulfilling. That I find very exciting, though I’m sure it will come with its own problems along the way…

The drum machine of the 21st century?

When any new technology is introduced there’s always a great deal of experimentation, and often it can be borderline hallucinogenic and far from grounded in reality. When the drum machine was introduced to musicians in the ’80s, they went hog wild using it in any way they could. Even an artist like Joni Mitchell stood up from her bar stool and experimented with it – if your ears can manage it, take a listen to her Dog Eat Dog album, which is IMHO a prime example of using technology in the fashion of Edmund Hillary: just because it’s there. Dog Eat Dog is an awful album that should never have been released, but oftentimes we need to experience the bad so we can then appreciate the good.

When the Kinect SDK is given over to the academic and enthusiast community, I expect to see all manner of weird implementations alongside the fantastic. Clients will stop saying ‘I want my logo to spin’ and start saying ‘I want to be able to spin my logo’, and their marketing dollars will be able to buy them that experience. Users will be forced to mimic a puking cat, nodding their heads to select the red triangle just so they can turn it into a blue square, rather than using the more natural voice or mouse input. We are going to see some baaaaaaaaaaaaad things.
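To get a feel for how crude those gesture hacks could be under the hood, here’s a deliberately naive ‘nod to select’ sketch built on the same raw depth frames as above. Everything in it – the nearest-blob ‘head’ heuristic, the pixel thresholds – is a hypothetical illustration of a bad gesture recogniser, not any real SDK API:

```python
# A deliberately bad gesture recogniser: 'nod to select' from raw
# Kinect depth frames. All heuristics here are made up for the sake
# of illustration - expect false positives galore.
import freenect
import numpy as np

def nearest_blob_row(depth):
    """Mean row index of the closest 5% of pixels - our stand-in 'head'
    (smaller raw values read as closer in the 11-bit depth format)."""
    mask = depth < np.percentile(depth, 5)
    rows = np.nonzero(mask)[0]
    return rows.mean() if rows.size else None

history = []
while True:
    depth, _ = freenect.sync_get_depth()  # raw 480x640 depth frame
    row = nearest_blob_row(depth)
    if row is None:
        continue
    history = (history + [row])[-30:]     # roughly the last second
    # Call it a 'nod' whenever the blob bobs more than 40 pixels
    # vertically within the window.
    if len(history) == 30 and max(history) - min(history) > 40:
        print("Nod detected: red triangle selected!")
        history = []
```

A detector like this fires on any vertical wobble – a sneeze, a sip of coffee – which is exactly the sort of experience I mean.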

But that is actually a good thing. Surely?!?

We need to experience the Dog Eat Dogs of the gesture world so we know to avoid them. Then we might be able to appreciate a naturalistic fitting experience in a clothing shop window: browsing retail products at 2am, on our own, without entering a changing room, and then purchasing without even entering the shop. I would be in heaven. No crowds, no human interaction, just me, my avatar and a huge shop window display. I wouldn’t even have to carry anything home.

Roll on the future.