Take the train or subway to work and you’ll see what I see: riders packed tight on their morning commute, all “linked” to a bigger world through their mobile devices, watching videos, listening to music, or absorbed in something on their screens. Our daily lives have changed dramatically in a short time through the connectivity of our mobile devices.

What digital natives might not realize is that they’re holding miniaturized versions of what those of us with longer tenure in technology once called “supercomputers” right there in their hands. While I’m still wowed when I watch a video on a small handheld, most of my fellow commuters clearly take this reality for granted.

Mostly, it’s video they watch. No surprise: video is fast becoming the planet’s preferred content. Analysts predict that Internet traffic will be predominantly video within three to four years. Yesterday’s debut of a 360° Star Wars clip – one you can actually play with on a touch screen – shows we’re only at the beginning of innovation and immersion with digital video as a powerful content medium.

While this is great news, it’s time to add another dimension to our thinking: contextual data. With recent advances in technology and the explosion of IoT devices – an estimated 30 billion will be connected by 2020 – it’s time to think differently. The fusion of video and metadata is inevitable. In fact, video can – and should – also be treated as data. As technology weaves more context and data into video, new forms of immersion will emerge, letting viewers experience layers of information beyond what their eyes alone could see.

Imagine watching a video where a simple swipe could reveal deeper context: anything from the location and time where a specific thing in the video was captured…maybe even the “signature” of the person who captured it. You could glide down into information on the actors – perhaps even exclusive material you have to engage with in order to see. You might learn, through user-generated input or emerging technology, how other viewers responded to the content. A sensor on your wrist might even remind you to check your posture if you’ve been staring at the screen too long. Maybe the video even reminds you when it’s time to look up and gather your things as you approach your stop.
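One way to picture the data side of this idea is a video paired with a timeline of contextual overlays. The sketch below is purely illustrative – the interfaces, field names, and sample values are hypothetical, not part of any real AerNow API – but it shows how a player could look up which layers of context a viewer can “swipe” into at a given moment of playback:

```typescript
// Hypothetical shape for contextual metadata attached to a video timeline.
// All names here are illustrative assumptions, not an existing API.
interface ContextOverlay {
  startSec: number;          // when the overlay becomes relevant
  endSec: number;            // when it stops being relevant
  kind: "location" | "person" | "reaction" | "reminder";
  payload: Record<string, string>;
}

interface AugmentedVideo {
  videoUrl: string;
  capturedAt: string;        // ISO timestamp of capture
  overlays: ContextOverlay[];
}

// Return the overlays a viewer could reveal at a given playback time.
function overlaysAt(video: AugmentedVideo, tSec: number): ContextOverlay[] {
  return video.overlays.filter(o => o.startSec <= tSec && tSec < o.endSec);
}

// Example clip with two contextual layers.
const clip: AugmentedVideo = {
  videoUrl: "https://example.com/clip.mp4",
  capturedAt: "2015-11-23T08:15:00Z",
  overlays: [
    { startSec: 0, endSec: 30, kind: "location", payload: { city: "New York" } },
    { startSec: 10, endSec: 20, kind: "person", payload: { name: "Actor A" } },
  ],
};

console.log(overlaysAt(clip, 12).map(o => o.kind)); // both overlays active at t=12
```

The design choice worth noting: keeping overlays as time-ranged data separate from the video stream means the same clip can gain new layers later – reactions, reminders, sensor-driven context – without re-encoding the video itself.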

For lack of a better term, I call this “AV” – “augmented video” – and, like anything with exponential impact, innovation around it is going to grow rapidly in the years to come. If a picture is worth a thousand words, this new AV is worth some exponential power beyond that.

Where do you think this new power will take us? What have you read that explores this thinking? Tweet your thoughts to @aernow1 – we welcome your insights.

Serge Eby

VP Ecosystem at AerNow. 15+ years of global engineering impact. In-depth expertise in design, development, QA, implementation and operations of large-scale solutions in telecom and consulting. Focused on the end-to-end AerNow ecosystem: strategy, IoT, network, platform and integration.