In recent years we've seen rising industry interest in virtual reality (VR) and, lately, augmented reality (AR) as well. No wonder, since these are the closest thing to the holograms we've been dreaming of and drooling over in works of fiction for the past five decades or so. While VR brings 'total' immersion and a set of problems caused by it, AR promises a future full of holograms all around us, with our physical and virtual worlds finally colliding and merging. It's hard to even imagine, let alone predict, all the ways this will affect content creation, user experience and our overall interaction with the world around us. Advancements in the processing power of mobile platforms and image sensors have fueled this recent surge of interest in AR, but there is still a long road ahead before we have contact-lens-like, inconspicuous AR devices capable of rendering augmentations indistinguishable from reality.
This talk will focus on some of the techniques behind AR: computer vision algorithms, their use in combination with computer graphics to achieve digital augmentation, and the evolution from simple marker-based and natural feature tracking to simultaneous localization and mapping (SLAM) algorithms. In short, what it takes to get from a matrix of bytes received from an image sensor to superimposing another matrix of computer-generated bytes that our eyes perceive as augmented reality.
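The last step of that pipeline, anchoring virtual content onto the camera image, boils down to projecting 3D points through the camera's intrinsics and the pose estimated by the tracker. A minimal sketch in Python with NumPy (the intrinsic values and the pose here are illustrative assumptions, not parameters of any particular device or tracker):

```python
import numpy as np

# Assumed intrinsics for a 640x480 camera: focal lengths fx, fy and
# principal point cx, cy (hypothetical values for illustration).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed camera pose, as a tracker might estimate it:
# no rotation, camera 2 m away from the world origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(point_3d, K, R, t):
    """Project a 3D world point into pixel coordinates (pinhole model)."""
    p_cam = R @ point_3d + t     # world -> camera coordinates
    p_img = K @ p_cam            # camera -> homogeneous image coordinates
    return p_img[:2] / p_img[2]  # perspective divide -> pixel coordinates

# A virtual object anchored at the world origin lands at the image center.
print(project(np.array([0.0, 0.0, 0.0]), K, R, t))  # -> [320. 240.]
```

Whether the pose comes from a printed marker, natural feature matching, or a full SLAM map, this projection step is what makes the rendered graphics appear fixed in the physical scene as the camera moves.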