Segmented and Gaze-Controlled Decompression for Streaming Displays such as VR
Application of block-based motion detection and DCT-based compression, adaptively encoding/decoding content based on the semantics of the video. The VR effect is emulated using view-dependent gaze control of decoding.
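As a concrete illustration of the block-DCT step this relies on, the following minimal sketch round-trips one 8x8 block through DCT, uniform quantization, and inverse DCT. The function names and quantization scales are illustrative assumptions, not the project's actual code:

```python
import numpy as np
import cv2

BLOCK = 8  # standard 8x8 DCT block size

def encode_block(block_u8, qscale):
    """Forward DCT on one 8x8 block, then uniform quantization.

    Larger qscale values discard more high-frequency detail (a coarser layer).
    """
    centered = block_u8.astype(np.float32) - 128.0  # center pixels around zero
    coeffs = cv2.dct(centered)                      # 2D type-II DCT
    return np.round(coeffs / qscale).astype(np.int16)

def decode_block(q_coeffs, qscale):
    """Dequantize and inverse-DCT back to displayable pixels."""
    coeffs = q_coeffs.astype(np.float32) * qscale
    pixels = cv2.idct(coeffs) + 128.0
    return np.clip(pixels, 0, 255).astype(np.uint8)

# Round-trip one block at two quality levels, e.g. foreground vs background.
block = np.random.randint(0, 256, (BLOCK, BLOCK), dtype=np.uint8)
for qscale in (4.0, 32.0):
    recon = decode_block(encode_block(block, qscale), qscale)
    err = np.mean(np.abs(block.astype(int) - recon.astype(int)))
    print(f"qscale={qscale}: mean abs error = {err:.1f}")
```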
To formally describe the project:
- Semantic layering of the video: In a preprocessing step, the tool analyzes the input video and splits each frame into foreground and background layers (see the per-block motion segmentation sketch after this list).
- Compression of layers: Each foreground/background layer is stored in a compressed form so that later access can decide which layers need to be read/streamed to the player and displayed.
- Displaying the video: The player reads the compressed video file and displays it according to per-layer quantization inputs, simulating how bandwidth is distributed among layers; foreground layers should render visibly sharper than background layers (see the layered reconstruction sketch below).
- Applying gaze control: Without a head-mounted VR display to demonstrate the effect, gaze-based control is simulated with the mouse pointer. Treating the mouse location as the "gaze direction", the player decodes/displays a local area around the pointer at full clarity (no quantization), while the other layered areas remain quantized (see the gaze sketch below).
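One plausible way to approximate the foreground/background split is per-block frame differencing, as in this sketch; the 8x8 block size matches the DCT blocks above, and the threshold value is an assumption:

```python
import numpy as np
import cv2

BLOCK = 8
MOTION_THRESH = 12.0  # assumed mean-difference threshold per block

def segment_blocks(prev_gray, curr_gray):
    """Return a boolean mask with one entry per 8x8 block: True = foreground.

    A block counts as foreground when its mean absolute frame difference
    exceeds MOTION_THRESH, i.e. it contains motion.
    """
    diff = cv2.absdiff(curr_gray, prev_gray).astype(np.float32)
    h, w = diff.shape
    bh, bw = h // BLOCK, w // BLOCK
    # Average the per-pixel difference inside each 8x8 block.
    per_block = diff[:bh * BLOCK, :bw * BLOCK].reshape(bh, BLOCK, bw, BLOCK).mean(axis=(1, 3))
    return per_block > MOTION_THRESH
```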
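Building on the two sketches above, a hypothetical player could reconstruct each frame with finer quantization for foreground blocks and coarser quantization for background blocks; this stands in for both the layered storage and the display step (a real encoder would persist the quantized coefficients per layer rather than round-tripping in memory):

```python
import numpy as np
# Reuses BLOCK, encode_block, decode_block, and segment_blocks from above.

def reconstruct_frame(curr_gray, fg_mask, fg_qscale=4.0, bg_qscale=32.0):
    """Encode and immediately decode every 8x8 block, simulating a player
    that spends more bits (finer quantization) on foreground blocks."""
    out = np.zeros_like(curr_gray)
    for by in range(fg_mask.shape[0]):
        for bx in range(fg_mask.shape[1]):
            y, x = by * BLOCK, bx * BLOCK
            q = fg_qscale if fg_mask[by, bx] else bg_qscale
            block = curr_gray[y:y + BLOCK, x:x + BLOCK]
            out[y:y + BLOCK, x:x + BLOCK] = decode_block(encode_block(block, q), q)
    return out
```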
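Finally, the mouse-as-gaze simulation might look like the sketch below: blocks whose centers fall within an assumed radius of the pointer are decoded with no quantization, and everything else falls back to its layer's quality. The window name, radius, and callback wiring are illustrative:

```python
import cv2
# Reuses BLOCK from above; builds on reconstruct_frame's per-block quantization.

GAZE_RADIUS = 96       # assumed pixel radius of the full-quality region
gaze = [320, 240]      # current "gaze" point, updated by the mouse callback

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        gaze[0], gaze[1] = x, y

def qscale_for_block(bx, by, fg_mask, fg_qscale=4.0, bg_qscale=32.0):
    """Full quality near the gaze point; otherwise the block's layer quality."""
    cx, cy = bx * BLOCK + BLOCK // 2, by * BLOCK + BLOCK // 2
    if (cx - gaze[0]) ** 2 + (cy - gaze[1]) ** 2 <= GAZE_RADIUS ** 2:
        return 1.0                               # no quantization under the gaze
    return fg_qscale if fg_mask[by, bx] else bg_qscale

# Wiring into a display loop (window name is illustrative):
cv2.namedWindow("player")
cv2.setMouseCallback("player", on_mouse)
```

Inside the display loop, `qscale_for_block` would replace the fixed per-layer choice made in `reconstruct_frame`, so the region under the pointer is always decoded at full clarity.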