Multi-view Video Coding with Reflectance Modelling


Professor Manzur Murshed (Principal)
Assoc Prof Shyh Wei Teng
Professor Guojun Lu
Dr Jiangang (Mike) Ma (New Staff)

Contact person and email address:

Professor Manzur Murshed

A brief description of the Project:
Compression efficiency of multi-view video coding depends on the existence of strong statistical redundancy between frames and views. Such redundancy is expected to exist when scene surfaces follow the Lambertian reflectance model, according to which the radiant flux reflected from a scene surface is independent of the viewing angle. In practice, however, the light reflected from scene surfaces generally consists of both a diffuse (Lambertian) and a specular component.

This project will focus on decomposing the multi-view imagery into the sum of these two components, diffuse and specular, by exploiting the diversity provided by multiple views and frames. In particular, differences between the inter-frame correlation observed over time (motion) and that observed over view-space (disparity) should provide a useful indication of the presence of specular highlights.

Traditional transform-based techniques may be adapted to compress the diffuse component efficiently, yielding a sparse representation in the transformed domain. The specular component, however, is expected to exhibit a very different form of sparsity. Specular reflectance tends to be spatially localised and largely free from the scene texture that can be expensive to code; on the other hand, inter-view prediction of specular components is not expected to be helpful. In the first instance, therefore, we will focus on coding the specular component independently in each view, using motion relationships only.
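To make the decomposition idea concrete, the following is a toy sketch (not the project's actual method) of a per-pixel diffuse/specular separation, assuming the views have already been disparity-aligned so that each pixel position corresponds to the same scene point in every view. Under the Lambertian model the diffuse component is view-independent, so the per-pixel minimum across views serves as a crude diffuse estimate, and the view-dependent residual approximates the specular component. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def separate_diffuse_specular(views):
    """Toy diffuse/specular separation for disparity-aligned views.

    views: array of shape (V, H, W), where position (h, w) shows the same
    scene point in all V views (disparity compensation already applied).
    A Lambertian (diffuse) surface looks identical from every view, so the
    per-pixel minimum across views is a crude view-independent diffuse
    estimate; the non-negative residual is the view-dependent (specular)
    part.  Real decompositions would be far more robust than this.
    """
    views = np.asarray(views, dtype=np.float64)
    diffuse = views.min(axis=0)   # view-independent estimate
    specular = views - diffuse    # per-view specular residual, >= 0
    return diffuse, specular
```

In this sketch the diffuse map is shared across all views (and could be coded once with standard transform techniques), while each view keeps only its own sparse, localised specular residual, mirroring the coding strategy described above.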