Transparency

Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering

Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces. This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions (BRDFs) and high dynamic range (HDR) imaging of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content creation. It is also useful for identifying and improving issues in the different steps of the pipeline. In this work, we use it to improve reconstruction, apply analysis by synthesis to estimate optical properties, and to develop our method for scene reassembly.


Introduction

Several research communities work on techniques for optical acquisition of physical objects and their appearance parameters. Thus, we are now able to acquire nearly any type of object and perform a computer graphics rendering of nearly any type of scene. The range of applications is broad and includes movie production, cultural heritage preservation, 3D printing, and industrial inspection. A gap left by these multiple endeavors is a coherent scheme for acquiring a scene consisting of several objects that have very different appearance parameters, together with the reassembly of a digital replica of such a scene. Our objective is to fill this gap for the combination of transparent and opaque objects, as many real-world scenarios exhibit this combination. An example is a living room, like the rendering shown above.

We propose a pipeline for acquiring and reassembling digital scenes from this type of heterogeneous real-world scene. In addition, our pipeline closes the loop by rendering calibrated images of the digital scene that are commensurable with photographs of the original physical scene (see the results below). This allows for validation and fine-tuning of appearance parameters. The quantitative evaluation we get from pixelwise comparison of rendered images with photographs greatly improves the validation of the acquired digital representation of the physical objects.

When addressing the problem of acquiring a heterogeneous scene, there is an infinite variety of scenes and object types to choose from. To make our task feasible, we therefore focus on scenes that combine glassware and non-transparent materials, more specifically, a white tablecloth and cardboard with a checkerboard pattern.
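As an illustration of this kind of quantitative evaluation, consider the minimal Python sketch below (not the code used in the paper): it loads a rendering and a photograph of the same calibrated view, computes a per-pixel difference image, and reports a root-mean-square error. The file names are placeholders, and both images are assumed to be radiometrically calibrated, in the same linear color space, and of identical resolution.

import cv2
import numpy as np

# Placeholder file names; both images are assumed to be linear,
# radiometrically calibrated, and of identical resolution.
# (Reading EXR with OpenCV requires a build with OpenEXR support.)
rendering = cv2.imread("rendering.exr", cv2.IMREAD_UNCHANGED).astype(np.float64)
photograph = cv2.imread("photograph.exr", cv2.IMREAD_UNCHANGED).astype(np.float64)

# Per-pixel signed difference and root-mean-square error over all
# pixels and color channels.
difference = rendering - photograph
rmse = np.sqrt(np.mean(difference ** 2))
print(f"RMSE: {rmse:.4f}")

# Save a scaled absolute-difference image for visual inspection.
abs_error = np.abs(difference)
scale = abs_error.max() if abs_error.max() > 0 else 1.0
cv2.imwrite("difference.png", (255 * abs_error / scale).astype(np.uint8))

In practice, masking out background pixels or reporting the error per color channel makes such a comparison more informative.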

The glass objects in our scenes: a solid glass sphere, a glass bowl with lid, and a glass teapot with lid.
We made these choices as glass requires a different acquisition modality, the tablecloth BRDF is spatially uniform but not necessarily simple, and the cardboard has a simple two-color variation. The latter is particularly useful for observing how light refracts through the glass. The chosen case is also of particular interest, since glass is present in many intended applications of optical 3D acquisition. Considering the highly multidisciplinary nature of our work, we have released our dataset (links for downloading the dataset are provided on this page). This facilitates further investigation by other researchers of the different steps of our pipeline, with the possibility of quantitative feedback at the end of the process.
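Since the tablecloth BRDF is spatially uniform, a single reflectance function fitted from measurements taken anywhere on the cloth describes the whole material. Purely as an illustrative sketch (the paper measures BRDFs and does not prescribe this particular model), one could fit a simple Lambertian-plus-Blinn-Phong model to measured samples by nonlinear least squares; the sample files and the model choice here are our assumptions.

import numpy as np
from scipy.optimize import least_squares

def blinn_phong_brdf(params, wi, wo, n):
    # Lambertian + Blinn-Phong model; wi, wo, n are (N, 3) unit vectors.
    kd, ks, shininess = params
    h = wi + wo
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    spec = np.maximum(np.sum(h * n, axis=1), 0.0) ** shininess
    return kd / np.pi + ks * spec

def residuals(params, wi, wo, n, measured):
    return blinn_phong_brdf(params, wi, wo, n) - measured

# Hypothetical measured samples: incident and outgoing directions,
# surface normals, and measured BRDF values.
wi = np.loadtxt("wi.txt")                   # (N, 3)
wo = np.loadtxt("wo.txt")                   # (N, 3)
n = np.tile([0.0, 0.0, 1.0], (len(wi), 1))  # flat sample, normal along z
measured = np.loadtxt("brdf_values.txt")    # (N,)

fit = least_squares(residuals, x0=[0.5, 0.1, 20.0],
                    args=(wi, wo, n, measured),
                    bounds=([0, 0, 1], [1, 1, 1000]))
kd, ks, shininess = fit.x
print(f"diffuse {kd:.3f}, specular {ks:.3f}, shininess {shininess:.1f}")

A measured BRDF or a more expressive analytic model can be substituted without changing the structure of the fit.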

Pipeline

Overview of our digitization pipeline in four main stages: acquisition, reconstruction, reassembly, and rendering. Colored arrows show the paths through the pipeline for transparent objects (dotted blue), non-transparent objects (dashed red), and both types together (dotted-dashed magenta).

Pipeline Diagram
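The reassembly stage places each separately reconstructed object back into a common scene coordinate frame. As a sketch of one way to do this, assume that corresponding marker positions are known both in an object's reconstruction frame and in the scene frame defined by the reference images (a simplifying assumption for illustration, not necessarily the exact procedure of the paper); a least-squares rigid alignment then recovers the object's pose:

import numpy as np

def rigid_align(src, dst):
    # Least-squares rigid transform (R, t) mapping src points onto dst,
    # using the classic Kabsch/Procrustes solution via SVD.
    # src, dst: (N, 3) arrays of corresponding 3D points, N >= 3.
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_centered.T @ dst_centered)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical marker positions in the object's reconstruction frame
# and in the scene frame.
markers_object = np.loadtxt("markers_object.txt")  # (N, 3)
markers_scene = np.loadtxt("markers_scene.txt")    # (N, 3)
R, t = rigid_align(markers_object, markers_scene)

# Transform every vertex of the reconstructed mesh into the scene frame.
vertices = np.loadtxt("object_vertices.txt")       # (V, 3)
vertices_in_scene = vertices @ R.T + t

With noisy or partially mismatched correspondences, a robust variant (for example, RANSAC around this estimator) would be preferable.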
Workflow for scanning the geometry of non-transparent objects and collecting reference images (left), for scanning the geometry of transparent objects (middle), and for measuring material reflectance properties (right).

Acquisition Diagram
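The HDR imaging of the lighting environment can, for example, be obtained by merging an exposure bracket into a linear radiance map. The sketch below uses OpenCV's Debevec calibration and merging; the file names and exposure times are placeholders, and the capture setup used in the paper may differ.

import cv2
import numpy as np

# Placeholder exposure bracket of the lighting environment
# (e.g., photographs of a mirrored sphere or from a panoramic camera).
files = ["env_0001.jpg", "env_0002.jpg", "env_0003.jpg", "env_0004.jpg"]
exposure_times = np.array([1 / 250, 1 / 60, 1 / 15, 1 / 4], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to a linear HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

# Store in Radiance (.hdr) format for use as an environment map when rendering.
cv2.imwrite("environment.hdr", hdr)

The resulting radiance map can then illuminate the digital scene so that renderings and photographs share the same lighting.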
Video presentation of our pipeline:

Dataset

We provide our dataset for download; please cite our paper if you use the data. README files are included to describe the data. Additional description will be provided soon.

Scene

Results

In the results below, rendered images (top row) are compared with photographs (bottom row). The scenes were digitized using our pipeline and include both glass objects and non-transparent objects (tablecloth and backdrop).

Overlay comparison of the photograph and the rendered image of the teapot (drag the cursor over the image to compare the rendering with the reference photograph).
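The analysis-by-synthesis estimation of optical properties mentioned in the abstract can be thought of as rendering the scene with candidate parameters and keeping those whose rendering best matches the photograph. A minimal sketch of this idea follows; render_scene stands in for a calibrated renderer and is hypothetical, as is the range of candidate refractive indices.

import numpy as np

def image_rmse(a, b):
    # Root-mean-square error between two images of identical shape.
    return np.sqrt(np.mean((a - b) ** 2))

def estimate_refractive_index(photograph, render_scene, candidates):
    # Brute-force analysis by synthesis over candidate refractive indices.
    # render_scene: callable taking a refractive index and returning an
    # image registered to the photograph (hypothetical stand-in).
    errors = [image_rmse(render_scene(ior), photograph) for ior in candidates]
    best = int(np.argmin(errors))
    return candidates[best], errors[best]

# Example use with a placeholder renderer and a sweep around common glass:
# candidates = np.linspace(1.45, 1.55, 21)
# ior, error = estimate_refractive_index(photo, render_scene, candidates)

A gradient-free sweep like this is slow but simple; the same error metric can drive a more efficient optimizer if the renderer supports it.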
The rendering below exemplifies the use of our pipeline for virtual product placement with our digitized glass objects, using estimated optical properties and artifact-reduced removal of markers.

Transparency

BibTex Reference

@article{Stets:17,
author = {Jonathan Dyssel Stets and Alessandro Dal Corso and Jannik Boll Nielsen and Rasmus Ahrenkiel Lyngby and Sebastian Hoppe Nesgaard Jensen and Jakob Wilm and Mads Brix Doest and Carsten Gundlach and Eythor Runar Eiriksson and Knut Conradsen and Anders Bjorholm Dahl and Jakob Andreas B{\ae}rentzen and Jeppe Revall Frisvad and Henrik Aan{\ae}s},
journal = {Appl. Opt.},
keywords = {Three-dimensional sensing; Optical properties; Color; BSDF, BRDF, and BTDF; Calibration; Multisensor methods},
number = {27},
pages = {7679--7690},
publisher = {OSA},
title = {Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering},
volume = {56},
month = {Sep},
year = {2017},
url = {http://ao.osa.org/abstract.cfm?URI=ao-56-27-7679},
doi = {10.1364/AO.56.007679},
}

Publications

Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering [2017]

J. D. Stets, A. Dal Corso, J. B. Nielsen, R. A. Lyngby, S. H. N. Jensen, J. Wilm, M. B. Doest, C. Gundlach, E. R. Eiriksson, K. Conradsen, A. B. Dahl, J. A. Bærentzen, J. R. Frisvad, H. Aanæs
Applied Optics 56(27), 7679-7690

Our 3D Vision Data-Sets in the Making [2015]

H. Aanæs, K. Conradsen, A. Dal Corso, A. B. Dahl, A. Del Bue, M. Doest, J. R. Frisvad, S. H. N. Jensen, J. B. Nielsen, J. D. Stets, G. Vogiatzis
Conference on Computer Vision and Pattern Recognition 2015, Institute of Electrical and Electronics Engineers

Contact

Jonathan Dyssel Stets
[Ph.D. Student]
Technical University of Denmark
Alessandro Dal Corso
[Ph.D. Student]
Technical University of Denmark