ArCSEM: Artistic Colorization of SEM Images via Gaussian Splatting

Takuma Nishimura¹      Andreea Dogaru¹      Martin Oeggerli²      Bernhard Egger¹
¹Friedrich-Alexander-Universität Erlangen-Nürnberg      ²Micronaut

Abstract

Scanning Electron Microscopes (SEMs) are widely renowned for their ability to analyze the surface structures of microscopic objects, offering the capability to capture highly detailed, yet only grayscale, images. To create more expressive and realistic illustrations, these images are typically manually colorized by an artist with the support of image editing software. This task becomes highly laborious when multiple images of a scanned object require colorization. We propose facilitating this process by using the underlying 3D structure of the microscopic scene to propagate the color information to all the captured images, from as few as one colorized view. We explore several scene representation techniques and achieve high-quality colorized novel view synthesis of a SEM scene. In contrast to prior work, there is no manual intervention or labelling involved in obtaining the 3D representation. This enables an artist to colorize one or a few views of a sequence and automatically retrieve a fully colored scene or video.

Grayscale (video) | Colorization (video)

Method

Given a sparse set of SEM images of a microscopic scene and a few artist-colorized views, our method synthesizes high-quality colored views of the scene from arbitrary viewing angles. We employ a two-stage training approach: grayscale 3D scene optimization and colorization. The SEM images are first calibrated with RealityCapture, using a very large focal length to approximate the orthographic projection induced by the parallel electron beams. We obtain the initial scene representation by fitting 2DGS to the calibrated set of grayscale images. To handle view-dependent illumination variations, we apply an affine color transformation to the decoded color using image-specific weights and biases. In the second stage, we use the grayscale model to generate depth maps and project the artist-provided colors into 3D space. For views without color data, we use a nearest-neighbor search to obtain pseudo-colors. Finally, we fine-tune the initial grayscale model on the color images and the computed pseudo-colors, keeping the geometry fixed and optimizing the spherical harmonics coefficients of all degrees with losses inspired by Ref-NPR. The sketches below illustrate these three components: the per-image affine transform, the nearest-neighbor pseudo-color lookup, and the color-only fine-tuning.
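To make the view-dependent illumination handling concrete, here is a minimal PyTorch sketch of the per-image affine transform. The module and parameter names are our own; the paper only states that an affine transformation with image-specific weights and biases is applied to the decoded color.

import torch
import torch.nn as nn

class PerImageAffine(nn.Module):
    # Per-image affine correction of the decoded grayscale intensity.
    # Hypothetical module: names are illustrative, not from the paper.
    def __init__(self, num_images: int):
        super().__init__()
        # One scale and one offset per calibrated SEM image,
        # initialized to the identity transform.
        self.weight = nn.Parameter(torch.ones(num_images))
        self.bias = nn.Parameter(torch.zeros(num_images))

    def forward(self, intensity: torch.Tensor, image_idx: int) -> torch.Tensor:
        # intensity: rendered grayscale values of one training view, e.g. (H, W).
        # The view's scale and offset absorb its illumination variation,
        # so the underlying splats stay photometrically consistent.
        return self.weight[image_idx] * intensity + self.bias[image_idx]

These parameters would be optimized jointly with the 2DGS scene during the first stage and dropped (or set to the identity) when rendering novel views.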
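The pseudo-color assignment for uncolorized views can be sketched as a nearest-neighbor lookup in 3D. The snippet below assumes the artist-colored pixels have already been unprojected to a point cloud using the rendered depth maps; the k-d tree is our choice of search structure, which the paper does not specify.

import numpy as np
from scipy.spatial import cKDTree

def pseudo_colors(query_pts, colored_pts, colored_rgb):
    # query_pts:   (N, 3) points unprojected from an uncolorized view
    # colored_pts: (M, 3) points unprojected from the artist-colorized views
    # colored_rgb: (M, 3) artist-provided colors of those points
    tree = cKDTree(colored_pts)      # build once, reuse for every uncolorized view
    _, idx = tree.query(query_pts)   # index of the closest colored 3D point
    return colored_rgb[idx]          # (N, 3) pseudo-colors for the query view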
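Likewise, fixing the geometry while optimizing the spherical harmonics of all degrees amounts to freezing every non-appearance parameter before fine-tuning. The parameter names below (features_dc, features_rest) follow the common 3DGS/2DGS codebase convention and are an assumption about the underlying implementation.

import torch

def make_color_optimizer(model: torch.nn.Module, lr: float = 1e-3):
    # Freeze geometry (positions, scales, rotations, opacities) and
    # collect only the SH color coefficients for optimization.
    # Parameter names and learning rate are illustrative.
    color_params = []
    for name, param in model.named_parameters():
        is_color = name in ("features_dc", "features_rest")
        param.requires_grad_(is_color)
        if is_color:
            color_params.append(param)
    return torch.optim.Adam(color_params, lr=lr)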

Method overview

Results

Grayscale novel views

Comparisons on three scenes: Plenoxels | 3DGS | 2DGS | Ours

Colorized novel views

Comparisons on three scenes: Ref-NPR | Ours

Citation

      
        
To be published.
    
    

Acknowledgement

We would like to thank Maximilian Weiherer for valuable discussions and support with the camera calibration. Andreea Dogaru was funded by the German Federal Ministry of Education and Research (BMBF), FKZ: 01IS22082 (IRRW). The authors are responsible for the content of this publication. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b112dc IRRW. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) – 440719683. This project was supported by the special fund for scientific works at the Friedrich-Alexander-Universität Erlangen-Nürnberg.