LTT-GAN: Looking Through Turbulence by Inverting GANs

[Teaser figure]

Abstract

In many applications of long-range imaging, a person appearing in the captured imagery is often degraded by atmospheric turbulence. Restoring such degraded images for face verification is difficult, since the degradation leaves images geometrically distorted and blurry. To mitigate the turbulence effect, in this paper we propose the first turbulence mitigation method that makes use of visual priors encapsulated by a well-trained GAN. Based on these visual priors, we propose to learn to preserve the identity of restored images with a spatial periodic contextual distance. This distance maintains the realism of images restored by the GAN while accounting for identity differences during network learning. In addition, hierarchical pseudo connections are proposed to facilitate identity-preserving learning by introducing more appearance variance without changing identity. Extensive experiments show that our method significantly outperforms prior art in both the visual quality and the face verification accuracy of restored results.

Spatial Periodic Contextual Distance

In our method, we treat each image as a collection of sub-images extracted in a spatially periodic manner, and we then take the contextual distance between the two sub-image collections as the identity-preserving loss. This decomposition, termed PK, is illustrated below.

[Figure: the PK spatial periodic decomposition and contextual distance]
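The idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the period `k`, the use of raw pixels as features, and the simplified nearest-neighbor contextual distance are all assumptions for clarity (the actual method operates on deep features).

```python
import numpy as np

def periodic_decompose(img, k=2):
    """Split an image into k*k sub-images by spatially periodic sampling.

    Each sub-image takes every k-th pixel at a different phase offset,
    so every sub-image retains the image's global (identity) content.
    """
    return [img[i::k, j::k] for i in range(k) for j in range(k)]

def contextual_distance(feats_a, feats_b, eps=1e-8):
    """Simplified contextual distance between two feature collections:
    for each vector in A, find its most similar (cosine) vector in B,
    then average the resulting dissimilarities."""
    a = feats_a / (np.linalg.norm(feats_a, axis=1, keepdims=True) + eps)
    b = feats_b / (np.linalg.norm(feats_b, axis=1, keepdims=True) + eps)
    sim = a @ b.T                        # pairwise cosine similarities
    return float(np.mean(1.0 - sim.max(axis=1)))

# Usage sketch: compare the sub-image collections of a restored image
# and its reference by flattening each sub-image into a feature vector.
restored = np.random.rand(8, 8)
reference = np.random.rand(8, 8)
fa = np.stack([s.ravel() for s in periodic_decompose(restored)])
fb = np.stack([s.ravel() for s in periodic_decompose(reference)])
loss = contextual_distance(fa, fb)
```

Because the contextual distance matches each sub-image to its nearest counterpart rather than enforcing pixel-aligned differences, it tolerates the small geometric distortions introduced by turbulence while still penalizing identity changes.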

Results

Synthesized Turbulence Mitigation

Turbulence vs. GFPGAN
Turbulence vs. Ours
GFPGAN vs. Ours

Real-world Turbulence Mitigation

Turbulence vs. GFPGAN
Turbulence vs. Ours
GFPGAN vs. Ours

Citation

Acknowledgements

The website template was borrowed from Mip-NeRF 360.