This paper presents an innovative framework designed to train an image deblurring algorithm tailored to a specific camera device. This algorithm works by transforming a blurry input image, which is challenging to deblur, into another blurry image that is more amenable to deblurring. The transformation process, from one blurry state to another, leverages unpaired data consisting of sharp and blurry images captured by the target camera device. Learning this blur-to-blur transformation is inherently simpler than direct blur-to-sharp conversion, as it primarily involves modifying blur patterns rather than the intricate task of reconstructing fine image details. The efficacy of the proposed approach has been demonstrated through comprehensive experiments on various benchmarks, where it significantly outperforms state-of-the-art methods both quantitatively and qualitatively.
Qualitative results from the deblurring model NAFNet [1], pretrained on the GoPro dataset (Known Blur), applied to various Unknown Blur datasets. The Blur column shows the original inputs and their Blur2Blur conversions. The Deblurred column shows the results of applying the pretrained model to both the original and the Blur2Blur-converted versions.
[Figure: qualitative comparisons on the REDS dataset (Synthetic Blur), the RSBlur dataset (Real Blur), and the RB2V dataset (Real Blur); each example shows Blur, Deblurred, and Groundtruth images.]
We evaluate a real-world scenario by pairing different existing Known Blur datasets with our Blur2Blur model to deblur Real Unknown Blur images, using the pretrained NAFNet deblurring model. These real images were captured with a Samsung Galaxy Note 10 Plus and form the PhoneCraft dataset (more details are available in our paper).
The video showcases a comparison of deblurring results on the original images versus our Blur2Blur-converted versions.
To further assess the improvement in hand-movement recognition, we validated the deblurred videos using a Hand Pose Estimation model.
Given a camera, we aim to develop an algorithm to deblur the blurry images it captures. We assume access to the camera to collect unpaired sets of blurry images and sharp images. Furthermore, to focus on capturing the blur kernel, we create a set of known-blur images by utilizing the Blur Kernel Extractor [2], which can isolate blur kernels from existing blurry-sharp image pairs and transfer them to the targeted sharp inputs.
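As an illustration of the known-blur set construction, the sketch below synthesizes a known-blur image by convolving a sharp image with a blur kernel taken from an existing blurry-sharp pair. The function name and the plain 2D convolution are assumptions for illustration only; the actual Blur Kernel Extractor [2] is described in the paper.

```python
import numpy as np

def apply_blur_kernel(sharp, kernel):
    """Synthesize a known-blur image by convolving a sharp image
    with an extracted blur kernel (both as float arrays)."""
    kh, kw = kernel.shape
    # pad with edge values so the output keeps the input's size
    padded = np.pad(sharp, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(sharp)
    for i in range(sharp.shape[0]):
        for j in range(sharp.shape[1]):
            # weighted sum of the local window under the kernel
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

A delta kernel (all zeros except the center) leaves the image unchanged, which is a quick sanity check for the convolution.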
The key component of our proposed system is a blur translator that converts unknown-blur images captured by the camera into images exhibiting the target known blur. This translator is trained using reconstruction and adversarial losses. The converted images have known blur and can be successfully deblurred using the previously trained deblurring model.
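The translator's training objective can be sketched as a weighted sum of an adversarial term and a reconstruction term. The specific loss forms below (L1 reconstruction, non-saturating GAN loss) and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def reconstruction_loss(converted, blurry_input):
    # L1 distance: the converted image should keep the input's content,
    # changing only the blur pattern
    return float(np.mean(np.abs(converted - blurry_input)))

def adversarial_loss(disc_scores):
    # non-saturating GAN loss on the discriminator's probability that the
    # converted image carries the target known blur
    return float(-np.mean(np.log(disc_scores + 1e-8)))

def blur2blur_objective(converted, blurry_input, disc_scores, lam=0.5):
    # total translator loss: fool the blur discriminator while staying
    # close to the original blurry input
    return adversarial_loss(disc_scores) + lam * reconstruction_loss(converted, blurry_input)
```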
Comparison of different deblurring methods on various datasets. For each test, we report PSNR↑ and SSIM↑ scores (higher is better for both). The best scores are in bold, and the second-best scores are underlined. For a supervised method (NAFNet or Restormer), we assess its upper bound on deblurring performance by training it on the training set of the source dataset*.
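For reference, PSNR is computed from the mean squared error between the deblurred result and the ground truth; a minimal sketch, assuming images normalized to [0, 1] (SSIM additionally compares local structure and is typically computed with a library such as scikit-image):

```python
import numpy as np

def psnr(reference, result, max_val=1.0):
    # peak signal-to-noise ratio in decibels; higher means closer to reference
    mse = np.mean((reference - result) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(max_val ** 2 / mse))
```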
@inproceedings{pham2024blur2blur,
title = {Blur2Blur: Blur Conversion for Unsupervised Image Deblurring on Unknown Domains},
author = {Pham, Bang-Dang and Tran, Phong and Tran, Anh and Pham, Cuong and Nguyen, Rang and Hoai, Minh},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2024}
}