Image deblurring aims to restore the latent sharp images from the corresponding blurred ones. In this paper, we present an unsupervised method for domain-specific single-image deblurring based on disentangled representations. The disentanglement is achieved by splitting the content and blur features in a blurred image using content encoders and blur encoders. We enforce a KL divergence loss to regularize the distribution of the extracted blur attributes so that they contain little content information. Meanwhile, to handle the unpaired training data, a blurring branch and a cycle-consistency loss are added to guarantee that the content structures of the deblurred results match those of the original images. We also add an adversarial loss on the deblurred results to generate visually realistic images, and a perceptual loss to further mitigate artifacts. We perform extensive experiments on the tasks of face and text deblurring using both synthetic datasets and real images, and achieve improved results compared to recent state-of-the-art deblurring methods.
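The KL divergence regularizer mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the blur encoder outputs a diagonal Gaussian (mean and log-variance) over the blur code, which is then penalized toward a standard normal so that the blur branch cannot smuggle in content information. The function name and dimensions are illustrative.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian:
    # 0.5 * sum( mu^2 + sigma^2 - 1 - log(sigma^2) )
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - 1.0 - log_var)

# A blur code distributed exactly as N(0, I) incurs zero penalty ...
mu = np.zeros(8)
log_var = np.zeros(8)  # log_var = 0 means sigma = 1
print(kl_to_standard_normal(mu, log_var))

# ... while a code that drifts from the prior (e.g., nonzero mean,
# as would happen if it encoded content structure) is penalized.
print(kl_to_standard_normal(np.ones(8), np.zeros(8)))
```

In the full objective, this term would be weighted and summed with the cycle-consistency, adversarial, and perceptual losses described in the abstract.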
Jun-Cheng Chen is currently an assistant research fellow at the Research Center for Information Technology Innovation, Academia Sinica. He received his bachelor's and master's degrees in 2004 and 2006, respectively, both from the Department of Computer Science and Information Engineering, National Taiwan University, Taipei. He received his Ph.D. degree from the University of Maryland, College Park, in 2016. He was a postdoctoral research fellow at the University of Maryland Institute for Advanced Computer Studies from 2017 to 2019. His current research interests include computer vision and machine learning with applications to face recognition and facial analysis. He was a recipient of the 2006 Association for Computing Machinery Multimedia Best Technical Full Paper Award.