CycleGAN-based deep learning technique for artifact reduction in fundus photography

Abstract
Purpose: A low-quality fundus photograph with artifacts may lead to misdiagnosis. Recently, the cycle-consistent generative adversarial network (CycleGAN) was introduced as a tool for image-to-image translation that does not require paired training images. Herein, we present a deep learning technique that uses a CycleGAN model to automatically remove artifacts from fundus photographs.

Methods: This study included a total of 2206 anonymized retinal images, comprising 1146 with artifacts and 1060 without. We applied the CycleGAN model to color fundus photographs with a pixel resolution of 256 × 256 × 3. To evaluate the CycleGAN on an independent dataset, we randomly divided the data into training (90%) and test (10%) sets. Additionally, we adopted automated quality evaluation (AQE) to assess retinal image quality.

Results: Artifacts such as overall haze, edge haze, lashes, arcs, and uneven illumination were successfully reduced by the CycleGAN in the generated images, while the principal retinal information was essentially retained. Furthermore, most of the generated images exhibited improved AQE grades compared with the original images with artifacts.

Conclusion: The CycleGAN technique can effectively reduce artifacts and improve the quality of fundus photographs, which may assist clinicians in analyzing low-quality fundus photographs. Future studies should improve the quality and resolution of the generated images to provide more detailed fundus photography.
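To make the unpaired-translation idea concrete, below is a minimal sketch of a CycleGAN generator update for 256 × 256 × 3 fundus images, written in PyTorch. This is an illustrative assumption, not the authors' implementation: the tiny generator and discriminator, the Adam hyperparameters, and the cycle-loss weight lambda_cyc = 10 are placeholders (the weight follows the original CycleGAN paper's default). Domain A denotes photographs with artifacts and domain B artifact-free photographs; the discriminator update is omitted for brevity.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    # Conv + instance norm + ReLU, the building block used throughout this sketch.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyGenerator(nn.Module):
    # Placeholder for a ResNet-based CycleGAN generator; maps a 3-channel image to a 3-channel image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32), conv_block(32, 32),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    # Placeholder PatchGAN-style discriminator; outputs a map of real/fake scores.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32, stride=2), conv_block(32, 64, stride=2),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_AB, G_BA = TinyGenerator(), TinyGenerator()   # artifact -> clean, clean -> artifact
D_A, D_B = TinyDiscriminator(), TinyDiscriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()
opt_G = torch.optim.Adam(
    list(G_AB.parameters()) + list(G_BA.parameters()),
    lr=2e-4, betas=(0.5, 0.999),                # illustrative hyperparameters
)

def generator_step(real_A, real_B, lambda_cyc=10.0):
    # One generator update: adversarial loss plus cycle-consistency loss.
    fake_B = G_AB(real_A)                       # artifact removal direction
    fake_A = G_BA(real_B)                       # artifact synthesis direction
    # Adversarial terms: the generators try to make the discriminators score fakes as real.
    pred_B, pred_A = D_B(fake_B), D_A(fake_A)
    loss_adv = adv_loss(pred_B, torch.ones_like(pred_B)) + \
               adv_loss(pred_A, torch.ones_like(pred_A))
    # Cycle-consistency: translating there and back should recover the input.
    # This constraint is what preserves the retinal content while artifacts are removed.
    loss_cyc = cyc_loss(G_BA(fake_B), real_A) + cyc_loss(G_AB(fake_A), real_B)
    loss = loss_adv + lambda_cyc * loss_cyc
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()

# Example: one step on random stand-ins for 256 x 256 x 3 fundus crops scaled to [-1, 1].
real_A = torch.rand(1, 3, 256, 256) * 2 - 1     # with artifacts
real_B = torch.rand(1, 3, 256, 256) * 2 - 1     # without artifacts
print(generator_step(real_A, real_B))

Because the cycle-consistency term penalizes any change that cannot be undone by the reverse generator, the network learns to alter only the domain-specific component of the image (the artifact), which is consistent with the abstract's observation that the main retinal information is retained.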