Title: DIABETIC RETINOPATHY IMAGE SYNTHESIS USING DCGAN AND VAE MODELS
Authors: K. Shayam Kishore and Y Sravani Devi
Abstract: The amount of available data, particularly in medical imaging, is one of the most important factors in image classification. Nevertheless, obtaining such datasets is the biggest obstacle in the healthcare industry. In this work, we train a VAE (variational autoencoder) and a DCGAN (deep convolutional generative adversarial network) on nearly 3662 retinal images drawn from the APTOS 2019 Blindness Detection dataset in order to synthesize retinal fundus images. The advantage of this approach is that retinal images can be produced without a preceding vessel segmentation step, so the system can operate autonomously. The resulting models are image synthesizers capable of generating any number of resized retinal images from a simple standard normal distribution. Furthermore, more images are used for training here than in comparable studies in the literature. Synthetic images are evaluated by feeding them to a CNN model, and the mean squared error between the average 2-dimensional histograms of real and synthetic images is also computed, followed by an examination of the average loss and latent space of the images. The analysis indicates that, in general, DCGAN incurs lower loss on these images than the variational autoencoder.
Keywords: Data Augmentation, DCGAN, Variational Autoencoder (VAE), Diabetic Retinopathy, Generative Adversarial Networks, CNN.
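
The following is a minimal, illustrative sketch of the two steps the abstract describes: a DCGAN-style generator that maps a standard normal latent vector to a resized fundus-style image, and a mean-squared-error comparison of the average 2-dimensional histograms of real and synthetic batches. PyTorch is assumed (the abstract does not name a framework), and the latent size, layer widths, 64x64 output resolution, the red/green channel pair used for the 2-D histogram, and the helper name mean_2d_histogram are hypothetical choices, not the authors' exact configuration.

    # Sketch only: assumed PyTorch DCGAN generator plus 2-D histogram MSE check.
    import torch
    import torch.nn as nn
    import numpy as np

    LATENT_DIM = 100  # assumed latent vector size

    class Generator(nn.Module):
        """Maps a standard normal latent vector to a 3 x 64 x 64 image."""
        def __init__(self, latent_dim=LATENT_DIM, feat=64):
            super().__init__()
            self.net = nn.Sequential(
                # latent_dim x 1 x 1 -> 4 x 4 feature map
                nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
                nn.BatchNorm2d(feat * 8), nn.ReLU(True),
                nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(feat * 4), nn.ReLU(True),
                nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(feat * 2), nn.ReLU(True),
                nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
                nn.BatchNorm2d(feat), nn.ReLU(True),
                nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
                nn.Tanh(),  # output in [-1, 1], shape 3 x 64 x 64
            )

        def forward(self, z):
            return self.net(z.view(z.size(0), -1, 1, 1))

    def mean_2d_histogram(images, bins=32):
        """Average 2-D (red vs. green channel) histogram over a batch scaled
        to [0, 1]; a hypothetical helper, not taken from the paper."""
        hists = []
        for img in images:
            r = img[0].flatten().numpy()
            g = img[1].flatten().numpy()
            h, _, _ = np.histogram2d(r, g, bins=bins,
                                     range=[[0, 1], [0, 1]], density=True)
            hists.append(h)
        return np.mean(hists, axis=0)

    if __name__ == "__main__":
        gen = Generator()
        z = torch.randn(8, LATENT_DIM)        # sample from a standard normal
        fake = (gen(z).detach() + 1) / 2      # rescale Tanh output to [0, 1]
        real = torch.rand(8, 3, 64, 64)       # placeholder for real fundus images
        mse = np.mean((mean_2d_histogram(real) - mean_2d_histogram(fake)) ** 2)
        print("Histogram MSE (real vs. synthetic):", mse)

In practice, the placeholder batch of real images would be replaced by preprocessed APTOS fundus images, and the same histogram comparison could be applied to VAE outputs to reproduce the DCGAN-versus-VAE comparison the abstract reports.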