FaceGAN: Robust Face Recognition using Generative Adversarial Networks (GAN) Algorithm
Abstract
Generative Adversarial Networks (GANs) are a class of neural networks that can generate synthetic images often indistinguishable from real ones. This article explores the use of GANs to augment existing datasets or generate new ones for training classifiers. The adversarial training process yields a generator network that produces increasingly realistic images, enabling more diverse and balanced training datasets. The article also discusses several successful applications of GANs in image classification, including object recognition, face classification, and medical image analysis. Two datasets are used in this article: CelebA and FER2013. CelebA consists of 202,599 celebrity images annotated with 40 attributes, such as gender, age, and facial hair. FER2013 consists of 35,887 face images labeled with seven emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. Each dataset is divided into training, validation, and test sets. We resized the images to 64x64 pixels, normalized the pixel values to the range [-1, 1], and trained a GAN model on the data. We evaluate the performance of our approach against several state-of-the-art methods, including Support Vector Machines (SVM) and Convolutional Neural Networks (CNN). Our approach outperforms both methods on the two datasets, achieving a classification accuracy of 89.2% on CelebA and 72.5% on FER2013, compared with 82.3% and 65.4% for SVM, and 87.9% and 70.8% for CNN.
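The preprocessing described above (resizing to 64x64 and scaling pixel values to [-1, 1]) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the `preprocess` function name is ours, it assumes a uint8 RGB array as input, and it uses nearest-neighbour resizing for self-containment where a library such as Pillow or OpenCV would normally be used.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 64) -> np.ndarray:
    """Nearest-neighbour resize to (size, size) and scale uint8 pixels to [-1, 1].

    `image` is an (H, W, 3) uint8 array. A 0 pixel maps to -1.0 and a
    255 pixel maps to 1.0, matching the normalization in the abstract.
    """
    h, w = image.shape[:2]
    # Index maps for nearest-neighbour downsampling/upsampling.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols].astype(np.float32)
    # Scale [0, 255] -> [-1, 1].
    return resized / 127.5 - 1.0
```

In practice the normalized arrays would then be batched and fed to the GAN's discriminator alongside generator samples during training.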
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
IJICOM is an open-access journal.