A System For Identifying Synthetic Images Using LSTM: A Deep Learning Approach
International Journal of Computer Trends and Technology (IJCTT)
© 2021 by IJCTT Journal
Volume-69 Issue-2
Year of Publication : 2021
Authors : Hemanth Somasekar, Dr. Kavya Naveen
DOI : 10.14445/22312803/IJCTT-V69I2P110
How to Cite?
Hemanth Somasekar, Dr. Kavya Naveen, "A System For Identifying Synthetic Images Using LSTM: A Deep Learning Approach," International Journal of Computer Trends and Technology, vol. 69, no. 2, pp. 64-67, 2021. Crossref, 10.14445/22312803/IJCTT-V69I2P110
Abstract
Generative Adversarial Networks (GANs) have attracted considerable excitement across many fields, showing remarkable growth over the past few years. They are highly successful at generating synthetic images that resemble natural images. GANs are unsupervised neural networks capable of creating new image samples based on the training they receive from the data fed to them. Long Short-Term Memory (LSTM), on the other hand, is a type of Recurrent Neural Network (RNN) used mainly in domains involving sequence-prediction problems. In this paper, a GAN serves as the generator and an LSTM network serves as the discriminator. The generator's task is to produce synthetic images from random samples; with fine-tuned training, it can produce convincing fake images that are difficult to distinguish from real ones. These synthetic images are fed to the LSTM network along with real images, and fine-tuned training is performed to obtain more realistic synthetic images. Openly available facial and abstract-art datasets are used for training and testing. The experiments show that the GAN and the LSTM achieve accuracies of 58.53% and 72.68%, respectively, indicating that synthetic images are identified more reliably by the LSTM than by the GAN.
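To make the generator-discriminator arrangement described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: all class names, layer sizes, and the assumption of 64x64 single-channel images are illustrative. The generator maps a random noise vector to an image, and the discriminator is an LSTM that reads the image row by row as a sequence and outputs a real/fake score.

    # Minimal illustrative sketch (assumptions noted above), not the paper's exact code.
    import torch
    import torch.nn as nn

    IMG_SIZE = 64       # assumed image resolution (64x64, single channel)
    NOISE_DIM = 100     # assumed latent noise dimension


    class Generator(nn.Module):
        """Maps a random noise vector to a synthetic image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM, 256),
                nn.ReLU(),
                nn.Linear(256, IMG_SIZE * IMG_SIZE),
                nn.Tanh(),                      # pixel values in [-1, 1]
            )

        def forward(self, z):
            return self.net(z).view(-1, IMG_SIZE, IMG_SIZE)


    class LSTMDiscriminator(nn.Module):
        """Treats an image as a sequence of rows and scores real vs. synthetic."""
        def __init__(self, hidden_size=128):
            super().__init__()
            self.lstm = nn.LSTM(input_size=IMG_SIZE, hidden_size=hidden_size,
                                batch_first=True)
            self.classifier = nn.Linear(hidden_size, 1)   # 1 = real, 0 = fake

        def forward(self, images):
            # images: (batch, IMG_SIZE, IMG_SIZE) -> sequence of IMG_SIZE rows
            _, (h_n, _) = self.lstm(images)
            return torch.sigmoid(self.classifier(h_n[-1]))


    def train_step(gen, disc, real_images, g_opt, d_opt, loss_fn=nn.BCELoss()):
        """One adversarial training step on a batch of real images (sketch only)."""
        batch = real_images.size(0)
        noise = torch.randn(batch, NOISE_DIM)

        # Discriminator update: real images labelled 1, generated images labelled 0.
        fake_images = gen(noise).detach()
        d_loss = (loss_fn(disc(real_images), torch.ones(batch, 1)) +
                  loss_fn(disc(fake_images), torch.zeros(batch, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: try to make the discriminator output 1 for fakes.
        g_loss = loss_fn(disc(gen(noise)), torch.ones(batch, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        return d_loss.item(), g_loss.item()

Treating each image row as one time step is one plausible way to present a 2-D image to a sequence model such as an LSTM; the paper's actual input ordering, resolution, and network sizes may differ.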
Keywords
Generative Adversarial Network, Long Short-Term Memory.