DOI: 10.57647/spre.2025.0904.24

Infrared and Visible Image Fusion Using a Generative Adversarial Network Based on Fuzzy Logic Trained with the Harris Hawks Optimization Algorithm

  1. Department of Electrical Engineering, ST.C., Islamic Azad University, Tehran, Iran
  2. Department of Electrical and Computer Engineering, SR.C., Islamic Azad University, Tehran, Iran

Received: 2025-06-19

Revised: 2025-08-18

Accepted: 2025-09-13

Published in Issue: 2025-12-31

How to Cite

Zarimeidani, M., Amirabadi, A., Amiri, N., Ahanian, I., & Es'haghi, S. (2025). Infrared and Visible Image Fusion Using a Generative Adversarial Network Based on Fuzzy Logic Trained with the Harris Hawks Optimization Algorithm. Signal Processing and Renewable Energy (SPRE), 9(4). https://doi.org/10.57647/spre.2025.0904.24

Abstract

Infrared and visible image fusion integrates complementary data from multi-sensor imagery into a single, information-rich composite, enhancing scene understanding across diverse applications. Despite recent advances in machine learning-driven fusion, challenges such as artifacts, blurred features, and poorly enhanced critical regions persist. We propose a novel Fusion Generative Adversarial Network (FGAN) that combines a fuzzy logic-based generator with a support vector machine (SVM)-powered discriminator. The generator leverages a Mamdani-type fuzzy logic system, optimized via the Harris Hawks Optimization (HHO) algorithm against entropy, PSNR, and SSIM metrics to refine fusion quality. Concurrently, the discriminator employs the Fréchet Inception Distance (FID) to robustly distinguish real from synthetic images. Evaluated on the TNO dataset in MATLAB, our FGAN delivers superior subjective visual quality and objective performance, outperforming state-of-the-art methods and setting a new benchmark for image fusion.
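To make the composite objective concrete, the sketch below shows one way an HHO-style optimizer could score a candidate fused image against the entropy, PSNR, and SSIM metrics named in the abstract. This is not the authors' implementation (their experiments run in MATLAB); the Python function names, the equal default weights, and the averaging over the two source images are illustrative assumptions only.

```python
# Minimal sketch (assumptions labeled) of a fitness function an HHO-style
# optimizer could maximize while tuning the fuzzy generator's parameters.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fusion_fitness(fused, ir, vis, weights=(1.0, 1.0, 1.0)):
    """Composite score: entropy of the fused image plus PSNR and SSIM
    averaged over the two source images (higher is better).
    Equal weights are an assumption, not the paper's tuning."""
    w_en, w_psnr, w_ssim = weights
    en = entropy(fused)
    psnr = 0.5 * (peak_signal_noise_ratio(ir, fused) +
                  peak_signal_noise_ratio(vis, fused))
    ssim = 0.5 * (structural_similarity(ir, fused, data_range=255) +
                  structural_similarity(vis, fused, data_range=255))
    return w_en * en + w_psnr * psnr + w_ssim * ssim

if __name__ == "__main__":
    # Toy demo with random stand-ins for the IR/visible pair.
    rng = np.random.default_rng(0)
    ir = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    vis = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    fused = ((ir.astype(np.float64) + vis) / 2).astype(np.uint8)
    print(f"fitness = {fusion_fitness(fused, ir, vis):.2f}")
```

One plausible wiring, under the same assumptions: each hawk in the HHO population encodes the Mamdani membership-function parameters, the fuzzy generator produces a fused image per candidate, and a score like `fusion_fitness` drives HHO's exploration/exploitation updates.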

Keywords

  • Infrared and visible image fusion
  • FGAN Fuzzy Neural Network
  • Harris Hawks Optimization (HHO) Algorithm
  • Image sensors
  • Fuzzy logic system
  • Support Vector Machine (SVM)
