Automatic Extraction of Buildings From UAV-Derived Orthophotos Using a Deep Neural Network Approach

R. A. Nsiah, S. Mantey, Y. Y. Ziggah

Abstract


Buildings are among the most common elements in urban environments, and their accurate and efficient extraction from remotely sensed data is essential for applications such as urban planning and monitoring, population estimation, disaster planning, management and response, and the updating of geographic databases. However, conventional techniques for extracting buildings from remotely sensed data are challenged by the complexity and variability of building forms and by variations in scene content, imaging sensors, and acquisition conditions. These techniques also demand expert knowledge, which makes them time-consuming and limits their applicability. The aim of this research is to evaluate the applicability and performance of deep neural networks (DNNs) in accurately identifying and delineating buildings within the Ghanaian context. To accomplish this objective, a supervised learning approach was adopted: the U-Net model and its variant, UResNet-34, were trained on a labelled dataset of UAV-derived orthophotos capturing urban areas in Ghana with diverse architectural styles and building patterns. The evaluation results indicated that both models achieved promising performance in building extraction. Notably, the UResNet-34 model, benefiting from the residual connections of its ResNet-34 encoder, outperformed the original U-Net. These findings have significant implications: accurate, automated building extraction supports the identification of informal settlements, the estimation of population density, disaster-response planning, and post-event damage assessment. In conclusion, this study highlights the effectiveness of DNNs for automatically extracting buildings from UAV-derived orthophotos in the Ghanaian context, offering valuable insights for informed decision-making in urban and environmental domains.
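
As a minimal illustrative sketch (the paper does not disclose its implementation framework; the segmentation_models_pytorch library, the tile size, and the ImageNet pre-training below are assumptions, not the authors' code), the UResNet-34 variant described above can be instantiated by swapping the standard U-Net encoder for a ResNet-34 backbone:

    import torch
    import segmentation_models_pytorch as smp

    # U-Net variant with a ResNet-34 encoder (UResNet-34); the residual
    # connections credited in the abstract come from this backbone.
    model = smp.Unet(
        encoder_name="resnet34",     # ResNet-34 backbone with residual blocks
        encoder_weights="imagenet",  # assumption: ImageNet pre-training
        in_channels=3,               # RGB orthophoto tiles
        classes=1,                   # binary output: building vs. background
    )
    model.eval()

    # Forward pass on a single 512 x 512 tile (tile size is an assumption).
    tile = torch.randn(1, 3, 512, 512)
    with torch.no_grad():
        logits = model(tile)                # shape: (1, 1, 512, 512)
        mask = torch.sigmoid(logits) > 0.5  # thresholded building mask

The baseline U-Net compared in the study follows the same encoder-decoder pattern but without residual blocks in its encoder.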


Keywords


Buildings, Extraction, UAVs, Deep Neural Networks, U-Net, ResNet-34

