Results: 10

(searched for: doi:10.13176/11.253)
Published: 19 January 2021
by MDPI
Abstract:
Recurrent floods are one of the major global threats to people, particularly in developing countries such as India, which has a tropical monsoon climate. Flood susceptibility (FS) mapping is therefore necessary to mitigate this type of natural hazard. With this in mind, we evaluated the prediction performance of FS mapping in the Koiya River basin, Eastern India. The work comprised the preparation of a detailed flood inventory map, the selection of eight flood conditioning variables based on topographic and hydro-climatological conditions, and the application of a novel ensemble of the hyperpipes (HP) and support vector regression (SVR) machine learning (ML) algorithms. The HP-SVR ensemble was also compared with the stand-alone HP and SVR algorithms. In terms of relative variable importance, distance to river was the most dominant factor for flood occurrence, followed by rainfall, land use land cover (LULC), and normalized difference vegetation index (NDVI). The FS maps were validated with five popular statistical methods. The accuracy evaluation showed that the ensemble approach is the optimal model (AUC = 0.915, sensitivity = 0.932, specificity = 0.902, accuracy = 0.928, and Kappa = 0.835) for FS assessment, followed by HP (AUC = 0.885) and SVR (AUC = 0.871).
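The five validation metrics named in this abstract (AUC, sensitivity, specificity, accuracy, and Cohen's kappa) can all be computed from a binary confusion matrix and a set of susceptibility scores. A minimal sketch with scikit-learn, using made-up flood/non-flood labels and scores (not the paper's data):

```python
# Illustrative computation of the abstract's five validation metrics.
# The labels and scores below are invented for demonstration only.
import numpy as np
from sklearn.metrics import (roc_auc_score, confusion_matrix,
                             accuracy_score, cohen_kappa_score)

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])                    # 1 = flood point
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])  # model FS scores
y_pred = (y_score >= 0.5).astype(int)                          # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "AUC": roc_auc_score(y_true, y_score),
    "sensitivity": tp / (tp + fn),   # true positive rate
    "specificity": tn / (tn + fp),   # true negative rate
    "accuracy": accuracy_score(y_true, y_pred),
    "kappa": cohen_kappa_score(y_true, y_pred),
}
```

Note that AUC is computed from the continuous scores, while the other four metrics depend on the chosen classification threshold.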
Published: 4 November 2020
by MDPI
Remote Sensing, Volume 12; https://doi.org/10.3390/rs12213620

Abstract:
The Rarh Bengal region in West Bengal, particularly the eastern fringe area of the Chotanagpur plateau, is highly prone to water-induced gully erosion. In this study, we analyzed the spatial patterns of potential gully erosion in the Gandheswari watershed. This area is highly affected by monsoon rainfall and ongoing land-use changes. This combination causes intensive gully erosion and land degradation. Therefore, we developed gully erosion susceptibility maps (GESMs) using the machine learning (ML) algorithms boosted regression tree (BRT), Bayesian additive regression tree (BART), support vector regression (SVR), and the ensemble SVR-Bee algorithm. The gully erosion inventory maps are based on a total of 178 gully head-cutting points, taken as the dependent factor, and gully erosion conditioning factors, which serve as the independent factors. We validated the ML model results using the area under the curve (AUC), accuracy (ACC), true skill statistic (TSS), and Kappa coefficient index. The AUC results of the BRT, BART, SVR, and SVR-Bee models are 0.895, 0.902, 0.927, and 0.960, respectively, which indicate very good GESM accuracy. The ensemble model provides more accurate prediction results than any single ML model used in this study.
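Of the validation measures listed here, the true skill statistic (TSS) is the least standard: it is sensitivity plus specificity minus one, so it ranges from -1 to 1 and, unlike plain accuracy, is insensitive to class imbalance. A small sketch (the counts are illustrative, not the paper's):

```python
# True skill statistic from a 2x2 confusion matrix.
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)   # hit rate on gully head-cut points
    specificity = tn / (tn + fp)   # hit rate on non-gully points
    return sensitivity + specificity - 1.0
```

A perfect classifier gives TSS = 1, while random guessing gives TSS = 0 regardless of how the two classes are balanced.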
Jebaveerasingh Jebadurai, J Dinesh Peter
Published: 1 July 2017
Pattern Recognition Letters, Volume 94, pp 144-153; https://doi.org/10.1016/j.patrec.2017.04.013

The publisher has not yet granted permission to display this abstract.
International Journal of Remote Sensing, Volume 37, pp 4201-4224; https://doi.org/10.1080/01431161.2016.1209314

Abstract:
Recently, compressive sensing (CS) has offered a new framework whereby a signal can be recovered from a small number of noisy non-adaptive samples. This is now an active area of research in many image-processing applications, especially super-resolution. CS algorithms are widely known to be computationally expensive. This paper studies a real-time super-resolution reconstruction method based on the compressive sampling matching pursuit (CoSaMP) algorithm for hyperspectral images. CoSaMP is an iterative compressive sensing method based on orthogonal matching pursuit (OMP). Multi-spectral images record enormous volumes of data that are required in practical modern remote-sensing applications. A GPU-based implementation of CoSaMP has been developed using the compute unified device architecture (CUDA) and the cuBLAS library. The CoSaMP algorithm is divided into interdependent parts with respect to complexity and potential for parallelization. The proposed implementation is evaluated in terms of reconstruction error against different state-of-the-art super-resolution methods. Various experiments were conducted using real hyperspectral images collected by Earth Observing-1 (EO-1), and the results demonstrate the speed-up of the proposed GPU implementation over the sequential CPU implementation and state-of-the-art techniques: the GPU-based implementation is up to approximately 70 times faster than the corresponding optimized CPU implementation.
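The CoSaMP iteration this abstract builds on (Needell and Tropp) is short: form a signal proxy from the residual, merge the 2s largest proxy entries with the current support, solve least squares on that support, and prune back to s entries. A plain-NumPy sketch of that loop, as a stand-in for the paper's CUDA/cuBLAS version:

```python
# Minimal CoSaMP sketch: recover an s-sparse x from y = A @ x.
# A: sensing matrix (m x n), y: measurements, s: target sparsity.
import numpy as np

def cosamp(A, y, s, iters=20):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x                                 # current residual
        proxy = A.T @ r                               # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]    # 2s largest correlations
        T = np.union1d(omega, np.flatnonzero(x))      # merge with current support
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # LS on merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]             # prune to s largest entries
        x[T[keep]] = b[keep]
    return x
```

The least-squares solve on the merged support dominates the cost per iteration, which is exactly the matrix-heavy step that a cuBLAS implementation would accelerate.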
Tingrong Yuan, Wei Wang, Qingmin Liao
IEEE Transactions on Systems, Man, and Cybernetics: Systems, Volume 47, pp 1-11; https://doi.org/10.1109/tsmc.2016.2523947

Abstract:
In this paper, we present a new learning-based single-image super-resolution (SR) approach, inspired by existing sparse representation-based methods. As a promising image modeling theory, sparse representation has been effectively applied to solve the image SR problem, usually with the use of pretrained coupled or semi-coupled dictionaries. In our proposed method, we train independent dictionaries for high-resolution (HR) and low-resolution (LR) image patches to endow them with more flexibility of expression. We use local subdictionaries to adaptively code image patches, which better characterize local image structures and ensure the sparsity property of the image. Furthermore, we use kernel regression to relate HR and LR coding coefficients, capturing the intrinsic nonlinear relationship between them. Such a mapping is of central importance in the image SR problem, because high-order statistics play a significant role in the reconstruction of the detail structure of an HR image. The proposed model is generic for image SR in terms of two categories of blurring kernel. Experimental results show that our method can effectively reconstruct image details and outperforms state-of-the-art algorithms in both quantitative and visual comparisons.
Yih-Lon Lin, Yu-Min Chiang, Yi-Ling Tsai
2015 International Conference on Machine Learning and Cybernetics (ICMLC), Volume 1, pp 104-109; https://doi.org/10.1109/icmlc.2015.7340906

Abstract:
In this paper, a new approach is proposed for image super-resolution that combines morphological component analysis and least squares support vector machines. The proposed approach consists of three steps. First, under morphological component analysis, the high-resolution and low-resolution images are each decomposed into high- and low-frequency components. Second, two least squares support vector machines are trained: one on the low-frequency components of the low/high resolution images and one on the high-frequency components. Finally, the high-resolution image is reconstructed as the sum of the predicted outputs from the two least squares support vector machines. Experimental results show that the proposed super-resolution method performs better than traditional bicubic interpolation.
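The final step described here, predicting a detail band from a base band with a least-squares kernel machine and summing the bands, can be sketched briefly. scikit-learn's KernelRidge stands in for an LS-SVM regressor (the two solve closely related least-squares problems); the "patch features" below are synthetic, not real image data:

```python
# Sketch of the band-sum reconstruction step with a kernel regressor.
# KernelRidge is used as an LS-SVM-like stand-in; data are synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
low_freq = rng.standard_normal((200, 8))                  # low-frequency patch features
high_freq = low_freq @ rng.standard_normal((8, 8)) * 0.1  # detail band to learn

model = KernelRidge(kernel="rbf", alpha=1e-3)             # LS-SVM-like regressor
model.fit(low_freq, high_freq)                            # learn low -> high mapping

# Reconstruction: low-frequency base plus predicted high-frequency detail.
reconstruction = low_freq + model.predict(low_freq)
```

In the paper's pipeline there are two such regressors (one per frequency band of the decomposition); this sketch shows only the detail-prediction-and-sum mechanism.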
Jie Xu, Dacheng Tao
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) pp 5799-5803; https://doi.org/10.1109/icassp.2014.6854715

Abstract:
Existing support vector regression (SVR)-based image super-resolution (SR) methods always use a single-layer SVR model to reconstruct the source image, which cannot restore fine details and reduces reconstruction quality. In this paper, we present a novel image SR approach in which a multi-layer SVR model describes the relationship between low-resolution (LR) image patches and the corresponding high-resolution (HR) ones. Besides, considering the diverse content of an image, we introduce pixel-wise classification to divide pixels into different classes, such as horizontal edges, vertical edges, and smooth areas, which better highlights the local characteristics of the image. Moreover, the input elements of each SVR model are weighted according to the spatial positions of their corresponding output pixels in the HR image. Experimental results show that, compared with several other learning-based SR algorithms, our method achieves high-quality performance.