Design of a Water and Soil Classification Model for Satellite Images via the Machine Learning Process
Abstract
Panchromatic and multispectral layers in satellite images convey a wide variety of information about a target region of interest (ROI). Each of these layers highlights a particular aspect of the location, making it easier for researchers to identify related and distinctive site features. A variety of machine-learning algorithms can identify the enormous range of crop varieties, land cover types, soil types, and other associated features found in these regions, and these algorithms differ in how effectively they work, how widely they can be applied, how difficult they are to implement, and how much they cost. In this study, we present a new deep-learning model for on-the-fly, real-time, region-based classification of satellite images that aims to improve on existing approaches. Large-scale temporal datasets for a given region are first obtained from Google Earth Engine and fed into the ADLRBSAS model. An application-specific segmentation model assists with the initial filtering and removal of outliers from these datasets. The segmented images are then evaluated by an improved CNN that draws on 19 Vignette classification models. The proposed model classifies crop types with 95.4% accuracy, land types with 97.2% accuracy, and water and urban cover types with 95.9% accuracy. Its accuracy was evaluated on data from several locations and time periods, and the results show that the model is consistently reliable. The model's accuracy, recall, area under the curve (AUC), and latency were assessed and compared against several state-of-the-art methods; it outperformed them by 2.4% in accuracy, 3.5% in recall, and 4.5% in AUC.
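To make the described pipeline concrete, the sketch below shows one plausible way to assemble its two main stages in Python: pulling a temporal composite for an ROI from Google Earth Engine, then classifying fixed-size tiles with a small CNN. This is a minimal sketch under stated assumptions, not the paper's implementation: the collection ID, bands, region coordinates, tile size, and network layout are all illustrative, since the abstract does not specify the internals of ADLRBSAS, the segmentation step, or the improved CNN beyond its use of 19 Vignette classification models.

```python
# Minimal sketch of the two-stage pipeline described in the abstract:
# (1) build a temporal composite for a region of interest from Google
# Earth Engine, (2) classify fixed-size tiles with a small CNN.
# Collection ID, bands, ROI coordinates, and network layout are
# illustrative assumptions, not the paper's ADLRBSAS specifics.
import ee
import torch
import torch.nn as nn

ee.Initialize()  # requires prior `earthengine authenticate`

# --- Stage 1: temporal composite for a hypothetical ROI -------------
roi = ee.Geometry.Rectangle([77.0, 12.8, 77.4, 13.2])  # example region
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")        # Sentinel-2 SR (assumed source)
    .filterBounds(roi)
    .filterDate("2022-01-01", "2022-12-31")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 10))
    .median()                                     # simple outlier-resistant composite
    .select(["B4", "B3", "B2", "B8"])             # RGB + near-infrared bands
)
# Exporting composite pixels and segmenting them into tiles is omitted;
# the random tensor below stands in for that segmented-tile batch.

# --- Stage 2: a small CNN over segmented tiles ----------------------
class TileClassifier(nn.Module):
    """Toy CNN standing in for the paper's improved CNN; the 19-way
    output mirrors the '19 Vignette classification models' count."""
    def __init__(self, in_bands: int = 4, n_classes: int = 19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TileClassifier()
tiles = torch.rand(8, 4, 64, 64)        # batch of 64x64 four-band tiles
class_ids = model(tiles).argmax(dim=1)  # predicted class per tile
print(class_ids)
```

The median composite here is only a crude stand-in for the abstract's application-specific segmentation and outlier-removal step; in practice the Earth Engine composite would be exported, segmented, and tiled before being passed to the classifier.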